Reminder – SMART Session 2 Begins in June

The SMART Competition now has two start dates in its schedule.  Although there are no official beginning or ending dates for the competition, SMART has divided its judging calendar into two competition sessions per year.

  1. Competition Session 1: December 1 – April 1
  2. Competition Session 2: June 1 – October 15

 Teams planning to compete in Session 2 must complete the registration process by July 15th.

“Each team will still have the flexibility to create its own project plan and team work schedule within the broad limits of the competition schedule,” said Mark Schneider, a key member of the SMART Competition leadership team. “The two-competition-session approach enables the global student population to plan the competition around their school calendar.”

 The change will provide the opportunity for teams to be recognized several times for their achievements. 

The Competition is an excellent educational program that complements studies in sustainability, LEED design, Smart Cities, digital twins, reality modeling, and renewable power generation.

The SMART Competition (www.smartcompetition.org) is a global STEM and Career and Technology Education (CTE) program.  The competition is open to all high school and university students and is designed to attract all students without regard to gender, race, socio-economic status, or academic performance level.

For additional information, contact Mike Andrews, m.andrews@smartcompetition.org

Nvidia’s digital twin platform will change how scientists and engineers think

Nvidia has announced several significant upgrades to its scientific computing platform for digital twins and released these capabilities for widespread use.  Highlights include the general release of Modulus, a physics-informed AI tool; support for new Omniverse integrations; and support for a new 3D AI technique called adaptive Fourier neural operators (AFNO).  Both Modulus and Omniverse are downloadable today.


These advances promise to change the way engineers think about simulation from an occasional off-line process to operational models baked into ongoing operations, Dion Harris, Nvidia lead product manager of accelerated computing, told VentureBeat.

These recent efforts complement other recent announcements, such as the intention to create Earth-2, ongoing collaborations with climate change researchers, and ongoing efforts to simplify engineering design, test, and development within the metaverse.  Nvidia has also collaborated with leading climate research supercomputing programs, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) on Destination Earth (DestinE).

Nvidia digital twin announcement highlights

Modulus, which Nvidia announced at GTC last fall, is now live.  It is a physics-informed neural network framework that lets teams train models of complex systems by combining real-world data with the governing physics.  This can improve climate simulations and help explore physical, mechanical, and electrical tradeoffs in the design of products and buildings.  It accelerates the creation of AI-based surrogate models that learn physics principles from real-world data.
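To make the idea concrete, the toy PyTorch sketch below (an illustrative example of physics-informed training in general, not Modulus code) trains a tiny network to satisfy a differential equation and its boundary condition instead of fitting labeled data.

import torch
import torch.nn as nn

# Fit u(x) so that du/dx = -u with u(0) = 1; the exact solution is exp(-x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(64, 1, requires_grad=True)                      # collocation points in [0, 1]
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    physics_loss = ((du_dx + u) ** 2).mean()                       # residual of du/dx + u = 0
    boundary_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()   # enforce u(0) = 1
    loss = physics_loss + boundary_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[1.0]])).item(), "vs exact", torch.exp(torch.tensor(-1.0)).item())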

The new Omniverse integration allows teams to feed the output of these AI physics models into the Omniverse.  This makes it easier to combine better AI models with visualization tools built into Omniverse.  More significantly, these new models are much faster than conventional physics models, making it easier to run them in real-time or explore more variations as part of scenario planning.  “It creates a different operational model for how you would engage with these data sets and simulation workflows,” Harris said.

The integration with Omniverse will make it much easier for engineers to weave digital twin capabilities into existing workflows.  Nvidia is building out a variety of connectors that allow engineers to ingest models from existing product engineering, architectural, and simulation tools.  Omniverse also lets teams ingest data from AI models.

Omniverse provides a centralized hub for collecting data and enabling interactive collaboration across data sets and disciplines.  It ingests data from a variety of sources and uses the Universal Scene Description (USD) format to organize data on the platform.  For example, a better model in climate research may involve atmospheric data, geospatial data and human interaction data.  Harris said there is still work to be done in building USD plugins for various platforms, which is one reason Omniverse is free for developers.
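For readers unfamiliar with USD, here is a minimal sketch using the open-source pxr Python bindings (assuming they are installed; this is generic OpenUSD code, not Omniverse-specific, and the file and prim names are hypothetical) that creates the kind of stage Omniverse can ingest.

from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("city_block.usda")            # new USD file
UsdGeom.Xform.Define(stage, "/World")                     # a transform to hold the scene
sensor = UsdGeom.Sphere.Define(stage, "/World/Sensor")    # stand-in for an ingested asset
sensor.GetRadiusAttr().Set(0.5)
stage.GetRootLayer().Save()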

Another major upgrade is support for adaptive Fourier neural operators (AFNO), a technique for training neural networks that reflect 3D spatial states.  AFNO is part of a wider class of new approaches that includes Fourier neural operators (FNO) and physics-informed neural operators (PINO).  These techniques encode 3D spatial relationships based on partial differential equation models, allowing teams to create more accurate surrogate AI models.  Traditional AI models that use convolution or other pixel-based approaches encode the arrangement of 3D objects less accurately.
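The sketch below shows the core trick behind a Fourier neural operator in simplified 1D form, using PyTorch (a generic illustration of the published FNO idea, not Nvidia's implementation): transform the input to the frequency domain, learn a mixing of the lowest Fourier modes, and transform back.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One simplified Fourier layer: learn to mix only the lowest Fourier modes."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weights = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                                  # x: (batch, channels, grid_points)
        x_ft = torch.fft.rfft(x)                           # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))       # back to physical space

layer = SpectralConv1d(channels=4, modes=8)
print(layer(torch.randn(2, 4, 64)).shape)                  # torch.Size([2, 4, 64])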

Better climate models with AI

Nvidia also announced early results from applying these tools to climate research as part of the FourCastNet project, a collaboration between Nvidia and leading climate researchers at Purdue, Lawrence Berkeley National Laboratory, the University of Michigan, and others.  FourCastNet is an AI surrogate model used to perform midrange climate change forecasts at a global scale.  The research paper describes how the team used AFNO to produce a very fast yet accurate model that could be used for some of these midrange forecasts.

In climate and weather research, resolution is characterized in terms of kilometer squares, which are like pixels; the smaller the squares, the better.  State-of-the-art first-principles models, such as ECMWF’s Integrated Forecasting System (IFS), can achieve a 9-km resolution.  FourCastNet is faster but less accurate than the best models built using traditional first-principles approaches.

Today, FourCastNet can achieve an 18-km resolution 45,000 times faster, and with 12,000 times less energy, than IFS at comparable accuracy.  Prior surrogate models maxed out at 25-km resolution.  One factor limiting further accuracy gains is the tremendous amount of data required to train surrogate models compared to traditional approaches.  For example, enhancing the resolution from 18 km to 9 km will require about 30 times as much data.

There are two scales of weather and climate research centers: about 17 larger climate change centers and about 175 smaller regional weather research groups.  The smaller centers have tended to focus on well-defined boundaries, neglecting the impact of adjacent weather phenomena.  The new FourCastNet model will enable the smaller weather centers to simulate weather patterns that move across those boundaries.

“This will democratize climate change research,” Harris said.

One caveat is that the model was trained on 40 years of climate data, which required a lot of processing time and energy.  But once trained, it can be run on low-cost computers.  For example, the FourCastNet researchers were able to run a simulation on a two-node Nvidia cluster that previously required a 3,060-node supercomputer cluster.

Harris expects that first principle models and surrogate models will coexist for some time.  First-principles approaches will form a sort of ground truth, while the surrogate models will allow engineers to iterate on simulation scenarios a lot faster.  Nvidia has been working on ways to improve both.  For example, Nvidia has tuned its software to accelerate weather research and forecasting (WRF) and consortium for small-scale modeling (COSMO) models.

An ensemble of earths

This FourCastNet work complements the Earth-2 announcement Nvidia made at fall GTC.  Earth-2 is a dedicated system Nvidia is building to accelerate climate change research.  Earth-2 will combine Modulus, Omniverse, and Nvidia hardware advances into a cohesive platform.  Omniverse integration will make it easier to ingest AI models, climate data, satellite data, and data from other sources to build more accurate representations using all these inputs.

“Earth-2 system will integrate everything we are building into a cohesive platform,” Harris said.

This will make it easier to combine a variety of scientific disciplines, research techniques and models into a single source of truth.  The collaborative aspect of Omniverse will help researchers, policy planners, executives and citizens work together to solve some of the world’s most pressing problems.

Discovering new unknowns

Faster simulations also mean that researchers can explore the ramifications of slightly different assumptions within a model.  Climate change researchers use the term ensemble to describe the process of testing multiple model runs with slight variations.  For example, they might run a simulation 21 times to explore the impact of minute variations in assumptions on the overall projection.  FourCastNet will allow researchers to simulate 1,000-member ensembles, providing much higher confidence in the prediction.
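As a toy illustration of why ensemble size matters (a generic chaotic-system example, not FourCastNet), the sketch below integrates the Lorenz-63 system from 1,000 slightly perturbed starting points and measures how far the forecasts spread apart.

import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit Euler step of the Lorenz-63 equations for all members at once.
    x, y, z = s[:, 0], s[:, 1], s[:, 2]
    return s + dt * np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=1)

rng = np.random.default_rng(0)
members = np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal((1000, 3))  # 1,000-member ensemble
for _ in range(1500):                       # integrate for ~15 model time units
    members = lorenz_step(members)

print("ensemble mean:", members.mean(axis=0).round(2))
print("ensemble spread (std):", members.std(axis=0).round(2))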

Harris said, “It’s not just about being able to run the models faster.  You can also run it more to get a more accurate estimate of the outcome.  You get a new understanding of how to think about it once you’ve seen this complex system in motion in 3D space.”

Siemens had already been running similar kinds of models, but only during the design phase.  These faster simulation techniques allow the company to run similar models continuously during operations.  For example, Siemens has used these techniques to model heat-transfer systems in a power plant and the performance of wind turbines more efficiently.  A new surrogate wind-performance model is expected to lead to optimized wind park layouts capable of producing up to 20% more power than previous designs.

“We see digital twins being adopted in everything from medical to manufacturing, scientific and even entertainment applications,” Harris said.

 

Author: George Lawton

VentureBeat

 

Semi-transparent organic photovoltaic filters for agrivoltaic greenhouses

Researchers in the United States have tested organic photovoltaic filters in a greenhouse hosting lettuce growth and have found the devices’ transmission spectra may help fine-tune the characteristics of the plant.  They used transcriptomic analysis to assess the key modifications of the plants grown under the solar filters.

The three active OSC filters used in the greenhouse. Image: North Carolina State University (NCSU)

 

A group of scientists from North Carolina State University (NCSU) in the United States has tested three different filters based on semi-transparent organic solar cells (ST-OSCs) in a greenhouse used to grow red oak leaf lettuce.

“There is wide spectral tunability with the organic semiconductors to tune the light that is absorbed by the solar cells and the light that gets transmitted to the plants,” NCSU researcher, Brendan O’Connor, told pv magazine.  “Our work shows that this transmission spectrum can not only support healthy plant growth but may be used to promote gene expression in crops for desired characteristics.”

According to him, this spectral tunability, along with relatively high power conversion efficiency, is the key opportunity with organic solar cells.  “Traditional opaque solar cell technologies block too much light,” he further explained.

In the paper “Beyond energy balance in agrivoltaic food production: Emergent crop traits from color selective solar cells,” published on bioRxiv, the research group explained that each filter was made of 12 organic PV units, each measuring 20 cm by 10 cm, arranged in a single layer on PEDOT:PSS coated onto a polyethylene terephthalate (PET) substrate.

The filters were placed on top of boxes designed to house the lettuce within a climate-controlled growth chamber.  The boxes were positioned at the same height from the floor of the growth chamber so that similar amounts of light reached the plants in each treatment.  “As expected, there were fewer differences between treatments when light intensity was controlled,” the scientists explained.  “The spectral differences between the three OSC filters alone were not great enough to have a significant effect on biomass.”

The crop growth in the filter-covered boxes was compared to that of two reference boxes: a clear-glass control and a shaded control.  “Our experiments were designed to account for both the quantitative and qualitative aspects of OSC-filtered light on lettuce as a representative crop,” the academics specified.

The researchers used transcriptomic analysis to assess the key modifications of the plants grown under the solar filters.  Transcriptomics is the analysis of the complete set of RNA transcripts of an organism.  Comparing transcriptomes allows the identification of genes that are differentially expressed in distinct cell populations or in response to different treatments.  “The advantage of a transcriptome analysis in the study of OSC-grown plants is that these key modifications can be identified without the need to directly measure each aspect of plant growth and development,” they emphasized.
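As a purely illustrative sketch of the kind of comparison such an analysis enables (synthetic counts, not the NCSU data; sample sizes and values are hypothetical), the Python snippet below computes a log2 fold change and a t-test for one gene between filter-grown and control plants.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.poisson(lam=200, size=6)       # normalized read counts for 6 clear-glass control plants
osc_filter = rng.poisson(lam=260, size=6)    # read counts for 6 plants grown under an OSC filter

log2_fc = np.log2(osc_filter.mean() / control.mean())
t_stat, p_value = stats.ttest_ind(osc_filter, control)
print(f"log2 fold change: {log2_fc:.2f}, p-value: {p_value:.3f}")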

Through this analysis, the academics found that the physiology of the plants was altered by the differences in light quality under the OSC filters at a molecular and transcriptomic level.  According to them, the filters not only make it possible to produce different crop species at an affordable cost but also enable growers to fine-tune plant characteristics by selecting from the wide range of OSC transmission spectra.

In a previous study, the same research group analyzed the growth of red leaf lettuce (Lactuca sativa) in a greenhouse equipped with organic solar cells coated with filters that can manage both near-IR (NIR) and long-wavelength (LW) IR.  Their analysis showed that the organic solar cells contribute to reducing overheating in the greenhouse and that lettuce growth proceeded unabated under the solar cells as the different transmission spectra had no impact on the fresh weight of the plants.

 

Author:  Emiliano Bellini

pv-magazine

Designing the Smart City Digital Twin

A smart city digital twin relies on a number of layers of data that build on top of each other, layering in information about the terrain, buildings, infrastructure, mobility, and IoT devices. The digital twin uses the data generated in the virtual smart city layer to perform additional simulations; this information is fed back through the layers of the model, where it can be implemented in the physical world.
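A conceptual sketch of that layered structure and its feedback loop might look like the following (hypothetical layer and field names only; no real platform API is implied).

from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    data: dict = field(default_factory=dict)    # e.g., terrain tiles, BIM models, sensor feeds

# The stacked layers of a smart city digital twin, bottom to top.
twin = [Layer("terrain"), Layer("buildings"), Layer("infrastructure"),
        Layer("mobility"), Layer("iot_devices")]

def simulate(layers):
    # Stand-in for a simulation run over the combined layers
    # (e.g., a traffic model that uses IoT readings and road geometry).
    return {"action": "retime signals on Main St", "target_layer": "mobility"}

result = simulate(twin)
# Feed the simulated recommendation back down toward the physical world.
next(layer for layer in twin if layer.name == result["target_layer"]).data["pending_action"] = result["action"]
print(result)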

The term “digital twin” has existed since the early 2000s.  Beginning with applications in manufacturing and construction, various industries have since come to define the term in their own contexts.  According to IBM, a digital twin “is a virtual representation of a physical object or system across its lifecycle, using real-time data to enable understanding, learning and reasoning.”  Siemens adds to that the ability of a digital twin “to simulate, predict, and optimize the product and production system before investing in physical prototypes and assets.”

In the planning context, we are mainly talking about digital twins of entire cities.  According to Arup, “the promise of the city digital twin is to help provide a simulation environment, to test policy options, bring out dependencies and allow for collaboration across policy areas, whilst improving engagement with citizens and communities.”

Advancements in technology for smart cities — such as the deployment of information communication technology, sensors, and the Internet of Things — enable us to collect data about pretty much every movement, flow, or activity in a city.  The availability of this data, combined with increased computing power and artificial intelligence, can allow for the digitalization of entire cities.

The synergy of these technologies essentially leads to the development of smart city digital twins (SCDTs).  SCDTs can improve decision-making processes and allow for simulation and experimentation with real-time data.

Digital twin technology is relatively new to planners, as are smart city applications. But this shift signals the end of our days of limited experimentation.  Planners have the chance to go beyond simulations that can only focus on isolated questions due to limited capabilities.  While not quite mainstream yet, according to ABI Research, by 2025 more than 500 city digital twins will be deployed globally.  So, planners can expect digital replicas of entire cities and their systems sooner rather than later — that is, if they’re ready to take the first steps.

 

By Petra Hurtado, PhD, and Alexsandra Gomez

INNOVATIONS TECH Magazine

How BIM helps to create a more disaster-resilient world

Building information modeling (BIM) could have a transformative impact on the resilience of the built environment.

The frequency and scale of natural disasters have been increasing since the beginning of the century, and experts believe these disasters may grow even more severe over the next few decades.

Building information modeling (BIM), along with other digital construction and building information tools, may be essential for creating a world that can stand up against these disasters.

With BIM, it is easier for construction companies, designers, building owners, and first responders to prepare structures for disaster and limit the damage a crisis can bring.

  1. AI and BIM for Designing More Resilient Buildings

With building information modeling, it’s possible for architects and construction companies to create new structures that are built from the ground up for disaster resilience.

Simulation technology combined with BIM models, for example, can help designers predict how a building will fare in a real disaster.  Existing technology allows designers to model the spread of fire, estimate earthquake damage, and predict how flooding may impact a building.

Material choices, HVAC system design, and building layout can all be tweaked based on simulation results, allowing designers to test and redesign buildings until they’re ready for a disaster – or, at least, avoid design oversights that could make buildings much less safe.

Construction companies and designers can also use emerging technology to extend the utility of their BIM tools. Big data, for example, is increasingly popular in the construction industry. Combined with AI, construction companies are already using big data analysis to produce better project timelines and budgets.

The combination of AI and big data can help make material analysis and disaster simulations much more accurate – giving designers a better picture of how disasters could impact a new structure.

Because BIM models and data can be shared easily between job stakeholders, designers can share simulation results with business partners and others, making it easier to justify design decisions or communicate best practices regarding disaster resilience.

These same models and simulations can also be passed on to building owners after handover, allowing them to use this data to inform repairs, renovations, and disaster readiness planning.

BIM tools can also help designers create buildings that are easier to repair and recover after a disaster.  BIM information can also make these repair and recovery processes much easier.

In the event a building is damaged by a disaster, repair crews and construction teams will have access to BIM models that can help guide the recovery process, allowing them to more effectively clear debris and rebuild the structure according to its original design.

  2. BIM to Improve Disaster Response

When they need to access a building during a crisis, rescue teams, fire brigades, and other disaster response teams may be forced to rely on analog reference materials, like blueprints, or two-dimensional CAD diagrams.

The information available in these representations is often out-of-date and not at the level of detail that responders need, however, meaning they can be of limited use during an emergency.

BIMs, however, store three-dimensional indoor geometry and exit information, making them a valuable tool for disaster responders who need accurate, detailed, and up-to-date information on a building’s structure and contents.

Because BIM models can be extraordinarily in-depth, they may also contain a variety of building data that is relevant to disaster response teams – like information on whether a room contains flammable materials, or if the room’s floor is made from a material that will become slippery when wet.

  3. BIM for Building Disaster Management and Response

BIM software can also be a powerful risk management tool for building owners wanting to create more effective disaster response plans or even automated building systems that can help keep occupants safe in the event of a disaster.

For example, BIM tools can enable owners to more effectively plan rescue and evacuation routes for fires or similar emergencies.

By taking advantage of fire dynamics simulation, agent-based crowd simulation, and BIM models, owners can create evacuation routes that consider both how people move in an emergency and how fire and smoke will spread through the building.

Routes created with this method can help keep building occupants safer than a route created with a more conventional planning strategy.

BIM can also help building owners and disaster response teams more effectively manage an active evacuation. By using a combination of BIM model data and smart building technology, a system could track evacuees as they move through the building – potentially guiding them to nearby exits or helping coordinate their movement based on the location of rescue teams.

As the situation develops, the system can steer evacuees away from exits that are no longer available or parts of the building that have become especially dangerous.
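A simplified sketch of that routing logic might look like the following (a hypothetical room graph and hazard list, assuming the networkx library; this is not an export from any real BIM tool).

import networkx as nx

g = nx.Graph()
# Edges are (space_a, space_b, walking time in seconds) derived from a floor plan.
g.add_weighted_edges_from([
    ("office_210", "corridor_2", 5), ("corridor_2", "stair_A", 8),
    ("corridor_2", "stair_B", 12), ("stair_A", "exit_north", 20),
    ("stair_B", "exit_south", 25),
])

hazards = {"stair_A"}                # e.g., smoke reported by the building's sensors
for u, v, attrs in g.edges(data=True):
    if u in hazards or v in hazards:
        attrs["weight"] *= 100       # strongly discourage routes through hazardous spaces

exits = ["exit_north", "exit_south"]
best_exit = min(exits, key=lambda e: nx.shortest_path_length(g, "office_210", e, weight="weight"))
print(nx.shortest_path(g, "office_210", best_exit, weight="weight"))
# -> ['office_210', 'corridor_2', 'stair_B', 'exit_south']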

A similar approach with different simulation models could also help building owners both prepare for other disasters – like floods, hurricanes, and earthquakes – and stage more effective disaster responses.

  4. BIM Automation for Informal Housing Retrofits

The world is becoming increasingly urbanized – and population growth in urban areas sometimes outstrips available housing.  As a result of population growth and internal migration in developing countries, more than a billion people around the globe live in informal and marginal housing.

Occupants of informal housing may lack easy access to essential amenities like water, electricity, and plumbing.

The structure and material choices in informal housing are often subpar, meaning occupants can be especially vulnerable to natural disasters.

It’s often possible to retrofit informal housing and remodel existing structures so that they are in line with good building practices and provide occupants the services they need.

By remodeling these structures instead of building entirely new ones, governments can limit the risk of displacing current residents while also reducing the overall cost of providing them with adequate, disaster-resilient housing.

With BIM, developers can partially automate the process of designing disaster-resilient retrofits for existing informal housing.  These retrofits help strengthen the structural integrity of existing informal housing.

For example, designers involved in the government program “Casa Digna, Vida Digna” (“Dignified House, Dignified Life”) in Colombia used BIM to strengthen informal housing against earthquakes.

Digital tools were essential in streamlining the process of identifying the risks that each structure faced and determining how these buildings could be retrofitted for earthquake resilience.

Using BIM to Prepare the World for Disasters

Experts believe extreme weather and natural disasters may become even more frequent and severe in the future. BIM may be an essential tool for both construction companies and disaster responders wanting to minimize the impact these disasters may have.

By using BIM, it may also be possible to retrofit existing structures to better prepare them for disasters – helping keep people safe without the same risk of displacement that can come with demolishing and rebuilding unsafe structures.

 

Author: Rose Morrison

Geo Week News, AEC Innovations

How Singapore created the first country-scale digital twin

Recently, Singapore completed work on the world’s first digital twin of an entire nation.  Bentley Systems tools accelerated the process of transforming raw GIS, lidar, and imagery data into reality mesh, building, and transportation models of the country.

“We envisaged that these building blocks will be part and parcel towards the building of the metaverse starting with 3D mapping and digital twins,” Hui Ying Teo, senior principal surveyor at the Singapore Land Authority (SLA), told VentureBeat.

He says he thinks of digital twins as a replication of the real world through intense digitalization and digitization.  They are critical for sustainable, resilient, and smart development.  His team has been developing a framework that enables a single source of truth across multiple digital twins that reflect different aspects of the world and use cases.

Singapore is an island nation, and rising sea levels are a big concern.  An integrated digital twin infrastructure is already helping Singapore respond to various challenges such as the impact of climate change.  A single, accurate, reliable, and consistent terrain model supports national water agency resource management, planning, and coastal protection efforts.

The digital twin efforts are also helping in the rollout of renewable energy.  An integrated source of building model data helped craft a solar PV roadmap to meet the government’s commitment to deploy two gigawatts peak (GWp) solar energy by 2030.

From mapping to twinning

One big difference between a digital twin and a map is that a digital twin can be constantly updated in response to new data.  A sophisticated data management platform is required to help update data collected by different processes to represent the city’s separate yet linked digital twins.  “To achieve the full potential, a digital twin should represent not only the physical space but also the legal space (cadaster maps of property rights) and design space (planning models like BIM),” Teo said.

City and national governments are exploring various strategies for transforming individual geographic, infrastructure, and ownership record data silos into unified digital twins.  This is no easy task since there are significant differences between how data is captured, the file formats used, and underlying data quality and accuracy.  Furthermore, governments need to create these maps in a way that respects the privacy of citizens, confidentiality of enterprise data IP, and security of the underlying data.

For example, data sources such as cadastral surveys reflect the boundaries of ownership rights across real estate, mineral, and land usage domains.  Malicious or accidental changes to these records could compromise privacy, competitive advantage, or ownership rights.

Singapore is the world’s second-most densely populated nation, leading to significant development of vertical buildings and infrastructure.  Traditional mapping approaches focused on 2D geography. After a major flood devastated the country in 2011, the government launched an ambitious 3D mapping program to map the entire country using rapid capture technologies, leading to the first 3D map in 2014.  This map helped various government agencies improve policy formation, planning, operations, and risk management.

However, the map grew outdated.  So, in 2019, the SLA launched a second effort to detect changes over time and update the original map with improved accuracy to reflect the country’s dynamic urban development.  The project combined aerial mapping of the entire country and mobile street mapping of all public roads in Singapore.

Capture once, use by many

In the past, each government agency would conduct its own topographical survey to improve planning decisions. “Duplicate efforts were not uncommon because of different development timelines,” Teo said.

The partnership with Bentley helped the SLA to implement a strategy to “capture once, use by many.”  This strategy maximized accessibility to the map by making it available as an open-source 3D national map for projects among government agencies, authorities, and consultants. Eventually, they hope to enhance the 3D map to support 4D for characterizing changes over time.

The team combined lidar and automated image capture techniques to map the nation rapidly.  The new rapid capture process helped reduce costs from SGD 35 million to SGD 6 million and the time required from two years to only eight months.

The SLA captured over 160,000 high-resolution aerial images over forty-one days.  Bentley’s ContextCapture tools transformed these into a 0.1-meter-accurate nationwide 3D reality mesh.  The team also used Bentley’s Orbit 3DM tool to bring more than twenty-five terabytes of street-level data into the digital twin.

The team standardized on a handful of file formats for different aspects of the data: LAS and LAZ for point cloud data, GeoTIFF for aligning imagery with physical space, and CityGML for vector models and surfaces.
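As an illustrative sketch of working with that kind of point cloud data (the file name is hypothetical, and this assumes the open-source laspy library rather than SLA or Bentley tooling), the snippet below inspects one LAS tile.

import numpy as np
import laspy

las = laspy.read("sg_tile_0001.las")     # .laz also works if a LAZ backend is installed
x = np.asarray(las.x)                    # scaled easting coordinates as a NumPy array
print(las.header.point_count, "points")
print("x range:", x.min(), "to", x.max())
print("classification codes present:", sorted(set(np.asarray(las.classification))))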

Balancing openness and security

Teo said it is vital to strike the appropriate balance between open data and security.  Open data enables users to adopt appropriate tools to meet their organization’s needs, regardless of the applications.  However, this openness needed to be balanced against security and privacy considerations. They had to ensure that the raw data could be securely processed and made available to agencies, enterprises, and citizens with appropriate privacy safeguards.

All team members underwent security screening, and data was processed in a secure, controlled environment.  In addition, various sanitization and anonymization techniques were applied to protect confidentiality.  This allowed them to share the data more widely across agencies involved in planning, risk management, operations, and policy without affecting anyone’s data rights.

Data processing was completed in a controlled environment — which means it was isolated from the outside world without network access. This hampered some processes, such as getting technical support when they ran into a problem.  “However, a balance has to be struck between time and security for such a nation scale of mapping,” said Teo.

 

Author: George Lawton

VentureBeat

Special Announcement – Two Annual Competition Sessions in 2022

The SMART Competition has modified its competition schedule.  Although there are no official beginning or ending dates for the competition, SMART has divided its judging calendar into two competition sessions per year.

•  Competition Session 1: December 1 – April 1
•  Competition Session 2: June 1 – October 15

Teams that plan to compete in Session 1 must complete the registration process by February 15th.  Teams competing in Session 2 must complete the registration process by July 15th.

“Each team will still have the flexibility to create its own project plan and team work schedule within the broad limits of the competition schedule,” said Mark Schneider, a member of the SMART Competition leadership team. “The two-competition-session approach enables the global student population to plan the competition around their school calendar.”

The change will provide the opportunity for teams to be recognized several times for their achievements.

The Competition is an excellent educational program that complements studies in sustainability, LEED design issues, and renewable power generation.

The SMART Competition (www.smartcompetition.org) is a global STEM and Career and Technology Education (CTE) program.  The competition is open to all high school and university students and is designed to attract all students without regard to gender, race, socio-economic status, or academic performance level.

For additional information, contact Mike Andrews, m.andrews@smartcompetition.org


Bentley Systems: The Year in Infrastructure and the 2021 Going Digital Awards in Infrastructure

Join CEO Greg Bentley, Bentley executives, Siemens, and AEC Advisors for their latest insights on December 1st and 2nd during the Year in Infrastructure.

Throughout the virtual event, Bentley Systems and its partners will celebrate the digital advancements in infrastructure and sustainability by spotlighting the winners of the 2021 Going Digital Awards in Infrastructure.

You are invited to attend the event to learn how the people behind the award-winning projects made amazing impacts in cities, energy, mobility, project delivery, and water.

Bentley Systems Chief Executive Officer Greg Bentley, Chief Success Officer Katriona Lord-Levins, and Chief Product Officer Nicholas Cumins will share their insights on infrastructure trends, sustainability, and advancements in going digital.  Hear from Siemens, AEC Advisors, and other industry experts that are making impressive infrastructure advancements in cities, energy, mobility, project delivery, and water.

On December 1, the event will focus on how going digital advances the resilience and adaptation of our organizations and infrastructure assets, including by honoring extraordinary examples.

On December 2, the conference will focus on how going digital advances global projects and skillsets, including by presenting the much anticipated 2021 Going Digital Awards in Infrastructure recognizing the outstanding projects in their categories as judged by independent jurors.

 For a sneak peek, visit: https://www.youtube.com/watch?v=EkBD6KpL9r8

To register for the Year in Infrastructure and the 2021 Going Digital Awards in Infrastructure virtual event, visit https://yii.bentley.com/en.  Registration is now open and is FREE for all participants.

 


3 innovations for off-grid power storage

Over the past decade, the idea of a closed-loop, off-the-grid home that draws its power from batteries has gone from an improbable wish to a very real option for many homeowners.  And what’s driving this change may surprise you.  Over the past several years, incredible advancements in battery technology have transformed the effectiveness, efficiency, and commercial availability of these off-grid battery systems.

From increased charging and energy storage efficiency to more efficient solar panels to charge these off-grid batteries, today’s home charging systems are truly superior to those of previous eras of off-grid energy.  And this, in turn, has made the prospect of home battery systems more compelling for homeowners who are not only looking to save on recurring electricity bills but are also looking for a resilient alternative to traditional power.

Solid-state vs lithium-ion

One of the most impactful innovations in battery technology over the past several years is the commercial availability of solid-state batteries. And to showcase the tremendous potential in solid-state battery technology over the traditional lithium-ion battery, it’s important we first discuss lithium-ion’s place in the battery market.

Lithium-ion

Lithium-ion batteries have been a longtime battery staple. At a very rudimentary level, lithium-ion batteries work on the following basic battery chemistry.

 

Chemical changes of the charge and discharge cycle.

In a traditional lithium-ion battery cell, an anode and a cathode are separated by a liquid electrolyte solution.  When the battery is charged, an electrical charge applied to the cell drives an electrochemical reaction: lithium ions move from the cathode to the anode through the electrolyte, while electrons travel to the anode through an external conductive wire.  Following a charge, the cell sits in a state of higher potential energy; when the battery is connected to a new electric circuit, that energy discharges to a lower state, powering the electronics within the circuit in the process.

Solid-state

Building on the lithium-ion design, solid-state batteries are constructed in the same manner, except that the liquid electrolyte is replaced with a solid electrolyte.  The typical materials used to facilitate this design are ceramics, oxides, sulfides, and phosphates.

Benefits of the solid-state battery

To understand the efficiency of solid-state batteries, and thus their value over lithium-ion batteries, it is important to address some key considerations.  Here, metrics such as size (or energy density), weight, and charge time are critically important for understanding the enhanced efficiency of solid-state batteries over lithium-ion batteries.

  • Size: Solid-state batteries are capable of roughly 2.5 times the energy density of today’s lithium-ion batteries.  Within the same size constraints, a solid-state battery can store and deliver 2.5 times more energy.
  • Weight: Because solid-state batteries provide about 2.5 times the energy density of today’s lithium-ion batteries, a pack of the same capacity can weigh roughly 2.5 times less.
  • Charge times: Solid-state batteries are not only higher in energy density, they can also charge much more quickly than today’s lithium-ion batteries.  In fact, today’s solid-state batteries are able to recharge four to six times faster than current lithium batteries.

Bring all of these factors together and you get the ability to store more energy in smaller spaces while also recharging more quickly.  This all points to today’s commercially available off-grid batteries meeting the energy requirements of the modern home while being packaged in a commercially viable off-grid home battery.
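To put the 2.5x figure in concrete terms, here is a back-of-the-envelope sketch; the baseline density and the 13.5 kWh pack size are illustrative assumptions, not product specifications.

li_ion_wh_per_liter = 350                                 # assumed typical lithium-ion pack density
solid_state_wh_per_liter = 2.5 * li_ion_wh_per_liter      # the 2.5x figure cited above
pack_kwh = 13.5                                           # hypothetical home storage pack

print(f"lithium-ion volume:  {pack_kwh * 1000 / li_ion_wh_per_liter:.0f} liters")
print(f"solid-state volume:  {pack_kwh * 1000 / solid_state_wh_per_liter:.0f} liters")
# Roughly 39 liters vs. 15 liters for the same stored energy.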

Lithium-carbon dioxide battery

Recently, researchers from the University of Illinois at Chicago made a battery technology discovery that could revolutionize off-grid battery technology.  In late 2019, the team demonstrated the design of the first lithium-carbon dioxide battery.  The technology, spearheaded by Amin Salehi-Khojin, an associate professor of mechanical and industrial engineering, achieved a design that battery scientists had been chasing for many years.  Per Salehi-Khojin, “Our unique combination of materials helps make the first carbon-neutral lithium carbon dioxide battery with much more efficiency and long-lasting cycle life, which will enable it to be used in advanced energy storage systems.”

This innovation marks a major advancement in the development of lithium-carbon dioxide batteries, progressing more efficient and effective off-grid storage systems, and shows promise in offering high-efficiency eco-friendly battery storage mechanisms.

Zinc manganese battery

Dongliang Chao and Professor Shi-Zhang Qiao, who lead another team of researchers at the University of Adelaide’s School of Chemical Engineering and Advanced Materials, revealed their research on a new battery approach.

Based on the chemistry of non-toxic zinc and manganese, the researchers demonstrated a new battery technology designed around much less expensive materials.  In fact, Chao and Qiao’s technology promises to cost a fraction of what a traditional lithium-ion battery costs to produce.  The researchers believe these zinc-manganese batteries could cost around $10 per kWh, compared to roughly $300 per kWh for traditional lithium-ion batteries.
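Using the per-kWh figures quoted above, a quick sketch shows what that difference means for a home-scale pack (the 13.5 kWh size is a hypothetical example, and this compares cell cost only).

pack_kwh = 13.5
zinc_manganese_cost = pack_kwh * 10       # ~$10 per kWh claimed for zinc-manganese
lithium_ion_cost = pack_kwh * 300         # ~$300 per kWh cited for lithium-ion
print(f"zinc-manganese: ${zinc_manganese_cost:,.0f}   lithium-ion: ${lithium_ion_cost:,.0f}")
# -> zinc-manganese: $135   lithium-ion: $4,050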

What does this point to? With new innovative battery technologies such as Chao and Qiao’s zinc manganese battery, consumers will begin to see off-grid battery storage come down in price.

Moving forward

Between the innovations in solid-state batteries over lithium-ion, the advancement of lithium-carbon dioxide batteries, and the progress in zinc-manganese batteries, it’s plausible to assume that the commercial viability of off-grid battery storage is going through a massive technological transformation.  Given that battery technologies have already fundamentally changed the way consumers use off-grid batteries over just the last few years, we’re almost sure to see a major increase in the adoption of off-grid battery storage in consumers’ homes.

Author:  Dalton Hurst

Smart City Tech Is Being Built Into Planned Communities

Planned development communities like New Haven in Ontario, Calif., are highlighting urban technology applications and features as signature amenities as consumer expectations reach well beyond standard pools and parks.

A gita robot delivery cart follows a pedestrian in a planned development community known as New Haven in Ontario, Calif.  Submitted Photo: Brookfield Residential

 

Robot carts and drone deliveries are just some of the baubles planned development communities are dangling as the sort of high-tech amenities residents are not only welcoming but expecting.

“Amenities isn’t just what we think of traditionally, in the vein of swimming pools, parks and playgrounds. It also includes technology,” said Caitlyn Lai-Valenti, residential senior director of sales and marketing at Brookfield Residential.  “It includes retail, and the walkability component for our residents as well.”

Brookfield is the developer behind New Haven, a master-planned community in Ontario, Calif., boasting hundreds of homes, along with retail and commercial space.  More than 350 homes in the community were sold in 2020 alone.

Some of the smart city technologies being made available to residents include drone delivery by DroneUp, ferrying goods from the New Haven Marketplace — a new retail area — to residents’ homes.  New Haven will also feature “robot carts” by gita, self-operating enclosed carts about the size of a wheelbarrow that can follow pedestrians while carrying groceries or other items.  Residents can also hop on a three-wheeled electric scooter by Clevr Scooters.

New Haven, which is part of the larger Ontario Ranch, was developed as a “gigabit community,” offering super high-speed broadband to support any number of smart city applications as well as the increasing work-from-anywhere trends following the COVID-19 pandemic.

The move to build in high-speed communications infrastructure is similar to other developments like National Landing, another planned community to be developed in the Washington, D.C., metro region.  National Landing is being developed in partnership with AT&T with 5G to support next-gen smart city technologies.

“To achieve the experiences of tomorrow, a strong, consistent, robust and secure network must be in place so that innovators know how their applications can interact today and how they can expand over time,” said Shiraz Hasan, vice president for AT&T Partner Exchange and Ecosystem Innovation.

National Landing is viewed “as a canvas for smart city innovation,” Hasan added.  “We believe a network like what we plan to deploy will improve experiences in everything we do with commercial business, government, retail, transportation and so on.”

“The opportunities could become endless in terms of expanding the experiences,” said Hasan.

Other planned development communities like Lake Nona in Orlando, Fla., are also partnering with urban technology companies to test and deploy systems to improve transportation and other aspects of living in the communities.  In addition to exploring technology related to traffic management, the Orlando Utilities Commission, Tavistock Lake Nona, and Hitachi have jointly applied for a U.S. Department of Energy grant to explore energy load balancing at the building level.

“So it’s not taking each of the individual energy sources and saying, how can you manage the load?  How can you balance the load between solar and photovoltaic and wind, and traditional? Yes that’s important,” said Dean Bushey, vice president for global social innovation business at Hitachi, in an interview with Government Technology in early June.

Homes in the New Haven community in California are all built with myTime and myCommand smart home services, which interface with smart home platforms from Amazon, Google, or Apple.  The community also features ENE HUB (pronounced “any hub”) units, multi-functional streetlights equipped with USB charging ports, environmental sensors, Wi-Fi, wayfinding, and more.

“So it really kind of provides a lot of different uses in the space,” said Lai-Valenti.

Brookfield includes an “innovation hub,” which is an internal team dedicated to researching and testing the various smart city technologies launched in the communities.

“We have a new technology group that’s always looking at all these different pieces,” said Lai-Valenti.

Developers behind the National Landing project in Washington see the community as a form of “living lab” where urban technologies can be tested and deployed and is “part of our evolving strategy to truly enable the use cases of tomorrow,” said Hasan.

 

Published in Government Technology

Author:  Skip Descant
