New AI tool generates realistic satellite images of future floods

Visualizing the potential impact of a hurricane on people's homes before it hits can help residents prepare and decide whether to evacuate.

MIT scientists have developed a method that generates satellite images from the future to show what a region would look like after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird's-eye-view images of a region, showing where flooding is likely to occur given the strength of an approaching storm.

As a test case, the team applied the method to Houston, generating satellite images that showed what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit. They also compared them with AI-generated images produced without the physics-based flood model.

The team's physics-based method produced more realistic and accurate satellite images of future flooding. The AI-only method, by contrast, generated images of floods in places where flooding is not physically possible.

The team's method is a proof of concept, intended to demonstrate a case in which generative AI models, combined with a physics-based model, can produce realistic, trustworthy content. To apply the method to other regions and depict flooding from future storms, it would need to be trained on many more satellite images to learn what flooding would look like in those regions.

“The idea is: One day, we could use this before a hurricane, where it gives the public an additional layer of visualization,” says Björn Lütjens, a postdoctoral fellow in MIT's Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a graduate student in MIT's Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

To demonstrate the potential of the new method, which they have named the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

The researchers report their results in a journal paper published today. MIT co-authors of the study include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, AeroAstro professor and director of the MIT Media Lab; along with collaborators from several institutions.

Generative adversarial images

The new study is an extension of the team's efforts to use generative AI tools to visualize future climate scenarios.

“Providing a hyperlocal perspective on climate seems to be the most effective way to communicate our scientific findings,” says Newman, the study's senior author. “People relate to their own zip code, their local environment, where their family and friends live. Providing local climate simulations makes them intuitive, personal, and relatable.”

For this study, the authors used a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first, “generator” network is trained on pairs of real data, such as satellite images taken before and after a hurricane. The second, “discriminator” network is then trained to distinguish between the real satellite images and the images synthesized by the first network.

Each network automatically improves its performance based on feedback from the other. The idea is that such an adversarial back-and-forth should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that should not be there.
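For readers unfamiliar with how such image-to-image GANs are trained, the following is a minimal, pix2pix-style sketch of a single training step in PyTorch. It is purely illustrative and not the authors' code; `generator`, `discriminator`, the optimizers, and the image tensors are assumed to be defined elsewhere.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, pre_img, post_img):
    # Discriminator step: real (pre, post) pairs should score high, generated pairs low.
    with torch.no_grad():
        fake_post = generator(pre_img)
    d_real = discriminator(pre_img, post_img)
    d_fake = discriminator(pre_img, fake_post)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator while staying close to the real post-storm image.
    fake_post = generator(pre_img)
    g_score = discriminator(pre_img, fake_post)
    g_loss = (F.binary_cross_entropy_with_logits(g_score, torch.ones_like(g_score))
              + 100.0 * F.l1_loss(fake_post, post_img))  # L1 term anchors the output to the real image
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

The adversarial term is what pushes the generated images toward photorealism; it is also what leaves room for hallucinations, since the discriminator only judges whether an image looks plausible, not whether it is physically consistent.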

“Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools could be trusted to inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so critical?”

Flood hallucinations

In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with producing satellite images of future floods that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm's way.

Typically, policymakers can get an idea of where flooding might occur from visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that predicts how the wind might push a nearby body of water onshore. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.
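As a rough structural sketch of that pipeline, the snippet below simply chains the stages together. Every function name here is a hypothetical placeholder for illustration, not a real flood-modeling API.

```python
def flood_map_for_storm(storm_params, terrain, coastline, infrastructure):
    track = hurricane_track_model(storm_params)               # expected path of the storm
    winds = wind_model(track, terrain)                        # pattern and strength of winds over the region
    surge = storm_surge_model(winds, coastline)               # how wind pushes nearby water onshore
    depths = hydraulic_model(surge, terrain, infrastructure)  # flood elevations given local infrastructure
    return depths  # rendered downstream as a color-coded flood map
```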

“The question is: Can the visualization of satellite imagery add another layer that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator with producing new flood images of the same regions, they found that the images resembled typical satellite imagery. On closer inspection, however, they found hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that accounts for real, physical parameters and phenomena, such as an approaching hurricane's trajectory, storm surge, and flooding patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict, pixel by pixel, the same flood extent predicted by the flood model.
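Conceptually, the combination can be pictured as conditioning the generator on the flood extent that comes out of the physics pipeline, and then checking the generated image against that extent. The sketch below uses hypothetical names and is only meant to convey the idea, not the authors' implementation.

```python
import torch

def generate_and_check(generator, pre_storm_img, physics_flood_mask, water_segmenter):
    # physics_flood_mask: per-pixel flood extent from the hydraulic model, aligned to pre_storm_img.
    # The generator is conditioned on both inputs, so it only "paints" water where physics allows it.
    post_img = generator(pre_storm_img, physics_flood_mask)
    # water_segmenter stands in for any water-detection step (e.g. an index threshold or a small network).
    predicted_water = water_segmenter(post_img) > 0.5
    agreement = (predicted_water == physics_flood_mask.bool()).float().mean()
    return post_img, agreement.item()
```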

“We show a tangible way to combine machine learning with physics for a risk-sensitive use case, which requires us to analyze the complexity of Earth's systems and project future actions and possible scenarios to keep people out of harm's way,” says Newman. “We can't wait to get our generative AI tools into the hands of decision-makers at the local community level, where they could make a significant difference and perhaps save lives.”

The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
