Irish philosopher George Berkeley, best known for his theory of immaterialism, once mused: “If a tree falls in a forest and no one is around to hear it, does it make a sound?”
What about AI-generated trees? They probably wouldn’t make a sound, but they will still be crucial for applications such as adapting urban flora to climate change. Enter the new “Tree-D Fusion” system, developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University, which merges AI and tree growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project has produced the first large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.
“We are combining years of forestry science with modern AI capabilities,” says Sara Beery, MIT assistant professor of electrical engineering and computer science (EECS), principal investigator at MIT CSAIL, and co-author of a new paper about Tree-D Fusion. “This allows us to not only identify trees in cities, but also to predict how they will grow and impact their surroundings over time. We aren’t discarding the past 30 years of work on how to create these synthetic 3D models. Instead, we’re using AI to make this existing knowledge more useful across a broader set of individual trees in cities across North America, and eventually around the globe.”
Tree-D Fusion builds on previous efforts to monitor urban forests using Google Street View data, but branches out further by generating complete 3D models from single images. While earlier attempts at tree modeling were limited to specific neighborhoods or struggled with accuracy at scale, Tree-D Fusion can create detailed models that typically include hidden features, such as the backs of trees that aren’t visible in street-level photos.
The technology’s practical applications extend far beyond mere observation. City planners could one day use Tree-D Fusion to peer into the future, anticipating where growing branches might tangle with power lines, or identifying neighborhoods where the strategic placement of trees could maximize cooling effects and air quality improvements. These predictive capabilities, the team says, could transform urban forest management from reactive maintenance to proactive planning.
A tree grows in Brooklyn (and many other places)
The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the tree’s genus. With this combination, the model was able to predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
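The two-stage idea above can be sketched in code. What follows is a hypothetical illustration, not the authors’ implementation: the `Envelope` class stands in for the rough shape a learned model might predict from a single street photo, and `grow` stands in for the procedural branching model (all names and parameter values here are our own).

```python
import math
import random
from dataclasses import dataclass


@dataclass
class Envelope:
    """Stand-in for the rough 3D shell a deep network might predict
    from one street-level photo of a tree."""
    height: float        # estimated total tree height (m)
    crown_radius: float  # estimated crown radius (m)
    trunk_height: float  # height at which the crown begins (m)

    def contains(self, x: float, y: float, z: float) -> bool:
        # Approximate the crown as a sphere whose top touches the tree's top.
        cz = self.height - self.crown_radius
        return x * x + y * y + (z - cz) ** 2 <= self.crown_radius ** 2


def grow(env: Envelope, steps: int = 4, spread: float = 0.4, seed: int = 0):
    """Procedurally grow branch segments inside the learned envelope.

    `spread` plays the role of a genus-dependent branching parameter;
    candidate branches that would leave the envelope are pruned.
    """
    rng = random.Random(seed)
    segments = []  # list of (start_point, end_point) branch segments
    tips = [((0.0, 0.0, env.trunk_height), (0.0, 0.0, 1.0))]  # (position, direction)
    for _ in range(steps):
        new_tips = []
        for pos, dirn in tips:
            for _child in range(2):  # simple binary branching
                d = (dirn[0] + rng.uniform(-spread, spread),
                     dirn[1] + rng.uniform(-spread, spread),
                     dirn[2])
                norm = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
                d = (d[0] / norm, d[1] / norm, d[2] / norm)
                tip = (pos[0] + d[0], pos[1] + d[1], pos[2] + d[2])
                if env.contains(*tip):  # keep only branches inside the shell
                    segments.append((pos, tip))
                    new_tips.append((tip, d))
        tips = new_tips
    return segments
```

The division of labor captures the essence of the hybrid: the learned component supplies the tree-specific overall shape, while the procedural component fills it with botanically plausible branching.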
As cities worldwide grapple with rising temperatures, this research offers a new window into the future of urban forests. In a collaboration with MIT’s Senseable City Lab, the Purdue University team and Google are launching a worldwide study that reimagines trees as living climate shields. Their digital modeling system captures the intricate dance of shadow patterns across the seasons, revealing how strategic urban forestry could transform sweltering city blocks into more naturally cooled neighborhoods.
“Every time a street mapping vehicle drives through a city now, we’re not just taking snapshots – we’re watching these urban forests evolve in real time,” says Beery. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, giving cities a powerful lens to observe how environmental pressures affect the health and growth patterns of the trees in their urban landscape.”
AI-based tree modeling has emerged as an ally in the pursuit of environmental justice: by mapping urban tree canopies in unprecedented detail, a sister project of the Google AI for Nature team has helped reveal disparities in access to green spaces across different socioeconomic areas. “We’re not just studying urban forests – we’re trying to cultivate more equity,” Beery says. The team is now working closely with ecologists and tree health experts to refine these models, helping ensure that the benefits of expanding cities’ green canopies extend equally to all residents.
It’s a piece of cake
While Tree-D Fusion represents significant “growth” in the field, trees pose a particular challenge for computer vision systems. Unlike the rigid structures of buildings or vehicles, which current 3D modeling techniques handle well, trees are nature’s shapeshifters – swaying in the wind, intertwining branches with their neighbors, and constantly changing shape as they grow. The Tree-D Fusion models are “simulation-ready” in that they can estimate a tree’s future shape depending on environmental conditions.
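To make “simulation-ready” concrete, here is a toy sketch – our own construction, not the published growth model: each tree carries parameters that let its shape be rolled forward in time, with environmental stressors slowing a simple logistic growth curve. The parameter names and values are illustrative assumptions.

```python
def project_crown_radius(r0: float, years: int, r_max: float = 6.0,
                         base_rate: float = 0.3, heat_stress: float = 0.0,
                         water_access: float = 1.0) -> float:
    """Project a crown radius `years` into the future with discrete
    logistic growth steps, scaled by environmental conditions.

    heat_stress in [0, 1] and water_access in (0, 1] are illustrative
    stand-ins for the climate-scenario inputs described in the article.
    """
    rate = base_rate * (1.0 - heat_stress) * water_access
    r = r0
    for _ in range(years):
        r += rate * r * (1.0 - r / r_max)  # growth slows near r_max
    return r


# Compare a baseline scenario against a hotter, drier one.
baseline = project_crown_radius(2.0, years=20)
stressed = project_crown_radius(2.0, years=20, heat_stress=0.5, water_access=0.7)
```

Coupling scenario parameters like these to the procedural branching model is, in spirit, what lets the system predict a tree’s future shape rather than just reconstruct its present one.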
“The exciting thing about this work is how it pushes us to rethink fundamental assumptions in computer vision,” says Beery. “While 3D scene understanding techniques such as photogrammetry or NeRF (Neural Radiance Fields) excel at capturing static objects, trees demand new approaches that can account for their dynamic nature, where even a gentle breeze can dramatically change their structure from moment to moment.”
The team’s approach of creating rough structural envelopes that approximate each tree’s shape has proven remarkably effective, but certain problems remain unresolved. Perhaps the most vexing is the “tangled tree problem”: when neighboring trees grow into one another, their intertwined branches create a puzzle that no current AI system can fully untangle.
The scientists view their dataset as a springboard for future innovations in computer vision, and they are already exploring applications beyond street view imagery, looking to extend their approach to platforms such as iNaturalist and wildlife camera traps.
“This is just the beginning for Tree-D Fusion,” says Jae Joong Lee, a Purdue University graduate student who developed, implemented, and deployed the Tree-D Fusion algorithm. “Together with my colleagues, I envision expanding the platform’s capabilities to a planetary scale. Our goal is to use AI-driven insights in service of natural ecosystems – supporting biodiversity, promoting global sustainability, and ultimately benefiting the health of our entire planet.”
Beery and Lee’s co-authors are Jonathan Huang, head of AI at Scaled Foundations (formerly of Google), and four others from Purdue University: graduate students Jae Joong Lee and Bosheng Li, professor and dean’s chair of remote sensing Songlin Fei, assistant professor Raymond Yeh, and professor and associate head of computer science Bedrich Benes. Their work builds on efforts supported by the U.S. Department of Agriculture’s (USDA) Natural Resources Conservation Service and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings this month at the European Conference on Computer Vision.