November 5, 2025

Mapping, modeling, and understanding nature with AI

By the Ecosystem modeling team


AI models can help map species, protect forests and listen to birds around the world

The planet’s biosphere is the sum of its plants, animals, fungi, and other organisms. Every day, we depend on it for our survival – the air we breathe, the water we drink, and the food we eat are all produced by Earth’s ecosystems.

As increasing demand for land and resources puts pressure on these ecosystems and their species, artificial intelligence (AI) can be a transformative tool to help protect them. It can make it easier for governments, companies and conservation groups to collect field data, integrate that data into new insights, and translate those insights into action. And it can inform better plans and monitor the success of those plans when put into practice.

Today we're announcing new biosphere research predicting the risk of deforestation, a new project to map the ranges of Earth’s species, and the latest updates on our bioacoustics model Perch.

Predicting deforestation

Forests stand as one of the biosphere’s most critical pillars — storing carbon, regulating rainfall, mitigating floods, and harboring the majority of the planet’s terrestrial biodiversity. Unfortunately, despite their importance, forests continue to be lost at an alarming rate.

For more than 20 years it has been possible to track deforestation from space, using satellite-based remote sensing. Together with the World Resources Institute, we recently went one level deeper, developing a model of the drivers of forest loss — from agriculture and logging to mining and fire — at an unprecedented 1 km² resolution, for the years 2000-2024.

Today, we’re releasing a benchmark dataset for predicting deforestation risk. The accompanying model uses purely satellite inputs, avoiding the need for location-specific input layers such as roads, and an efficient architecture built around vision transformers. This approach enables accurate, high-resolution predictions of deforestation risk, down to a scale of 30 meters, across large regions.
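The exact architecture of the released model is not spelled out here, but the general recipe described above — split a satellite tile into patches, embed them, mix information with self-attention, and score each patch for risk — can be sketched as a toy forward pass. Everything below (dimensions, weights, the `vit_risk_scores` function) is illustrative, not the actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(image, patch=4):
    # (H, W, C) image -> (num_patches, patch*patch*C) flat patch tokens
    H, W, C = image.shape
    rows = [image[i:i + patch, j:j + patch].reshape(-1)
            for i in range(0, H, patch)
            for j in range(0, W, patch)]
    return np.stack(rows)

def vit_risk_scores(image, d=16, patch=4):
    tokens = patchify(image, patch)            # (N, patch*patch*C)
    W_embed = rng.normal(0, 0.1, (tokens.shape[1], d))
    x = tokens @ W_embed                       # (N, d) patch embeddings
    # One single-head self-attention layer with a residual connection.
    Wq, Wk, Wv = (rng.normal(0, 0.1, (d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    x = x + attn @ v
    # Per-patch risk head: sigmoid over a linear projection.
    w_out = rng.normal(0, 0.1, d)
    return 1 / (1 + np.exp(-(x @ w_out)))      # (N,) risk scores in [0, 1]

tile = rng.random((16, 16, 3))                 # toy stand-in for a satellite tile
scores = vit_risk_scores(tile)                 # one risk score per 4x4 patch
```

In the real system each "patch" would correspond to a small ground footprint (the post cites a 30-meter output scale), and the weights would be learned from historical forest-loss labels rather than drawn at random.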


A map showing deforestation risk for a region in Southeast Asia in 2023, with green showing areas already deforested, and red indicating higher risk for deforestation. Underlying map data ©2025 Imagery ©2025 Airbus, CNES / Airbus, Landsat / Copernicus, Maxar Technologies

Modeling the distribution of Earth’s species

To conserve the planet’s threatened species, we have to know where they are. With more than 2 million known species, and millions more to be discovered and named, that’s a monumental task.

To help tackle this problem, Google researchers are developing a new AI-powered approach for producing species range maps at unprecedented scale – with more species, over more of the world, and at higher resolution than ever before. The Graph Neural Network (GNN) model combines open databases of field observations with satellite embeddings from AlphaEarth Foundations and species trait information (such as body mass). This approach allows us to infer a likely underlying geographical distribution for many species at once, and lets scientists refine those inferred distributions with additional local data and expertise.
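The details of the range-mapping GNN are not given here, but the core idea — propagate information between neighbouring map cells, then score each cell against each species — can be sketched with one GCN-style message-passing layer. All names, shapes, and weights below are toy assumptions, not the published model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy graph: 5 map cells in a line. Each cell has a satellite-derived
# feature vector (a stand-in for AlphaEarth embeddings), and each of 3
# species has a trait vector (a stand-in for traits like body mass).
n_cells, n_species, d = 5, 3, 8
cell_feats = rng.normal(size=(n_cells, d))
traits = rng.normal(size=(n_species, d))
adj = np.eye(n_cells, k=1) + np.eye(n_cells, k=-1)  # neighbours share edges

def gnn_layer(h, adj, W_self, W_nbr):
    # Mean-aggregate messages from neighbours, then a ReLU update.
    deg = adj.sum(1, keepdims=True).clip(min=1)
    msgs = (adj @ h) / deg
    return np.maximum(0, h @ W_self + msgs @ W_nbr)

W_self, W_nbr = rng.normal(0, 0.3, (d, d)), rng.normal(0, 0.3, (d, d))
h = gnn_layer(cell_feats, adj, W_self, W_nbr)   # smoothed cell states

# Score every (cell, species) pair: dot the cell state with the species
# trait vector and squash to an occurrence probability.
logits = h @ traits.T
probs = 1 / (1 + np.exp(-logits))               # shape (n_cells, n_species)
```

Jointly scoring all species against shared cell states is what lets a model of this shape map "many species at once"; real training would fit the weights to the field-observation databases mentioned above.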

As part of a pilot with researchers at QCIF and EcoCommons, we’ve used our model to map Australian mammals like the Greater Glider: a nocturnal, fluffy-tailed marsupial that lives in old-growth eucalyptus forests. We are also releasing 23 of these species maps via the UN Biodiversity Lab today.

Using artificial intelligence, Google is shedding new light on where species live, helping scientists and decision-makers better protect the Earth’s wildlife.

Listening through bioacoustics

All efforts to understand and model ecosystems ultimately depend on monitoring in the field. AI can play a critical role here as well, augmenting traditional ecological field monitoring — which is notoriously difficult and costly — with automated identification of habitats and species from monitoring devices.

A compelling example is bioacoustics. Birds, amphibians, insects and other species use sound to communicate, making it an excellent modality for identifying resident species and understanding the health of an ecosystem. Reliable and affordable bioacoustic monitors are readily available. However, these devices produce vast audio datasets, full of unknown and overlapping sounds, which are too large to be reviewed manually, but also difficult to analyze automatically.

To help scientists and conservationists untangle this complexity, we recently released Perch 2.0, an update to our animal vocalization classifier. The new model is not only state-of-the-art for bird identification, but is also available as a foundation model, allowing field ecologists to quickly adapt it to identify new species and habitats, anywhere on Earth.
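One common way to adapt a frozen audio foundation model of this kind is to embed a handful of labelled clips and fit a lightweight classifier on top, leaving the model's weights untouched. The sketch below fakes the embeddings and uses a nearest-centroid classifier; the species names, dimensions, and `fake_embed` helper are hypothetical, not Perch's actual API:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are embeddings from a frozen audio foundation model:
# each audio window maps to a fixed-length vector (dims here are toy).
def fake_embed(n, d=32):
    centre = rng.normal(size=d)          # each species clusters around a centre
    return centre + 0.1 * rng.normal(size=(n, d))

xa, xb = fake_embed(20), fake_embed(20)  # 20 clips per species

# Few-shot adaptation: fit a nearest-centroid classifier on five labelled
# clips per species; the foundation model itself is never retrained.
centroids = {"species_a": xa[:5].mean(0), "species_b": xb[:5].mean(0)}

def classify(x):
    return min(centroids, key=lambda s: np.linalg.norm(x - centroids[s]))

preds = [classify(x) for x in xa[5:]]
accuracy = sum(p == "species_a" for p in preds) / len(preds)
```

Because only the tiny classifier is fit, an ecologist can stand up a detector for a new call type from a few labelled examples, which is the practical appeal of releasing the model as a foundation model.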

We are especially proud of our work with the University of Hawaiʻi, where Perch is guiding protective measures for endangered honeycreepers, and is also being used to identify juvenile calls to understand population health.

Google's Perch model helps scientists leverage AI to identify sounds in nature, like the calls of endangered Hawaiian birds, enabling timely conservation action.

The future of AI for nature

The goal of this work is to make it easier for decision-makers at all levels to take action to protect the planet. But better data only leads to better decisions if that data is thorough: if it really captures what’s happening in a given ecosystem at all levels.

That’s why we’re working to integrate these and other models together, combining data from more modalities, like satellite data, images, bioacoustics, and documents. We also aim to join all this up with models of human activity, such as land-use change and agricultural practices, as well as models of agricultural yields, flood prevention, and other human-relevant consequences.

By giving policymakers a comprehensive understanding of threats to the biosphere, we can help them take action to protect future generations of plants, animals, and people. If we can model the environment, perhaps we can help it thrive.

You can learn more about the growing story of AI and sustainability by checking out

Acknowledgements

This research was co-developed by Google DeepMind and Google Research.

Google DeepMind: Andrea Burns, Anton Raichuk, Arianna Manzini, Bart van Merrienboer, Burcu Karagol Ayan, Dominic Masters, Drew Purves, Jenny Hamer, Julia Haas, Keith Anderson, Matt Overlan, Maxim Neumann, Melanie Rey, Mustafa Chasmai, Petar Veličković, Ravi Rajakumar, Tom Denton, Vincent Dumoulin

Google Research and Google Partners: Ben Williams, Charlotte Stanton, Dan Morris, Elise Kleeman, Lauren Harrell, Michelangelo Conserva

We’d also like to thank our partners at UNEP-WCMC and QCIF, additional collaborators Aparna Warrier, Artlind Kortoci, Burooj Ghani, Christine Kaeser-Chen, Grace Young, Kira Prabhu, Jamie McPike, Jane Labanowski, Jerome Massot, Kuan Lu, Mélisande Teng, Michal Kazmierski, Millie Chapman, Rishabh Baghel, Scott Riddle, Shelagh McLellan, Simon Guiroy, Stefan Kahl, Tim Coleman and Youngin Shin, as well as Peter Battaglia and Kat Chou for their support.
