AI Model Achieves Sub-Meter Precision in Forest Canopy Mapping Using Standard Satellite Imagery

December 26th, 2025 8:00 AM
By: Newsworthy Staff

Researchers have developed an artificial intelligence model that uses standard RGB satellite imagery to create high-resolution canopy height maps with near-lidar accuracy, offering a cost-effective solution for monitoring forest growth and carbon sequestration.

Researchers have developed an advanced artificial intelligence model that produces high-resolution canopy height maps using only standard RGB imagery, achieving near-lidar accuracy for precise monitoring of forest biomass and carbon storage over large areas. Monitoring forest canopy structure is essential for understanding global carbon cycles, assessing tree growth, and managing plantation resources, yet traditional lidar systems are limited by high cost and technical complexity, while optical remote sensing often lacks the structural precision required.

A joint research team from Beijing Forestry University, Manchester Metropolitan University, and Tsinghua University has developed a new AI-driven vision model that delivers sub-meter accuracy in estimating tree heights from RGB satellite images. Published in the Journal of Remote Sensing on October 20, 2025, the study introduces a novel framework that combines large vision foundation models with self-supervised learning, addressing the long-standing problem of balancing cost, precision, and scalability in forest monitoring. The research offers a promising tool for managing plantations and tracking carbon sequestration under initiatives such as China's Certified Emission Reduction program.

The researchers created a canopy height estimation network composed of three modules: a feature extractor powered by the DINOv2 large vision foundation model, a self-supervised feature enhancement unit to retain fine spatial details, and a lightweight convolutional height estimator. The model achieved a mean absolute error of only 0.09 meters and an R² of 0.78 when compared with airborne lidar measurements, outperforming traditional CNN and transformer-based methods. It also enabled over 90% accuracy in single-tree detection and strong correlations with measured above-ground biomass.
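As a rough illustration of that three-part design, the sketch below shows how such a network could be assembled in PyTorch: a pre-trained DINOv2 backbone extracting patch features, a small enhancement block, and a lightweight convolutional head regressing per-pixel height. This is not the authors' code; the module names, layer sizes, and enhancement block are assumptions for illustration only.

```python
# Minimal sketch (not the published model) of the three-module design described above.
import torch
import torch.nn as nn

class CanopyHeightNet(nn.Module):
    def __init__(self, patch_size: int = 14):
        super().__init__()
        # 1) Feature extractor: pre-trained DINOv2 ViT-S/14 (weights download via torch.hub)
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        embed_dim = self.backbone.embed_dim  # 384 for ViT-S/14
        self.patch_size = patch_size
        # 2) Feature enhancement: a small convolutional block standing in for the
        #    paper's self-supervised enhancement unit (details are assumed here)
        self.enhance = nn.Sequential(
            nn.Conv2d(embed_dim, 256, 3, padding=1), nn.GELU(),
            nn.Conv2d(256, 256, 3, padding=1), nn.GELU(),
        )
        # 3) Lightweight height estimator: regress one height value (metres) per pixel
        self.head = nn.Sequential(
            nn.Conv2d(256, 64, 3, padding=1), nn.GELU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        b, _, h, w = rgb.shape
        # DINOv2 returns per-patch tokens; reshape them into a 2-D feature map
        tokens = self.backbone.forward_features(rgb)["x_norm_patchtokens"]
        gh, gw = h // self.patch_size, w // self.patch_size
        feats = tokens.transpose(1, 2).reshape(b, -1, gh, gw)
        feats = self.enhance(feats)
        heights = self.head(feats)
        # Upsample the coarse prediction back to the input resolution
        return nn.functional.interpolate(heights, size=(h, w), mode="bilinear", align_corners=False)

model = CanopyHeightNet().eval()
with torch.no_grad():
    # Input size must be a multiple of the 14-pixel patch size
    print(model(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 224, 224])
```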

The model was tested in the Fangshan District of Beijing, an area with fragmented plantations primarily composed of Populus tomentosa, Pinus tabulaeformis, and Ginkgo biloba. Using one-meter-resolution Google Earth imagery and lidar-derived references, the AI model produced canopy height maps closely matching ground truth data. It significantly outperformed global canopy height model products, capturing subtle variations in tree crown structure that existing models often missed. The generated maps supported individual-tree segmentation and plantation-level biomass estimation with R² values exceeding 0.9 for key species.
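The study's headline figures rest on comparing predicted height maps against lidar references. A minimal, assumed version of that scoring step is shown below; it computes the two metrics reported above (mean absolute error and R²) between a predicted raster and a lidar-derived ground-truth raster, using placeholder arrays rather than the study's data.

```python
# Illustrative evaluation sketch (assumed workflow, not the authors' code).
import numpy as np

def mae_and_r2(predicted: np.ndarray, lidar_reference: np.ndarray):
    """Return (mean absolute error in metres, coefficient of determination R²)."""
    pred = predicted.ravel()
    ref = lidar_reference.ravel()
    mae = np.abs(pred - ref).mean()
    ss_res = ((ref - pred) ** 2).sum()        # residual sum of squares
    ss_tot = ((ref - ref.mean()) ** 2).sum()  # total sum of squares
    return mae, 1.0 - ss_res / ss_tot

# Placeholder rasters standing in for a predicted canopy height map and a lidar tile
rng = np.random.default_rng(0)
reference = rng.uniform(2.0, 20.0, size=(512, 512))
prediction = reference + rng.normal(0.0, 0.5, size=reference.shape)
print(mae_and_r2(prediction, reference))
```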

When applied to a geographically distinct forest in Saihanba, the network maintained robust accuracy, confirming its cross-regional adaptability. The ability to reconstruct annual growth trends from archived satellite imagery provides a scalable solution for long-term carbon sink monitoring and precision forestry management. This innovation bridges the gap between expensive lidar surveys and low-resolution optical methods, enabling detailed forest assessment with minimal data requirements. The study is available through the original source at https://spj.science.org/doi/10.34133/remotesensing.0880.

Dr. Xin Zhang, corresponding author at Manchester Metropolitan University, stated that the model demonstrates how large vision foundation models can fundamentally transform forestry monitoring. By combining global image pretraining with local self-supervised enhancement, researchers achieved lidar-level precision using ordinary RGB imagery, drastically reducing costs and expanding access to accurate forest data for carbon accounting and environmental management.

The team employed an end-to-end deep-learning framework combining pre-trained large vision foundation model features with a self-supervised enhancement process. High-resolution Google Earth imagery from 2013 to 2020 was used as input, with UAV-based lidar data serving as reference for training and validation. The model was implemented in PyTorch and trained using the fastai framework on an NVIDIA RTX A6000 GPU. Comparative experiments with conventional networks and global canopy height datasets confirmed superior accuracy and efficiency, validating the model's potential for scalable canopy height mapping and biomass estimation.
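To make that training setup concrete, the sketch below wraps a height-regression network in a fastai Learner with an L1 (mean absolute error) objective, the general pattern the article describes. The dataset here is random placeholder tensors standing in for paired Google Earth RGB tiles and UAV-lidar height rasters, and the tiny stand-in model would be replaced by a full network such as the CanopyHeightNet sketch above; none of this is the authors' published code.

```python
# Hedged sketch of a PyTorch + fastai training loop for canopy height regression.
import torch
from torch.utils.data import TensorDataset
from fastai.vision.all import DataLoaders, Learner, mae

# Placeholder tensors standing in for (RGB tile, lidar-derived height map) pairs
imgs = torch.rand(32, 3, 224, 224)
chm = torch.rand(32, 1, 224, 224) * 20.0            # heights in metres (assumed range)
train_ds = TensorDataset(imgs[:24], chm[:24])
valid_ds = TensorDataset(imgs[24:], chm[24:])
dls = DataLoaders.from_dsets(train_ds, valid_ds, bs=4)

# Tiny stand-in model; substitute the DINOv2-based network for real experiments
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.GELU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)

learn = Learner(dls, model,
                loss_func=torch.nn.L1Loss(),         # L1 = mean absolute error in metres
                metrics=mae)
learn.fit_one_cycle(1, lr_max=1e-4)                  # one epoch shown; real training runs longer
```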

The AI-based mapping framework offers a powerful and affordable approach for tracking forest growth, optimizing plantation management, and verifying carbon credits. Its adaptability across ecosystems makes it suitable for global afforestation and reforestation monitoring programs. Future research will extend this method to natural and mixed forests, integrate automated species classification, and support real-time carbon monitoring platforms. As the world advances toward net-zero goals, such intelligent, scalable mapping tools could play a central role in achieving sustainable forestry and climate-change mitigation.

Source Statement

This news article relied primarily on a press release distributed by 24-7 Press Release. You can read the source press release here, along with the blockchain registration record for the source press release.