Flexible mapping technique can help search-and-rescue robots navigate unpredictable environments

By Simon Osuji
November 5, 2025
Artificial Intelligence


The artificial intelligence-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map of a scene such as an office cubicle, while estimating the robot’s position in real time. Credit: Courtesy of the researchers

A robot searching for workers trapped in a partially collapsed mine shaft must rapidly generate a map of the scene and identify its location within that scene as it navigates the treacherous terrain.


Researchers have recently started building powerful machine-learning models to perform this complex task using only images from the robot’s onboard cameras, but even the best models can only process a few images at a time. In a real-world disaster where every second counts, a search-and-rescue robot would need to quickly traverse large areas and process thousands of images to complete its mission.

To overcome this problem, MIT researchers drew on ideas from both recent artificial intelligence vision models and classical computer vision to develop a new system that can process an arbitrary number of images. Their system accurately generates 3D maps of complicated scenes like a crowded office corridor in a matter of seconds.

The AI-driven system incrementally creates and aligns smaller submaps of the scene, which it stitches together to reconstruct a full 3D map while estimating the robot’s position in real-time.

Unlike many other approaches, their technique does not require calibrated cameras or an expert to tune a complex system implementation. The simplicity of their approach, coupled with the speed and quality of the 3D reconstructions, would make it easier to scale up for real-world applications.

Beyond helping search-and-rescue robots navigate, this method could be used to make extended reality applications for wearable devices like VR headsets or enable industrial robots to quickly find and move goods inside a warehouse.

“For robots to accomplish increasingly complex tasks, they need much more complex map representations of the world around them. But at the same time, we don’t want to make it harder to implement these maps in practice. We’ve shown that it is possible to generate an accurate 3D reconstruction in a matter of seconds with a tool that works out of the box,” says Dominic Maggio, an MIT graduate student and lead author of a paper on this method.

Maggio is joined on the paper by postdoc Hyungtae Lim and senior author Luca Carlone, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro), principal investigator in the Laboratory for Information and Decision Systems (LIDS), and director of the MIT SPARK Laboratory. The research will be presented at the Conference on Neural Information Processing Systems.

The findings are published on the arXiv preprint server.

Mapping out a solution

For years, researchers have been grappling with an essential element of robotic navigation called simultaneous localization and mapping (SLAM). In SLAM, a robot recreates a map of its environment while orienting itself within the space.
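To make that definition concrete, here is a toy two-dimensional sketch of the SLAM loop in Python: the robot folds each motion estimate into its own pose (localization) while expressing newly observed landmarks in a shared world frame (mapping). The motions and landmarks are invented for illustration; this is not the researchers' system, which works from raw camera images.

```python
# Toy 2D illustration of the SLAM idea: track the robot's pose while
# folding each observation into a shared map. Schematic only; the data
# below is hypothetical, with no noise handling or loop closure.
import numpy as np

def se2(theta, x, y):
    """3x3 homogeneous transform for a 2D pose (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

pose = np.eye(3)   # robot pose in the world frame
world_map = []     # accumulated landmark positions in the world frame

# Hypothetical per-step inputs: relative motion plus landmarks seen in the robot frame.
steps = [
    (se2(0.0, 1.0, 0.0),       [np.array([2.0, 1.0])]),
    (se2(np.pi / 2, 1.0, 0.0), [np.array([1.5, -0.5])]),
]

for relative_motion, local_landmarks in steps:
    pose = pose @ relative_motion                 # localization: update own pose
    for p in local_landmarks:                     # mapping: express landmarks in the world frame
        world_map.append((pose @ np.array([p[0], p[1], 1.0]))[:2])

print("final pose:\n", pose)
print("map points:", [m.round(2) for m in world_map])
```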

Traditional optimization methods for this task tend to fail in challenging scenes, or they require the robot’s onboard cameras to be calibrated beforehand. To avoid these pitfalls, researchers train machine-learning models to learn this task from data.

While they are simpler to implement, even the best models can only process about 60 camera images at a time, making them infeasible for applications where a robot needs to move quickly through a varied environment while processing thousands of images.

Reconstruction and pose estimates from VGGT-SLAM on the office scene from 7-Scenes, showing 8 submaps, and on a custom scene showing a 55-meter loop around an office corridor, with 22 submaps. Both use w = 16. Different frame colors indicate the submap associated with each frame. Credit: arXiv (2025). DOI: 10.48550/arxiv.2505.12549

To solve this problem, the MIT researchers designed a system that generates smaller submaps of the scene instead of the entire map. Their method “glues” these submaps together into one overall 3D reconstruction. The model is still only processing a few images at a time, but the system can recreate larger scenes much faster by stitching smaller submaps together.
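In outline, the stitching loop looks something like the sketch below. The reconstruction model and the alignment step are placeholder functions (the article does not include the authors' code), but the structure follows the description above: take a window of images, build a local submap, estimate the transform that ties it to the previous submap, and accumulate everything in one global frame.

```python
# Minimal sketch of the submap-stitching loop. build_submap() and align()
# are hypothetical stand-ins, not the authors' model or solver.
import numpy as np

WINDOW = 16   # images per submap (mirrors the w = 16 used in the paper's figures)
OVERLAP = 2   # frames shared between consecutive submaps (assumed value)

def build_submap(images):
    """Placeholder for the learned model that turns a window of images into
    a local 3D point cloud (here: random points, purely for illustration)."""
    rng = np.random.default_rng(len(images))
    return rng.normal(size=(100, 3))

def align(prev_submap, new_submap):
    """Placeholder for estimating the 4x4 transform that maps the new submap
    into the previous submap's frame via their overlapping content (identity here)."""
    return np.eye(4)

def apply(T, points):
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    mapped = homogeneous @ T.T
    return mapped[:, :3] / mapped[:, 3:4]

images = [f"frame_{i:04d}.png" for i in range(1000)]   # hypothetical image stream
global_map, T_world = [], np.eye(4)
prev = None

for start in range(0, len(images), WINDOW - OVERLAP):
    window = images[start:start + WINDOW]
    submap = build_submap(window)
    if prev is not None:
        T_world = T_world @ align(prev, submap)   # chain submap-to-submap transforms
    global_map.append(apply(T_world, submap))     # place the submap in the global frame
    prev = submap

full_cloud = np.vstack(global_map)
print(full_cloud.shape)
```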

“This seemed like a very simple solution, but when I first tried it, I was surprised that it didn’t work that well,” Maggio says.

Searching for an explanation, he dug into computer vision research papers from the 1980s and 1990s. Through this analysis, Maggio realized that errors in the way the machine-learning models process images made aligning submaps a more complex problem.

Traditional methods align submaps by applying rotations and translations until they line up. But these new models can introduce some ambiguity into the submaps, which makes them harder to align. For instance, a 3D submap of one side of a room might have walls that are slightly bent or stretched. Simply rotating and translating these deformed submaps to align them doesn’t work.
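That rotate-and-translate alignment has a classical closed-form solution when point correspondences between two submaps are known: the Kabsch/Procrustes registration sketched below. The routine is generic textbook material rather than anything from the paper, and it is exactly the step that fails once the submaps themselves are bent or stretched.

```python
# Closed-form rigid registration (rotation + translation) between matched points.
# Generic textbook routine, not the paper's method.
import numpy as np

def rigid_align(source, target):
    """Least-squares R, t with target ≈ R @ source + t (points as rows)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation from matched points.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -1.0, 2.0]))
```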

“We need to make sure all the submaps are deformed in a consistent way so we can align them well with each other,” Carlone explains.

A more flexible approach

Borrowing ideas from classical computer vision, the researchers developed a more flexible, mathematical technique that can represent all the deformations in these submaps. By applying mathematical transformations to each submap, this more flexible method can align them in a way that addresses the ambiguity.
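According to the paper's title, the alignment is optimized on the SL(4) manifold, i.e., with 4x4 projective transforms rather than rigid ones. The sketch below fits such a transform to matched 3D points with a standard direct linear transform; it shows how the extra degrees of freedom can absorb a consistent warp, but it is a generic construction on synthetic correspondences, not the authors' solver.

```python
# Fit a 4x4 projective transform (a 3D homography) to matched points via a
# direct linear transform. Illustrative only; synthetic correspondences.
import numpy as np

def fit_projective_transform(source, target):
    """Least-squares 4x4 H with target_h ~ H @ source_h, scaled so |det(H)| = 1."""
    src_h = np.hstack([source, np.ones((len(source), 1))])   # homogeneous coordinates
    tgt_h = np.hstack([target, np.ones((len(target), 1))])
    rows = []
    for x, y in zip(src_h, tgt_h):
        # Proportionality of y and Hx means every 2x2 minor vanishes:
        # y_k * (H x)_j - y_j * (H x)_k = 0, which is linear in the entries of H.
        for j in range(4):
            for k in range(j + 1, 4):
                row = np.zeros((4, 4))
                row[j] = y[k] * x
                row[k] = -y[j] * x
                rows.append(row.ravel())
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(4, 4)                     # null-space direction = vec(H)
    return H / abs(np.linalg.det(H)) ** 0.25     # fix the scale ambiguity

# Toy check: warp points with a known projective transform and recover it.
rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 3))
H_true = np.eye(4)
H_true[:3, 3] = [0.2, -0.1, 0.4]        # translation
H_true[3, :3] = [0.01, -0.02, 0.015]    # projective part: the kind of warp a rigid fit cannot absorb
pts_h = np.hstack([pts, np.ones((len(pts), 1))]) @ H_true.T
warped = pts_h[:, :3] / pts_h[:, 3:4]
H_est = fit_projective_transform(pts, warped)
H_true_n = H_true / abs(np.linalg.det(H_true)) ** 0.25
print(np.allclose(H_est, H_true_n) or np.allclose(H_est, -H_true_n))
```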

Based on input images, the system outputs a 3D reconstruction of the scene and estimates of the camera locations, which the robot would use to localize itself in the space.

“Once Dominic had the intuition to bridge these two worlds—learning-based approaches and traditional optimization methods—the implementation was fairly straightforward,” Carlone says. “Coming up with something this effective and simple has potential for a lot of applications.”

Their system ran faster and produced less reconstruction error than other methods, without requiring special cameras or additional tools to process data. The researchers generated close-to-real-time 3D reconstructions of complex scenes, like the inside of the MIT Chapel, using only short videos captured on a cell phone.

The average error in these 3D reconstructions was less than 5 centimeters.
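The article does not state which error metric that figure refers to. One common way to score a reconstruction is the mean distance from each reconstructed point to its nearest ground-truth point (one direction of the Chamfer distance), sketched here on synthetic data.

```python
# Generic evaluation sketch, not the paper's exact protocol: mean
# nearest-neighbor distance from a reconstructed cloud to ground truth.
import numpy as np
from scipy.spatial import cKDTree

def mean_reconstruction_error(reconstructed, ground_truth):
    """Mean nearest-neighbor distance, in the clouds' units (meters here)."""
    distances, _ = cKDTree(ground_truth).query(reconstructed)
    return distances.mean()

# Hypothetical clouds: ground truth plus roughly 2 cm of noise.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 5, size=(5000, 3))                  # 5 m synthetic scene
recon = gt + rng.normal(scale=0.02, size=gt.shape)
print(f"average error: {mean_reconstruction_error(recon, gt) * 100:.1f} cm")
```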

In the future, the researchers want to make their method more reliable for especially complicated scenes and work toward implementing it on real robots in challenging settings.

“Knowing about traditional geometry pays off. If you understand deeply what is going on in the model, you can get much better results and make things much more scalable,” Carlone says.

More information:
Dominic Maggio et al, VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold, arXiv (2025). DOI: 10.48550/arxiv.2505.12549

Journal information:
arXiv

Provided by
Massachusetts Institute of Technology

This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
Flexible mapping technique can help search-and-rescue robots navigate unpredictable environments (2025, November 5)
retrieved 5 November 2025
from https://techxplore.com/news/2025-11-flexible-technique-robots-unpredictable-environments.html

