When Vasileios Tzoumas, a research scientist at the Massachusetts Institute of Technology (MIT), visits a new city, he likes to explore by going for a run. And sometimes he gets lost. A few years ago, on a long run while in Osaka for a conference, the inevitable happened. But then Tzoumas spotted a 7-Eleven he remembered passing soon after leaving his hotel. This recognition allowed him to mentally “close the loop,” to connect the loose end of his trajectory to someplace he knew and was sure about, thus solidifying his mental map and allowing him to make his way back to the hotel.


What Tzoumas did on that run has a direct analog in robotics: *simultaneous localization and mapping (SLAM)*. SLAM is not new. It is used for robotic vacuum cleaners, self-driving cars, search-and-rescue aerial drones, and robots in factories, warehouses, and mines. As autonomous devices and vehicles navigate new spaces, from a living room to the sky, they construct a map as they travel. They must also figure out where they are on that map using sensors such as cameras, GPS, and lidar.

As SLAM finds more applications, it is more important than ever to ensure that SLAM algorithms produce correct results in challenging real-world conditions. SLAM algorithms often work well with perfect sensors or in controlled lab conditions, but they get lost easily when implemented with imperfect sensors in the real world. Unsurprisingly, industrial customers frequently worry about whether they can trust those algorithms.

Researchers at MIT have developed several robust SLAM algorithms, as well as methods to mathematically prove how much we can trust them. The lab of Luca Carlone, the Leonardo Career Development Assistant Professor at MIT, published a paper about their graduated non-convexity (GNC) algorithm, which reduces the random errors and uncertainties in SLAM results. More importantly, the algorithm produces correct results where existing methods “get lost.” The paper, by Carlone, Tzoumas, and Carlone’s students Heng Yang and Pasquale Antonante, received the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA). This GNC algorithm will help machines traverse land, water, sky, and space—and come back to tell the tale.

Robot perception relies on sensors that often provide noisy or misleading inputs. MIT’s GNC algorithm allows the robot to decide which data points to trust and which to discard. One application of the GNC algorithm is called *shape alignment*. A robot estimates the 3D location and orientation of a car using 2D camera images. The robot receives a camera image with many points labeled by a feature-detection algorithm: headlights, wheels, mirrors. It also has a 3D model of a car in its memory. The goal is to scale, rotate, and place the 3D model so its features align with the features in the image. “This is easy if the feature-detection algorithm has done its job perfectly, but that’s rarely the case,” Carlone says. In real applications, the robot faces many outliers—mislabeled features—which can make up more than 90% of all observations. That’s where the GNC algorithm comes in and outperforms all competitors.

Robots solve this problem using a mathematical function that takes into account the distance between each pair of features—for instance, the right headlight in the image and the right headlight in the model. They try to “optimize” this function—to orient the model so as to minimize all of those distances. The more features, the more difficult the problem.
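When every correspondence can be trusted, the minimization described above has a classical closed-form answer. The sketch below is an illustration of that idea, not the lab’s code: it aligns a 2-D point model to observed image points by minimizing the sum of squared distances between corresponding features, using the well-known Kabsch/Procrustes construction. The function name and the 2-D setting are choices made here for brevity.

```python
import numpy as np

def align(model, observed):
    """Find rotation R and translation t minimizing
    sum_i || R @ model[i] + t - observed[i] ||^2  (Kabsch algorithm)."""
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)       # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

# Example: a square of model points, rotated by 0.3 rad and shifted
model = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s], [s, c]])
observed = model @ R_true.T + np.array([2., -1.])
R, t = align(model, observed)   # recovers the true rotation and shift
```

With clean correspondences the recovery is exact; the difficulty the article describes begins when many of the matched features are wrong.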

One way to solve the problem would be to try all the possible solutions to the function and see which one works best, but there are too many to try. A more common method, Yang and Antonante explain, “is to try one solution and keep nudging—making, say, the headlights in the model more aligned with the headlights in the 2D image—until you can’t improve it anymore.” Given noisy data, it won’t be perfect—maybe the headlights align but the wheels don’t—so you can start over with another solution and refine that one as much as possible, repeating the process several times to find the best outcome. Still, the chances of finding the best possible solution are slim.
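The “start easy, then gradually get strict” idea behind graduated non-convexity can be seen on a toy problem. The sketch below is a simplified illustration in that spirit, not the paper’s actual GNC update rule: it fits a line to data contaminated by gross outliers, beginning with an ordinary least-squares fit and then repeatedly shrinking an inlier threshold so that outliers progressively lose their vote. All names and parameter values are invented for the example.

```python
import numpy as np

def robust_fit(x, y, eps=0.1, shrink=0.7, iters=12):
    """Illustrative graduated-robustness fit of y ~ a*x + b.
    Start permissive (plain least squares), then alternate between
    tightening an inlier threshold and re-solving a weighted
    least-squares problem, so outliers are discarded step by step."""
    A = np.column_stack([x, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)   # naive starting fit
    thresh = np.abs(A @ sol - y).max()            # loosest possible threshold
    w = np.ones_like(y)
    for _ in range(iters):
        thresh = max(eps, thresh * shrink)        # gradually get strict
        w = (np.abs(A @ sol - y) <= thresh).astype(float)
        if w.sum() < 2:                           # keep the problem well-posed
            break
        sw = np.sqrt(w)
        sol, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return sol, w

# Ten points on y = 2x + 1, three of them corrupted by large errors
x = np.arange(10.0)
y = 2 * x + 1
y[[0, 3, 7]] += 50
sol, w = robust_fit(x, y)   # recovers the line; outliers end up with weight 0
```

The schedule matters: jumping straight to the tight threshold would let the contaminated starting fit reject good points, which is exactly the trap that graduating the problem avoids.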



A robot’s recalled trajectory before optimization might look like a tangled ball of twine. After untangling, it resembles a set of right-angled lines mirroring the shape of the campus pathways and hallways that the robot traversed. The technical term for this SLAM process is *pose graph optimization*.
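The mechanics of pose graph optimization show up even in a toy 1-D version: each odometry or loop-closure measurement becomes one constraint between two poses, and least squares spreads the accumulated drift around the loop. The sketch below is a minimal illustration under those assumptions, not a production SLAM back end; the function and edge format are invented for the example.

```python
import numpy as np

def optimize_poses(edges, n):
    """Tiny 1-D pose graph. Each edge (i, j, z) says 'pose j minus
    pose i should be about z'. Anchor pose 0 at the origin and solve
    all poses jointly by linear least squares, which distributes any
    measurement inconsistency over the whole graph."""
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    A[0, 0] = 1.0                          # anchor: pose 0 = 0
    for row, (i, j, z) in enumerate(edges, start=1):
        A[row, j], A[row, i] = 1.0, -1.0
        b[row] = z
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

# Three odometry steps (the last one drifted by 0.1) plus one loop
# closure tying pose 3 back to pose 0 -- the "close the loop" moment.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.1), (3, 0, -3.0)]
poses = optimize_poses(edges, 4)
```

No single measurement is fully believed; the 0.1 of drift is split evenly across the four edges of the loop, which is what untangles the twine. Real SLAM does the same thing with 2-D or 3-D rotations and translations, and it is there that outlier loop closures make the problem hard.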

In the paper, the researchers compared their GNC algorithm with other algorithms on several applications, including shape alignment and pose graph optimization. They found their method was more accurate than the state-of-the-art techniques and could handle a higher percentage of outliers. For SLAM, it worked even when three in four loop closures were mistaken, far more outliers than it would encounter in real-world applications. What’s more, their method is often more efficient than other algorithms, requiring fewer computational steps. Tzoumas says, “One of the difficulties was finding a general-purpose algorithm that works well across many applications.” Yang says they’ve tried it on more than 10. In the end, Tzoumas says, they found “the sweet spot.”

Going from research to production is an important step for research outcomes to make a difference at scale, says Roberto G. Valenti, a robotics research scientist at MathWorks. MathWorks has been working with Carlone’s lab to integrate the GNC algorithms into MATLAB as part of Navigation Toolbox™, which companies use to implement SLAM on commercial and industrial autonomous systems.

Carlone’s lab is working on ways to extend the capabilities of their GNC algorithm. For example, Yang aims to design perception algorithms that can be certified to be correct. And Antonante is finding ways to manage inconsistency across different algorithms: If the SLAM module in an autonomous vehicle says the road goes straight, but the lane-detection module says it bends right, you have a problem.


Tzoumas is looking at how to scale up not just to interaction between multiple algorithms in one robot, but collaboration between multiple robots. In earlier work, he programmed flying drones to track targets, such as criminals trying to escape on foot or by car. Going forward, multiple machines could perhaps run the GNC algorithm collectively. Each would contribute partial information to its neighbors, and together they would build a global map—of locations on Earth or elsewhere. This year he’s moving to the aerospace engineering department at the University of Michigan to work on trustworthy autonomy for multi-robot planning and self-navigation—even in difficult environments, such as battlefields and other planets.

“Not knowing how AI and perception algorithms will behave is a huge deterrent for using them,” Antonante says. He notes that robotic museum guides won’t be trusted if there’s a chance they’ll crash into visitors or the Mona Lisa: “You want your system to have a deep understanding of both its environment and itself, so it can catch its own mistakes.” The GNC algorithm is the new benchmark in allowing robots to catch their own mistakes, and, most importantly, as Tzoumas says, “it helps you get out of the woods.”
