Visual SLAM technology benefits and applications

Visual simultaneous localization and mapping (SLAM) is quickly becoming an important advancement in embedded vision and refers to the process of determining the position and orientation of a sensor with respect to its surroundings while simultaneously mapping the environment around that sensor.

By AIA May 18, 2018

Visual simultaneous localization and mapping (SLAM) is quickly becoming an important advancement in embedded vision with many different possible applications. The technology, commercially speaking, is still in its infancy. However, it’s a promising innovation that addresses the shortcomings of other vision and navigation systems and has great commercial potential.

Visual SLAM does not refer to any particular algorithm or piece of software. It refers to the process of determining the position and orientation of a sensor with respect to its surroundings, while simultaneously mapping the environment around that sensor.

There are several different types of SLAM technology, some of which don’t involve a camera at all. Visual SLAM is a specific type of SLAM system that leverages 3-D vision to perform localization and mapping when neither the environment nor the location of the sensor is known. Visual SLAM technology comes in different forms, but the overall concept functions the same way in all visual SLAM systems.

How visual SLAM technology works

Most visual SLAM systems work by tracking a set of points through successive camera frames to triangulate their 3-D position, while simultaneously using this information to approximate camera pose. The goal of these systems is to map their surroundings in relation to their own location for the purposes of navigation. This is possible with a single 3-D vision camera, unlike other forms of SLAM technology. As long as a sufficient number of points is tracked through each frame, both the orientation of the sensor and the structure of the surrounding physical environment can be rapidly understood.
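The triangulation step can be sketched in a few lines. This is a minimal illustration, not a production SLAM pipeline: it assumes two cameras with known, hypothetical projection matrices (real systems must estimate pose and structure together) and recovers one 3-D point from its two pixel observations via the direct linear transform (DLT).

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3-D point from two pixel observations
    using the direct linear transform (DLT)."""
    # Each observation contributes two rows to a homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical setup: identity intrinsics; second camera shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # camera at (1, 0, 0)

X_true = np.array([0.5, 0.2, 4.0])        # a point in front of both cameras
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 3))  # recovers the original point
```

With noise-free observations, the estimate matches the true point exactly; in a real system the observations are noisy, which is why the refinement described next matters.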

All visual SLAM systems are constantly working to minimize reprojection error, or the difference between the projected and actual image points, usually through an algorithmic solution called bundle adjustment. Visual SLAM systems need to operate in real time, so location data and mapping data often undergo bundle adjustment separately, but simultaneously, to facilitate faster processing before the two are ultimately merged.
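The reprojection error the text describes can be written down directly. The sketch below uses hypothetical values (identity intrinsics, a camera at the origin, one mismeasured point); full bundle adjustment minimizes the summed squared error over all camera poses and points with a solver such as Gauss-Newton or Levenberg-Marquardt, for which the crude gradient-descent loop here is only a stand-in.

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X through a 3x4 camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def squared_reprojection_error(P, X, observed):
    """Squared distance between the projected and the observed image point."""
    return float(np.sum((project(P, X) - observed) ** 2))

# Hypothetical camera, an observed pixel from the tracker, and a slightly
# wrong current estimate of the 3-D point that produced it.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
observed = np.array([0.125, 0.05])
X = np.array([0.52, 0.18, 4.1])

# Minimize the reprojection error by adjusting the point estimate.
# (Bundle adjustment does this jointly for all poses and points.)
for _ in range(200):
    eps = 1e-6
    f0 = squared_reprojection_error(P, X, observed)
    # Finite-difference gradient; a real solver uses analytic Jacobians.
    grad = np.array([
        (squared_reprojection_error(P, X + eps * np.eye(3)[i], observed) - f0) / eps
        for i in range(3)
    ])
    X = X - 4.0 * grad  # fixed step size chosen for this toy problem

final_error = squared_reprojection_error(P, X, observed)
```

After the loop, the residual is driven to effectively zero; note that a single view cannot constrain the point's depth along the viewing ray, which is why SLAM needs many observations from different poses.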

Applications that use visual SLAM

Visual SLAM is still in its infancy, commercially speaking. While it has enormous potential in a wide range of settings, it’s still an emerging technology. With that said, it is likely to be an important part of augmented reality (AR) applications. Accurately projecting virtual images onto the physical world requires a precise mapping of the physical environment, and only visual SLAM technology is capable of providing this level of accuracy.

Visual SLAM systems are also used in a wide variety of field robots. For example, rovers and landers for exploring Mars use visual SLAM systems to navigate autonomously. Field robots in agriculture, as well as drones, can use the same technology to independently travel around crop fields. Autonomous vehicles could potentially use visual SLAM systems for mapping and understanding the world around them.

One major potential opportunity for visual SLAM systems is to replace GPS tracking and navigation in certain applications. GPS systems aren’t useful indoors, or in big cities where the view of the sky is obstructed, and they’re only accurate within a few meters. Visual SLAM systems solve each of these problems, as they don’t depend on satellite information and they take accurate measurements of the physical world around them.

Visual SLAM technology has many potential applications and demand for this technology will likely increase as it helps augmented reality, autonomous vehicles and other products become more commercially viable.

The ability to sense the location of a camera, as well as the environment around it, without knowing either data points beforehand is incredibly difficult. Visual SLAM systems are proving to be effective at tackling this challenge, however, and are emerging as one of the most sophisticated embedded vision technologies available.

This article originally appeared on the AIA website. The AIA is a part of the Association for Advancing Automation (A3). A3 is a CFE Media content partner. Edited by Chris Vavra, production editor, Control Engineering, cvavra@cfemedia.com.

ONLINE extra

See more from AIA below.