Self-Driving Cars and Autonomous Robots: Where to Now?

First Posted: Nov 24, 2013 11:31 PM EST

There isn’t a radio-control handset in sight as a nimble robot briskly weaves in and out of the confined tunnels of an underground mine.

Powered by intelligent sensors, the robot moves and reacts to the changing conditions of the terrain, entering areas unfit for humans. As it does so, the robot transmits a detailed 3D map of the entire location to the other side of the world.

While this might read like a scene from a science fiction novel, it is actually a reasonable step into the not-so-distant future of the next generation of robots.

A recent report from the McKinsey Global Institute estimates that new technologies such as advanced robotics, the mobile internet and 3D printing could deliver a combined economic impact of between US$14 trillion and US$33 trillion globally per year by 2025.

Technology advisory firm Gartner also recently released a report predicting that the “smart machine era” will be the most disruptive in the history of IT. This trend includes the proliferation of contextually aware, intelligent personal assistants, smart advisers, advanced global industrial systems and the public availability of early examples of autonomous vehicles.

If the global technology industry and governments are to reap the productivity and economic benefits of this new wave of robotics, they need to act now to identify simple yet innovative ways to disrupt their current workflows.

Self-driving cars

The automotive industry is already embracing this movement, having discovered a market for driver assistance systems such as parking assistance, autonomous driving in “stop and go” traffic and emergency braking.
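
To make that concrete, the core logic of an emergency braking assistant can be sketched in a few lines: brake automatically when the time-to-collision with the object ahead drops below a safety margin. The Python sketch below is a minimal illustration; the 1.5-second threshold and the signal names are assumptions, not any manufacturer's actual parameters.

```python
def should_emergency_brake(gap_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Trigger braking when time-to-collision drops below the threshold.

    The 1.5 s default threshold is an illustrative assumption.
    """
    if closing_speed_mps <= 0:  # not closing in on the obstacle ahead
        return False
    time_to_collision = gap_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

# e.g. a 12 m gap closing at 10 m/s gives a TTC of 1.2 s -> brake
assert should_emergency_brake(12.0, 10.0)
```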

In August 2013, Mercedes-Benz demonstrated how their “self-driving S Class” model could drive the 100-kilometre route from Mannheim to Pforzheim in Germany. (Exactly 125 years earlier, Bertha Benz drove that route in the first ever automobile, which was invented by her husband Karl Benz.)

The car they used for the experiment looked entirely like a production car and used most of the standard sensors on board, relying on vision and radar to complete the task. Similar to other autonomous cars, it also used a crucial extra piece of information to make the task feasible – it had access to a detailed 3D digital map to accurately localise itself in the environment.
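
The value of such a prior map is easy to illustrate. In the minimal Python sketch below (an illustration of the general idea, not the Mercedes-Benz system), candidate poses are scored by how well the current laser returns line up with the stored map, and the best-scoring pose is taken as the vehicle's position; the brute-force search and function names are simplifying assumptions.

```python
import numpy as np

def score_pose(pose, scan, map_points):
    """Score a candidate (x, y, theta) pose: mean distance from the
    transformed scan points to their nearest points in the prior map."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world = scan @ R.T + np.array([x, y])  # scan points in the world frame
    d = np.linalg.norm(world[:, None, :] - map_points[None, :, :], axis=2)
    return d.min(axis=1).mean()            # lower means a better match

def localise(scan, map_points, candidate_poses):
    """Return the candidate pose whose scan best matches the prior map."""
    return min(candidate_poses, key=lambda p: score_pose(p, scan, map_points))
```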

When implemented at scale, these autonomous vehicles have the potential to significantly benefit governments by reducing the number of accidents caused by human error, and by easing traffic congestion: autonomous cars can safely travel closer together, removing the need for tailgating laws that force cars to maintain large gaps between each other.

In these examples, the task (localisation, navigation, obstacle avoidance) is either constrained enough to be solvable or can be solved with the provision of extra information. However, there is a third category, where humans and autonomous systems augment each other to solve tasks.

This can be highly effective but requires a human remote operator or, depending on real-time constraints, a human on stand-by.

The trade-off

The question arises: how can we build a robot that can navigate complex and dynamic environments without 3D maps as prior information, while keeping the cost and complexity of the device to a minimum?

Using as few sensors as possible, a robot needs to build a consistent picture of its surroundings so that it can respond to changing and unknown conditions.

This is the same question that confronted researchers at the dawn of robotics and was addressed in the 1980s and 1990s through work on spatial uncertainty. However, the decreasing cost of sensors, the increasing computing power of embedded systems and the ready availability of 3D maps have since reduced the importance of answering this key research question.

In an attempt to refocus on this central question, we – researchers at the Autonomous Systems Laboratory at CSIRO – tried to stretch the limits of what’s possible with a single sensor: in this case, a laser scanner.

In 2007, we took a vehicle equipped with laser scanners facing to the left and to the right and asked if it was possible to create a 2D map of the surroundings and to localise the vehicle to that same map without using GPS, inertial systems or digital maps.
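
The essence of that experiment can be sketched as scan matching: with no GPS or inertial data, the vehicle's motion between two moments is recovered by finding the small translation and rotation that best align consecutive laser scans. The crude grid search in the Python sketch below illustrates the principle only; real systems use far more efficient matchers and correct accumulated drift by closing loops.

```python
import numpy as np

def transform(scan, dx, dy, dtheta):
    """Apply a rigid 2D transform to an (N, 2) array of scan points."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    R = np.array([[c, -s], [s, c]])
    return scan @ R.T + np.array([dx, dy])

def match(prev_scan, new_scan):
    """Grid-search the small motion that best aligns new_scan to prev_scan."""
    best, best_err = (0.0, 0.0, 0.0), np.inf
    for dx in np.linspace(-0.5, 0.5, 11):           # metres
        for dy in np.linspace(-0.5, 0.5, 11):       # metres
            for dth in np.linspace(-0.1, 0.1, 11):  # radians
                moved = transform(new_scan, dx, dy, dth)
                d = np.linalg.norm(moved[:, None, :] - prev_scan[None, :, :],
                                   axis=2)
                err = d.min(axis=1).mean()  # mean nearest-neighbour distance
                if err < best_err:
                    best, best_err = (dx, dy, dth), err
    return best  # estimated (dx, dy, dtheta) between the two scans
```

Chaining these incremental estimates yields both the vehicle's trajectory and, by accumulating the transformed scans, a 2D map of the surroundings.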

The result was the development of our now-commercialised Zebedee technology – a handheld 3D mapping system that incorporates a laser scanner swaying on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it.

The Leaning Tower of Pisa gets mapped in 3D.

While the system does add a simple inertial measurement unit, which helps to track the position of the sensor in space and supports the alignment of sensor readings, the overall configuration still maximises information flow from a very simple and low-cost setup.

It achieves this by moving the smarts away from the sensor and into the software, which computes a continuous trajectory of the sensor, specifying its position and orientation at any point in time, and takes the actual acquisition timing into account to compute a precise 3D point cloud.
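
In simplified 2D form, that projection step might look like the hedged Python sketch below: the continuous trajectory is represented by timestamped keyframe poses, and each laser return is placed into the world frame using the pose interpolated at its exact acquisition time. The linear interpolation and function names are illustrative assumptions, not the commercial Zebedee pipeline.

```python
import numpy as np

def pose_at(t, key_times, key_poses):
    """Linearly interpolate the sensor pose (x, y, theta) at time t between
    timestamped keyframes. Naive sketch: ignores angle wrap-around."""
    i = int(np.clip(np.searchsorted(key_times, t), 1, len(key_times) - 1))
    t0, t1 = key_times[i - 1], key_times[i]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * key_poses[i - 1] + w * key_poses[i]

def project(range_m, bearing, stamp, key_times, key_poses):
    """Place one timestamped laser return into the world frame using the
    sensor pose at its acquisition time."""
    x, y, theta = pose_at(stamp, key_times, key_poses)
    a = theta + bearing
    return np.array([x + range_m * np.cos(a), y + range_m * np.sin(a)])
```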

The crucial step of bringing the technology back to the robot still has to be completed. Imagine what becomes possible when robots equipped with such mobile 3D mapping technologies can enter unknown environments, or actively collaborate with humans, without the barrier of a full autonomous vehicle: they can be significantly smaller and cheaper while remaining robust in localisation and mapping accuracy.

From laboratory to factory floor

A specific area of interest for this robust mapping and localisation is the manufacturing sector, where non-static environments are becoming more and more common, as in the aviation industry. The cost and complexity of each device have to be kept to a minimum to meet these industry needs.

With a trend towards more agile manufacturing setups, the technology enables lightweight robots that are able to navigate safely and quickly through unstructured and dynamic environments like conventional manufacturing workplaces. These fully autonomous robots have the potential to increase productivity in the production line by reducing bottlenecks and performing unstructured tasks safely and quickly.

The pressure of increasing global competition means that if manufacturers do not find ways to adopt these technologies soon, they run the risk of losing their business, as competitors will soon be able to produce and distribute goods more efficiently and at lower cost.

It is worth pushing the boundaries of what information can be extracted from very simple systems. New systems that implement this paradigm will be able to gain the benefits of unconstrained autonomous robots, but this requires a change in the way we look at production and manufacturing processes.

This article is an extension of a keynote presented at the robotics industry business development event RoboBusiness in Santa Clara, CA on October 25 2013.

By Michael Brünig, who works for CSIRO. Part of this work has received funding from 3D Laser Mapping.

This article was originally published at The Conversation.
