Lidar, radar and cameras work together to take in an autonomous vehicle's environment in 360 degrees and to pinpoint its location. While today's cameras and radar are robust and cost-effective, lidar sensors aren't there yet. They range in price from $7,000 to $70,000, and in some cases they last only hundreds of miles, far short of the auto-grade standard of 100,000 miles or the lifetime of a car. U-M researchers have a possible solution. Ryan Eustice, professor of naval architecture and marine engineering and senior vice president of automated driving at the Toyota Research Institute – Ann Arbor, used video game technology to turn pre-recorded maps into 3D visualizations that make it possible for an autonomous vehicle to rely on inexpensive cameras rather than lidar technology to pinpoint location.
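To make the matching idea concrete, here is a toy sketch, far simpler than the actual system: render the stored 3D map from a set of candidate poses and keep the pose whose rendering best matches the live camera frame. The renderer and the candidate poses are stand-ins for real components.

```python
# Toy sketch of camera localization against a prerendered map.
# `render_map_view` is a placeholder for a renderer that draws the
# prior 3D map as a grayscale image from a given candidate pose.
import numpy as np

def similarity(a, b):
    """Normalized cross-correlation between two grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def localize(camera_image, candidate_poses, render_map_view):
    """Return the candidate pose whose rendered map view best matches
    what the camera currently sees."""
    scores = [similarity(camera_image, render_map_view(pose))
              for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```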
IMAGE: Radar, lidar and cameras are among the features of the University of Michigan’s Open CAVs, open testbeds for academic and industry researchers to rapidly test self-driving and connected vehicle technologies. Photo by Joseph Xu, Michigan Engineering Communications & Marketing
In order for autonomous vehicles to understand what their sensors take in, researchers are turning to a combination of classical computer vision and the younger field of deep learning. Where traditional computer vision relies on models that focus on edges and other defining features that humans find meaningful, deep learning takes what some call a “brute force” approach. It involves feeding the system an immense set of annotated images that it can learn from.
“At the moment there are publicly available datasets to test deep learning systems and they have several thousand images – all annotated by a human. People go in and draw boxes around all the people and cars and sidewalks and stop signs, for example. But we need millions of these images to train these algorithms well,” said Ram Vasudevan, assistant professor of mechanical engineering and co-leader of the U-M/Ford Center for Autonomous Vehicles.
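For illustration only, one human-annotated frame might boil down to a record like this; the field names and values here are hypothetical.

```python
# Hypothetical shape of one human-annotated training frame: every object
# of interest gets a class label and a pixel-space bounding box.
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    label: str      # e.g. "person", "car", "sidewalk", "stop sign"
    x: int          # left edge of the box, in pixels
    y: int          # top edge of the box, in pixels
    width: int
    height: int

frame = {
    "image": "frame_000042.png",   # made-up filename
    "annotations": [
        BoxAnnotation("person", x=310, y=140, width=45, height=120),
        BoxAnnotation("car", x=520, y=210, width=180, height=95),
    ],
}
```

Multiplying records like this into the millions, by hand, is exactly the bottleneck Vasudevan describes.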
He and his colleague and co-leader Matthew Johnson-Roberson, associate professor of naval architecture and marine engineering, are working to streamline the process. Video games come to the rescue again. Grand Theft Auto, it turns out, looks enough like the real world to train a system. They developed automated image annotation algorithms and then, overnight, extracted and marked up ten million scenes, which they used to improve the accuracy of their system.
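A sketch of why game-generated data is so cheap to label: the engine already knows every object's identity and on-screen position, so annotation reduces to reading that state back out. The engine interface below is invented for illustration and is not a real game API.

```python
# Illustrative only: `engine_frame` stands in for a rendered game frame
# whose ground truth (object classes and screen-space boxes) the engine
# can report directly, so no human has to draw anything.
def annotate_frame(engine_frame):
    labels = [(obj.category, obj.screen_bbox())   # class, (x, y, w, h)
              for obj in engine_frame.visible_objects()
              if obj.category in {"person", "car", "sidewalk", "stop sign"}]
    return {"image": engine_frame.pixels, "labels": labels}

# dataset = [annotate_frame(f) for f in engine.capture_frames()]
# Run unattended, a pipeline like this can label millions of scenes overnight.
```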
The team has also developed an algorithm that can find a pedestrian in a scene and zoom in on their hands, which can be used to make predictions about what they’ll do next.
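A minimal sketch of that detect-then-zoom pipeline, with the two learned models left as placeholders:

```python
# Sketch of the two-stage idea: a person detector finds pedestrians,
# then high-resolution crops around the hands go to a second model
# that classifies intent. `detect_people` and `locate_hands` are
# placeholders for trained models, not real library calls.
def hand_crops(image, detect_people, locate_hands, patch=64):
    half = patch // 2
    crops = []
    for person_box in detect_people(image):
        for (hx, hy) in locate_hands(image, person_box):  # hand keypoints
            y0, x0 = max(hy - half, 0), max(hx - half, 0)
            crops.append(image[y0:hy + half, x0:hx + half])
    return crops  # e.g. classify each crop as "waving", "texting", ...
```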
To be as safe as possible, autonomous vehicles should talk to each other and to the infrastructure around them.
Dedicated Short Range Communications, or DSRC, lets vehicles send messages about their location, direction, speed and more 10 times per second, at distances of up to 1,500 feet. Unlike a camera or lidar, DSRC isn't restricted to line of sight. The technology has undergone testing for more than a decade, and it's ready for market, even on human-driven vehicles. It's being piloted on a grand scale around Ann Arbor right now. A federal government mandate, which has stalled, would advance adoption, says Jim Sayer, director of the U-M Transportation Research Institute (UMTRI).
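The payload itself is small. Loosely modeled on the SAE J2735 Basic Safety Message that DSRC systems exchange, a broadcaster might look like this sketch, with the fields simplified and the hardware interfaces stubbed out.

```python
import time
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    vehicle_id: str
    latitude: float    # degrees
    longitude: float   # degrees
    heading: float     # degrees clockwise from north
    speed: float       # meters per second

def broadcast_loop(radio, read_vehicle_state, hz=10):
    """Send the vehicle's current state `hz` times per second.
    `radio` and `read_vehicle_state` stand in for real hardware."""
    while True:
        radio.send(BasicSafetyMessage(**read_vehicle_state()))
        time.sleep(1.0 / hz)
```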
IMAGE: Ding Zhao, research fellow at the University of Michigan Transportation Research Institute (UMTRI), adjusts a lidar system on an Open CAV autonomous and connected vehicle testbed at Mcity. Photo by Joseph Xu, Michigan Engineering Communications & Marketing
“Every year that we wait to put connected vehicle technology in place, we’re losing tens of thousands of lives,” Sayer said. “And I don’t believe you can have highly automated vehicles without connectivity.”
Some automakers are moving ahead with plans to install DSRC ahead of any mandates.
Beyond reducing crashes, connectivity could curb traffic jams and lead to dramatic improvements in energy efficiency. Gabor Orosz, an assistant professor of mechanical engineering, has shown that the smoother transitions a connected, automated vehicle makes between braking and accelerating can boost energy efficiency by as much as 19 percent.
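The intuition behind that gain can be sketched in a few lines: a connected controller blends the broadcast speeds of several vehicles ahead, so the car eases toward a slowdown early instead of braking hard when the lead car's brake lights finally appear. The weights and gain below are purely illustrative; Orosz's published results rest on far more careful analysis than this.

```python
def target_acceleration(own_speed, ahead_speeds,
                        weights=(0.5, 0.3, 0.2), gain=0.4):
    """Toy connected cruise controller. `ahead_speeds[i]` is the
    DSRC-reported speed (m/s) of the i-th vehicle ahead; nearer
    vehicles get larger weights. Returns an acceleration in m/s^2."""
    blended = sum(w * v for w, v in zip(weights, ahead_speeds))
    return gain * (blended - own_speed)   # negative: ease off early
```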
How will autonomous vehicles decide how to get where they’re going – not just where to turn, but when to change lanes, when to brake hard and when to speed up? Prior mapping will be central to navigation: the vehicle is loaded with detailed surveys so it knows where to expect traffic signals and trees, for example, reducing the need for on-the-fly perception. Not only does this tell the car where it is in the world, it frees the vehicle to pay more attention to things that aren’t on its map. Eustice and Edwin Olson, a U-M associate professor of computer science and engineering, have developed streamlined, robust prior mapping approaches that let vehicles localize themselves with centimeter precision, even when the road is covered in snow.
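One classic flavor of this kind of localization, shown here only as a sketch and not as the published method, is a particle filter: maintain many candidate poses, score each by how well the current sensor scan agrees with the stored map at that pose, and resample toward the best agreements.

```python
import random

def localize_step(particles, scan, map_score, jitter=0.02):
    """One particle-filter update. `particles` are (x, y, heading)
    guesses; `map_score(pose, scan)` is a placeholder for how well
    the scan matches the prior map from that pose."""
    weights = [map_score(pose, scan) for pose in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # Small jitter keeps the filter exploring around the estimate.
    return [(x + random.gauss(0, jitter),
             y + random.gauss(0, jitter),
             h + random.gauss(0, jitter)) for (x, y, h) in resampled]
```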
If a vehicle could predict what will happen around it, it could make better decisions. “Predicting the future is hard,” said Jason Corso, associate professor of electrical and computer engineering. “Today most methods are able to detect a lane change only when a vehicle has signaled that it’s going to change lanes.”
Corso can best that, though. He recently showed that, based on traffic flow and vehicle speeds, he can anticipate the trajectories of nearby vehicles up to five seconds before a vehicle signals its intent.
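For scale, the simplest possible baseline extrapolates each neighbor's recent motion forward. A constant-velocity sketch like the one below can't foresee a lane change at all; Corso's contribution is using traffic flow and speed patterns to do far better.

```python
def predict_trajectory(positions, horizon=5.0, dt=0.5):
    """Constant-velocity baseline. `positions` holds recent (x, y)
    fixes for one vehicle, newest last, sampled `dt` seconds apart.
    Returns predicted (x, y) points out to `horizon` seconds."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # estimated velocity
    steps = int(horizon / dt)
    return [(x1 + vx * dt * k, y1 + vy * dt * k)
            for k in range(1, steps + 1)]
```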
IMAGE: Hardware on Ann Arbor infrastructure such as this streetlight enables vehicles in the Ann Arbor Connected Vehicle Test Environment to communicate with one another and their surroundings. More than 3,000 cars are participating in the test. Photo (frame grab from video) by Joseph Xu, Michigan Engineering Communications & Marketing
He’s also working on ways to get autonomous vehicles to understand verbal commands. “Imagine at some point, you want your vehicle to change course,” he said. “You don’t want to have to use special language or a dashboard controller.”
U-M is also the birthplace of a radically different way of planning vehicle behaviors using a technique known as multi-policy decision making. In this approach, the vehicle doesn’t plan a path at all – it uses a library of driving strategies and runs a real-time “election” to pick the best one for a particular situation. Olson and his team are pushing this technology to produce increasingly human-like behavior, and he’s commercializing it through his startup, May Mobility.
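In outline, the election looks something like this sketch; the policy names, the simulator and the scoring function are stand-ins, not May Mobility's code.

```python
def elect_policy(world_state, policies, simulate, score):
    """Pick the policy whose simulated outcome scores best.
    `simulate(policy, state)` rolls the world forward under a policy;
    `score` rewards safe, comfortable progress. Both are stubs."""
    return max(policies, key=lambda p: score(simulate(p, world_state)))

# policies = ["follow_lane", "change_lane_left", "stop_for_pedestrian"]
# The election re-runs many times per second as the world changes:
# active = elect_policy(current_state, policies, simulate, score)
```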