Steven Parkison, Ph.D. student in Electrical and Computer Engineering, has received an NSF Fellowship to support his research on machine learning for autonomous vehicles.
Steven is working with Prof. Ryan Eustice as part of the Next Generation Vehicle (NGV) project, a partnership among Ford Motor Company, researchers at the University of Michigan, and State Farm Insurance to develop the autonomous vehicles of the future. Michigan's principal investigators, Profs. Eustice and Edwin Olson, are taking a leading role in sensing and decision-making.
The goal of Steven’s research is to improve vision-based perception systems on cars so that they are reliable and robust even in the absence of mapping data, and to create an extra layer of safety when combined with current systems. There are several areas that could be improved with this approach, including reducing the effects of changes in lighting and measurement noise on visual accuracy.
Many researchers investigating autonomous road vehicles are converging on similar hardware configurations. These usually include multiple light detection and ranging (LIDAR) sensors, high-precision inertial measurement units (IMUs), global positioning system (GPS) receivers, wheel encoders, and automotive radar.
Today’s most common localization technique uses a map that is built off-line from LIDAR and odometry data collected with a survey vehicle. This approach is advantageous because of its precision and its ability to embed road data (e.g., how many lanes there are, where traffic lights are located) directly into the map. This decreases the amount of information that the vehicle needs to perceive to operate autonomously.
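The prior-map idea can be sketched in a few lines. Everything here – the 1-D map “signature,” the window size, the noise level – is an illustrative toy, not the NGV pipeline: the vehicle matches a short, noisy live observation against the surveyed map to recover its position.

```python
import numpy as np

# Toy prior map: a 1-D "signature" along a road (a stand-in for the LIDAR
# reflectivity data a survey vehicle would record). Purely illustrative.
rng = np.random.default_rng(0)
prior_map = rng.random(200)

# Live scan: a short window of that signature observed at an unknown
# position, corrupted by measurement noise.
true_pos = 73
window = 20
scan = prior_map[true_pos:true_pos + window] + rng.normal(0.0, 0.02, window)

# Localize by sliding the scan along the map and scoring each offset with
# the sum of squared differences; the best-scoring offset is the estimate.
scores = [np.sum((prior_map[i:i + window] - scan) ** 2)
          for i in range(len(prior_map) - window)]
estimate = int(np.argmin(scores))
```

Real systems do this in 2-D or 3-D with probabilistic scan matching rather than brute-force search, but the principle – score candidate poses against a prebuilt map – is the same.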
“You get a lot of really good ground truth data from LIDAR,” says Steven. “It’s very accurate and lighting invariant – in a lot of ways it’s better than visual cameras.”
Steven hopes to address the shortcomings of camera-based perception systems so that they become a useful counterpart to LIDAR-map navigation. This could also help with scaling – maps of cities and highways are practical to record, but surveying rural and county roads is time- and resource-intensive.
Overcoming changes in lighting will be particularly difficult with a camera system – while LIDAR is an active sensor that provides its own illumination, passive cameras can be hampered by glare from the sun and by nighttime darkness.
Matching camera images to a LIDAR map under such poor lighting will require robust techniques. Steven’s approach uses supervised learning to increase the likelihood of a successful match across varied lighting conditions.
The Ford partnership has provided the group with two fully-equipped test vehicles, a Ford Fusion and Ford Escape. Additionally, the team will have the opportunity to use MCity to test its products. MCity is a new U-M facility opening later in 2015 that will provide a proving ground for autonomous vehicles.
Are researchers optimistic about this research making its way to market soon? Opinions vary.
“I think it will be a progression,” says Steven. “You might not see full-scale autonomy right away or all at once, but I think what we’re researching may appear as an added safety feature first, and that could happen in the near future.”
Steven comes to Michigan from the University of Nebraska-Lincoln, where he received his bachelor’s degree in electrical engineering. He had an early interest in robotics and was drawn to Prof. Eustice’s lab by its work in simultaneous localization and mapping (SLAM), which is Steven’s broad area of interest.