
Off-road autonomy and digital twins: a Q&A with Bogdan Epureanu
Going beyond driving or tele-operating single vehicles, an up-to-date digital environment is needed to help humans operate fleets of autonomous vehicles.
Digital twins are a rapidly advancing area in engineering, going beyond static models to continuously receive data from the physical world and make predictions that go on to affect that reality. They have applications in areas such as energy systems, manufacturing and medicine. U-M’s Automotive Research Center (ARC) uses them to help design, test and control autonomous off-road vehicles that operate in human-led teams.
While the Army supports the ARC Center of Excellence with an eye to projecting military force at a longer distance without soldiers in harm’s way, the results could improve autonomous teams of humans and machines that operate in many other situations that are dangerous for humans. These include wildfires and other disaster relief operations, mining activities, and explorations of other planets.
The ARC director is Bogdan Epureanu, the Roger L. McCarthy Professor of Mechanical Engineering, an Arthur F. Thurnau Professor, and a professor of electrical and computer engineering. In an interview with the Michigan Engineer, he shared insights into how the ARC leverages digital twins and explores the potential of human-robot teaming in various scenarios.
Digital engineering accelerates the process of developing better autonomous off-road vehicles. Therefore, we aim to gather user input early on—to involve humans in using these systems from the beginning. We want to ask users: If you had a system with these features, how would you use it? What design options do you prefer? What are the implications of the key design choices we make?
That means we need to either build the system and provide it to humans for testing, or design it in a digital environment and allow them to try it virtually. In the long run, it is more cost-effective to develop a computational environment where the human can be immersed and experience that interaction without first having to build the fleet of autonomous vehicles. We aim to have a physically accurate and cinematic representation of the world with very high fidelity, allowing humans to use all their senses while interacting with the environment and the engineered vehicle system.
This isn’t just tele-operation of the vehicles. Instead, the human is operating at a higher level of abstraction, managing the work of multiple vehicles—each one functioning as an independent agent with its own goals that align with those of the human commander.
Human practitioners who use autonomous vehicles often discover new ways of operating them. If we don’t take their opinion into account early on in the design process, we risk creating a system that ends up being used differently than intended. Including users from the start helps uncover new tactics, new ways of interaction and the need for new capabilities, providing guidance about what new capabilities should be developed.
First, we scan the area we want to model. Our models include the topography of the terrain, soil types, vegetation, structures, and more. We then release a digital model of the vehicle, with synthetic sensors, into that digital environment and observe its movements and actions.
In some cases, we may also have a physical vehicle in the real-world version of the landscape. We can monitor the real world and see where the vehicle is going and what it’s doing. The cinematic digital environment, viewed on large screens, continuously predicts the actions of vehicles over the next five to ten seconds. It also updates its representations multiple times each second using data from the vehicles. The digital twin serves as an interface between the synthetic environment in which the human operates and the dangerous, real-world environment.
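The predict-and-update cycle described here can be sketched in a few lines. This is an illustrative toy, not the ARC's software: the constant-velocity prediction and simple blending update stand in for the high-fidelity dynamics models and state estimators a real digital twin would use, and every name and constant below is a hypothetical placeholder.

```python
# Illustrative digital-twin loop: ingest telemetry, then preview the near future.
PREDICTION_HORIZON_S = 8.0  # "the next five to ten seconds"
STEP_S = 0.5                # resolution of the forward preview

def update_twin(twin_state, telemetry, blend=0.7):
    """Nudge the twin's state toward the latest vehicle telemetry.

    A real system would run a proper state estimator (e.g. a Kalman filter);
    this exponential blend just illustrates the "update multiple times each
    second using data from the vehicles" step.
    """
    return {key: blend * telemetry[key] + (1.0 - blend) * twin_state[key]
            for key in twin_state}

def predict_path(twin_state, horizon_s=PREDICTION_HORIZON_S, step_s=STEP_S):
    """Roll the twin forward under a constant-velocity assumption to preview
    where the vehicle is headed over the prediction horizon."""
    path = []
    t = step_s
    while t <= horizon_s + 1e-9:
        path.append((twin_state["x"] + twin_state["vx"] * t,
                     twin_state["y"] + twin_state["vy"] * t))
        t += step_s
    return path

# One cycle: telemetry arrives, the twin updates, and the operator's display
# refreshes its preview of the next several seconds.
twin = {"x": 0.0, "y": 0.0, "vx": 2.0, "vy": 0.0}
twin = update_twin(twin, {"x": 0.1, "y": 0.0, "vx": 2.1, "vy": 0.1})
preview = predict_path(twin)
```

In practice this cycle runs continuously, so small divergences between the twin and the physical vehicle are corrected before they grow.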
This feedback is very important because it’s hard to predict in every situation what the vehicle will do. There are so many factors that affect the individual vehicle decisions, and one can’t really capture all of them. In the simulation, the vehicle might choose to turn left at a certain point, while in reality, it chooses to turn right at that same point due to small differences.
In the simulated environment, not only can the human see where each vehicle is located, zooming in and out and panning around the environment, but they are also able to feel the vehicle vibrations, the vehicles’ pitch, roll and yaw, and hear the sounds that the vehicles make as they move through and interact with the environment. These sensory cues provide better situational awareness, and alert the human to emerging issues, such as the presence of an adversary or damage progressing through the vehicle structure.
We envision human agents managing autonomous agents with varying capabilities that can work as a team to carry out missions. For instance, many autonomous vehicles in a fire-fighting crew may focus on carrying water to douse the fire, but others may be designed to locate humans and help them get to safety.
The humans on the team would act as team leads, changing the directives of the vehicles as new details about the situation emerge. They may also help their vehicles navigate out of tricky situations that the autonomy can’t handle alone, such as the coordinated steering and throttle needed on a steep, sandy hill.
Sometimes, the human may handle more information than they can manage all at once. For this reason, we are also building a system to capture physiological measurements that provide AI agents with clues to the psychological states of human leads. For instance, if the human reaches cognitive overload, the AI agents will prioritize and simplify the information that they provide and take fewer risks that could lead to the need for human intervention.
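As a loose illustration of that idea, a workload score derived from physiological signals could gate which alerts reach the operator. The signals, baselines, and threshold below are entirely hypothetical; real workload models are validated against far richer data than two scalar readings.

```python
def estimate_load(heart_rate_bpm, pupil_dilation_mm,
                  baseline_hr=70.0, baseline_pupil=3.0):
    """Crude workload score in [0, 1] from two physiological signals.

    Illustrative only: real systems fuse many signals with validated
    psychophysiological models rather than a hand-tuned average.
    """
    hr_term = max(0.0, (heart_rate_bpm - baseline_hr) / baseline_hr)
    pupil_term = max(0.0, (pupil_dilation_mm - baseline_pupil) / baseline_pupil)
    return min(1.0, 0.5 * hr_term + 0.5 * pupil_term)

def filter_alerts(alerts, load, overload_threshold=0.6):
    """Under high estimated load, pass only critical alerts to the operator;
    otherwise pass everything through."""
    if load >= overload_threshold:
        return [alert for alert in alerts if alert["priority"] == "critical"]
    return alerts
```

For example, with a resting operator (`estimate_load(70, 3.0)` returns 0.0) all alerts flow through, while an elevated score suppresses routine updates so the operator sees only what demands intervention.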
The ARC is founded on a long tradition and has pioneered translational research since its inception. It hasn’t focused on digital twins for its entire history, but we have been building this area since about 2018, when I became director. We still aim to build resilient machines with strong powertrains, tires or tracks that can handle the necessary surfaces, and provide protection from harsh environments and adversaries. However, autonomy is a fundamental disruptor in many ways.
For example, the use of autonomous vehicles in the military is fundamentally different from the use of conventional vehicles. Because there are no humans in harm’s way in an autonomous vehicle, the strategy and tactics change, and so do the technologies we pursue.
Additionally, the design objectives vary considerably. In the past, having a human in a conventional vehicle set our most stringent design constraints. We needed a restraint system and a way to get in and out of the vehicle. We couldn’t expose the vehicle to excessive vibrations, and we had to control the internal temperature. The human also limited the missions that were possible, since a person would need to return to a location for food, rest and recovery. All of these constraints are eliminated for an autonomous vehicle.
When a vehicle can loiter in a position or an area for months, we have to think about different constraints on the design. For example, new methods are needed for vehicles to manage their energy and maintain situational awareness, autonomously perceiving the environment and making decisions.
Thus, we are currently reimagining what an off-road vehicle can do. With such a large design space, it is extremely valuable to test ideas in a digital environment before producing physical vehicle prototypes. But not all aspects of a vehicle can be simulated accurately. Hence, we have made significant progress toward creating ways to blur the boundary between digital worlds and reality, where we include in digital representations the aspects we can model most easily, and have those interact with the elements of the physical world that are hard to model. Digital twins are an integral part of this approach. They are also one of the aspects we are pursuing in the ground vehicle alliance for digital engineering created in partnership with the U.S. Army Futures Command.
Digital twins are more critical than ever in the ARC’s current initiatives. They enable real-time data integration and scenario modeling, which are essential for advancing human-machine teaming in complex operational scenarios. A prime example is a project that aims to develop a dynamic data-driven application system (DDDAS) framework to enhance combat search and rescue missions. This framework focuses on real-time data integration, human-in-the-loop modeling and optimizing human-machine collaboration, demonstrating the pivotal role of digital twins in modern military operations. By simulating various operational environments and potential outcomes, digital twins help commanders make informed decisions, leading to more successful mission outcomes. This underscores the ARC’s commitment to leveraging cutting-edge digital twin technology to address contemporary challenges in autonomous systems and defense applications.