Michigan Engineering News


A common language to describe and assess human-agent teams

Using a new taxonomy, an analysis of testbeds that simulate human and autonomous agent teams finds a need for more complex testbeds to mimic real-world scenarios.

Experts

Xi Jessie Yang

Associate Professor of Industrial and Operations Engineering, Robotics and Information

Hyesun Chung

Doctoral Student of Industrial and Operations Engineering

Understanding how humans and AI or robotic agents can work together effectively requires a shared foundation for experimentation. A University of Michigan-led team developed a new taxonomy to serve as a common language among researchers, then applied it to evaluate the testbeds currently used to study how human-agent teams perform.

“Our goal was to bring structure to a rapidly growing and fragmented research area. Without a comprehensive review, research synthesis has been very difficult and has prevented the field from moving forward,” said Xi Jessie Yang, an associate professor of industrial and operations engineering, robotics and information at U-M and corresponding author of the study published in Human Factors.

The study was funded by the National Science Foundation and the Air Force Office of Scientific Research. 

A researcher sits at a desk in front of two computers while extending his hand to the hand of a small humanoid robot, about two feet tall, that sits on the desk.
Ruikun Luo prepares to interact with a Nao robot, an autonomous programmable robot serving as a robot docent for art, in a one-human, one-agent setting. Credit: X. Jessie Yang, Michigan Engineering.

In human-agent teams, also known as human-machine teams, at least one human works with one agent, either virtual or embodied (i.e., robotic), to accomplish a common goal. The partnership could be as simple as a human working with a robotic arm to attach a car door to a frame, or as complex as one human giving tactical instructions to a group of embodied AI agents during a search and rescue mission.

“To design AI or robotic teammates that are truly effective, we need testbeds that reflect the messy, dynamic nature of real teamwork. Our taxonomy provides a roadmap for future research to get there,” said Hyesun Chung, a doctoral student of industrial and operations engineering at U-M, Barbour Fellow and lead author of the study. 

Just as a taxonomy is used in biology to organize living things into groups and help scientists communicate clearly with one another, this taxonomy aims to create a shared language to guide future human-agent team research. The taxonomy classifies how teams are structured and how they function using ten attributes, listed below and illustrated in a brief sketch after the list:

  • Team composition—the number of humans relative to the number of agents
  • Task interdependence—the extent to which team members depend on the actions of others
  • Role structure—the extent to which roles are fundamentally different or interchangeable
  • Leadership structure—the pattern, or distribution, of leadership functions such as setting direction and aligning goals among team members (e.g., external manager, designated, temporary, distributed)
  • Leadership role assignment—whether the human, the agent or both assume leadership roles
  • Communication structure—the pattern or flow of information sharing among team members
  • Communication direction—whether information flows between humans and agents, among humans or among agents
  • Communication medium—the available ways to exchange information
  • Physical distribution—the spatial location of team members relative to one another
  • Team life span—how long the team exists as a functional, active unit
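
The taxonomy itself is conceptual rather than computational, but as a rough illustration of how its ten attributes could be recorded when cataloging a testbed, here is a minimal sketch in Python. The field names and example category values are hypothetical assumptions for illustration, not the coding scheme used in the study.

    from dataclasses import dataclass

    @dataclass
    class HumanAgentTeamTestbed:
        """Hypothetical record of one testbed, coded along the ten taxonomy attributes."""
        name: str
        num_humans: int                  # team composition: number of humans
        num_agents: int                  # team composition: number of agents
        task_interdependence: str        # extent members depend on others' actions
        role_structure: str              # e.g., "distinct" or "interchangeable"
        leadership_structure: str        # e.g., "external manager", "designated", "temporary", "distributed"
        leadership_role_assignment: str  # e.g., "human", "agent", "either"
        communication_structure: str     # pattern of information flow among members
        communication_direction: str     # e.g., "human-agent", "human-human", "agent-agent"
        communication_medium: str        # e.g., "text", "speech", "visual display"
        physical_distribution: str       # e.g., "co-located" or "distributed"
        team_life_span: str              # e.g., "single session" or "long-term"

    # Example: a simple one-human, one-agent teleoperation testbed
    example = HumanAgentTeamTestbed(
        name="teleoperation demo",
        num_humans=1,
        num_agents=1,
        task_interdependence="high",
        role_structure="distinct",
        leadership_structure="designated",
        leadership_role_assignment="human",
        communication_structure="centralized",
        communication_direction="human-agent",
        communication_medium="visual display",
        physical_distribution="co-located",
        team_life_span="single session",
    )

Recording each testbed along fields like these would, for instance, make it straightforward to tally how many of the reviewed testbeds share a one-human, one-agent composition.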

Beyond improving communication among researchers, the taxonomy can help identify which attributes to incorporate or modify in new testbed designs, or even which characteristics to build new experimental designs around.

Using these terms, the research team analyzed 103 distinct testbeds drawn from 235 studies, with some testbeds used in multiple studies, noting each testbed's task goal and overall scenario.

Foreground: Two out of focus researchers look at a computer screen together. Background: An in focus, red robotic arm stands on a pedestal.
X. Jessie Yang and Teerachart Soratana test a robotic arm that will be used in a human-agent team to perform teleoperation tasks. The new human-agent team taxonomy developed by the research team creates a shared foundation for experimentation. Credit: Brenda Ahearn, Michigan Engineering.

While 56.3% (58 cases) of the testbeds had a simple one-human, one-agent composition, only 7.8% (8 cases) involved a larger team consisting of many humans and many agents. Humans assumed leadership roles in most cases, with only two cases allowing either the human or the agent to lead, and the dynamics within teams remained static over time.

Beyond categorizing existing platforms, the taxonomy offers a benchmarking tool for designing new testbeds. This study highlights the need to expand team composition, leadership structure and communication to explore more complex team dynamics between humans and agents. 

This research was conducted in collaboration with the Massachusetts Institute of Technology and funded by the National Science Foundation (2045009) and the Air Force Office of Scientific Research (FA9550-23-1-0044).