Ten challenges for making automation a ‘team player’ in joint human-agent activity

Ten challenges for making automation a ‘team player’ in joint human-agent activity, Klein et al., IEEE Intelligent Systems, Nov/Dec 2004

With thanks to Thomas Depierre for the paper suggestion.

Last time out we looked at some of the difficulties inherent in automating control systems. However much we automate, we’re always ultimately dealing with some kind of human/machine collaboration. Today’s choice looks at what it takes for machines to participate productively in collaborations with humans. Written in 2004, the ideas remind me very much of Mark Burgess’ promise theory, which was also first developed around that time.

Let’s work together

If a group of people (or people and machines) are going to coordinate with each other to achieve a set of shared ends, then four basic requirements must be met to underpin their joint activity:

  1. They must agree to work together (the authors call this agreement a Basic Compact).
  2. They must be mutually predictable in their actions.
  3. They must be mutually directable.
  4. They must maintain common ground.

A Basic Compact is…

… an agreement (often tacit) to facilitate coordination, work toward shared goals, and prevent breakdowns in team coordination. This Compact involves a commitment to some degree of goal alignment… It includes an expectation that the parties will repair faulty mutual knowledge, beliefs, and assumptions when these are detected.

Mutual predictability is necessary for effective coordination: “planning our own actions becomes possible only when we can accurately predict what others will do.” A degree of directability is required to ensure responsiveness to the influence of others as the activity unfolds. Common ground is the framework in which decisions and actions are taken. It includes the “pertinent knowledge, beliefs, and assumptions that the involved parties share.”

Ten challenges

If we want to create effective symbioses between humans and machines, then we need agents to be able to meet these requirements.

Given the widespread demand for increasing the effectiveness of team play for complex systems that work closely and collaborate with people, a better understanding of the major challenges is important.

1. Agents must fulfil the requirements of a Basic Compact

Agents must engage in common grounding activities to maintain a shared view of the world. One key example is when an agent is failing and can no longer perform its role. In such a situation the struggling agent should notify each team member of the actual or impending failure.
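By way of illustration, here’s a minimal sketch of what that notification obligation might look like in code. This is my own rendering, not anything from the paper: the `Teammate` protocol, `Agent` class, and `report_degradation` method are all hypothetical names.

```python
from dataclasses import dataclass
from typing import Protocol


class Teammate(Protocol):
    """Anything that can receive status updates from a fellow agent."""
    def receive_status(self, sender: str, message: str) -> None: ...


@dataclass
class Agent:
    name: str
    teammates: list[Teammate]

    def report_degradation(self, reason: str) -> None:
        """Honour the Basic Compact: tell every team member about an
        actual or impending failure, rather than failing silently."""
        for teammate in self.teammates:
            teammate.receive_status(self.name, f"{self.name} is degrading: {reason}")
```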

2. Agents must be able to model the intentions and actions of other participants

For example, are they having trouble? Or are they on a standard path proceeding smoothly? How have others adapted to disruptions to the plan?

The key concept here usually involves some notion of shared knowledge, goals, and intentions that function as the glue that binds agents’ activities together… No form of automation today or on the horizon can enter fully into the rich forms of Basic Compact that are used among people.

3. Human-agent team members must be mutually predictable

To be a team player, an agent not only needs to be able to model the intentions and actions of other participants, its own intentions and actions also need to be easily interpretable by others. It’s hard to coordinate effectively with random behaviour! There’s another ‘irony of automation’ lurking here too:

Although people will rapidly confide tasks to simple deterministic mechanisms whose design is artfully made and transparent, they are usually reluctant to trust complex agents to the same degree. Ironically, by making agents more adaptable we might also make them less predictable.

4. Agents must be directable

Agents need to be able to act autonomously, yet within some guidelines. Policies are one means of dynamically regulating system behaviour without changing code or requiring the cooperation of components being governed. At the extreme end of the scale, a policy that only permits one action gives very fine-grained direction. A broader policy permits more freedom.
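As a rough sketch of this idea (mine, not the paper’s): a policy can be modelled as a predicate over proposed actions, checked by a governing layer outside the agent’s own code. An allow-list of size one pins the agent to a single action; a larger allow-list grants more freedom. The action names and the `hold_position` fallback are assumptions for illustration.

```python
from typing import Callable

Policy = Callable[[str], bool]

def allow_only(*actions: str) -> Policy:
    """Build a policy permitting only the named actions."""
    permitted = set(actions)
    return lambda action: action in permitted

# Fine-grained direction: exactly one action is permitted.
strict_policy = allow_only("hold_position")

# A broader policy permits more autonomy.
loose_policy = allow_only("hold_position", "scout_ahead", "report_status")

def act(agent_choice: str, policy: Policy) -> str:
    # The governing layer checks the policy without changing agent code.
    return agent_choice if policy(agent_choice) else "hold_position"
```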

5. Agents must be able to make pertinent aspects of their status and intentions obvious to teammates

This is a pre-requisite for challenge #3. An example is given of auto-pilot systems on aircraft that sometimes leave pilots baffled as to what the system is doing and why.

To make their actions sufficiently predictable, agents must make their own targets, status, capacities, intentions, changes, and upcoming actions obvious to the people and other agents that supervise and coordinate with them. This challenge runs counter to the advice sometimes given to automation developers to create systems that are barely noticed.

Looking at this from the perspective of 2020, it also reads to me like another call for interpretable systems.
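One lightweight way to act on the quote above (a sketch under my own assumptions, not the authors’ design) is for each agent to publish a structured status record covering the dimensions the paper lists: targets, status, capacities, intentions, and upcoming actions.

```python
from dataclasses import dataclass, field


@dataclass
class StatusReport:
    """A structured, teammate-readable snapshot of an agent's state,
    mirroring the dimensions the paper calls out."""
    agent: str
    current_target: str
    status: str                      # e.g. "nominal", "degraded"
    remaining_capacity: float        # 0.0 - 1.0
    intention: str                   # what the agent is trying to do next
    upcoming_actions: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.agent}: {self.status}, pursuing {self.current_target}; "
                f"next up: {', '.join(self.upcoming_actions) or 'nothing planned'}")
```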

6. Agents must be able to observe and interpret pertinent signals of status and intentions

It’s not enough just to make your own status and intentions obvious; you also need to be able to interpret the signals of others (a pre-requisite for challenge #2). In the field of Human-Centered Computing this is known as the Mirror Principle:

Every participant in a complex sociotechnical system will form a model of the other participant agents as well as a model of the controlled process and its environment.
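In code terms (my own loose interpretation of the principle, not a construct from the paper), this suggests each agent keeps a model of every other participant, refined as signals arrive:

```python
class MirroringAgent:
    """Maintains a simple model of each teammate, updated from the
    status signals they emit (the Mirror Principle, loosely rendered)."""

    def __init__(self, name: str):
        self.name = name
        self.models: dict[str, dict[str, str]] = {}   # teammate -> believed state

    def observe(self, teammate: str, key: str, value: str) -> None:
        # Each observed signal refines our model of that teammate.
        self.models.setdefault(teammate, {})[key] = value

    def believed_state(self, teammate: str) -> dict[str, str]:
        return self.models.get(teammate, {})
```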

7. Agents must be able to engage in goal negotiation

When the situation on the ground changes and the team needs to adapt, it may be necessary to negotiate new goals. Agents that can’t reason at this level will interfere with the maintenance of common ground.

8. Agents must support collaborative autonomy

Collaborative autonomy assumes that “the processes of understanding, problem solving, and task execution are necessarily incremental, subject to negotiation, and forever tentative.” Agents need to monitor ongoing progress, invoke replanning when necessary, and evaluate proposed changes from other agents. I can’t help but think the notion of a team leader (and some kind of leader election if needed) would simplify things here.
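For what it’s worth, here’s a toy version of that leader-election idea (mine, not the paper’s): the classic ‘highest id wins’ rule that underlies bully-style election. The agent names are made up for illustration.

```python
def elect_leader(live_agents: dict[str, bool]) -> str | None:
    """Pick the highest-id agent that is still responsive -- the rule at
    the heart of bully-style leader election. Returns None if no agent
    is alive."""
    candidates = [agent for agent, alive in live_agents.items() if alive]
    return max(candidates) if candidates else None

# If the current leader fails, the survivors simply re-run the election.
agents = {"agent-a": True, "agent-b": True, "agent-c": False}
leader = elect_leader(agents)   # "agent-b"
```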

9. Agents must be able to participate in managing attention

As part of maintaining common ground during coordinated activity, team members direct each other’s attention to the most important signals, activities, and changes. They must do this in an intelligent and context-sensitive manner, so as not to overwhelm others with low-level messages containing minimal signals mixed with a great deal of distracting noise.

Here we see again the need for other team members to have sufficient visibility so that they can take over or compensate for a failing agent.
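A crude sketch of that context-sensitive filtering step, with thresholds of my own choosing (the paper prescribes no particular mechanism):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str
    description: str
    severity: int          # 1 = routine chatter .. 5 = urgent

def worth_teammates_attention(signal: Signal, team_load: float) -> bool:
    """Context-sensitive filter: the busier the team (load 0.0 - 1.0), the
    higher a signal's severity must be before we interrupt anyone."""
    threshold = 2 + round(3 * team_load)   # 2 when idle, up to 5 when saturated
    return signal.severity >= threshold
```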

10. All team members must help control the costs of coordinated activity

If we’re not careful, an agent could spend all its time coordinating, meeting challenges 1-9, and not actually doing anything useful towards the shared goals of the team!

Partners in a coordination transaction must do what they reasonably can to keep coordination costs down. This is a tacit expectation – to try to achieve economy of effort.

This is a good rule for all-human teams to remember as well!

The last word

Sixteen years ago the authors signed off with this thought:

Agents might eventually be fellow team members with humans in the way a young child or a novice can be – subject to the consequences of brittle and literal-minded interpretation of language and events, limited ability to appreciate or even attend effectively to key aspects of the interaction, poor anticipation, and insensitivity to nuance.

We’ve still got a long way to go…