There have never been so many intelligent interfaces and automation systems assisting us in our everyday work. From airplane navigation to patient monitoring systems in hospitals, machines constantly provide essential feedback.
However, this does not make us immune to errors, sometimes very serious ones, stemming from the rigidity of machines or the passivity of users.
According to Laura Major and Julia Shah, one of the causes is a lack of cooperation between users and their interfaces. Designers need to think of user-machine interactions as a group dynamic and encourage both parties to work as a team and help each other. Here is what that means.
The limitations of automation systems
In 2009, an Air France Airbus took off from Rio de Janeiro bound for Paris. Take-off went smoothly and the plane reached cruising speed. Yet shortly after the passengers were allowed to get up again, ice crystals formed on the airspeed sensors and disrupted their readings. Faced with this situation, the plane automatically switched to a degraded mode in which the pilots have to control the aircraft manually. The problem was that this change was not clearly indicated, and the pilots did not notice it.
As the plane began to bank dangerously to the right, the pilots had to correct the trajectory themselves. Clearly very confused, they pitched the nose up, which only made things worse: the aircraft stalled and crashed into the sea, killing everyone on board.
This tragic example shows the importance of active collaboration, and in particular the terrible effects of a user's passivity toward their device. As autopilot systems have taken over control of aircraft, pilots have grown used to letting the system operate without practicing manual flying.
Numerous studies have shown that users faced with a reliable automated system become more and more dependent on it and disengage from the interaction. They increasingly lack what is called situational awareness: the state in which users are sufficiently engaged in the interaction to manage the operation and compensate for any unexpected event.
How have companies prevented this situation since the accident? The solution, paradoxically, is to reduce the automation of tasks and give some work back to the users. They should retain a degree of control even when automated modes are engaged, so that they stay focused on their mission and can respond to an unforeseen event. This opens up many perspectives for human-machine collaboration.
Making man and machine collaborate
When NASA’s Apollo mission considered how to reliably get people to the moon, they wondered how much human judgment could be relied upon to manage delicate space operations.
Their thinking led to a new perspective on human-machine collaboration. They distinguished between situations where automation helps human operators do their jobs (for example, under the extreme constraints of the take-off and landing phases) and others where human intervention is essential to override the machines (during unexpected situations).
But they also defined situations where collaboration between human and machine could reduce the uncertainties and risks of a mission: human input allows the automated process to be corrected and adjusted in the face of unpredictable circumstances by providing creative solutions.
These principles are applied today in the autopilot systems of mainline aircraft. The Boeing flight management system, for example, has adopted flexible automation: the pilot can take over operations that are normally automated when circumstances become too complex. In 1985, the pilot of a China Airlines Boeing successfully regained control of his aircraft after an engine failure by deactivating the automated controls and manually stabilizing the aircraft's trajectory.
The basis of man-machine teamwork
Just like teamwork between people, human-machine collaboration requires specific abilities to work smoothly. The interaction is all the more challenging because the user will not necessarily have hours to train on an intelligent device (think of a delivery drone encountered in the street). At the same time, these devices will never have an extensive knowledge of our behavior. The biases and mental models of both parties in the interaction are not necessarily known or accessible.
So how do we get user and robot to coordinate and perform at a high level? This question has been studied closely in research on teamwork in professional sports. Researchers have noted one key success factor in particular: the ability of team members to complement one another's talents.
Psychology researchers Nancy Cooke and Rob Gray compared two baseball teams: one made up of top players who rarely played together, and another less talented but more experienced at working together. The latter had learned to predict the movements and strategies of their teammates, and were therefore more coordinated and complementary. As a result, they outplayed the other team.
This is also reflected in a process called "disruption training". A team that has considered and thought through all possible error scenarios together before a game is much more adaptable: teammates have built shared mental models for coordinating their movements, and are therefore able to switch roles without difficulty.
Building collaborative interfaces
How do these insights apply to building a collaborative user interface?
A collaborative interface must first encourage users to monitor and correct the actions of devices such as self-driving cars or exploration robots. Data visualization tools are useful here: they help the user understand the device's intent and correct it if necessary.
So-called ecological interface design seeks not just to reduce complexity, but to make the user the active monitor of the system's actions. Drivers of self-driving cars, for example, stay on the lookout for changes in the vehicle's lane position or direction. Because the interface demands an active effort of visualization, they react more quickly to unexpected events.
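As a toy illustration of what keeping the user in the monitoring loop might look like, here is a minimal Python sketch that compares a vehicle's planned heading with its actual heading and prompts the driver when the two diverge. The function names and the alert threshold are hypothetical, invented for this example; they do not come from any real driving interface.

```python
# Toy sketch of the monitoring idea above: surface the vehicle's intent
# (its planned heading) next to what it is actually doing, and nudge the
# driver when the two diverge. Threshold and names are illustrative only.

def heading_deviation(planned_heading: float, actual_heading: float) -> float:
    """Smallest angular difference, in degrees, between plan and behaviour."""
    diff = abs(planned_heading - actual_heading) % 360
    return min(diff, 360 - diff)

def monitor_step(planned_heading: float, actual_heading: float,
                 alert_threshold: float = 10.0) -> str:
    deviation = heading_deviation(planned_heading, actual_heading)
    if deviation > alert_threshold:
        # Keep the driver engaged instead of silently correcting.
        return f"Check vehicle: drifting {deviation:.0f} degrees off the planned heading"
    return f"On plan (deviation {deviation:.0f} degrees)"

print(monitor_step(planned_heading=90.0, actual_heading=92.0))   # On plan
print(monitor_step(planned_heading=90.0, actual_heading=110.0))  # Check vehicle
```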
Conversely, the robot must also be equipped with mental models that allow it to compensate for typical human cognitive biases. For example, surgeons and their nurses sometimes leave surgical materials inside a patient's body; robot sensors can verify that all tools have been removed. Designers especially need to manage the attention of users, who are often distracted by external elements.
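To make the idea concrete, here is a minimal sketch, in Python, of the kind of count check such a sensing robot could run before the patient is closed up. The function, the data, and the alerting behavior are hypothetical, invented for illustration rather than drawn from any real surgical system.

```python
# Hypothetical sketch: a tool-count safeguard compensating for a known
# human lapse. Names and data are invented for illustration.

def check_tool_count(tools_checked_in: dict[str, int],
                     tools_detected_out: dict[str, int]) -> list[str]:
    """Return the tools that went in but were not detected coming out."""
    missing = []
    for tool, count_in in tools_checked_in.items():
        count_out = tools_detected_out.get(tool, 0)
        if count_out < count_in:
            missing.append(f"{tool}: {count_in - count_out} unaccounted for")
    return missing

# The interface alerts the team only when the counts disagree,
# compensating for the bias without adding constant noise.
checked_in = {"sponge": 10, "clamp": 4, "needle": 6}
detected_out = {"sponge": 9, "clamp": 4, "needle": 6}

alerts = check_tool_count(checked_in, detected_out)
if alerts:
    print("Count mismatch before closing:", "; ".join(alerts))
```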
To do this, interfaces must be neither too simple nor too complex, and the system must take over when human bias is too strong. For example, the ABS in modern cars corrects a common human reflex: slamming on the brakes in the face of danger, even on slippery surfaces. By automatically modulating the brake pressure in these conditions, the car keeps the wheels from locking and avoids a dangerous skid.
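As a rough illustration of this "take over when human bias is too strong" principle, here is a minimal Python sketch of anti-lock logic. The slip threshold and the 50% pressure reduction are made-up values, and real ABS controllers run on dedicated hardware with far more sophisticated slip estimation; this only shows the shape of the idea.

```python
# Deliberately simplified sketch of anti-lock logic, not a real ABS controller.

def wheel_slip(vehicle_speed: float, wheel_speed: float) -> float:
    """Fraction by which the wheel turns slower than the vehicle moves
    (0 = rolling freely, 1 = fully locked)."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def abs_brake_pressure(requested: float, vehicle_speed: float, wheel_speed: float,
                       slip_threshold: float = 0.2) -> float:
    """Soften the driver's requested brake pressure when the wheel starts to lock."""
    if wheel_slip(vehicle_speed, wheel_speed) > slip_threshold:
        # Human instinct says "brake harder"; the system briefly eases off instead.
        return requested * 0.5
    return requested

# Panic stop on ice: full braking is requested, but the wheel is nearly locked.
print(abs_brake_pressure(requested=1.0, vehicle_speed=20.0, wheel_speed=5.0))   # 0.5
print(abs_brake_pressure(requested=1.0, vehicle_speed=20.0, wheel_speed=19.0))  # 1.0
```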
As you can see, the interaction between users and smart devices raises new questions for interface design. To tackle them, designers need to make users and robots work together and take advantage of their complementary skills. While the user needs to remain alert to the errors of automation systems, intelligent interfaces must counteract human bias. By combining the two, we get a system that works twice as well.