How do users feel about and judge machine mistakes?

Have you ever gotten upset at a computer that keeps crashing?

We might assume we have less sympathy for malfunctioning apps or poorly designed algorithms than for clumsy humans. Yet our feelings are more ambiguous than that.

Rather than getting angry or assigning blame, users may be more sympathetic to machine errors than to human ones. According to César A. Hidalgo, a human-machine interaction (HMI) researcher, this is mainly because people judge machines not by their intentions but by their outcomes.

Even when facing the results of biased or unfair decisions, users never really attribute moral intentions to machines, only a functional duty to do their job well.

Here are four fun facts from behavioral science that shed fresh light on human-machine interaction.


AI decisions and human judgement

A pixelated picture of Barack Obama depixelated by a machine-learning model

Whether they help make decisions, hire new employees, or recognize faces, we know that AIs can be influenced by algorithmic bias. Algorithms are trained on data that carries systemic human biases, which sometimes results in discriminatory decisions against certain individuals.
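To make this mechanism concrete, here is a minimal Python sketch, entirely hypothetical and not taken from Hidalgo's experiments: a toy "hiring model" fitted only to past human decisions ends up reproducing the disparity contained in those decisions. All group names, rates, and thresholds are invented for illustration.

```python
# Minimal, hypothetical sketch: a model trained on biased historical decisions
# reproduces that bias. All numbers and group names are invented.
import random
from collections import defaultdict

random.seed(0)

# Simulated historical hiring records: (group, qualified, hired).
# Past recruiters hired qualified candidates from group "A" far more often
# than equally qualified candidates from group "B".
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    hire_rate = {"A": 0.9, "B": 0.4}[group] if qualified else 0.05
    history.append((group, qualified, random.random() < hire_rate))

# "Training" here is just estimating, for each (group, qualified) pair,
# how often past recruiters said yes -- the simplest possible model of the data.
counts = defaultdict(lambda: [0, 0])  # (hires, total)
for group, qualified, hired in history:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def model_hires(group, qualified):
    hires, total = counts[(group, qualified)]
    return hires / total > 0.5  # hire if past recruiters usually did

# The learned rule reproduces the historical disparity: an equally qualified
# candidate from group "B" is rejected purely because of past decisions.
for group in ["A", "B"]:
    print(group, "qualified candidate hired?", model_hires(group, True))
```

Real systems are of course far more complex, but the dynamic is the same: biased data in, biased decisions out.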

But how do humans feel about machines that treat certain people unfairly? Do they resent the machine more than they resent individuals who purposely discriminate against them?

César Hidalgo, a former MIT researcher and economist, tried to answer these questions through social experiments. He presented participants with several discrimination scenarios and asked them to judge their fairness with either a machine or a human in charge. These scenarios ranged from an HR manager who never selects candidates of a specific origin despite equal qualifications, to a police squad that detains many innocent individuals of the same origin.

In all of these cases, participants judged humans to be more intentional, and therefore more responsible. Seen as more aware of their actions, humans are blamed more strongly for their bad intentions than machines are for their miscalculations. Nevertheless, when asked who should replace these discriminating individuals, participants favored other, fairer humans.


Humans’ ambiguous perception of work automation

This ambiguity resurfaces around the question of work automation and job replacement, which Hidalgo studied through similar experiments.

He presented participants with narratives in which company workers were replaced either by an AI-driven machine or by a younger, more productive foreign worker, and asked how they felt about these scenarios across different sectors and situations.

Surprisingly, participants generally preferred having their job replaced by a machine rather than by a foreign worker. While their preferences vary by situation (they are more accepting of a driver being replaced by an autonomous truck than of a teacher being replaced by a teaching robot), they broadly agree in favoring automation over another worker.

There are several possible explanations for these feelings. Participants may feel that technological automation is inevitable, while replacement by a foreign worker threatens their sense of belonging. They may also feel the latter threatens them more personally, since it is also more frequent. Human replacement can also seem more unfair, as foreign workers with the same qualifications have no more right to the job than they do.

This may explain why automation seems to provoke less extreme emotions than the offshoring of companies to low-wage countries in the 1990s and 2000s.


No indulgence for machine-involved accidents

Are Google self-driving cars more responsible than human drivers?

As self-driving cars become more and more of a reality, we might wonder how users perceive their responsibility compared to that of human drivers.

To answer this question, Hidalgo and his team presented participants with various road-accident scenarios involving either a human or an AI driver. The accidents varied in severity depending on internal or external factors, and some scenarios involved choices that harmed either the driver or bystanders.

The findings of this experiment first show the great responsibility placed on autonomous cars. Participants judge accidents involving autonomous cars more negatively, perceiving them as causing more harm. One reason is that they find it easier to put themselves in the shoes of a human driver than of a driving machine: they readily recognize that they might have reacted the same way as the human driver, especially when an external factor is involved (such as a tree falling onto the road).

Thus, they are less forgiving of machines that cause accidents, expecting them to be more reliable and safe.


Machines are judged solely on outcomes

What conclusions can we draw from all of these studies?

First, looking at the overall relationship between perceived harm and intent, people attribute more intent to human actions than to machine actions. Paradoxically, however, they still forgive human actions more easily than machine actions. That is because they are more willing to see human errors as bad luck, while machine errors are seen as flaws to be corrected.

The relationship between intention and injustice further qualifies this human judgement. Situations that imply the most intention from the actor (such as insults or discrimination) are, unsurprisingly, judged more negatively for humans than for machines: humans are held responsible for their bad intentions, while machines are seen as having no intentions of their own. Conversely, in situations involving little intention (such as accidents), machines bear more of the blame, since we assume they are programmed to avoid mistakes.

Finally, looking at the relationship between perceived injustice and harm caused, we find that the less harm is done to the humans involved, the more machines are seen as the main culprits. Conversely, when the harm inflicted on the victims is great, human actors are judged very negatively.


All in all, we see two very different modes of judgement.

When human actors are involved, users judge their actions by their intentions. Humans can make mistakes, but if they are ill-intentioned, they are held more responsible for their actions.

Machines, on the other hand, are evaluated by their outcomes. If they fail to avoid damaging mistakes, they are judged negatively no matter the circumstances. The upside is that in situations normally judged very severely (such as discrimination or deliberate insults), machines are judged without excessive severity, since no intention is involved.

But this also means that designers need to minimize the indirect harm and injustice that software and intelligent applications can produce, because there is no sympathy for a badly designed algorithm!
