Since the advent of industrial design in the 1960s, designers have learned better than anyone how to adapt their products and applications to the implicit needs of users.
This user-friendliness of machines has now been taken to its logical extreme. Because they talk with us, anticipate our mistakes, and respond with politeness and empathy, smart connected devices have practically become our best friends.
According to Cliff Kuang and Robert Fabricant in User Friendly, product designers have learned to create a genuine human-machine collaboration. But not without limits: users have psychological expectations and a cultural imagination that cannot easily be overridden.
Here is how these machines can build a better collaborative relationship with their human users.
1. Building polite machines
Smart products and technologies such as autonomous cars are increasingly learning to fit users’ expectations. But whether at Volkswagen or Tesla, the race to build autonomous cars is not defined solely by accumulating data or making driving safer.
It poses a completely different problem: that of user acceptance, and the trust users can place in these technologies.
During the 1990s, the sociologist Clifford Nass studied not only how users operated computers, but also how they felt about the interaction. He wanted to see whether humans behaved differently with a machine than with another human. Strangely enough, he found that users expected as much politeness and friendliness from a machine as from a person. When machines interact with them, users expect them to behave like humans and to give reliable communicative cues.
In the same way, drivers can only really trust an autonomous car if it communicates with them predictably. To earn that trust, the car needs to tell the driver what it is doing, when it is doing it, and why. That is what the designers of autonomous cars have realized in their work. Audi’s designers, for example, added a screen that tells the driver where and when the car plans to turn right or left. The car also gently warns the driver, with tactile feedback on the steering wheel, when it decides to take over.
Other self-driving-car designers are building audio and light features that address pedestrians on the street. In one test, pedestrians who saw the car slow down respectfully before stopping felt that it had seen them and was in control of the situation. Better still, an LED display on the car politely signaled that they could cross. This shows how much a machine’s personality matters to users: the friendlier and more respectful a car is, the more serene people feel when dealing with it.
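The communication pattern described above — announce what the car is doing, when, and why, and pair a takeover request with haptic feedback — can be sketched in a few lines of Python. Everything here (class name, methods, message formats) is a hypothetical illustration, not any carmaker’s actual software:

```python
# Minimal sketch of an intent-signaling interface for an autonomous car.
# All names and message formats are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class AutonomyHUD:
    """Collects the cues shown to the driver on the dashboard."""
    messages: list = field(default_factory=list)

    def announce(self, action: str, when: str, why: str) -> None:
        # Tell the driver what the car will do, when, and why,
        # before it actually does it.
        self.messages.append(f"{when}: {action} ({why})")

    def request_takeover(self) -> str:
        # Pair the visual cue with tactile feedback on the wheel.
        self.messages.append("Haptic pulse on steering wheel")
        return "Please take the wheel"


hud = AutonomyHUD()
hud.announce("turning left", "in 200 m", "route to destination")
alert = hud.request_takeover()
```

The point of the design is sequencing: intent is announced before the maneuver, so the driver is never surprised by the car’s behavior.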
2. Considering users’ cultural imagination
Far from being open to all, the Internet as we know it is inaccessible to many users from other cultures. Because it is built on metaphors natural to Western minds, it can prevent some people from understanding how to use it.
Just look at what we call the basic tools of Internet usage. We call the system the “web,” a web we can browse from thread to thread; we consume “streams” of content; we check the “in-box” of our email. All these notions rest on implicit knowledge (an “in-box” implies content that is deposited and can be consumed whenever we want, unlike a “feed”), and that knowledge is what lets us intuitively understand what we can do with the system. Designers need to know about, and play on, this deeply human imagination to create new uses.
For example, Bill Atkinson, a legendary designer, was behind the user-friendly interface of the pioneering personal computer, the Macintosh. He faced the problem of making the austere features of earlier computers accessible to the public. In particular, he realized the power of a familiar metaphor, the “desk,” to create a more concrete experience. He and his team envisioned a space, “the desktop,” where users could touch “files” and move them into “folders.” They extended the metaphor even further by adding the “Trash” to get rid of files.
And the least we can say is that such naturally ingrained analogies (what psychologists call “embodied cognition”) have made computer operating systems feel deeply human.
3. Empathizing with human disabilities
Looking at the history of product design, you might think that the best inventions were tailored to the average consumer. Yet inventions like email and the telephone go against this story. Designers have long found inspiration in users’ specific conditions, such as disabilities or cultural differences, and created solutions adapted to their needs.
Alexander Graham Bell was, like his mother, hard of hearing, which drove his curiosity about acoustics and speech. When he experimented with the device that would become the telephone, he was inspired by his own condition and his difficulty perceiving sound. When Vint Cerf imagined email, he thought of it as a way to stay in touch with his hearing-impaired wife, as a substitute for the telephone.
Today, a culture of innovation such as Microsoft’s is moving on issues like accessibility. Driven by designers like de los Reyes, Microsoft’s teams see it as an opportunity to tackle the needs of different categories of users. But it’s not only that: they realized accessibility can help them create a better experience for everyone.
For example, adaptive reading typefaces for dyslexic users led to the idea of screen-reading and real-time captioning technology. That, in turn, led to Skype’s real-time translation feature, which now benefits many businesses.
In all these cases, machines are designed to empathize with the specific needs and problems of individuals with disabilities, and they learn to address their users’ personal difficulties.
4. Designing honest and tactful speaking devices
If there’s one technology that can give us a human experience, it’s voice assistants and speech-recognition devices. But the closer these smart, connected objects get to human behavior, the more mixed the reactions they provoke.
More than ever, an assistant like Microsoft’s Cortana needs to know and display basic social etiquette, so as not to overwhelm the user. At least, this is what its design teams have understood: they have pushed it to be transparent and honest about its own limitations. Cortana does not mistake itself for a human to the point of adopting out-of-place behavior. It admits its shortcomings when it doesn’t understand a situation, proposes alternative solutions, or withdraws entirely when it understands that it is of no use.
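That fallback etiquette — answer when confident, admit limits and offer an alternative when unsure, and withdraw gracefully when out of scope — can be sketched as a simple policy. This is a hypothetical illustration with made-up thresholds and wording, not Cortana’s actual logic or API:

```python
# Hypothetical sketch of an assistant's fallback etiquette.
# Confidence thresholds and reply texts are illustrative assumptions.

def respond(query: str, confidence: float) -> str:
    """Pick a reply style based on how well the request was understood."""
    if confidence >= 0.8:
        # Confident: answer directly.
        return f"Here is what I found for '{query}'."
    if confidence >= 0.4:
        # Unsure: be honest about the limitation and propose an alternative.
        return (f"I'm not sure I understood '{query}'. "
                "Would you like me to search the web instead?")
    # Out of scope: step back rather than guess.
    return "Sorry, I can't help with that."
```

The design choice worth noticing is that the low-confidence branches never bluff: the assistant either discloses its uncertainty or declines, which is exactly the social transparency the paragraph above describes.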
Google’s Duplex experiment shows how conditional users’ acceptance of voice-synthesis technology is. In a demo, the assistant called a hairdressing salon to make an appointment on a user’s behalf. But the recording of the conversation caused an uproar on social networks, because the voice assistant had not revealed its true nature. Even though Google later clarified that the assistant would disclose that it is a robot, the episode shows that users expect a product to be honest and modest, all the more so when it comes close to a human voice or appearance.
As you can see, the products of the future will be judged not only on how well they fit a need but also on their friendly personality. Politeness, modesty, empathy, and honesty are the traits smart technologies need in order to gain the public’s trust. And this is only the beginning of human-machine interaction.