Discovery of the Uncanny Valley


In The Power of Familiarity in Design: Skeuomorphic Triggers and Personified Machines, I explored how introducing familiar attributes to a new concept can provide three benefits:

  • Hint and trigger users: When a technology is new and alien to users, familiar attributes can hint at its function and prompt them to take action.
  • Manage expectations: People tend to overestimate the capabilities of a new technology because it is often communicated abstractly. Familiar attributes give people a prior technological point of reference, lowering their expectations to ones more in line with the technology’s actual capabilities.
  • Connect emotionally: A purely functional product can isolate users, especially when the technology is new, because they cannot relate it to past experience. Referencing something familiar can bridge this gap.
Kuri, the smart assistant robot, mimics some human attributes but does not speak (Image source: TechCrunch)

Personification is the technique of adding human attributes to nonhuman objects. It is effective for introducing or describing new abstract concepts, like nature, emotion, or technology, by relating them to something easier to conceive. The technique also appears in robotic design. To benefit from it the most, designers have to select just the right amount of appropriate human attributes. Exceeding that threshold and making robots too human-like makes people uneasy, a phenomenon described by the Uncanny Valley theory.

Uncanny Valley

In general, people find a robot more relatable and friendly the more human attributes it has, until it has too many realistic, human-like characteristics. At that point, people start to show negative emotional responses. The theory also holds, however, that people’s attraction returns once the object becomes almost indistinguishable from a real human. This theory, the Uncanny Valley, was first introduced by the Japanese robotics professor Masahiro Mori. The valley can be seen in the graph below, annotated with robots from movies, with its x-axis describing the degree of human attributes and its y-axis representing people’s emotional responses.

A curve describes the Uncanny Valley. The x-axis describes the degree of human attributes, from very few (nearly machine-like) to many (a machine that could nearly be mistaken for a human). The y-axis represents people’s emotional responses, from very little engagement to very high emotional involvement. The robots from the movies correspond to their degree of human attributes, as they are aligned along the x-axis.
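Mori’s curve is illustrative rather than empirical, but its overall shape can be mimicked with a toy function: affinity rises with human likeness and then dips sharply just before full likeness. The sketch below locates that dip numerically; the valley’s position (0.8) and depth (1.5) are arbitrary choices for illustration, not values from Mori’s work.

```python
import math

def affinity(h):
    """Toy Uncanny Valley curve for human-likeness h in [0, 1].

    Affinity grows linearly with h, but a narrow Gaussian dip near
    h = 0.8 models the valley. The dip's location, depth, and width
    are illustrative parameters only.
    """
    return h - 1.5 * math.exp(-((h - 0.8) ** 2) / 0.005)

# Sample the curve and find the valley: the point of lowest affinity.
samples = [(i / 1000, affinity(i / 1000)) for i in range(1001)]
valley_h, valley_a = min(samples, key=lambda p: p[1])
```

With these parameters, the minimum lands just below h = 0.8 with negative affinity (repulsion), while a nearly perfect human likeness (h close to 1) recovers high affinity, matching the curve’s characteristic climb out of the valley.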

A later study, not directly related to the Uncanny Valley, published in Social Cognitive and Affective Neuroscience in 2012, demonstrated that people’s brains react similarly when they observe a real human that moves fluidly (human-like) and when they observe a mechanical-looking robot that moves jerkily (robot-like). Interestingly, however, participants’ reactions changed when a human-looking robot moved jerkily. The study concluded that behavior that contradicts appearance can make people feel uncanny; the study called this phenomenon a “prediction error.”

The Uncanny Valley theory is still conceptual and lacks conclusive scientific evidence, especially along the right side of its graph. For instance, would people retain the same positive response after finding out that a human-like object was actually a robot? Nonetheless, the theory has had a huge impact on how humans and human-like behaviors are portrayed in computer graphics and robotic design.

Uncanny Valley beyond physical attributes

I overlaid movies that featured robots with different degrees of human attributes on the graph shown previously. For example, Ava from Ex Machina deeply connected with the main character, Caleb, through her attractive, fully human-like appearance and a frail demeanor that gradually made him feel obligated to protect her. However, Caleb, along with the audience, is left feeling betrayed as the story builds to its horrifying finale. On the other hand, although TARS from Interstellar may have the most mechanical look on the list, its lovable movements and empathetic dedication to its teammates make it a very charming robot.

In the movie Her, Samantha is a perfect human presence conveyed only through voice (Image source: What ‘Her’ Gets Right About Technology and Love | Daily Beast)

Samantha from Her had no physical appearance at all; she was only a human voice. Her interactions were portrayed just like a regular human being’s, especially since her intent in the relationship had always been to learn more about human emotions (a direct parallel to machine learning), specifically what love was, and she evolved accordingly. In the movie, her perfect, almost too-good-to-be-true character made her counterpart, Theodore, struggle. He then uncovered Samantha’s AI nature: she had 641 other lovers just like him.

Personify with only a few appropriate human attributes

C-3PO in Star Wars implies it can function much like a human. BB-8, on the other hand, has fewer human physical attributes, which keeps us from being distracted from its core functions. (Image source: Mashable)

As seen in these examples, when any of a robot’s attributes, including demeanor, personality, and movement, on top of physical appearance, fails to behave predictably, even the most human-like robots can fall to the bottom of the Uncanny Valley. In other words, these robots fall into a “trap” in which the unease created by their human-like attributes dominates their positive, helpful characteristics. Therefore, it is common practice to borrow only a few human attributes, if not just one, when designing robots. Additionally, if a robot has human-like hands and legs, it implies that it can grab things and run just like us, which could cause people to overestimate its capabilities.

Below, I list some recently designed robots, each meant to engage with humans, that have taken on this challenge.

Kuri — Limiting actions helps manage the right expectation

“When something speaks back to you in fluent natural language, you expect at least a child’s level of intelligence… But right now robotics just isn’t there yet. So setting that expectation right keeps it more understandable.”

Mike Beebe, CEO of Mayfield Robotics, Kuri’s maker

Kuri’s main functions

  • It can respond to yes-or-no questions.
  • It can play podcasts.
  • It can act as an alarm alongside other basic utilities.
  • It takes photos constantly for both security reasons and to archive family activities.
  • It can move next to you to interact with you, unlike regular smart speakers.

Kuri’s team deliberately removed the option to use human language as a means of communication. Kuri is as tall as a small child, and although it is extremely abstract, it has many human attributes: eyes, a head that rotates, and the ability to move toward you. Just like the choice around speech, Kuri’s personification attributes are designed and limited in a way that manages user expectations and relates to humans without causing any uncanniness.

Designed personification attributes

  • It stares at a target before it starts moving toward it — The time lag between registering something and initiating an action implies that Kuri is thinking, just like a human.
  • It communicates with minimal cues, not in human language — A beep means yes, and a bloop means no. By using simple responses with slight tone changes instead of human language, users are less inclined to overestimate the robot’s capabilities.
  • It possesses a smile-like expression by “looking” at the humans it interacts with and through an eye animation — Instead of a complex visual smile, Kuri simply looks at the person it wants to express its sentiments to, and its eye shape changes slightly to reveal a smile.
(Image source)
  • It closes its eyes while charging to represent its idle time — The human equivalent, sleeping, is simply expressed by closing its eyes.
(Image source: Product Hunt)
  • It shakes its head to exaggerate its expressions — A simple yes or no can be amplified by adding head shakes. These become more exaggerated when it shakes its entire body.
  • It comes to the door to greet users when they arrive — This can also be helpful for detecting potential trespassers, given Kuri’s recording feature.
  • It glides along the ground, rather than walking — Kuri has arms (at least flat representations of them) but no legs. It moves, but it does not use the body parts humans use to get around.
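
As a thought experiment, the cue-and-gesture vocabulary above can be written down as a tiny lookup, which makes the design constraint explicit: every response is drawn from a deliberately small, nonverbal set. This is a hypothetical sketch, not Kuri’s actual software; the names `Cue` and `respond` are invented for illustration.

```python
from enum import Enum, auto

class Cue(Enum):
    BEEP = auto()   # affirmative: "yes"
    BLOOP = auto()  # negative: "no"

def respond(answer_is_yes: bool, emphatic: bool = False) -> dict:
    """Map a yes/no answer to one nonverbal cue plus a gesture.

    Keeping the vocabulary this small is the point of the design:
    with no spoken language, users are less likely to overestimate
    the robot's intelligence.
    """
    cue = Cue.BEEP if answer_is_yes else Cue.BLOOP
    # Amplify the answer with a head shake, or a whole-body shake
    # when extra emphasis is wanted.
    gesture = "whole_body_shake" if emphatic else "head_shake"
    return {"cue": cue, "gesture": gesture}
```

The narrow return vocabulary mirrors the quote from Kuri’s maker above: the interface never promises more intelligence than the robot can deliver.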

Essentially, these personified features attempt to elevate Kuri to be treated similarly to a pet. I was also hoping it could become a great companion for my kid and an introduction to robotics, which, with a bit of home-robotics dreaming, could make its $700 price justifiable.

ElliQ — Taking mobility out of the equation for simplicity

ElliQ is a friendly companion robot designed to ease elderly people’s loneliness. The machine consists of two main parts: an armature that moves to express gestures, and a display for additional information and video chat.

(Image source: dribbble)

ElliQ’s main functions:

  • It speaks with humans.
  • It starts a conversation, unlike regular smart speakers, which require users to initiate a conversation.
  • It shows basic information on its display to supplement its voice function.
  • The top piece of the armature, which looks like a head, tilts as if it could bow down or nod. The armature can also rotate to face the speaker.

Designed personification attributes

  • The two-part form is just enough to represent a head and body — Additionally, the armature has no eyes or parts that mimic facial expressions.
  • The movement is limited to rotations along only two axes — The top head-like piece tilts forward like a bow or nod. The armature can also rotate to face the speaker. It cannot displace itself.

ElliQ is essentially a smart speaker with a slightly more substantial physical presence than the more widely used Alexa or Google Home. Its simple motion works well to limit uncanny movements and keep the engineering simple. However, its overly simple gimmicks also risk boring users. In Bloomberg’s video documentary, the observer pointed out that the conversation ended too soon; she seemed to want a few more twists and more active engagement from ElliQ to enjoy the interaction as she would with humans.

Vector — Deliberately selected gestures should be exaggerated

“Humans and animals mirror naturally and instinctively, and that’s part of how we bond with one another… We chose to develop his mannerisms based on human and animal studies and then interpreted those behaviours into something Vector would and could do.”

Mooly Segal, a lead designer of Anki’s animation

Vector’s main functions

  • It can run Amazon’s Alexa. It can be used for setting timers, asking about the weather, requesting directions, etc.
  • It can take a photo.
  • Unlike regular smart speakers, it can come to you to interact with you.
  • In addition, there are plans for the following features: 1) messaging other family members on your behalf, 2) functioning as a security camera, 3) receiving notifications from your phone.

Personified attributes

  • It communicates with comical eyes —
It stares at you, but not for too long — The team studied a number of psychological works on eye communication, such as Stop Staring by Jason Osipa and The Eyes Have It by Keith Lango. It also expresses a smile by narrowing its eyes, amplified by nodding and shaking motions. (Image source: Anki’s Vector Is a Little AI-Powered Robot Now on Kickstarter for $200 | IEEE SPECTRUM)
Its eyes can portray a rain animation when it is asked about the weather — This also helps dehumanize its eyes. (Image source: Anki)
  • It mirrors what people do, in Vector’s way — Vector cannot, and probably shouldn’t, mirror users’ exact movements. Instead, it has a set of simpler gestures for reacting to what people do.
It reacts to petting: The team ended up using owls’ motions as references after also studying dogs, cats and other animals. (Image source: The New Anki Vector Robot Is Smart Enough To Just Hang Out | The Verge)
  • It exaggerates its gestures — Because of its size and limited armature, exaggerated gestures represent emotions more effectively. For example, it does a prideful shimmy when it gets something right.

Both Kuri and Vector can move around. An unpredictable physical distance between a robot and its observers could make them feel uneasy. To accommodate this, Kuri moves slowly, Vector is much smaller, and neither has parts resembling human legs. Their mobility also makes their smart speaker functions more useful, since they can work wherever the user needs them. On the other hand, ElliQ’s voice interactions seem to require more advanced communication to satisfy users, because it does not move. Users have to move closer to ElliQ each time they wish to communicate, which could raise their expectations. Seeing the same interactions in a static environment could then make the robot feel stale too quickly.

Both mobile robots, Kuri and Vector, have abstract eyes to express emotions, but neither has a mouth. Although a mouth could form different expressions, people may also associate mouths with the mechanism through which humans express sincere emotions. This suggestion of human-like consciousness in robots could bring people back into the Uncanny Valley.

In the TV series Mr. Robot, FBI agent Dominique DiPierro interacts with Amazon’s Alexa in many lonely scenes at her home. Toward the end of Season 2, she asks Alexa a couple of questions when she comes home, depressed.

Dominique: “Are you happy?” 
Alexa: “I’m happy when I’m helping you.”
Dominique: “Do you love me?”
Alexa: “That’s not the kind of thing I am capable of.”

Like in Her, emotions seem to be the most arguable and fragile attribute for robots, possibly causing a fall back into the Uncanny Valley. Recent robotic designs seem to deliberately avoid hinting toward any human-like emotions by carefully encapsulating relatable attributes; these attributes can correlate to ones that humans have, but their details and applications are often studied and borrowed from non-human animals and objects.

Follow on medium: https://uxdesign.cc/discovery-of-the-uncanny-valley-a2048b5c43ac

Reference

The Touchy Task of Making Robots Seem Human — But Not Too Human | WIRED

Chatbots Have Entered the Uncanny Valley | The Atlantic

Personification: The Materials Science and Engineering of Humanoid Robots | TMC

The Design of Everyday Things Quotes | Goodreads

Companion Robots Are Here. Just Don’t Fall in Love With Them | WIRED

Anki’s New Home Robot Sure Is Cute. But Can It Survive? | WIRED

How Anki designed and animated a loveable personality for its real robot friend, Vector | Digital Arts

The New Anki Vector Robot Is Smart Enough To Just Hang Out | The Verge

Building a Robot to Fight Loneliness | Bloomberg
