Here’s an experiment: You are in a burning building. Smoke everywhere, fire alarms blaring. An emergency robot appears in front of you, emerging out of the smoke. “This way,” it signals. Would you go?
Apparently, 100% of respondents in an experiment just like this followed the robot, even though they had hard evidence in their possession that it was prone to malfunctioning and could not be relied upon to be accurate all the time.
Trust is a means of divining intent and understanding context. When it is extended it acts as a risk reduction mechanism that allows us to approach a situation full of unknowns with a certain amount of confidence in its outcome.
When we encounter a robot that advises us to take a specific action in an emergency, our mental heuristics kick in and we assess what we generally know of robots:
- They are mostly infallible within their narrow contexts of operation, and they are tireless.
- They are utilitarian in nature, programmed to perform very specific tasks.
- They are machines that do not have an agenda.
- They have no intent or context beyond their programming.
These are very quick mental leaps most of us do not even consciously register. Processed at the very back of our minds, they lead us to snap decisions when things get critical.
They are supposed to. That is how the brain saves on computing power and acts ultra-fast in an emergency. Unfortunately, as in this case, they can also trip us up. The study showing that humans will blindly trust a robot despite possessing prior evidence of its potential for failure raises troubling questions about our relationship with machines in general.
But cars, to date, just like other machines, have been functional, sleek, stylish, easy to use and … dumb. In the age of smart machines, the complexity of our interactions with them rises (as when smartphones eavesdrop on their owners) while our understanding of the changed context of our relationship with them does not.
That is inherently dangerous.
Trust is both context- and culture-sensitive, and it relies on the filter of personal experience to activate specific failsafes. We live in a world where trucks drive themselves, cars soon will too, and the skies over our heads may soon fill with unmanned autonomous target-acquisition drones (a.k.a. intelligent flying killing machines). It is important to understand that the ‘intelligence’ exhibited by machines is not yet infallible (and may never be), and their increased IQ requires an equal increase in our understanding of how they work and where they can fail.
Humans Prefer Being Managed by Robots
Adding more weight to the study that showed we are willing to blindly trust a robot in an emergency is another one showing that humans work more happily under robots placed in middle-management positions.
The heuristics that kick in here are the same as in the previous study, with two additional and important conditions:
- Machines are placed in mediating positions rather than ones where they appear to be ‘overlords’.
- Removing ego and unclear motives from the command-and-control chain at work by placing a machine in a key position makes it function better for its human components.
Both of these illustrate areas where humans spend an enormous amount of brain power trying to work out motives and intent - trust, in other words. Because a machine seems transparent to us in that context, we automatically trust it more and are happier working with it.
Where do we go from here? First, we need to truly understand how trust works (and why). By understanding trust we begin to understand the workings of our own minds, the social structures we create, and the Venn diagram between the two.
Clarity in what we do next will be determined by how successful we are in the one task that only we humans can do: mature.