In 2016, Microsoft introduced an artificially intelligent chatbot named Tay on Twitter. By interacting with users, Tay learned new vocabulary. Unfortunately, the bot was quickly targeted by trolls, who taught it to spout racist and sexist commentary. After 16 hours, Microsoft withdrew the chatbot.
Tay was not an isolated incident, says Raji Srinivasan, professor of marketing at Texas McCombs. Tech giants including Facebook, Google, Apple, and Amazon have suffered black eyes from algorithms that offended or harmed users. In a 2017 survey, 78% of chief marketing officers said that one kind of algorithm error had hurt their brands: placing their online ads next to offensive content.
In new research, Srinivasan offers them some encouraging news: Consumers are faster to forgive a brand for an algorithm failure than for a human one.
"They assume the algorithm doesn't know what it's doing, and they respond less negatively to the brand," she says.
But the news comes with a caution. Consumers become less tolerant when an algorithm tries to mimic a human being.
"When you're talking to Alexa, it has a name and a voice. When it starts asking irritating questions, you're likely to hold it more responsible for the mistakes it makes." — Raji Srinivasan
Projecting Minds into Algorithms
In recent years, consumers have become more aware that unseen programs determine much of what they see online. "Every time you go to Facebook or Google, you're not interacting with humans but with technology designed by humans," Srinivasan says.
How do people react, she wondered, when that technology upsets them? The answer, she suspected, depends on whether they unconsciously view it as having a mind.
According to the psychological theory of mind perception, she says, "Humans tend to assign mind, more or less, to inanimate objects. By assigning more mind to objects, they assume that the inanimate objects have more agency and are capable of taking actions."
If someone knows theyโre dealing with a mindless algorithm, Srinivasan reasoned, they might be less inclined to blame it for a blooper.
To test the idea, she worked with Gülen Sarial-Abi of Denmark's Copenhagen Business School. In a series of 10 studies with a total of 2,999 participants, they presented examples of algorithm errors and measured participants' responses.
In general, the researchers found, algorithmsโ missteps did less damage to a companyโs reputation than ones committed by people.
In one study, participants read about a fictitious investment firm that had made a costly mistake. Some were told the culprit was an algorithm, others that it was a person. Subjects then rated their attitudes about the brand on a scale from 0 to 7.
Those told about the algorithm error had more positive attitudes, giving the brand an average score of 4.55. Those who faulted humans gave a lower average rating: 3.63.
"They held the brand less responsible for the harm caused by the error." — Raji Srinivasan
Being Too Human
While consumers were more forgiving of a nameless algorithm, they became less lenient when the algorithm had an identity.
In another experiment with the same fictitious financial firm, a third group of participants was shown a different cause for the error: a financial program that was given the name Charles. Anthropomorphizing the algorithm, the researchers found, made consumers less tolerant of its failures:
· On brand attitude, "Charles" scored 0.51 point lower than an anonymous algorithm.
· When asked to make a small donation to a hunger charity, participants told about "Charles" gave $1.60, compared with $2.05 for those exposed to a nameless algorithm, suggesting their "distaste" carried over even to unrelated behaviors.
Attitude ratings dropped as much for the mistakes of "Charles" as for those of humans, Srinivasan notes.
"When you humanize the algorithm more, people assign greater blame to it." — Raji Srinivasan
Accuse the Algorithm
To Srinivasan, the lesson is clear: When an embarrassing error occurs, "publicize the fact that it's the fault of an algorithm and not a person."
Consumersโ tolerance for algorithms extends to fixing mistakes as well as making them, she adds. In another experiment, participants preferred technological supervision of algorithms over human supervision for preventing future algorithm errors.
"If you're using technological supervision, you should highlight it," she says. "If you're using humans, it may be wiser not to publicize it."
A company takes a risk when it gives a program a personality and a name, like Appleโs Siri or Microsoftโs ill-fated Tay, Srinivasan says. In such cases, the company should increase its vigilance about preventing errors and make plans for damage control when they happen.
"As more companies are using anthropomorphized algorithms, they're likely to have more negative responses from consumers when something goes wrong," she says. "Be prepared for that and know how you're going to handle it."
"When Algorithms Fail: Consumers' Responses to Brand Harm Crises Caused by Algorithm Errors" is online in advance in the Journal of Marketing.
Story by Steve Brooks