In 2016, Microsoft introduced an artificially intelligent chatbot named Tay on Twitter. By interacting with users, Tay learned new vocabulary. Unfortunately, the bot was quickly targeted by trolls, who taught it to spout racist and sexist commentary. After 16 hours, Microsoft withdrew the chatbot.

Tay was not an isolated incident, says Raji Srinivasan, professor of marketing at Texas McCombs. Tech giants including Facebook, Google, Apple, and Amazon have suffered black eyes from algorithms that offended or harmed users. In a 2017 survey, 78% of chief marketing officers said that one kind of algorithm error had hurt their brands: placing their online ads next to offensive content.

In new research, Srinivasan offers them some encouraging news: Consumers are faster to forgive a brand for an algorithm failure than for a human one.

“They assume the algorithm doesn’t know what it’s doing, and they respond less negatively to the brand,” she says.

But the news comes with a caution. Consumers become less tolerant when an algorithm tries to mimic a human being.

“When you’re talking to Alexa, it has a name and a voice. When it starts asking irritating questions, you’re likely to hold it more responsible for the mistakes it makes.” — Raji Srinivasan

Projecting Minds into Algorithms

In recent years, consumers have become more aware that unseen programs determine much of what they see online. “Every time you go to Facebook or Google, you’re not interacting with humans but with technology designed by humans,” Srinivasan says.

How do people react, she wondered, when that technology upsets them? The answer, she suspected, depends on whether they unconsciously view it as having a mind.

According to the psychological theory of mind perception, she says, “Humans tend to assign mind — more or less — to inanimate objects. By assigning more mind to objects, they assume that the inanimate objects have more agency and are capable of taking actions.”

If someone knows they’re dealing with a mindless algorithm, Srinivasan reasoned, they might be less inclined to blame it for a blooper.

To test the idea, she worked with Gülen Sarial Abi of Denmark’s Copenhagen Business School. In a series of 10 studies with a total of 2,999 participants, they presented examples of algorithm errors and measured participants’ responses.

In general, the researchers found, algorithms’ missteps did less damage to a company’s reputation than ones committed by people.

In one study, participants read about a fictitious investment firm that had made a costly mistake. Some were told the culprit was an algorithm, others that it was a person. Subjects then rated their attitudes about the brand on a scale from 0 to 7.

Those told about the algorithm error had more positive attitudes, giving the brand an average score of 4.55. Those who faulted humans gave a lower average rating: 3.63.

“They held the brand less responsible for the harm caused by the error.” — Raji Srinivasan

Being Too Human

While consumers were more forgiving of a nameless algorithm, they became less lenient when the algorithm had an identity.

In another experiment with the same fictitious financial firm, a third group of participants was shown a different cause for the error: a financial program that was given the name Charles. Anthropomorphizing the algorithm, the researchers found, made consumers less tolerant of its failures:

· On brand attitude, “Charles” scored 0.51 points lower than an anonymous algorithm.

· When asked to make a small donation to a hunger charity, participants told about “Charles” gave $1.60, compared with $2.05 for those exposed to a nameless algorithm, suggesting their “distaste” carried over even to unrelated behaviors.

Attitude ratings dropped as much for the mistakes of “Charles” as for human ones, Srinivasan notes.

“When you humanize the algorithm more, people assign greater blame to it.” — Raji Srinivasan

Accuse the Algorithm

To Srinivasan, the lesson is clear: When an embarrassing error occurs, “publicize the fact that it’s the fault of an algorithm and not a person.”

Consumers’ tolerance for algorithms extends to fixing mistakes as well as making them, she adds. In another experiment, participants preferred technological supervision of algorithms over human supervision for preventing future algorithm errors.

“If you’re using technological supervision, you should highlight it,” she says. “If you’re using humans, it may be wiser to not publicize it.”

A company takes a risk when it gives a program a personality and a name, like Apple’s Siri or Microsoft’s ill-fated Tay, Srinivasan says. In such cases, the company should increase its vigilance about preventing errors and make plans for damage control when they happen.

“As more companies are using anthropomorphized algorithms, they’re likely to have more negative responses from consumers when something goes wrong,” she says. “Be prepared for that and know how you’re going to handle it.”

“When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” is published online ahead of print in the Journal of Marketing.

Story by Steve Brooks

