In 2016, Microsoft introduced an artificially intelligent chatbot named Tay on Twitter. By interacting with users, Tay learned new vocabulary. Unfortunately, trolls quickly targeted the bot, teaching it to spout racist and sexist commentary. After 16 hours, Microsoft withdrew the chatbot.

Tay was not an isolated incident, says Raji Srinivasan, professor of marketing at Texas McCombs. Tech giants including Facebook, Google, Apple, and Amazon have suffered black eyes from algorithms that offended or harmed users. In a 2017 survey, 78% of chief marketing officers said that one kind of algorithm error had hurt their brands: placing their online ads next to offensive content.

In new research, Srinivasan offers them some encouraging news: Consumers are faster to forgive a brand for an algorithm failure than for a human one.

“They assume the algorithm doesn’t know what it’s doing, and they respond less negatively to the brand,” she says.

But the news comes with a caution. Consumers become less tolerant when an algorithm tries to mimic a human being.

“When you’re talking to Alexa, it has a name and a voice. When it starts asking irritating questions, you’re likely to hold it more responsible for the mistakes it makes.” — Raji Srinivasan

Projecting Minds into Algorithms

In recent years, consumers have become more aware that unseen programs determine much of what they see online. “Every time you go to Facebook or Google, you’re not interacting with humans but with technology designed by humans,” Srinivasan says.

How do people react, she wondered, when that technology upsets them? The answer, she suspected, depends on whether they unconsciously view it as having a mind.

According to the psychological theory of mind perception, she says, “Humans tend to assign mind — more or less — to inanimate objects. By assigning more mind to objects, they assume that the inanimate objects have more agency and are capable of taking actions.”

If someone knows they’re dealing with a mindless algorithm, Srinivasan reasoned, they might be less inclined to blame it for a blooper.

To test the idea, she worked with Gülen Sarial Abi of Denmark’s Copenhagen Business School. In a series of 10 studies with a total of 2,999 participants, they presented examples of algorithm errors and measured participants’ responses.

In general, the researchers found, algorithms’ missteps did less damage to a company’s reputation than ones committed by people.

In one study, participants read about a fictitious investment firm that had made a costly mistake. Some were told the culprit was an algorithm, others that it was a person. Participants then rated their attitudes toward the brand on a scale from 0 to 7.

Those told about the algorithm error had more positive attitudes, giving the brand an average score of 4.55. Those who faulted humans gave a lower average rating: 3.63.

“They held the brand less responsible for the harm caused by the error.” — Raji Srinivasan

Being Too Human

While consumers were more forgiving of a nameless algorithm, they became less lenient when the algorithm had an identity.

In another experiment involving the same fictitious financial firm, a third group of participants was told the error had a different cause: a financial program named Charles. Anthropomorphizing the algorithm, the researchers found, made consumers less tolerant of its failures:

· On brand attitude, “Charles” scored 0.51 point lower than an anonymous algorithm.

· When asked to make a small donation to a hunger charity, participants told about “Charles” gave $1.60, compared with $2.05 for those exposed to a nameless algorithm, suggesting their “distaste” carried over even to unrelated behaviors.

Attitude ratings dropped as much for the mistakes of “Charles” as for human ones, Srinivasan notes.

“When you humanize the algorithm more, people assign greater blame to it.” — Raji Srinivasan

Accuse the Algorithm

To Srinivasan, the lesson is clear: When an embarrassing error occurs, “publicize the fact that it’s the fault of an algorithm and not a person.”

Consumers’ tolerance for algorithms extends to fixing mistakes as well as making them, she adds. In another experiment, participants preferred technological supervision of algorithms over human supervision for preventing future algorithm errors.

“If you’re using technological supervision, you should highlight it,” she says. “If you’re using humans, it may be wiser not to publicize it.”

A company takes a risk when it gives a program a personality and a name, like Apple’s Siri or Microsoft’s ill-fated Tay, Srinivasan says. In such cases, the company should increase its vigilance about preventing errors and make plans for damage control when they happen.

“As more companies are using anthropomorphized algorithms, they’re likely to have more negative responses from consumers when something goes wrong,” she says. “Be prepared for that and know how you’re going to handle it.”

“When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” is online in advance in the Journal of Marketing.

Story by Steve Brooks
