In 2016, Microsoft introduced an artificially intelligent chatbot named Tay on Twitter. By interacting with users, Tay learned new vocabulary. Unfortunately, the bot was quickly targeted by trolls, who taught it to spout racist and sexist commentary. After 16 hours, Microsoft withdrew the chatbot.

Tay was not an isolated incident, says Raji Srinivasan, professor of marketing at Texas McCombs. Tech giants including Facebook, Google, Apple, and Amazon have suffered black eyes from algorithms that offended or harmed users. In a 2017 survey, 78% of chief marketing officers said that one kind of algorithm error had hurt their brands: placing their online ads next to offensive content.

In new research, Srinivasan offers them some encouraging news: Consumers are faster to forgive a brand for an algorithm failure than for a human one.

“They assume the algorithm doesn’t know what it’s doing, and they respond less negatively to the brand,” she says.

But the news comes with a caution. Consumers become less tolerant when an algorithm tries to mimic a human being.

“When you’re talking to Alexa, it has a name and a voice. When it starts asking irritating questions, you’re likely to hold it more responsible for the mistakes it makes.” — Raji Srinivasan

Projecting Minds into Algorithms

In recent years, consumers have become more aware that unseen programs determine much of what they see online. “Every time you go to Facebook or Google, you’re not interacting with humans but with technology designed by humans,” Srinivasan says.

How do people react, she wondered, when that technology upsets them? The answer, she suspected, depends on whether they unconsciously view it as having a mind.

According to the psychological theory of mind perception, she says, “Humans tend to assign mind — more or less — to inanimate objects. By assigning more mind to objects, they assume that the inanimate objects have more agency and are capable of taking actions.”

If someone knows they’re dealing with a mindless algorithm, Srinivasan reasoned, they might be less inclined to blame it for a blooper.

To test the idea, she worked with Gülen Sarial-Abi of Denmark’s Copenhagen Business School. In a series of 10 studies with a total of 2,999 participants, they presented examples of algorithm errors and measured participants’ responses.

In general, the researchers found, algorithms’ missteps did less damage to a company’s reputation than ones committed by people.

In one study, participants read about a fictitious investment firm that had made a costly mistake. Some were told the culprit was an algorithm, others that it was a person. Subjects then rated their attitudes toward the brand on a scale from 0 to 7.

Those told about the algorithm error had more positive attitudes, giving the brand an average score of 4.55. Those who faulted humans gave a lower average rating: 3.63.

“They held the brand less responsible for the harm caused by the error.” — Raji Srinivasan

Being Too Human

While consumers were more forgiving of a nameless algorithm, they became less lenient when the algorithm had an identity.

In another experiment with the same fictitious financial firm, a third group of participants was shown a different cause for the error: a financial program that was given the name Charles. Anthropomorphizing the algorithm, the researchers found, made consumers less tolerant of its failures:

· On brand attitude, “Charles” scored 0.51 point lower than an anonymous algorithm.

· When asked to make a small donation to a hunger charity, participants told about “Charles” gave $1.60, compared with $2.05 for those exposed to a nameless algorithm, suggesting their “distaste” carried over even to unrelated behaviors.

Attitude ratings dropped as much for the mistakes of “Charles” as for human ones, Srinivasan notes.

“When you humanize the algorithm more, people assign greater blame to it.” — Raji Srinivasan

Accuse the Algorithm

To Srinivasan, the lesson is clear: When an embarrassing error occurs, “publicize the fact that it’s the fault of an algorithm and not a person.”

Consumers’ tolerance for algorithms extends to fixing mistakes as well as making them, she adds. In another experiment, participants preferred technological supervision of algorithms over human supervision for preventing future algorithm errors.

“If you’re using technological supervision, you should highlight it,” she says. “If you’re using humans, it may be wiser not to publicize it.”

A company takes a risk when it gives a program a personality and a name, like Apple’s Siri or Microsoft’s ill-fated Tay, Srinivasan says. In such cases, the company should increase its vigilance about preventing errors and make plans for damage control when they happen.

“As more companies are using anthropomorphized algorithms, they’re likely to have more negative responses from consumers when something goes wrong,” she says. “Be prepared for that and know how you’re going to handle it.”

“When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” is published online in advance in the Journal of Marketing.

Story by Steve Brooks
