Key Takeaway:

Social media algorithms are designed mainly to maximize engagement, and a side effect is that they amplify "PRIME" information: prestigious, in-group, moral and emotional content that people are strongly biased to learn from. This mismatch between human social-learning psychology and algorithmic amplification, called functional misalignment, can distort people's perceptions of the social world and accelerate the spread of misinformation. Research on the topic is in its infancy, but new studies are examining key components of algorithm-mediated social learning, and researchers are working on algorithm designs that sustain engagement while penalizing PRIME information to foster more accurate social learning.


People's daily interactions with online algorithms affect how they learn from others, with negative consequences including social misperceptions, conflict and the spread of misinformation, my colleagues and I have found.

People are increasingly interacting with others in social media environments where algorithms control the flow of social information, determining in part which messages, which people and which ideas users see.

On social media platforms, algorithms are mainly designed to amplify information that sustains engagement, meaning they keep people clicking on content and coming back to the platforms. I'm a social psychologist, and my colleagues and I have found evidence suggesting that a side effect of this design is that algorithms amplify information people are strongly biased to learn from. We call this information "PRIME," for prestigious, in-group, moral and emotional information.
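The mechanism can be sketched in a few lines of code. This is a toy illustration of my own, not any platform's actual system: a feed ranked purely by predicted engagement ends up surfacing PRIME content, because moral-emotional and prestige cues correlate with clicks and shares. The posts, fields and weights here are all hypothetical.

```python
# Hypothetical posts with toy engagement counts.
posts = [
    {"text": "local weather update", "clicks": 40, "shares": 2},
    {"text": "celebrity OUTRAGE over rival's betrayal", "clicks": 90, "shares": 55},
    {"text": "community garden schedule", "clicks": 35, "shares": 1},
]

def engagement_score(post):
    # Platforms optimize proxies like clicks and shares; shares are
    # weighted more heavily here because they keep users on the platform.
    return post["clicks"] + 3 * post["shares"]

# Rank the feed by engagement alone: the moral-outrage post rises to the top.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["text"] for p in feed])
```

Nothing in this ranking refers to PRIME features directly; the amplification is a side effect of optimizing engagement, which is the core of the argument above.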

In our evolutionary past, biases to learn from PRIME information were very advantageous: Learning from prestigious individuals is efficient because these people are successful and their behavior can be copied. Paying attention to people who violate moral norms is important because sanctioning them helps the community maintain cooperation.

But what happens when PRIME information becomes amplified by algorithms and some people exploit algorithm amplification to promote themselves? Prestige becomes a poor signal of success because people can fake prestige on social media. Newsfeeds become oversaturated with negative and moral information so that there is conflict rather than cooperation. 

The interaction of human psychology and algorithm amplification leads to dysfunction because social learning supports cooperation and problem-solving, but social media algorithms are designed to increase engagement. We call this mismatch functional misalignment.

Why it matters

One of the key outcomes of functional misalignment in algorithm-mediated social learning is that people start to form incorrect perceptions of their social world. For example, recent research suggests that when algorithms selectively amplify more extreme political views, people begin to think that their political in-group and out-group are more sharply divided than they really are. Such "false polarization" might be an important source of greater political conflict.

Functional misalignment can also lead to greater spread of misinformation. A recent study suggests that people who spread political misinformation leverage moral and emotional information, for example posts that provoke moral outrage, to get people to share it more. When algorithms amplify moral and emotional information, misinformation gets included in the amplification.

What other research is being done

In general, research on this topic is in its infancy, but there are new studies emerging that examine key components of algorithm-mediated social learning. Some studies have demonstrated that social media algorithms clearly amplify PRIME information.

Whether this amplification leads to offline polarization is hotly contested at the moment. A recent experiment found evidence that Meta's newsfeed increases polarization, but another experiment, conducted in collaboration with Meta, found no evidence that exposure to its algorithmic Facebook newsfeed increased polarization.

More research is needed to fully understand the outcomes that emerge when humans and algorithms interact in feedback loops of social learning. Social media companies have most of the needed data, and I believe that they should give academic researchers access to it while also balancing ethical concerns such as privacy.

Whatโ€™s next

A key question is what can be done to make algorithms foster accurate human social learning rather than exploit social learning biases. My research team is working on new algorithm designs that increase engagement while also penalizing PRIME information. We argue that this might maintain the user activity that social media platforms seek, but also make people's social perceptions more accurate.
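One way to picture such a design, as a minimal sketch of my own rather than the team's published method, is a re-ranking score that keeps an engagement term but subtracts a penalty for detected PRIME features. The lexicon, weights (`alpha`, `beta`) and posts below are all hypothetical; a real system would use learned classifiers for prestige, in-group, moral and emotional cues.

```python
# Toy moral-emotional lexicon standing in for a PRIME classifier (hypothetical).
PRIME_WORDS = {"outrage", "betrayal", "disgusting", "enemy"}

def prime_penalty(text):
    # Count toy PRIME cues appearing in the post text.
    return sum(word.strip(".,!?").lower() in PRIME_WORDS for word in text.split())

def rerank_score(post, alpha=1.0, beta=120.0):
    # alpha and beta are hypothetical tuning weights trading engagement
    # against PRIME amplification.
    engagement = post["clicks"] + 3 * post["shares"]
    return alpha * engagement - beta * prime_penalty(post["text"])

posts = [
    {"text": "celebrity OUTRAGE over rival's betrayal", "clicks": 90, "shares": 55},
    {"text": "local weather update", "clicks": 40, "shares": 2},
]

# With the penalty active, the neutral post outranks the outrage post,
# even though the outrage post has far higher raw engagement.
ranked = sorted(posts, key=rerank_score, reverse=True)
```

Setting `beta` to zero recovers a pure engagement ranking, so the single parameter controls the trade-off the passage describes: how much engagement the platform is willing to give up to avoid amplifying PRIME content.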
