On algorithmic dystopia

TikTok, Facebook, Twitter, Instagram, Snapchat, and a “Propaganda” (yes, I’ve decided that ought to be the collective noun) of other social platforms promised a revolution in our ability to connect and share with each other. To a great extent, they delivered on that promise. However, a crucial element of their design, algorithmic content curation, has increasingly come under scrutiny. Critics argue that instead of enhancing our online experience, algorithms have exacerbated divisiveness, misinformation, and mental health problems. The question arises: Is algorithmic social media a failed experiment?

The initial vision of social media was to connect people, allow them to share experiences and ideas, and foster a sense of global community. The advent of algorithmic curation, however, dramatically altered this landscape. Algorithms based on machine learning and artificial intelligence were designed to personalize content for users, ostensibly to improve the user experience. They analyze patterns in a user’s behavior and preferences, then curate a feed that, in theory, matches their interests.
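To make that loop concrete, here’s a minimal sketch in Python. Everything in it - the field names, the toy data, the scoring rule - is an illustrative assumption, not any platform’s actual code:

```python
# A toy sketch of behavior-based personalization. The field names,
# weights, and data are illustrative assumptions, not real platform code.
from collections import Counter

def learn_affinities(click_log: list[str]) -> Counter:
    """Infer topic preferences from what the user engaged with before."""
    return Counter(click_log)

def curate_feed(affinities: Counter, candidates: list[dict], k: int = 3) -> list[dict]:
    """Rank candidate posts by how closely they match learned preferences."""
    return sorted(candidates,
                  key=lambda post: affinities[post["topic"]],
                  reverse=True)[:k]

clicks = ["politics", "politics", "sports", "politics"]        # past behavior
posts = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "science"},
         {"id": 3, "topic": "sports"}, {"id": 4, "topic": "politics"}]

print(curate_feed(learn_affinities(clicks), posts))
# Politics fills the feed because politics filled the click history.
```

Even at this toy scale, the design choice is visible: the feed optimizes for what you already clicked, not for what you might need to see.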

But the reality has fallen short. Indeed, it could hardly have been much worse. In their quest to keep users engaged, these algorithms have proven adept at cultivating echo chambers, reinforcing users’ existing beliefs and isolating them from differing viewpoints. A 2016 Pew Research Center survey found that 64% of Americans believe fake news has caused “a great deal” of confusion about the basic facts of current events. The inability of algorithms to distinguish valid information from sensational or misleading content is a significant contributor to this problem.

The Mental Health Conundrum

The mental health implications of algorithmic social media are a significant concern, particularly given the growing body of evidence linking extensive social media use to psychological distress. A study conducted in 2019, published in the American Journal of Epidemiology, found a significant association between social media use and increased levels of depression and anxiety, with young people particularly susceptible. This research offers robust evidence that our digital habits affect our mental health.

The hyper-personalized nature of algorithmically curated feeds plays a significant role in this trend. These algorithms are designed to show us content that mirrors our interests, behaviors, and biases; while that makes our online experience more tailored, it also feeds a culture of comparison. Users, especially young ones, are persistently exposed to meticulously curated, often unrealistic depictions of life. When their own lives, naturally filled with ups and downs, don’t measure up to the relentlessly positive and successful ones they see online, the result is diminished self-esteem and heightened anxiety.

The design of these algorithms is rooted in a drive to captivate users’ attention and keep them engaged with the platform for as long as possible. Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, has drawn a compelling analogy, likening social media platforms to slot machines. Just as a gambler is drawn to the next spin of the reels, users are manipulated into scrolling, clicking, and engaging endlessly. This addictive design exploits human psychology, harnessing our desire for social approval and our fear of missing out to keep us locked in a perpetual cycle of engagement.

The consequences of this manipulation include compulsive social media use, sleep problems, and increased stress and anxiety. Constant engagement with social media - mediated by an algorithm rather than an organic social circle of communication and curation - leaves little room for activities crucial to mental health, such as face-to-face social interaction, physical activity, and downtime.

The Death of Value

Meaningful, intelligent, and purpose-driven content is overshadowed by the noise of viral sensations, anger-driven discourse, sensationalized half-truths, and outright lies - noise that threatens the fabric of informed public debate and hinders the growth of a well-educated, discerning digital society.

The algorithms that drive social media feeds are designed to prioritize content that generates strong user engagement. They will therefore favor sensational content built to provoke strong emotional reactions, because those reactions incite more clicks, shares, and comments.
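As a toy illustration of that incentive - with an invented “outrage” feature standing in for whatever signals actually correlate with strong reactions - consider a ranker whose only objective is predicted engagement:

```python
# Sketch: when predicted engagement is the only objective, provocation wins.
# The "outrage" and "accuracy" features and their weights are invented.
posts = [
    {"title": "Peer-reviewed climate analysis",      "outrage": 0.1, "accuracy": 0.95},
    {"title": "You won't BELIEVE what they did next", "outrage": 0.9, "accuracy": 0.30},
]

def predicted_engagement(post: dict) -> float:
    # Note what's missing: accuracy appears nowhere in the objective.
    return 0.2 + 0.8 * post["outrage"]

feed = sorted(posts, key=predicted_engagement, reverse=True)
print([p["title"] for p in feed])   # the sensational post ranks first
```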

Well-researched articles, insightful commentaries, and intelligent discourses either struggle to gain visibility or are quickly overwhelmed by mountains of shit. They are drowned out by viral noise - catchy headlines, clickbait articles, and emotionally charged posts designed to spread quickly and widely. This trend discourages the production of quality content, and it impedes the capacity of social media users to engage in meaningful conversations and develop informed opinions.

Misinformation and disinformation can spread rapidly, often tapping into pre-existing biases and fears. In the worst cases, this can lead to real-world harm, as we’ve seen with the spread of conspiracy theories, false health information, and politically motivated agitprop campaigns.

An Algorithmic Pandora’s Box

Algorithms create filter bubbles, in which users are exposed only to content that aligns with their existing beliefs. The result is a lack of exposure to diverse viewpoints, deepening political and social divisions.
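A short simulation makes the feedback loop visible. The numbers below are arbitrary assumptions, chosen only to show the direction of drift:

```python
# Sketch of the filter-bubble feedback loop: serve what the model predicts
# the user prefers, let every view reinforce that prediction, and watch
# exposure collapse onto one topic. All values are arbitrary.
import random

random.seed(0)
topics = ["left", "right", "science", "sports"]
affinity = {t: 1.0 for t in topics}            # start with no preference

for _ in range(50):
    if random.random() < 0.1:                  # rare exploration
        shown = random.choice(topics)
    else:                                      # exploit the current favorite
        shown = max(affinity, key=affinity.get)
    affinity[shown] += 1.0                     # viewing reinforces the topic

total = sum(affinity.values())
print({t: round(v / total, 2) for t, v in affinity.items()})
# One topic ends up dominating the user's exposure almost entirely.
```

Even the 10% exploration rate in this toy isn’t enough to stop the collapse once every view reinforces the prediction.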

The 2016 U.S. presidential election brought this issue into sharp focus. Reports suggested that social media algorithms played a part in spreading misinformation and deepening partisan divides. A 2018 study published in Science found that false news spreads significantly faster and farther on Twitter than accurate information, with true stories taking roughly six times as long as false ones to reach 1,500 people - an advantage that engagement-driven curation does nothing to check.

Algorithmic social media platforms use complex algorithms to determine the content each user sees. These algorithms are designed to maximize user engagement, often prioritizing controversial, sensational, or extreme content. While this model has been effective in terms of user retention and platform growth, it has also contributed to a range of societal problems, including the spread of misinformation, political polarization, and increased mental health issues among users.

The rise of artificial intelligence and machine learning will likely magnify these algorithms' impact. AI systems can process vast amounts of data and learn from user behavior, continually refining their algorithms to increase engagement. While this technology has the potential to provide personalized and relevant content, it also poses significant risks. In tandem with AI-generated content, AI-driven algorithms will inevitably create more divisive echo chambers by showing users content that aligns with their existing beliefs, reinforcing biases, and hindering exposure to diverse perspectives.

As they stand, AI systems are opaque, making it challenging to understand how they decide which content to promote. This lack of transparency makes holding platforms accountable for the content they amplify a growing problem.

The issues with algorithmic social media have led to calls for reform. Advocates argue for increased transparency in how algorithms function and make decisions, improved methods for identifying and combating misinformation, and the development of “humane” technology that prioritizes user well-being over engagement metrics.

In the face of mounting evidence of the detrimental effects of algorithmic social media, it is neither inaccurate nor premature to label it a “failed experiment.” These platforms, transformative in their scope and impact, have proven far from perfect. But they are not immutable edifices; they can be redesigned.

There is a growing push for legislative action. In the U.S., the debate around Section 230 of the Communications Decency Act, which currently protects online platforms from being held liable for user-generated content, has gained momentum. A group of U.S. Senators reintroduced a bill in 2023 that would make significant reforms to Section 230. The proposed legislation, the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act, would allow internet services, mainly social media companies, to be held accountable for enabling cyber-stalking, online harassment, and discrimination. The Act aims to force online service providers to address the improper use of their platforms or face civil liability for failing to do so.

The SAFE TECH Act also includes provisions addressing advertising and other paid content, removing protections for misleading content, scams, and fraud. It allows consumers to seek injunctive relief when content on a provider’s site is likely to cause irreparable harm. Furthermore, it removes protections that block enforcement of civil rights laws and wrongful death actions - a change with significant implications for cases in which online content has contributed to serious crimes.

The SAFE TECH Act does not repeal Section 230 but updates the legislation already in place. As Senator Amy Klobuchar (D-MN) put it: “We need to be asking more from big tech companies, not less. How they operate has a real-life effect on the safety and civil rights of Americans and people worldwide, as well as our democracy. Our legislation will hold these platforms accountable for ads and content that can lead to real-world harm.”

Regulation is rarely the answer; as the SEC’s actions and enforcement under Gary Gensler have shown, regulators can be politically motivated and anti-commercial in their work. But when algorithms are left unchecked by their creators, to the detriment of social intercourse, and when those algorithms can even be traced to government agencies in non-allied nations (cough, TikTok, cough), we have to ask - how much longer can we rely on common sense or ethical behavior from private companies?

This is a critical juncture - a period of reckoning for the current iteration of modern social media. The existing rules governing the digital realm have failed to serve us as intended. The algorithmic models that were supposed to enrich our online experiences have instead fueled divisiveness, misinformation, and mental health issues. There is a pressing need to rewrite these rules, reformulate the algorithms, and re-envision the foundational ethos of social media platforms. The health of democratic society, intelligent discourse, and self-determination hangs in the balance.
