The internet was supposed to unite us.
So what the fuck happened?
And how did we become a cross-generational digital nation of assholes?
We dreamed of a global, decentralized network spreading information, enabling communication, and connecting humanity. But somewhere along the way, it divided us. Our online spaces, overrun with tribalism, misinformation, harassment, and cruelty, often bring out the worst in human nature.
Much of the blame lies in how our dominant online platforms are built. Social media, discussion forums, and comment sections are not neutral but purpose-driven products whose goals often work against healthy discourse. Consider anonymity, which shields users from accountability, enabling threats, propaganda, and mindless tribalism divorced from any ownership or consequence. Engagement-based algorithms promote controversial, emotive, and false content to maximize likes, shares, and ad revenue.
Platforms allow misinformation to spread like wildfire, with manipulated media impersonating real people. Organized trolling campaigns operate openly, bombarding women, people of colour, and other marginalized groups with harassment, yet face few repercussions. Foreign influence operations deliberately spread propaganda to sow division. Conspiracy theories and partisan untruths proliferate unchecked, eroding our shared reality.
Outrage, tribalism, cruelty — it’s all good for business. And so we have created online ecosystems that permit and actively incentivize our worst instincts. Environments perfectly designed to bring out trolls, grifters, demagogues, and white supremacists.
But it’s not that simple. It’s not as simple as pointing at bogeymen and QAnon influencers. That’s passing the buck.
The assholeification of the digital world is a movement we’ve all been a part of. Every single one of us. We’ve all felt, and at times given in to, the pull of being an asshole. We all became assholes online because the digital world let us forget, ignore, or not bother with empathy, and we jumped at the opportunity.
It didn’t have to be this way. People built these systems and these ways of interacting; people removed the friction points that made rejecting empathy difficult or socially unacceptable; and people can change.
This is the task ahead, both as individuals and societies — to reimagine our online world in a way that reconnects us to our shared humanity. A revival of the internet’s founding vision, where connection comes before division, truth outweighs propaganda, and diverse voices come together in meaningful discourse. The experiments we’ve run over the past decades have gone off the rails. It’s time to retake control of our creation.
But before we do that — we have to understand the process of empathy obstruction.
We have to understand how the asshole gets unlocked.
We Became Assholes Through Anonymity
Anonymity has always played a role online, from bulletin boards and chatrooms to modern platforms like Reddit and X. As our online lives become increasingly central, anonymity has evolved from an interesting quirk into a defining and sometimes dangerous force.
An anonymous online persona decouples behaviour from real-world identities and consequences. Freed from accountability, people become more likely to make threats, spread misinformation, harass others, or behave in ways counter to social norms. Experiments have shown that even minimal anonymity shifts individuals towards more egocentric and unethical activity. Even the pseudo-anonymity of screen names produces measurable changes compared with real names.
Total anonymity, as found on many forums and message boards, can have an even more dramatic effect. With no identity tied to actions, inhibition and empathy tend to decrease. Studies of anonymous spaces repeatedly find increases in racism, sexism, body-shaming, and other antisocial acts. Anonymity grants the courage to be cruel.
This “cowardly courage” manifests as organized attacks. Anonymous online mobs frequently coordinate harassment campaigns against women, people of colour, and other marginalized groups. A study of one such campaign found that over two-thirds of the attacks originated from anonymous rather than named accounts. Anonymity unleashes our worst impulses.
Propagandists and bad actors weaponize anonymity to spread misinformation without accountability. On forums like 4chan, anonymous users deliberately push racism, conspiracy theories, and falsehoods into the mainstream. Russia exploits anonymity in propaganda operations to impersonate Americans digitally. Anonymity provides cover for dangerous lies.
Of course, anonymity has upsides, too. Whistleblowers rely on it to expose wrongdoing without risk of retaliation. People exploring sensitive identity issues use it to find support and understanding. Anonymity enables honesty on taboo topics and allows those without power to challenge institutions.
This same cloak of anonymity also shelters trolls, demagogues, and harassers from the consequences of their actions. It is the primary enabler of online mobs, giving courage to those who would never attach such messages to their real names. Anonymity tilts online spaces to reward outrage over discourse and noise over signal.
We Became Assholes For Engagement
Engagement is the lifeblood of social media. Comments, shares, likes — these metrics determine what content surfaces and succeeds. To understand how our online behaviour has grown crueller and more extreme, look to how platforms incentivize engagement above all else.
Engagement-based algorithms govern nearly every social platform, recommending content based on popularity rather than accuracy or quality. Studies find falsehoods and conspiracy theories consistently outperform factual reporting in shares and likes. Extremist pundits get more engagement than moderate voices. Incendiary tweets draw more eyes than thoughtful discussion.
These algorithms perpetuate addictive feedback loops. Outrageous content drives engagement, which platforms amplify and recommend to more users, who react with further outrage. Nuance and complexity get drowned out by whatever provokes the strongest reaction. Objective truth matters less than an emotional response.
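To make the feedback loop concrete, here is a minimal sketch of a purely engagement-driven ranker. The class, function names, and weights are hypothetical, not any platform’s real code; the structural point is that nothing in the scoring function knows or cares whether a post is true.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    accuracy: float  # hypothetical 0-1 fact-check score; never consulted below

def engagement_score(post: Post) -> float:
    # Comments and shares (often driven by outrage) are weighted above likes.
    # Accuracy contributes nothing to the score.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The most provocative posts surface first, get seen more, earn more
    # engagement, and rank even higher on the next pass: the feedback loop.
    return sorted(posts, key=engagement_score, reverse=True)
```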
These dynamics enable fringe voices to achieve outsized influence through strategically gamed engagement. Provocateurs make extreme statements not to persuade but to trigger high-arousal emotions like anger, fear, or disgust. Those emotions drive shares, signalling algorithms to blast the content to new audiences. Studies find moral outrage is the strongest driver of virality, and bad actors exploit this mercilessly.
Consider the rise of hyper-partisan influencers across social media, who traffic almost exclusively in outrage, conspiracy theories, and tribalism. They produce little original content, instead endlessly reacting to the day’s controversies and “owning” whichever group their audience hates. Their social capital stems not from being correct or thoughtful but from being skilled at punching the brain’s rage buttons.
Engagement-based systems incentivize these toxic dynamics not by accident but by design. Most major platforms are funded by advertising, with business models requiring endless growth in user attention. Controversy, drama, and outrage keep eyes glued and feeds scrolling — precisely what advertisers want. Stoking anger and paranoia is rewarded as long as it’s profitable.
From an individual user perspective, these same incentives structure our online behaviour. In hopes of going viral, we are encouraged to take cheap shots, dunk on opponents, and frame every issue as an outrage against the other tribe. Thoughtful arguments stand little chance versus calculated takedowns — no matter how dishonest. Progress requires understanding these situational factors that encourage our worst instincts.
Given their foundational role in social media business models, fixing engagement algorithms and incentive structures won’t be easy. But reforms are essential if we want online spaces that allow quality discourse and make choosing assholism harder than choosing empathy. Prioritizing accuracy over emotion, incentivizing listening over dunking, shaping norms around constructive disagreement — better sociotechnical architectures are possible. We face both a design challenge and the internal psychological work of resisting reactivity.
The quest for likes and upvotes brings out our tribal, impulse-driven selves. But we are also capable of so much more — empathy, curiosity, nuance. Reclaiming the internet’s potential requires reimagining platforms built to serve humanity’s best, not worst, instincts. Who we are and what we say and do online arises from how spaces are constructed around us. We must demand, create, and frequent spaces elevating compassion over compulsion and truth over the tribe. The algorithms must serve us — not the other way around.
We Became Assholes When We Rejected Truth
The rapid spread of misinformation represents one of the most alarming trends of the social media era. False and misleading content now proliferates across online ecosystems, drowning out facts and eroding shared reality. While misinformation is an old problem, modern platforms act as super-spreaders.
This stems directly from how major platforms optimize for engagement over accuracy. Studies repeatedly show falsehoods and conspiracy theories outperform factual reporting in shares, likes, and viral spread. Platform algorithms recommend this misleading content more frequently, exposing more users and fueling engagement. Bad actors exploit these dynamics to flood the information ecosystem with propaganda and disinformation.
State and non-state actors have weaponized these vulnerabilities to attack truth itself. Russia pioneered industrialized “active measures” to digitally impersonate Americans online, using troll armies and manipulated videos to spread disinformation without disclosure. Other state actors like China and Iran follow suit. These propaganda efforts deliberately target society’s fissure lines around race, immigration, and policing to inflame tensions and create chaos.
The Kremlin’s 2016 election interference demonstrated the frightening effectiveness of these information warfare tactics. Russia infiltrated American political discourse by posing as Black Lives Matter activists, gun rights groups, veterans, and more. They organized protests and counter-protests around divisive issues on US soil. Their propaganda reached over 100 million Americans on Facebook, shaping political narratives and outcomes. Yet their primary goal was broader — to undermine shared reality and trust in institutions. In this, they succeeded.
But misinformation also proliferates from domestic sources. Partisan websites and pundits spread propaganda and conspiracy theories for political gain. Scammers use fake news to drive ad revenue and traffic. Motivated reasoning makes people more likely to believe false claims aligning with their ideology. Those with the most extreme views are often most prone to disinformation — and algorithms funnel them more of it in a dangerous feedback loop.
No issue highlights these threats more than COVID-19, which spawned misinformation undermining public health on an unprecedented scale. Viral conspiracy theories discouraged mask-wearing, social distancing, and vaccines, costing countless lives. Public health agencies fought a virus and an entire disinformation ecosystem built to undermine them. Social media’s engagement-based algorithms accelerate the death of empathy.
Each of us plays a role too. Cognitive biases make us more likely to share content that provokes strong emotion than to check it for factual accuracy. We become unwitting accomplices when we react instead of reflect, share information without checking sources, or engage with disinformation even to argue against it.
Protecting truth in the digital age requires reforming platforms’ economics and business models. But it also depends on each of us taking responsibility. We must slow down, verify sources, avoid knee-jerk reactions, and debunk falsehoods responsibly when we encounter them. Progress begins by recognizing how situational factors currently incentivize the worst in human nature — and having the wisdom and will to demand spaces that bring out our best.
We Became Assholes When We Normalized Trolling
Over the last 20 years, a new kind of epidemic has emerged and evolved: trolling. This issue has grown beyond annoyance into a systemic problem characterized by organized groups, mass coordinated campaigns, and individual trolls targeting vulnerable communities and individuals.
Organized groups of trolls often target vulnerable communities: minorities, LGBTQ+ people, or people with specific political beliefs. These attacks are not spontaneous; they are carefully planned and executed with the intent to harass, intimidate, and silence.
These groups often operate in the shadows, using anonymous profiles and encrypted channels to coordinate their actions. They exploit the fear and insecurity many vulnerable individuals feel online, magnifying it through relentless abuse and threats.
The damage caused by these groups goes beyond the digital sphere. The emotional toll on victims can lead to mental health issues, withdrawal from social life, and even self-harm. These attacks erode the trust and sense of community in online spaces, replacing them with fear and suspicion.
Mass coordinated trolling campaigns take the organized nature of group attacks to a new level. These campaigns are often politically motivated, aimed at silencing dissenting voices, discrediting individuals, or pushing a particular agenda.
Through bots, fake accounts, and human trolls, these campaigns flood social media platforms with targeted harassment, misinformation, and propaganda. The scale and intensity of these campaigns can be overwhelming, making it difficult for targets to respond or defend themselves.
This form of trolling has severe implications for democracy and free speech. By stifling dissent and manipulating public opinion, these campaigns undermine the principles of open societies. They create a chilling effect, where people are afraid to speak out for fear of becoming targets.
Not all trolls are part of organized groups or mass campaigns. Many operate alone, driven by a desire for attention, amusement, or a warped sense of satisfaction from causing distress.
These individual trolls engage in provocation for its own sake: posting offensive comments, sharing controversial opinions, or mocking others. While their actions may seem trivial compared to organized attacks, they can still cause significant harm.
The anonymity of the internet allows these trolls to act without consequence, encouraging them to push boundaries further. The outrage and reactions they provoke often feed into their enjoyment, creating a vicious cycle of provocation and response.
The trolling epidemic is a multifaceted problem, reflecting the complex nature of human interactions and the challenges of regulating online spaces. From organized groups attacking vulnerable communities to mass coordinated campaigns and individual trolls, the issue is pervasive and deeply damaging.
We Became Assholes By Hanging Out With Assholes
Engagement-based algorithms have revolutionized how we discover content online. Platforms analyze our impulses and interests to curate personalized feeds catering precisely to our tastes. This can be tremendously useful — but it also carries unintended consequences.
When algorithms cater exclusively to existing preferences, they filter out challenging or opposing views. Over time, this creates isolated bubbles and echo chambers. Our feeds become dominated by similar voices, endlessly reinforcing our worldview. The mechanics of algorithms silently shape our realities.
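Stripped to its core, that filtering step looks something like the sketch below, assuming each post carries a topic vector and each user a vector summarizing past engagement. The names and the threshold are hypothetical; real recommenders are vastly more complex, but they share this narrowing logic.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two topic vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def personalize(posts: list[dict], user_vector: list[float],
                threshold: float = 0.8) -> list[dict]:
    # Keep only posts closely matching what the user already engages with.
    # Every pass narrows the feed further; challenging views never make the cut.
    return [p for p in posts if cosine(p["topics"], user_vector) >= threshold]
```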
This intense personalization fuels fragmentation into increasingly extreme niches. Racists cluster with fellow racists, anti-vaxxers with anti-vaxxers, and conspiracy theorists dig deeper down rabbit holes. People’s views grow more entrenched with less exposure to alternate perspectives. Moderates leave as communities become more polarized.
These dynamics accelerate the spread of misinformation and extremism. With no counterbalancing views, falsehoods and propaganda face little resistance. Highly engaged niche groups exert outsized influence on platforms built around engagement. Anger and paranoia thrive inside closed loops.
Echo chambers breed tribalism as groups form identities around shared views they see as under attack by outsiders. Lacking humanizing contact or dialogue with opposing sides, caricatures emerge that demonize perceived enemies. Studies find people dehumanize those with differing political opinions — seeing them as less evolved and lacking empathy.
Bridging these divides will require making our algorithms — and worlds — more open. Platforms should balance relevance with occasional exposure to a diversity of views. Similarly, users can proactively follow those with different ideologies, creating digital “contact zones.” Openness provides an inoculation against misinformation and enables empathy.
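Mechanically, balancing relevance with exposure could be as simple as the sketch below: reserve a fraction of feed slots for content from outside the user’s usual cluster. The exposure_rate knob and function names are hypothetical, and finding a rate that broadens horizons without provoking backlash remains an open design question.

```python
import random

def diversified_feed(relevant: list, out_of_bubble: list,
                     exposure_rate: float = 0.15) -> list:
    # For a fixed share of slots, swap in a post from outside the user's
    # usual cluster instead of the next algorithmically "relevant" one.
    feed = []
    for post in relevant:
        if out_of_bubble and random.random() < exposure_rate:
            feed.append(out_of_bubble.pop(0))
        else:
            feed.append(post)
    return feed
```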
Echo chambers do not arise by accident but by engineering. The super-personalization of algorithms intelligently gives us what we want. But this can undermine what we need — occasional discomfort in the form of new perspectives and challenges to our assumptions. Our task is to build spaces that allow synthesis across tribes, enabling understanding.
Escaping bubbles is difficult when business models profit from maximizing our time inside walled gardens. Revitalizing the web’s connective potential requires fighting fragmentation with curiosity. Our shared future depends on architecture encouraging us to inhabit each other’s worlds, not just our own.
We Became Assholes To Sell Platforms
Behind the algorithms, influencers, and outrage factories lies an even more fundamental driver — the very business models of major platforms. To understand the forces shaping online discourse, we must follow the money.
The dominant model for most major social platforms is advertising. Platforms offer free services in exchange for users’ attention and data, which is used to target ads. Revenue depends on maximizing engagement — keeping users constantly plugged into algorithmic feeds filled with content optimized to compel the brain’s attention.
This incentivizes a focus on user time over quality interactions. Features promoting healthy use, like parental controls or usage limits, undermine profits. Controversy, outrage, and drama are good for business. The ideal outcome? Addicted users trapped in endless scrolls and infinite feeds.
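The arithmetic behind that incentive is blunt. A back-of-the-envelope sketch with invented numbers shows why any feature that trims session time reads, on a spreadsheet, as a straight revenue cut.

```python
def daily_ad_revenue(users: int, minutes_per_user: float,
                     ads_per_minute: float, revenue_per_ad: float) -> float:
    # Revenue scales linearly with total attention captured.
    return users * minutes_per_user * ads_per_minute * revenue_per_ad

# Hypothetical figures, purely illustrative.
baseline = daily_ad_revenue(1_000_000, 45, 0.5, 0.01)     # $225,000 per day
with_limits = daily_ad_revenue(1_000_000, 30, 0.5, 0.01)  # $150,000 per day
```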
These incentives carry societal consequences when outrage fuels engagement. Platforms benefit from — and thus amplify — the most incendiary voices. Extremists drive clicks; conspiracy theories spread faster than facts. Corporate profit motives shape what billions see online.
Platforms have outsized influence on public discourse, with little oversight or transparency. A handful of private companies command unprecedented control over the flow of information and the emergence of narratives, public opinion, and political outcomes. Yet their inner workings are largely shielded behind proprietary algorithms and business secrets.
This lack of accountability incentivizes business models exploiting societal division. Platforms drive engagement by siloing users into echo chambers and reactionary tribes. Protections against harassment, abuse, and misinformation threaten profits — and as a result, they get inadequate attention. Critics argue current incentives are incompatible with healthy discourse.
Reform will require rethinking the dominant ad-based business model. Some argue platforms are public goods that should not be organized around private profit. Others propose regulation forcing transparency around algorithms and moderation. Activist investors pressure companies to prioritize social good over endless growth.
But solutions also depend on public pressure and individual choices. As users, citizens, and would-be non-assholes, we must demand platforms designed to serve humanity’s best interests, not the whims of algorithms, and make conscientious decisions about how we spend our time and attention online. The path forward lies in grappling with the complex economics shaping our virtual world.
Our digital public squares should connect and empower diverse voices in healthy debate — instead, profit motives fuel division. Reclaiming the internet’s potential requires examining how market incentives dictate the platform decisions influencing billions, and then building a future true to the liberatory vision that birthed this transformative technology.
How To Not Be An Asshole
Here’s the good news.
All is not lost. Outrage may grab headlines and assholism may be easy, but the internet remains filled with knowledge, creativity, and human connections that transcend division. Constructive paths forward exist if we have the courage and wisdom to take them. And — as you’ve probably guessed — those paths depend on our ability, and our willingness, to build infrastructure for empathy.
First, we must advocate for reforms of online architecture. Platforms should be pressed to improve content moderation, reduce anonymity, and provide algorithmic transparency. Business models maximizing addiction and outrage should face regulation to align with social good. Section 230 protections are not absolute — accountability can be demanded.
But top-down fixes alone are insufficient. Lasting progress requires a cultural change in how we approach online spaces. We can build habits of critical thinking and emotional self-regulation to resist reactivity. Seek shared truth grounded in evidence, not tribal confirmation bias. Assume good faith until proven otherwise.
And proactively build connections across lines of difference. Share stories and follow accounts of those unlike you. Join forums fostering nuance and perspective-taking. Small acts of openness accumulate into bulwarks against polarization.
Education provides another key lever, equipping the next generation of digital citizens with the ethics and emotional intelligence needed to resist online harm. Schools should teach critical thinking, media literacy, and self-reflection — identifying misinformation and manipulation while finding inner resilience.
Finally, we must remember our common hopes and humanity. The same internet accelerating humanity’s worst also holds breathtaking potential for good. And each of us retains agency to choose how we inhabit virtual worlds. Building a just and truthful online community requires wisdom, courage, and faith in human decency. If we persist, a brighter future remains possible.
At their core, today’s problems originate less in the technology itself than the human choices shaping it. We face a design challenge — conceptualizing and implementing sociotechnical systems aligned with ethical values, open discourse, and the public good. Getting there will be a continual struggle. But one worth waging for the world we hope to leave our children.
This is the central project of our information age, to which each of us is called: rebuilding online ecosystems true to the Enlightenment ideals that unleashed humanity’s creative potential — spaces connecting us in meaningful pursuits of knowledge, justice, and understanding across barriers.
We have all become assholes.
We have made a botched civilization online.
It is within our power to remake it.