The Loss of Immunity

While the world has been agog with the news of the arrest of Telegram's CEO, followed in quick succession by the decision to boot X (formerly Twitter) out of Brazil, a court ruling in the US is likely to have a much more far-reaching impact on how content is delivered online.

This is a link-enhanced version of an article that first appeared in the Mint. You can read the original at this link.


At the dawn of the internet, almost all web-based businesses were located in the US. In order to protect the nascent online space, the US Congress enacted Section 230 of the Communications Decency Act, a provision that shielded network operators from being punished for what their users posted. As the internet spread around the world, other countries followed suit, as a result of which social media platforms now enjoy similar immunity almost wherever they operate.

Durov Arrested

Last week, as he stepped off his private jet in France, Pavel Durov, billionaire owner of the messaging app Telegram, was denied that immunity. He was arrested and charged with operating a platform that was being used to commit a litany of crimes. With that, he has joined a small but growing club of tech CEOs who have been held responsible for what others do on their platforms.

There has always been a tension between social media platforms and the nations in which their users reside. Platforms want their users to have the freedom to post whatever they choose, so that the widest possible cross-section of interests is represented on their sites. Nations, on the other hand, want the content on these platforms to comply with national laws, and are often willing to go to considerable lengths to ensure as much.

For the most part, this manifests itself through takedown notices or instructions issued to platforms to remove illegal content. Most laws offering network service provider immunity also stipulate that those protections will fall away if platforms do not take down content they know is illegal. Telegram had obstinately refused to comply with takedown notices, which is why Durov was arrested when he set foot in France.

Brazil and X

I thought this was going to be the biggest tech story of the week, until 12:01 on Saturday, when access to X (formerly Twitter) was suspended in Brazil. Anyone attempting to access it, even through a VPN, was liable to be fined nearly $9,000 a day. This order marked the culmination of an increasingly belligerent exchange between X owner Elon Musk and Justice Alexandre de Moraes, the Brazilian Supreme Court judge vested with the responsibility of keeping the country's internet free of misinformation. As part of his investigations, Justice de Moraes had ordered 140 accounts on X to be shut down, ruling that they were being operated by right-wing conservatives who were using them to question former President Jair Bolsonaro's loss in the 2022 election and to sympathise with the mob that stormed Brazil's Congress in January 2023.

While X initially indicated it would comply, Musk did a sudden volte-face, accusing Justice de Moraes of destroying free speech for political purposes and calling him a “criminal cosplaying as a judge.”

In response, Justice de Moraes extended his investigation to include Musk himself and threatened X's representatives in Brazil with arrest. He then froze the finances of Starlink, another Musk company, which provides satellite internet services in the country, in an attempt to collect the $3 million in fines he had levied against X.

If there is anything these incidents teach us, it is that countries around the world have finally reached the end of their tether with the way content moderation is currently being carried out online. They are no longer willing to let tech companies determine what content their citizens get to consume. And an increasing number of them are willing to take stern action, if required, to ensure compliance, even if that means denying these companies the immunity they have historically relied on.

As inconvenient as this might be, companies with global ambitions should be able to reorganise themselves so that their operations align with the requirements of each country in which they operate.

Algorithms are First-Party Speech

But there is also a lesser-known development that I fear will have a much more serious impact on how we consume content online. One that internet businesses might find much harder to adjust to.

In a judgement delivered last week, the United States Court of Appeals for the Third Circuit ruled that recommendations generated by TikTok's algorithm constitute expressive activity and should, as a result, be treated as “first-party speech.” Since Section 230 only protects platforms from liability arising from third-party content, this meant that TikTok would not be immune from the consequences of its algorithm's recommendations.

The US judgement was issued in a case brought by the mother of 10-year-old Nylah Anderson. After watching a video on the ‘Blackout Challenge’ (a TikTok trend that encourages users to record acts of self-asphyxiation), Nylah accidentally hanged herself. While the video that led to her death was created by a third party, TikTok's algorithm had recommended and promoted it on a ‘For You’ page uniquely curated for her. For this reason, Nylah's mother was convinced that TikTok should be held liable for her daughter's death.

Today, almost all the content we are presented with has been shaped, in one way or another, by an algorithm. That is the only way, in this age of abundant content, that anyone can reasonably hope to figure out what to consume next. If algorithmic recommendations are going to be treated as first-party speech and therefore denied immunity, web companies will balk at using them. This will have a significant impact on how the online world works. And I am not sure we are ready for it.
