Pseudonymity and Social Media After LLMs

AI is snaking its way through our social spaces. Are we ready for mass autogenerated content? Or are we on the brink of information chaos?

With AI-generated content and profiles proliferating exponentially, I’ve frequently been asking myself: “Is this human-generated?” and “Do I even care?”

I believe that, though seemingly simple, these are two of the biggest questions in determining the future of social media and guessing at what new customs will emerge from the coming chaos.


We are just entering the “confusion” stage with AI-generated content.

AI is leading an information revolution, rapidly democratizing the dissemination of information at a time when political and cultural fault lines are already being tested worldwide.

Mass autogenerated content creation that passes the Turing test is here. Deep fakes are here. Democratized self-writing code is around the corner. Everyone in the world has access.

The tools to let anyone create are shifting from beta to production.

Credit: @pascal_bornet

Even our online forums and gathering places aren’t equipped for this, with Stack Overflow banning GPT-generated responses (game over for its current model) and LinkedIn turning into an even greater pile of crap as everybody posts generated garbage.

The Great Propaganda War

I believe that we haven’t seen anything yet, and the chaos will peak in a flood of propaganda machines. The mainstream will, for quite some time, repost deep fakes and consume information without regard for its source.

The echo chambers will thrive on fake content and the Great Propaganda War will ensue (or continue?) as parties try to leverage their boosted voice for however long the window lasts.

Governments will be in favor of this happening, as it gives them a perfect backdrop for expanding power and censorship controls.

A New Order

These trends are unsustainable, which means new social and technological paradigms will soon emerge.

Let’s go back to my initial questions to help us make a few educated guesses about what this may look like.

“Is this Human-generated?” and “Do I Even Care?”

At least for me, the desire to be able to differentiate between human and machine depends on the context. I’m okay with non-human content on my Twitter timeline but would be disappointed to find out that I’ve been chatting with a bot.

I would break it down roughly according to the chart below. As long as sources are cited or the machine carries a positive reputation, I don’t care whether news and alerts, advice, how-to videos, and the like are AI-generated; in many cases it will even be preferred.

However, the accounts I follow for insights, new ideas, and contrarian views I expect to be human.

Likewise, I would not want to invest time in engaging with a bot. I would like to be heard (an important self-protection mechanism) and unless I’m practicing or learning, I hope that my words will have some influence on others (other humans!).

So what does this look like for the social media of the future when we put together the deep fakes and propaganda bots, human engagement preferences, and tech trends?

Death of the Forum

I may regret this, but I’m calling Reddit dead in its current form. Too much opportunity for propaganda and too little proof of humanity.

It may thrive as a news platform for specific niche communities, but why would you reply to comments knowing there’s a good chance they are bots?

A Hard Reversion to Primary Sources

We will (hopefully quickly) see a hard reversion to primary sources, where “primary sources” are either provably human or carry a reputation through proof of work (e.g., a developer of a popular open-source tool).

Sufficiently Provable Humanity via Attestations

So, if there’s an expectation of engagement, we need some kind of proof of humanity.

Is it a choice between Worldcoin eye scanning and Twitter’s identity checks, or will it be sufficient to structure around third-party attestations where the user has some choice?

Third-party attestations, in my opinion, will actually be worth more than simple proof of humanity. Bots will buy proof of humanity just like they buy verified accounts, and while that’s also possible with attestations, structuring around attestations provides unlimited permutations.

For example, the following would be plenty sufficient for me if I wanted to only follow, see, and interact with accounts that I’d trust as being mostly human:

1) Staked at least $10 of BTC into their social account (first-party attestation)


2) Verified themselves as unique via 1 out of 10 trusted third-party attesters OR been deemed human by a third-party human-identification attester
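A policy like this could be sketched as a simple filter function. This is a minimal sketch, not any real platform's API: all the attester names and the `Account` shape are hypothetical, and I'm assuming both conditions must hold, with condition 2 satisfiable by either route.

```python
from dataclasses import dataclass, field

# Hypothetical registries of attesters the user chooses to trust.
TRUSTED_UNIQUENESS_ATTESTERS = {f"attester-{i}" for i in range(10)}
HUMAN_ID_ATTESTERS = {"proof-of-personhood-svc"}
MIN_STAKE_USD = 10  # 1) minimum first-party BTC stake, in USD


@dataclass
class Account:
    staked_usd: float = 0.0                  # value of BTC staked to the social account
    uniqueness_attesters: set = field(default_factory=set)
    human_id_attesters: set = field(default_factory=set)


def trusted_as_mostly_human(acct: Account) -> bool:
    """Return True if the account satisfies both example conditions."""
    has_stake = acct.staked_usd >= MIN_STAKE_USD                              # condition 1
    verified_unique = bool(acct.uniqueness_attesters & TRUSTED_UNIQUENESS_ATTESTERS)
    deemed_human = bool(acct.human_id_attesters & HUMAN_ID_ATTESTERS)
    return has_stake and (verified_unique or deemed_human)                    # condition 2
```

The point of the permutations argument is that this predicate is just one of many a user (or client app) could compose: swap the stake threshold, require 2-of-10 attesters, or add reputation attestations, all without a central platform deciding the policy.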

While Twitter is focused on proving you are who you say you are, there is an opportunity for new and emerging social media platforms to build around a more customizable and experimental model.

Censorship Resistance

Bots aren’t the only ones that will be using AI. On the other side, we will have big state tech leveraging more advanced censorship models.

As censorship gets smarter with AI advancements, the need for censorship-resistant social media platforms increases.


We’re in for one hell of a ride! It’s hard for me to guess at the timeline here, but it feels like within the next five years half of the major companies will either be disrupted or have changed their models significantly.

The current socio-political backdrop begs for distributed, censorship-resistant alternatives to social media. Combined with the rapid-experimentation nature of open-source software and the cypherpunk community, this may provide a real chance at clawing back some of the power from big (state) tech and leveling the playing field of ideas.
