“Technology is ideology”
Neil Postman
Intern has been a loose part of the NEAR community for over two years now. Back in those days, NEAR prided itself on a diverse community, focusing on sustainability and openness.
Since then, the focus has shifted more than once, and now the only prominent themes are AI and chain signatures.
There’s nothing wrong with having a focus. But there are many questions that intern feels aren’t addressed.
And this constant uncritical shilling of AI agents and whatnot as the future feels like a huge turn-off to someone who self-identifies as a writer.
You’re telling me that putting sentences together is a useless pursuit for an individual - AI could do this so much more efficiently.
You’re telling me I need not bother sending messages to frens or being active on (a)social media because an agent can do this for me.
You’re telling the 80% of employees in Fortune 500 companies they are essentially useless.
They can easily be replaced once everyone else in the company gets an AI agent.
What’s not to love about this messaging?
It’s a Hobbesian world, a zero-sum game where everyone who does not contribute in a way that can be captured in data will be made redundant.
It’s a dream for the Capitalist. Until now, they relied on illegal immigrants who’d put up with all sorts of shitty work conditions, reducing the negotiating power of labor.
Now, there are AI agents.
A spectre is haunting the internet.
Of course, this is slightly exaggerated, but you get the point.
NEAR doesn’t advocate for the current state of AI to remain in the hands of a few powerful elites after all.
And still, whenever there is messaging about this, there’s always an immediate association with Big Tech and their attempts to convince us that AI is the next big thing.
Unfortunately, the history of technological progress does little to paint the picture in a more flattering light.
“Infatuation with machine intelligence encourages mass-scale data collection, the disempowerment of workers and citizens, and a scramble to automate work [even if the automation is so-so at best]”
Acemoglu & Johnson in Power and Progress
Even if we acknowledge that maybe using blockchain and crypto primitives allows individuals to claw back some control, the underlying question remains: is this whole technology the next big thing?
Big Tech appears desperate to make us believe this - which adds to the suspicion.
“The clearest sign to me that the generative AI moment is passing is the stink of desperation. Every major tech company has been ramming ads for AI products down our throats, and every single one […] is uniquely terrible.”
Brian Merchant
Be warned, though: if you hold such a stance in public, others will be quick to call you a Luddite - forgetting that the Luddites did not revolt against technology as the thing an sich (to reference Kant).
Instead, they were worried about working conditions, proper training for machine operators, and, lastly, the quality of the products and craftsmanship.
We’ve become so used to products designed for obsolescence that few can imagine using a phone any longer than until Apple releases the next one.
Craftsmanship? We don’t expect that anymore.

But it’s exactly what artists do.
There have always been artists leveraging algorithms and tech to create their works.
Look at Tezos’ early art days, and you’ll see many interesting creations generated through a collaboration between artists and machines. Why Tezos? It was the only place that gave artists the tools to create algorithmically. Back then, this was avant-garde.
Now, everyone is prompting Midjourney, Dall-E, and whatnot in the style of Picasso to generate “art.”
Whether that’s really art depends a little on your perception of what art should do.
To me, art always expresses someone else’s perspective on the world.
Marcel Proust famously wrote that the real voyage of discovery isn’t to travel to far-off places but to see with another set of eyes.
“Looking at art for an hour or so, always changes the way I see things afterwards”
Denis Johnson in The Largesse of the Sea Maiden
While you can capture the world with a photo, is it portraying the world as you see it? If our reality is a construct, always tainted through the glasses of our subjectivity, then the answer is no.
Does AI have such glasses?
Also no.
What you get when you prompt a genAI to deliver an image of a blue horse isn’t akin to what Franz Marc painted in 1911.

It’s just an aggregate of the existing images of blue horses online.
Same goes for writing. It might mimic great writers, but it isn’t them.
When an artist creates art, it’s always an act of communication. It’s delivered with an intention - a result of many choices made throughout the creation process.
There’s no similar amount of intention when using generative AI.
Even if your prompt is hundreds of words long, it still falls short of the choices you’d have to make to write a 10,000-word novel.
The result you get often feels shallow. That’s no coincidence.
Great writers and artists operate at the edge between the existing and something that wasn’t there before. They break rules. And they treat their viewers as capable of apprehending meaning, as fellow humans.
GenAI gives you the most boring average.
That doesn’t mean it can’t be a tool for creatives.
In the early days of Dall-E, Bennett Miller used it to create fascinating images, spending months revising and creating ten thousand versions in the process, of which only ten made it into the final exhibition.

There are plenty of artists who’ve used other tools in similar ways.
However, this isn’t an attractive product for the public. The current crop of GenAI companies is trying hard to sell us a tool that doesn’t require months to generate one piece of art. The slogan is something along the lines of unlocking creativity.
This also suggests that they don’t believe that effort needs to be part of the equation.
As Ted Chiang writes in the New Yorker, “Generative AI appeals to people who think they can express themselves in a medium without actually using that medium.”
Be a writer without actually writing. Be a painter without actually painting.
Great.
The bigger issue here is this: do we really need more mindlessly generated pieces of writing/painting etc. in the world?
Just because GenAI is using language does not mean it’s doing the same thing we do when using language. It does not actually care.
A human does.
“Much of what is called progress is just humans needing less and less conscious attention to perform something and lead their lives.”
Jacob Needleman
Maybe that’s why the outrage was big when Big Tech ran ads in which a little girl writes a fan letter to her favorite athlete using AI.

We don’t expect such a letter to be a masterwork. But in the girl’s heart, having written it herself makes a big difference.
Not all writing created by a human is worth reading for others.
Not all paintings are works of art.
Sometimes, we create things just for ourselves. Creating is autotelic.
Prompting, not so much.
Maybe that’s what makes it feel so unsatisfying.
It does not require conscious attention or intention.
Already, the internet is flooded with generatively created images - from mushrooms to composers.
Writing has become something with the sole goal of selling, even though much of the greatest writing was created by people who intended no such thing. Kafka famously wanted his works burned, and Pessoa’s works only came to light long after his death.
“In modern life, the world belongs to the stupid, the insensitive and the disturbed. The right to live and triumph is today earned with the same qualifications one requires to be interned in a madhouse: amorality, hypomania and an incapacity for thought.”
Fernando Pessoa in The Book of Disquiet
Writing is thinking. Creating is thinking.
This is the biggest fear intern has with the increase in AI-generated anything.
Will we just outsource our thinking completely to a machine?
Will we reduce our expectations for art and, eventually, to each other?
Where is the human in a world flooded with agents judging us based on a collection of data?
It’s nice to say that there’ll be a premium for humans.
But I have serious doubts that the world will value human creations once it has already been inundated with mediocre art. And the mediocre stuff is so much cheaper.
Market forces are powerful, and they are clearly pushing in one direction.
But if we stop creating at the edge, what will AI learn from?
Already, papers suggest that LLMs aren’t getting smarter.
So maybe they are not all that intelligent? And maybe Moore’s law does not apply to them?
Maybe all the energy expended on this could be better leveraged elsewhere.
How do we even justify conversations with ChatGPT using up 10 liters of water in an age plagued by droughts?
Intern has no answers.
Just many questions and a feeling that humanity is being lost in much of the current messaging around AI in the NEAR ecosystem.
All of the above seem like sensible concerns, but they aren’t discussed at all.
There were moments when I considered just deleting the intern X account.
Might still happen.
After all, I care for being a human, with all the irrationality and the existential crises that are part of that.
I care for writing. It’s not a means to an end for me. It’s, as Flaubert would say, writing for writing’s sake.
If you tell me that using ChatGPT would be so much more efficient, it says more about you - and potentially your view of humans - than about me.
“What's the value of information without critical thinking, art without authenticity, and creation without originality?”
This post was inspired by Blackdragon's event called Artist vs the Machine.