Hey, friends -
Welcome to the latest edition of the Future Archive. Do you know someone who would enjoy this newsletter? Forward this email to them and they can subscribe here.
Let's jump in!
1/ Max Roser writes about how difficult it is to see the absence of something.
2/ Melanie Mitchell writes about the metaphors we use for artificial intelligence.
Words matter. And the metaphors that we use to describe things shape - ever so subtly - how we perceive and relate to the things being described.
The metaphors we humans use in framing LLMs can pivotally affect not only how we interact with these systems and how much we trust them, but also how we view them scientifically, and how we apply laws to and make policy about them.
The field of AI has always leaned heavily on metaphors. AI systems are called “agents” that have “knowledge” and “goals”; LLMs are “trained” by receiving “rewards”; “learn” in a “self-supervised” manner by “reading” vast amounts of human-generated text; and “reason” using a method called chain of “thought.” These, not to mention the most central terms of the field—neural networks, machine learning, and artificial intelligence—are analogies with human abilities and characteristics that remain quite different from their machine counterparts. As far back as the 1970s, the AI researcher Drew McDermott referred to such anthropomorphic language as “wishful mnemonics”—in essence, such terminology was devised in the hope that the metaphors would eventually become reality.
Humans are, of course, prone to anthropomorphize nonhumans, including animals, corporations, and even the weather. But we are particularly vulnerable to this tendency when faced with AI systems that converse with us in fluent language, using first person pronouns, and telling us about their “feelings.”
3/ OpenAI might be considering an ads business model.
Oof.
4/ Oxford University Press' word of the year is brain rot.
Brain rot is marked by a “supposed deterioration of a person’s mental or intellectual state, especially viewed as a result of overconsumption of material (now particularly online content) considered to be trivial or unchallenging.” It has a symbiotic relationship with internet garbage, or, as shoddily made AI-generated content has been deemed, slop, some of which is created by spammers who find financial incentive in flooding social platforms. Brain rot is the symptom, not the disease: It stems from this daily avalanche of meaningless images and videos, all those little tumbling content particles that do not stir the soul.
And yet these ephemera nonetheless seep into our skulls. Slop has a way of taking up valuable space while simultaneously shortening our attention span, making it harder to do things like read books or other activities that might actually fulfill us. Brain rot doesn’t hurt; it’s dulling, numbing, something more like a steady drip. You know you have it when you have consumed but you are most certainly not filled up. And the deluge of disposable digital stuff often feels like a self-fulfilling, self-deadening prophecy: Rotting brains crave more slop.
5/ Amelia Wattenberger writes about how we might simultaneously view information at different levels of abstraction.
If you do one thing, go check out her full essay on her website.
So long, and thanks for all the fish!