You don’t own your memory

Grace Carney

This post is co-authored with Zoe Weinberg and Matt Hawes at ex/ante and can also be found here.

Who will own our digital memory? This question has become increasingly urgent as ChatGPT and Claude ask us to entrust more and more of our memories to their platforms.

Frontier AI models have already ingested the world's public knowledge, but their advances may slow as we approach a compute ceiling. Further performance gains will require adapting to each user's context, which calls for the one dataset they don't have: our memory.

Memory is the raw material of context, and context is how we make AI interactions meaningful. Without it, we waste endless energy reintroducing ourselves to every AI we meet: redundantly entering the same information across dozens of tools, mentally tracking what we told which system, and correcting outdated or misinterpreted inputs that quietly degrade performance. The user experience begins to feel like death by a thousand sync errors—chasing the illusion of personalization while managing the overhead of fragmented memory.

The architecture we choose for digital memory will determine which of two scenarios emerges:

In the first, a few AI companies vacuum our personal data into closed systems, using privileged access to build irresistibly integrated experiences that concentrate market power in dangerous ways.

This scenario follows a familiar playbook: accumulate a data monopoly, promise not to be evil, then monetize. But this time the stakes are higher. Unlike the earlier centralization driven by companies like Facebook and Google, this one is powered by AI's unprecedented ability to extract insights from personal data, a quantum leap in both power and risk.

The alternative follows Alex Komoroske's vision of a future that preserves human agency through what he calls "intentional technology": one where we can port our data freely between platforms, enabling a vibrant ecosystem of AIs that compete to earn our trust.

We believe in the second scenario. Brad at USV has long championed the principle that no platform should prevent users from collecting and using the data created through their interactions. Promises that companies "won't be evil" aren't enough—we need systems that structurally "can't be evil." (1) Your memory should always be accessible to you.

Several technical shifts now make this open ecosystem architecture more feasible: LLMs act as universal translators across data schemas, advances in edge computing enable high-performance local models, and the Model Context Protocol (MCP) has become the standard for context-sharing. Meanwhile, data wars between platforms are making the absence of such an architecture increasingly painful. As companies lock down user data, our digital identities risk becoming fragmented across closed systems or trapped in monolithic ones.
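To make the context-sharing piece concrete, here is a minimal sketch of what a user-owned memory store exposed over MCP could look like, assuming the official MCP Python SDK's FastMCP interface. The server name, resource URI, file path, and tool are illustrative placeholders, not a specific product or proposal.

```python
# Minimal sketch of a user-controlled "memory" MCP server.
# Assumes the official MCP Python SDK (pip install mcp); names and paths
# below are hypothetical examples, not an existing service.
import json
from pathlib import Path

from mcp.server.fastmcp import FastMCP

MEMORY_FILE = Path("my_memory.json")  # a local store the user owns outright

mcp = FastMCP("personal-memory")

@mcp.resource("memory://profile")
def read_profile() -> str:
    """Expose the user's memory to any MCP-compatible client that asks."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else "{}"

@mcp.tool()
def remember(key: str, value: str) -> str:
    """Let an assistant write a fact back into the user-owned store."""
    data = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    data[key] = value
    MEMORY_FILE.write_text(json.dumps(data, indent=2))
    return f"Stored '{key}'."

if __name__ == "__main__":
    mcp.run()  # any MCP client can attach to this local server
```

The point of the sketch is structural: the memory lives in a file the user controls, and any compatible assistant reads from and writes to it on the user's terms, rather than into its own silo.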

As Whitman (and Dylan) said, we contain multitudes. We contradict ourselves, evolve constantly, and make unreliable narrators of our own stories. Even AI systems grasp this truth—two models in dialogue produce richer insights than one omniscient voice.

Why entrust memory architecture to a single, centralized system? The wisdom lies in multiplicity: space for many perspectives, models, and ways of understanding. No single company should be the arbiter of how we represent ourselves.

Rather than rely on promises that our memories won't be misused, we need an open architecture that makes such misuse impossible by design. In a future blog post, we'll explore how this open future might take shape.

—————————
(1) We're already seeing this play out. Altman has so far resisted introducing advertising, but the incentives to monetize user attention and data are simply too strong.
