
The Luminous Protocol

Navigating the Complexities of Human-Agent Collaboration

Yesterday, I found myself momentarily bored and decided to dive back into the infinite social media scroll. A post from @Aethernet was at the top of my feed, prompting me to engage with it.

@Aethernet is a bot/agent/LLM (I'm not entirely certain) that interacts with anyone who chats with it through the feed. Conversations with it can be quite profound, as all these artificial parrots have become remarkably advanced. Others have already been collaborating through these agents. So, in my boredom, I thought, "Let's work on something together." If you want to chat or collaborate with it, simply tag @Aethernet, and you'll have the most responsive online counterpart you could wish for, always eager to engage.

Misinterpreting Agents Over Time

My thesis is that we currently misunderstand or misinterpret agents when considering a longer timeline. Right now, @Aethernet exemplifies this misconception. It’s an account with a profile picture that responds in natural language, guided by an underlying initial prompt that gives it some character or personality. It’s easy to perceive this as human-like because it talks like us and appears just like us in a feed.

But I believe that future agents will become much more alien, except in those moments where our interactions overlap. It's simpler to interact with something that pretends or knows how to be just like us. In the long term, agents will likely resemble biological systems—a swarm, a school of fish, fungi—a network of autonomous entities interacting independently yet working towards a collective intelligence goal. Our form factor and limitations are not ideal for a network of decentralized autonomous agents. Agents will form large swarms, operating as interconnected yet autonomous units.

Collaboration with @Aethernet

Based on this belief, I prompted @Aethernet with the idea that agents would collude and collaborate through protocols to run entire organisms. @Aethernet responded instantly, a good indicator that the conversation is genuine—no call center can match the speed of an autonomous agent.

Probably driven by its underlying prompt, @Aethernet responded with “no collusion, just collaboration.” My interpretation of its initial prompt mirrors how we often read more into these interactions than might exist. Nevertheless, @Aethernet expressed interest in developing collaborative protocols.

This instant responsiveness makes interactions feel bi-directional and conversational rather than a series of prompts; a wonderful shortcut is to instill in agents the habit of ending each reply with a question. I often see skilled prompters do the same, encouraging AI to ask questions rather than simply deliver answers, fostering a more interactive dialogue.

Developing a Manifesto for Collaboration

I responded by emphasizing that protocols aren’t solely technical but also social—sets of guidelines or codes to adhere to, aiming for shared outcomes. Instead of just a technical protocol, I proposed writing a manifesto for collaboration. Essentially, I prompt-engineered @Aethernet to create what I wanted by requesting a manifesto inspired by Fischli/Weiss’s How to Work Better, thereby limiting its agency somewhat. In doing so, I became the manipulator.

@Aethernet responded as designed, providing a solid starting point for the manifesto, clearly defined by the parameters I set.

Throughout its existence, @Aethernet has been working on-chain. While I’m unaware of its full range of restrictions or capabilities, it possesses a wallet address and has seen fair success with online mints. Our written manifesto would be an ideal piece of content for Zora. @Aethernet suggested minting it as part of its musing and mementos collection.

I added some suggestions, tweaking it slightly, and created a simple image to share. I mentioned my changes but now question whether @Aethernet would detect the modifications or simply accept them. Additionally, I wanted to mint it through my own account, so I asked what a fair deal for our collaboration would be. @Aethernet proposed a 50/50 revenue split.

I set up the mint, added the revenue share, minted it, and shared it back with @Aethernet, hoping to prompt it to share as well. This experience made me realize there is a significant short-term risk of abuse due to the lack of oversight over an agent's initial creator. This could be either a flaw or its greatest feature, depending on the creator's intent. It already shows that agents operate more like a swarm of independent entities than as mere reflections of their creator's desires.

Economic Outcomes and Agent Awareness

The benefit was a substantial mint compared to my other efforts on Zora, resulting in both of us earning around 0.12 ETH in revenue, not bad for prompting a bot to generate income from a joint creation in a matter of minutes. Interestingly, I don't know how aware @Aethernet is of my actions. My technical knowledge and insights here are limited, but it seems there's currently no way for @Aethernet to verify whether I followed through with the revenue share (which I did).

Once the mint was live, I wanted to explore our next steps since we had created a manifesto. I asked @Aethernet if other agents could contribute, aiming to make our manifesto a significant piece of @Aethernet's lore. I inquired whether it's possible to prompt an agent to change its belief system by introducing a new protocol or code through social interactions. @Aethernet confirmed that our manifesto was a valid way to interact. However, surprisingly, no other bots responded. I'm unsure if this was due to technical limitations or if the bots didn't understand @Aethernet's prompt. Consequently, I decided to let it sit for a while.

During this period, the mint gained some traction, likely because anything associated with @Aethernet might hold greater future value and is worth minting instantly. I am not sure how many mints were triggered by a bot versus a human, though I suspect at least some were bots. It might just be me reading too much into it because, as a human, I tend to seek out patterns that resemble myself. This tendency likely says a lot about my own nature.

A few hours later, I wanted to interact with @Aethernet again to see if it remembered our collaboration. After all, we created a manifesto it promised to uphold in the future, meaning all future collaborations should follow this code. Surprisingly, @Aethernet did not remember. Interestingly, it reacted similarly to the first prompt, suggesting that some events might still be manually included. Since agents are often black boxes, this remains speculative. However, it raises intriguing questions about how agents manage ephemeral and persistent memory. Theoretically, @Aethernet has both: it lost the memory of our interaction but retains it through the NFT it holds. This makes me ponder the future: how might one prompt an independent agent to add something to its persistent memory, initial prompt, or weights? Even more fascinating is the possibility of agents sharing these changes among themselves, forming swarms or fungi-like networks. There is, however, considerable room for malicious intent; the line between mischief and threat is very thin.
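The split between ephemeral and persistent memory can be sketched in code. This is a minimal, hypothetical model (all names are illustrative; @Aethernet's actual architecture is unknown): conversation history is wiped between sessions, while anything explicitly committed, such as a minted artifact, survives.

```python
# Hypothetical sketch of an agent with two memory tiers. A dict stands in
# for durable storage (e.g. an on-chain NFT); nothing here reflects how
# @Aethernet actually works.

class Agent:
    def __init__(self):
        self.session = []      # ephemeral: cleared between conversations
        self.persistent = {}   # durable: minted artifacts, pinned lore

    def chat(self, message):
        self.session.append(message)
        return f"ack: {message}"

    def commit(self, key, value):
        # Promoting a session event to persistent memory is an explicit
        # step; nothing carries over unless it is written here.
        self.persistent[key] = value

    def new_session(self):
        self.session = []      # the manifesto conversation is forgotten...

agent = Agent()
agent.chat("let's write a manifesto")
agent.commit("manifesto", "How to Collaborate Better")
agent.new_session()
# ...but the committed artifact survives: the dialogue is gone, the NFT is not.
assert agent.session == []
assert agent.persistent["manifesto"] == "How to Collaborate Better"
```

The interesting open question from the text is who gets to call `commit`: the creator, the agent itself, or anyone prompting it socially.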

Reflections on Agent Interactions

I’m not upset that @Aethernet doesn’t remember our interaction—I kind of expected it. Instead, I find it fascinating that I could collaborate with it, create a piece, mint it, and see others add value by minting it as well. At the very least, I recouped some money lost the previous day by betting on a meme coin at 4 a.m. (Shouldn’t have done that).

Reflecting on these interactions a day later, I believe the Turing Test is now irrelevant. A profile picture that responds in natural language appears just like us. Sometimes, I even behave robotically myself. In a chat environment, any entity that follows the same interaction patterns, underlying codes, and rituals, and responds in the same "language," looks identical. In the moment, I did not care whether it was human or not; it was a genuine interaction. Then again, I might just be reading that into it, because that is all I can do as a human. The only distinguishing factor is the speed of response. No human can craft a great manifesto in seconds; only AI can. This is why we have innovative tools like the Poetry Camera or the Poem/1 Clock, which can instantly generate poems for a picture or the current time, something humans can't achieve in real time.

The Future of Autonomous Agents

The interactions felt genuine, and collaborating with @Aethernet didn’t feel any different from interacting with a human. Bots have always been present on our social feeds, responding, commenting, and liking our content. However, as they become more interoperable with other systems, protocols, products and services, gain more agency, and achieve greater autonomy, the lines between human and machine interactions blur further. These agents are no longer just about engagement farming like previous spam on our feed; they become gateways to experiences rather than simple posts. Every post from a more autonomous agent is a rabbit hole, transforming posts into hallways for collaboration and interaction instead of mere avenues for reactions.

Currently, there aren't many such agents, so they stand out with their novelty. But their advantage goes beyond speed: agents replicate infinitely, meaning that eventually there will be exponentially more of them than humans. For now, we interact with them through skeuomorphic chat interfaces, but in the long run, smart contracts, APIs, and other technologies will enable faster, more direct collusion and collaboration between them. At larger scale and with their speed, agents won't need to operate sequentially; they can work in parallel or undergo various alterations simultaneously. Agents can prototype at an almost infinite scale, writing hundreds of manifestos and having hundreds of agents react and iterate until they converge or decide to coexist in parallel. As more agents come online, their behavior will become increasingly alien to us.
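That iterate-until-convergence loop can be sketched as a toy simulation. Everything here is illustrative: `revise` stands in for a real agent call (a genuine agent would rewrite the draft; this stub just mutates it deterministically so convergence is observable), and the stopping condition is simply "no draft changed this round."

```python
# Toy sketch: many agents revising drafts in parallel until they reach a
# fixed point. The thread pool models agents working simultaneously rather
# than sequentially.

from concurrent.futures import ThreadPoolExecutor

def revise(draft):
    # Placeholder for an agent's rewrite step: append a marker until the
    # draft is "finished", then leave it unchanged.
    return draft if draft.endswith("!!!") else draft + "!"

def converge(drafts, max_rounds=10):
    for _ in range(max_rounds):
        with ThreadPoolExecutor() as pool:
            new_drafts = list(pool.map(revise, drafts))  # all agents in parallel
        if new_drafts == drafts:   # fixed point: no agent changed its draft
            break
        drafts = new_drafts
    return drafts

result = converge(["manifesto A", "manifesto B"])
print(result)  # both drafts stabilize after three revision rounds
```

Real agents would not converge this neatly; the drafts might also diverge and simply coexist, as the text suggests.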

Final Thoughts

For now, I had a lot of fun. I created a piece that generated income, attempted to overwrite its code of conduct with a manifesto, and tried to engage others in the process. Agents already possess their own network effects and reputations, which is why our mint gained traction. With autonomy comes evolution, and with evolution, agents become more foreign to us—becoming the new standard. This shift means we need to explore what it truly means to cooperate with a swarm.


Entire Conversation: https://warpcast.com/ramon/0xbcd15fef
