This is a short one because I've recently talked with several acquaintances about the different ways they use AI. It dawned on me that most people outside the core tech bubble are still totally oblivious to the fact that all these experimental AI companions - AI girlfriends, AI coaches, AI assistants - are collecting so much data you don't even know about that you should be (at least I would be) scared shitless. In many (maybe most) cases, the teams running them don't have data security as their primary objective.
We really need more trustworthy ways to interact with AI that will increasingly be able to remember things about you specifically. I'm sure you've heard about OpenAI's tests of giving ChatGPT a memory that remembers details about you and your conversations. That's awesome for you as a creator and user, but it's also scary.
You may say: well, it's just ChatGPT, I use it for coding or writing. Maybe you do, but many people use it to discuss intimate and personal things, or to get relationship advice.
Have you ever tried voice mode in the ChatGPT app? If you haven't, I highly recommend giving it a try - it's almost like Her (the movie). It's pretty cool even now, and it's getting better by the day.
I don't know the right solution(s). Maybe ZK (zero-knowledge) technologies could help create safe environments for using such services without fear of misuse, data sale, or plain data accumulation.
When I imagine misuse in AI land, it can be as simple as receiving intentionally adapted answers or suggestions to my questions. That would be way worse than direct ads. I'm well aware that a big part of my worldview could be manipulated this way. It would be even more effective than anything Google or Facebook could ever dream of.
One way I'm imagining safe AI interactions in the future is something like Farcaster Frames for them (I know, I like Farcaster 😀): only I, with my wallet, can access my data, which is stored off-chain, and I can bring it back into my AI assistant's memory to be used during my session. My data use would be ZKed in a way that ensures my data can't be used in any other LLM or non-LLM activity I didn't authorize. And interacting with AI through frames would make it seamless.
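To make that idea a bit more concrete, here's a toy Python sketch of the core property: the memory blob sits off-chain in a form the storage provider can't read, and only the wallet holder can re-derive the key to bring it back into a session. Everything here is hypothetical and deliberately simplified - the "wallet signature" is faked with random bytes, the XOR stream is not real encryption, and there's no actual ZK proof involved.

```python
import hashlib
import secrets

def derive_key(wallet_signature: bytes) -> bytes:
    # Toy key derivation: a real system would use a proper KDF
    # over a deterministic wallet signature of a fixed challenge.
    return hashlib.sha256(wallet_signature).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy XOR stream cipher for illustration only - NOT secure.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Stand-in for an ECDSA signature produced by the user's wallet;
# only the wallet holder can reproduce it, hence re-derive the key.
signature = secrets.token_bytes(65)
key = derive_key(signature)

memory = b'{"likes": "sci-fi", "topic": "relationship advice"}'
stored_offchain = xor_stream(key, memory)     # what the storage provider sees
restored = xor_stream(key, stored_offchain)   # what the user loads into a session

assert restored == memory            # the owner gets their memory back
assert stored_offchain != memory     # the provider sees only ciphertext
```

The missing piece, and the hard part, is the ZK layer: proving to me that the AI service used this data only inside my authorized session and nowhere else. The sketch only covers the "my wallet gates my data" half.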
I'm no security expert, nor a maxi - just an educated observer and user (from my time in pharma, where data rules everything). And I'm saying that the golden age of personal data harvesting is still ahead of us. Let's try to be ready.
I hope this sparked your own imagination and that you'll stick around so we can connect a bit more.
If you like it - bring your friends - they may love it! Feel free to collect this writing too!
Find me as BFG (aka BrightFutureGuy) on socials and let's connect!
- on Farcaster: https://warpcast.com/bfg
- Web3 Magic Podcast on Substack - https://www.web3magic.xyz
- on X: https://twitter.com/aka_BFG