Worldcoin has been designed to address the concern that, in a world saturated with artificial intelligence, we are going to need proof of humanness. As true as that might be, I believe we need to go much further and also tackle truth in content.
This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here.
I am a sucker for new technology. Whenever a new gadget comes along, I feel almost compelled to buy it so that I can see for myself how it works and what it can do for me. As soon as a new digital service opens its doors to beta testers, I am almost always as close to the head of the line as my time zone will permit. Which is why, the moment I heard of Worldcoin, the first thing I did was identify the locations in India where registration services were being offered.
The inspiration behind the creation of Worldcoin is the very real concern that at some time in the not-so-distant future, the world will become so saturated with artificial intelligence (AI) that we will find it difficult—impossible even—to determine whether those we are interacting with are humans or not. The solution on offer is to create a verifiable digital identity that users anywhere in the world can use to identify themselves as human, so that we can all tell whether we are having a conversation with a bot or not.
The first step in registering for a World ID is to visit, in person, one of the many locations around the world where a Worldcoin Orb has been placed. The Orb is a device with specialised biometric imaging technology that can verify humanness and uniqueness in a privacy-preserving way.
After using its sensors to confirm that the person standing in front of it is a human, the Orb captures a series of iris images that it processes into a unique digital representation of your iris. Once this iris code is created, the image data is deleted from the Orb. Each iris code is compared with every other iris code created anywhere in the world in order to establish uniqueness. If you have not previously been verified, your World ID is added to the list of verified World IDs. The theory is that once digital services start to incorporate this identity into their systems, we will be able to tell whether the entity we are interacting with is a human or an AI bot.
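The enrolment logic described above can be sketched in a few lines. This is a toy illustration, not Worldcoin's actual implementation: it assumes iris codes are fixed-length bit strings compared by Hamming distance, and the function names, code length and threshold are all hypothetical.

```python
def hamming_distance(a: bytes, b: bytes) -> int:
    # Count the bits that differ between two equal-length iris codes
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def is_unique(new_code: bytes, registry: list[bytes], threshold: int = 80) -> bool:
    # A code counts as "unique" only if it is sufficiently far
    # from every code already enrolled
    return all(hamming_distance(new_code, code) > threshold for code in registry)

registry: list[bytes] = []
code = bytes([0b10110010] * 32)  # a toy 256-bit iris code

if is_unique(code, registry):
    registry.append(code)  # enrol: a new World ID would be issued here
```

A distance threshold, rather than exact equality, is what makes the scheme tolerant of the small variations between two scans of the same iris.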
As I thought about what Worldcoin was offering, I could not help but be struck by the similarity between this and India's own digital identity service, Aadhaar. It too establishes uniqueness through biometric verification (fingerprints and iris) and, in order to preserve the security and privacy of the individual, converts the raw image information into a set of vectors that uniquely identify individuals, in much the same way that the Orb uses an iris code. Since Aadhaar is a truly digital identity, it can, like World ID, be incorporated into digital workflows to offer the same proof of humanness that World ID has been designed to provide.
All of which is to say that India already has a biometrically de-duplicated digital identity system in which over 1.4 billion people are enrolled. Should we so choose, we could use it to solve the exact same problem that Worldcoin has been created to address.
We Need To Do More
The more I reflected on what Worldcoin is setting out to do, the more I came to think that it is not being ambitious enough. At present, all it is doing is giving us tools to identify human actors in a world about to be increasingly populated by artificial intelligence agents. While this is a worthy endeavour, it is necessary but not sufficient. In a world already saturated with untruths, the far greater problem is what will happen to fake news when generative AI starts to exponentially expand the volume of fabricated text, audio and video content—producing imaginary information so utterly believable that it is indistinguishable from the truth.
This is the problem that I believe organisations like Worldcoin should be concentrating their energies on. Instead of merely distinguishing humans from bots, they should look to devise means by which we can more efficiently associate fake content with its purveyors. One way to achieve this would be to create digital identifiers for content—tokens, if you will—that, once inextricably linked to a unique digital identity, will let readers know not only that it was created by a human, but also which one.
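To make the idea concrete, a content token could simply bind a hash of the content to the creator's identity credential. The sketch below is purely illustrative and is not any scheme Worldcoin has proposed; a real system would use a public-key signature, whereas this toy uses an HMAC with a hypothetical per-identity key as a stand-in.

```python
import hashlib
import hmac

def tokenise(content: str, identity_key: bytes) -> str:
    # Hash the content, then bind that hash to the creator's identity key.
    # (HMAC stands in here for a real public-key signature.)
    digest = hashlib.sha256(content.encode()).hexdigest()
    tag = hmac.new(identity_key, digest.encode(), hashlib.sha256).hexdigest()
    return f"{digest}:{tag}"

def verify(content: str, token: str, identity_key: bytes) -> bool:
    # The token checks out only if both the content and the identity match
    return hmac.compare_digest(token, tokenise(content, identity_key))

key = b"creator-world-id-secret"   # hypothetical key tied to one World ID
token = tokenise("My article text", key)
```

Because the token depends on both the content and the identity, tampering with either invalidates it—which is exactly the property that would let readers trace content back to its creator.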
Once everyone can associate content with the specific human being who created it, trolls will no longer be able to hide online. The content they generate will be associated with them, and as more and more of their content is shown to be fake, consumers will slowly gravitate away from them.
When reputational risk automatically attaches to the very act of creating and disseminating content, creators will realise that they need to think carefully about what they say online if they want to retain credibility and attract the audiences they need. Those who today have built vast armies of followers using click-bait tricks and fake news will no longer be able to amass such numbers when it starts to become evident that the content they create cannot be trusted. If content originators can no longer hide behind the anonymity that the internet offers, a premium will start to be placed on the content of creators who have taken the trouble to release tokenised material that is verifiably trustworthy.
In the end, I did not get myself a World ID. It is not that I distrust the technology behind the effort, or that I worry my personal information might be misused. It is because I believe that, even if it is not the case right now, the digital ID I already have will eventually be all I need to prove my humanness.