Subset is designed to enable the sharing and search of saved things amongst peers. But because the sharing and saving mechanics are not yoked to a particular platform, there are trust issues. One of them is: how does a sender or recipient verify that the other party is in fact who they appear to be? Within the network that Subset catalyses, how do we allocate trust and thus capabilities?
This is not a new problem, by any means, but we have been thinking about models and frameworks and concepts to help us work through it. Below is one such thing.
Traditional trust models lean on a centralised party to either:
Supply a network from which it is possible to infer node-to-node connectivity and allocate trust and capabilities accordingly (e.g. LinkedIn's 1st, 2nd, 3rd degree connections, Facebook's friends of friends)
Act as an entity that, based on some provided evidence, assesses and authorises the claimed identity of nodes in the network (e.g. enterprise authentication solutions)
Subset, like other newer initiatives, can't lean on those prior methods because incumbent platforms are deemed at best indifferent and at worst hostile to the newcomer's mission. So, what's the alternative?
One we've been thinking about is focused on the number of channels that two parties share. A channel is simply a means for one user to send or receive a message from another. It's a transient rail for information exchange. A channel instance is counted when both users assert a positive identification of the other in that channel within some constrained time window.
For example, you have a friend that you communicate with in multiple ways. You send WhatsApps, exchange emails and trade direct messages on X. You also have an acquaintance who you only message on LinkedIn. You are connected to your friend via three channels and to your acquaintance via one. There are also people of whom you are aware but with whom you do not communicate over any channel.
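To make the channel-counting idea concrete, here is a minimal Python sketch. Everything in it is illustrative rather than Subset's implementation: the Assertion record, the shared_channels function and the thirty-day window are our assumptions. The only rule it encodes is the one above, that a channel counts when both parties assert a positive identification of the other within some window.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical window within which both parties must assert identification of each other.
ASSERTION_WINDOW = timedelta(days=30)


@dataclass(frozen=True)
class Assertion:
    """One user's claim to have positively identified another user on a channel."""
    asserter: str   # user making the claim
    subject: str    # user being identified
    channel: str    # e.g. "whatsapp", "email", "x-dm"
    at: datetime    # when the claim was made


def shared_channels(a: str, b: str, assertions: list[Assertion]) -> set[str]:
    """Channels counted between a and b: each must assert the other within the window."""
    a_says = {x.channel: x.at for x in assertions if x.asserter == a and x.subject == b}
    b_says = {x.channel: x.at for x in assertions if x.asserter == b and x.subject == a}
    return {
        channel
        for channel in a_says.keys() & b_says.keys()
        if abs(a_says[channel] - b_says[channel]) <= ASSERTION_WINDOW
    }
```

Run against the example above, the friend's mutual assertions on WhatsApp, email and X count as three channels; the acquaintance's single mutual assertion on LinkedIn counts as one.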
The trust and capabilities allocated to your friend thus exceed those allocated to your acquaintance, which in turn exceed those of the people with whom you don't communicate. The allocation of trust and capabilities within such a network, the ANOM model, looks like this (sketched in code after the list):
Adversarial: hostile sentiment between two users is detected—no trust or capabilities allocated
None: no channels between two users are established—minimal trust or capabilities allocated
One: one channel between two users is established—some trust or capabilities allocated
Many: more than one channel between two users is established—full trust or capabilities allocated
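Here is a minimal Python reading of those four tiers, continuing the sketch above. The enum names and the trust_tier function are ours, not Subset's, and we read detected hostility as overriding any number of shared channels.

```python
from enum import Enum


class Trust(Enum):
    """The four ANOM tiers: Adversarial, None, One, Many."""
    ADVERSARIAL = 0  # hostile sentiment detected: no trust or capabilities
    NONE = 1         # no channels established: minimal trust or capabilities
    ONE = 2          # one channel established: some trust or capabilities
    MANY = 3         # more than one channel established: full trust or capabilities


def trust_tier(channel_count: int, hostile: bool = False) -> Trust:
    """Map a pair's shared-channel count (and any detected hostility) to a tier."""
    if hostile:
        return Trust.ADVERSARIAL
    if channel_count == 0:
        return Trust.NONE
    if channel_count == 1:
        return Trust.ONE
    return Trust.MANY
```

Applied to the earlier example, the friend (three channels) lands in Many, the acquaintance (one channel) in One, and everyone else in None.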
For Subset, this has two primary applications: sharing and search.
One of Subset's core capacities is humane routing—the ability to share something with someone via patterns. Rating within the ANOM model sketched above could determine the power dynamics of the sharing activity. For example, a sender that has just a single trusted channel with a recipient is rate-limited as to the volume and tempo of their sharing with that recipient, whereas a sender-recipient pair that share many channels may be able to override each other's preferences in defined situations.
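As one illustration of how an ANOM rating could gate sharing, here is a possible policy table, continuing the Python sketches above (it reuses the Trust enum). The specific limits are placeholders, not decisions.

```python
from dataclasses import dataclass

# Trust is the enum from the earlier sketch.


@dataclass(frozen=True)
class SharePolicy:
    """Hypothetical per-tier limits on a sender's sharing with a recipient."""
    max_items_per_day: int            # volume ceiling
    min_seconds_between_items: int    # tempo floor
    can_override_recipient_prefs: bool


SHARE_POLICIES = {
    Trust.ADVERSARIAL: SharePolicy(0, 0, False),       # nothing gets through
    Trust.NONE:        SharePolicy(1, 86_400, False),  # near-stranger: one item a day
    Trust.ONE:         SharePolicy(10, 3_600, False),  # single channel: rate-limited
    Trust.MANY:        SharePolicy(100, 0, True),      # many channels: overrides allowed in defined situations
}
```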
It could also inform the results retrieved by a P2P network search query. If a searcher has many trusted channels with a peer, the query could return higher-fidelity results from that peer's pool of relevant results. If they have no trusted channels with a peer, the results returned would be sparser and more obscured.
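The search side, sketched the same way: the more trusted channels a searcher shares with a peer, the more of that peer's result pool they see and at higher fidelity. The shape of a result and the cut-offs here are invented for illustration.

```python
# Trust is the enum from the earlier sketch; a result is modelled as a plain dict.


def results_for_searcher(results: list[dict], tier: Trust) -> list[dict]:
    """Degrade result fidelity as the searcher's trust tier drops."""
    if tier is Trust.ADVERSARIAL:
        return []                      # hostile searchers see nothing
    if tier is Trust.MANY:
        return results                 # full-fidelity access to this peer's pool
    # One or no trusted channels: fewer results, with detail stripped back.
    limit = 5 if tier is Trust.ONE else 1
    return [{"title": r["title"]} for r in results[:limit]]
```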
The literal mechanics of something like this are still a work in progress but they're likely to take cues from existing prior art—such as public key cryptography—as well as the newer wave of cryptographic techniques and infrastructure. But one thing is clear: outsourcing user-user trust and verification to incumbent platforms and/or misaligned enterprise vendors is no longer the only option.
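For flavour, the public-key prior art mentioned above already covers the basic move: each user holds a keypair, signs their channel assertions, and anyone with the corresponding public key can verify who made a claim without asking a central platform. A minimal example using Ed25519 from the Python cryptography package (the assertion encoding is made up):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each user generates a keypair; peers come to know them by the public half.
alice_private = Ed25519PrivateKey.generate()
alice_public = alice_private.public_key()

# Alice signs a channel assertion: "I identified Bob on WhatsApp at this time."
assertion = b"asserter=alice;subject=bob;channel=whatsapp;at=2024-05-01T12:00:00Z"
signature = alice_private.sign(assertion)

# Anyone holding Alice's public key can check the claim without a central authority.
try:
    alice_public.verify(signature, assertion)
    print("assertion verified")
except InvalidSignature:
    print("assertion rejected")
```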