Machinic curation

Sometimes, an act of curation resonates. Other times, it doesn't.

If it comes from a peer, signals an outlay of effort, builds on an existing relationship, aligns with the recipient's interests, respects the recipient's preferences, is easily evaluable, is part of a larger, high quality sequence, and doesn't demand reciprocation? Likelihood of resonance goes up.

If it comes from an unfamiliar source, lacks personal relevance or prior connection, shows no effort, arrives through unwelcome channels, is difficult to digest, follows a pattern of misses, and imposes unwanted obligations on the recipient? Likelihood of resonance goes down.

We explored this in a previous blog that began with three scenarios:

  • The reception of an email digest

  • Group chat activity orientated around a live motorsport race

  • An in-real-life catchup at a cosy cafe

These acts of curation were undertaken by human actors. But what about curation via a machine agent? How does the machinic nature of the curator impact the likelihood that an act of curation will resonate, and the extent of that resonance?

Consider three machine curation scenarios:

  • Using Perplexity to research a task outside your domain of expertise

  • Scrolling through your X home feed

  • Adding your requirements to a comparison site to receive a custom quote

Now apply a litmus test analogous to the one applied to the acts of human curation. Imagine that, during the act of curation, the machine agent in each of the three scenarios above can...

  • Signal sufficiently accurate emulation of your worldview

  • Demonstrate computational expenditure on your behalf

  • Propagate context from prior you-machine interactions

  • Show relevance to your past, present or future personal interests

  • Engage you in a succinct format over a preferred channel

  • Cite an established record of previous high quality curation acts

  • Avoid any expectation or request for further engagement

Would the machine's act of curation resonate?

This is a rhetorical question; we don't have an answer for you. But we do think that the answer hinges on two things:

  • The equitability of one's stance towards machine agents

  • The equivalence of one's evaluation of a machine agent's performance

Equitability is simple to determine.

The quick-and-dirty assessment is to examine whether you perceive machine agents as butlers or as centaurs. A butler-ish perception sees machine agents as entities that slavishly accomplish arbitrary tasks. A centaur-ish perception sees machine agents as entities that extend and augment one's capacities. "Doing something for me" versus "allowing me to do even more", as Matt Webb puts it. A longer assessment is to read Hannes Bajohr's On Artificial and Post-Artificial Texts and note whether you end up in gloom-mode or bloom-mode as a result.

Roughly speaking, if you see machine agents as butlers and end up gloomy after Bajohr's exploration of textual origin and provenance, then you'll set a higher bar for an act of curation via a machine agent than for one via a human. Whereas if you see machine agents as centaurs and end up bloomy after Bajohr assays the shifting Overton window of textual origins, then you'll set a similar bar for a machine agent's act of curation and a human agent's.

Speaking of bars: equivalence in evaluation is the other factor that determines whether a machine's act of curation resonates.

There's a tendency, when evaluating the sophistication of machine agents and non-human intelligences, to escalate the standard once it's been surpassed, or to advocate for the winning of a different game once the current game has been mastered by a machine agent. This is dubbed the AI effect:

[It] occurs when onlookers discount the behavior of an artificial intelligence program as not "real" intelligence.

The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

Equivalence of evaluation, in the context of curation, means judging a curatorial act by a machine agent with the same benchmarks as one would use for a human.

Imagine that, at the same moment, your friend sends you a link to a new EP from your favourite band whilst that same EP is surfaced front and centre on your Spotify home page. Which act of curation evokes the stronger response, and why? Do you see your friend's understanding of your taste in music as equitable to Spotify's personalised approximation of your taste? And do you judge their efficacy and impact in an equivalent manner?

Let us be clear. This is not to say that:

  • Human and machine agents are equal in status, capacity, required rights, etc.

  • Human and machine agents can, will or should be judged by shared benchmarks

This is not to assert the priority of one agent type over another. It is to say that where one sits on the equitability and equivalence spectra will fundamentally influence how one responds as a recipient of machine curation. And that matters because we're on the precipice of an even more machinic era of curation. Investigating where one stands, now and in advance, will help us mitigate some of the inevitable risks of such a transition.
