(cover image from Matt Akamatsu's excellent talk at DeSci Denver 2024)
Hello sensemakers!
Sensemaking Networks (or SenseNets) is a collective intelligence system designed to radically enhance information sharing and collaborative knowledge synthesis among researchers by integrating decentralized semantic web technology with AI and social networks. In our work on SenseNets over the past few months, we have come to notice a puzzle at the heart of how we do science.
The puzzle, which we will explore in this post, is that many scientists value collaboration with other researchers, but the tools and metrics we use isolate us and promote competition. Why is that?
To feel into this tension, imagine you are doing a literature survey of a new field, and have a stack of research papers in front of you. You have to decide which ones to read and in what order. To help you decide, you have access to assessments of each paper, but you need to choose between GPT-generated assessments or human assessments made by peers and experts in their field. Which would you choose?
I asked the audience this question during a presentation of Sensemaking Networks at an Open Science night organized by Astera in May. I was surprised by how few people preferred the GPT option (hey, this is the Bay Area, after all 😉).
These two choices (GPT and peer assessments) represent two different approaches to sensemaking in science: the former is focused on empowering individual researchers in isolation, while the latter focuses on empowering collective, networked and social aspects of sensemaking.
Sure, phrasing the question as a binary (either one or the other) is a little unfair. Indeed, we are most excited by synergistic combinations of both approaches. However, the story is a useful demonstration of the puzzle we opened with: people actually really value social sensemaking, and yet technology is overwhelmingly tailored to individual sensemaking. Personal AI research assistants are a dime a dozen, while tools that help researchers share information and think better together are few and far between. The very lack of recognition for the role of social media in science is the raison d'être of Sensemaking Networks.
In fact, this isn't just the tech focus: academic research itself is also primarily tailored towards augmenting individual researchers. Recent research even proposed developing a personalized "science exocortex" using a swarm of AI agents to automate an ever-growing range of scientific activities while leaving just the highest-level decisions for the human operator.
What gives? Why are we so focused on the individual side of science? And, given that we intuitively really value working with other humans, why don't we have more tools that improve collective sensemaking?
It turns out that there are a lot of intriguing answers to these questions. We'll take a whirlwind tour through sociology, meta-science and collective intelligence theory, and reflect on what these diverse strands of research might imply for the future of SenseNets and science information systems more broadly.
Sociology and lone genius mythology
Part of the answer lies in the "lone genius mythology" around science. We were all raised on stories of genius scientists who defied their doubters and against all odds made discoveries that advanced the frontier of human knowledge. These stories weren't wrong, of course; these scientists were extraordinarily talented and determined individuals. Yet the stories are also incomplete, for example omitting reference to other crucial (e.g., social) factors that contributed to the process of discovery. An insightful critique of the recent Hollywood portrayal of Robert Oppenheimer contrasts Hollywood's "lone genius and his blackboard science" with a perspective inspired by French sociologist Bruno Latour, according to whom "'The father of the atomic bomb' was no single parent, but rather a collective, networked one…"
In other words, the developers of sensemaking tools for individual researchers might be drawing more inspiration from Iron Man than from Latour.
But the focus on individual researchers is not just a product of the stories we are told about scientists; there are more pervasive forces at play.
Meta-science: Re-thinking the pecking order in science
A thought-provoking recent paper called "Shifting the Level of Selection in Science" shows how an overfocus on individuals is built into the very fabric of science, by way of the reward structures used to evaluate scientists:
The predominant approach to scientific evaluation uses individual-level criteria, such as one's number of first-authored publications, citations, h-indices, journal impact factors, and success in funding acquisition… This evaluation strategy implicitly assumes that identifying and rewarding the most accomplished individuals is the best way to generate scientific knowledge.
The authors show how individual-level criteria encourage competition and personally beneficial behavior among scientists. But, as they observe,
personally beneficial behaviors are only a subset of the behaviors that benefit science. For example, the scientific community plausibly benefits from the open sharing of information such as code, materials, and raw data, whereas individual-level competition disincentivizes information-sharing to hinder competitors' success
It turns out that we have a lot to learn from chicken farmers:
the most productive hens in a coop are also the nastiest hens, feather-pecking and cannibalizing the other hens in their coop. Because individual hens who are most productive are those that harm others, selectively breeding the most productive hens can actually lead to lower overall egg production
Instead of selecting the most productive individuals, breeders have learned to select the most productive groups.
Perhaps science can do the same? The authors make a solid case for expanding evaluation mechanisms to incentivize group-level outcomes in addition to individual-level metrics. Importantly, such evaluation will require far greater recognition of diverse prosocial contributions, such as information sharing and other currently invisible "team science" roles.
So we have another piece of the puzzle: in a world where researchers are incentivized to conceal rather than share their insights, private AI assistants are a safer bet for tool developers, even though social sharing platforms might actually be more effective at advancing science.
Science as process, not product
Another fascinating line of research goes even more existential, questioning the very aim of science. "Shifting the Level of Selection in Science" proposed new group-level incentives, but still focuses on a productivity-based model in which science is about producing new knowledge. A recent piece called "An Epistemology for Democratic Citizen Science" makes a compelling case that in science, the journey is sometimes the goal. Rather than viewing knowledge merely as the product of science (what they call "industrial science"), they make the case for re-thinking the social and cognitive processes that generate scientific knowledge ("ecological science"). Ecological science, beyond traditional science's role as an inquiry into the natural world, is also "an inquiry into how to best cultivate and utilise humanity's collective intelligence".
On this view, the social tools we use to communicate about science would themselves be at the core of the scientific process. Those tools would function as large-scale experiments in collective intelligence (CI), informed by the latest research. Science Twitter is cool, but leaves much to be desired as a tool for researchers. We have a lot more to learn from CI theory; in "Science Communication as a Collective Intelligence Endeavor", the authors provide an outline of what CI systems for science might look like. In particular, such systems would (a) enable better aggregation of distributed knowledge, (b) involve a more diverse group of contributors and (c) encourage increased public participation in science.
Connecting this research to our opening question, perhaps many tool developers are building "industrial science" tools for enhancing individual productivity, as opposed to "ecological science" tools for enhancing collective intelligence.
Industrial science, like industrial agriculture, risks creating scientific monocultures where
some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors.
In contrast, ecological science is more like permaculture, reflecting a diversity of methods, contributions and perspectives. See Matt Akamatsu's talk at DeSci Denver for an exciting example of what this might look like.
Conclusion: from Genius Science to Scenius Science
For those of you who haven't heard of the term "scenius", musician-activist Brian Eno coined it some 30 years ago:
I became (and still am) more and more convinced that the important changes in cultural history were actually the product of very large numbers of people and circumstances conspiring to make something new. I call this "scenius" - it means "the intelligence and intuition of a whole cultural scene". It is the communal form of the concept of genius.
Curiously, the term scenius has been around for decades but hasn't really taken off.
But given the dominant focus on individuals that we've seen, maybe we shouldn't be too surprised at this point 🤷
Nonetheless, it feels like genius is fading and scenius is in the kairos. Converging evidence from across multiple fields suggests that if we're really serious about enhancing science for the benefit of humanity, we should be thinking less about how to find geniuses and a lot more about how to create sceniuses.
I'm excited to be working on my project in our own scenius in the making.
Thanks to Kristen and Spencer for instigating this piece!