Neither Good Nor Bad

We tend to think of technology as either “good” or “bad” based on the outcomes it produces. This is futile, as in most instances any harm caused by a technology stems from how it is used and by whom.

This is a link-enhanced version of an article that first appeared in the Mint. You can read the original here. The header image for this post was generated in Midjourney.


One of my favourite podcasts is ‘Radiolab’, a show that, by its own description, asks deep questions and uses investigative journalism to get answers. In an episode last year, it told the story of an artificial intelligence tool called MegaSyn that, though it had been developed to find a cure for disease, ended up being used in far more sinister ways.

Drug Discovery

Drug discovery has always been a tedious and time-consuming affair. Scientists need to first identify the biological target (the specific protein or gene that is involved in the disease) and confirm what its role is. They then need to find chemical compounds that will interact with the target in such a manner that it ends up curing the disease. This is a process of trial and error, and while we’ve become better at figuring it out over the years, even today researchers have little choice but to cycle through vast libraries of potential compounds in order to figure out which molecule would work best—a process that can often take years.

It is here that recent advances in computational technology have begun to make a significant difference. Today, we can use computer simulations to identify, with a high degree of accuracy, potential drug candidates—allowing us to prioritise a small sub-set of compounds for further testing in a laboratory. This has helped overcome some of the delays that have plagued the process in the past. While shortlisting possible candidates, we are, however, constrained by our current knowledge. As a result, the list of compounds from which we get to choose is finite. What if the cure we need involves a molecule we have not yet discovered?

New AI Compounds

This is the problem that Collaborations Pharmaceuticals set out to solve using MegaSyn. The company was convinced that if machine-learning algorithms trained on chemistry and molecular engineering were used, it would be able to identify new, never-before-seen compounds that had a high probability of curing diseases that had no known treatment. It started out anticipating that the algorithm would generate around a billion unique molecules (far more than the approximately 100 million compounds that we know of), but when it was actually deployed, the number exceeded 350 billion.

Before shortlisting compounds that were likely to be useful drug candidates, the company’s researchers felt the need to implement one additional step to reduce risk. They needed to make sure that the chemicals the algorithm suggested were not harmful to humans—that their side-effects were not worse than the disease they were supposed to cure. So, they built a filter designed to make an algorithmic assessment of toxicity, which could be applied to the shortlisted chemical candidates to exclude those that could be harmful.

Flip the Switch

The trouble is that once a feature like this has been designed, it is very easy to flip the switch—to use the algorithm to design toxic chemicals instead of filtering them out. On realising this, the researchers knew that, in the wrong hands, it could be catastrophic. This was all anyone needed to create unimaginably lethal chemical weapons: not only more potent than the deadliest chemical agents in existence but, because the molecules it suggested were unknown to science, effectively untraceable as well.
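
To make the duality concrete, here is a minimal, purely illustrative sketch—not MegaSyn’s actual code, and every name, score and molecule in it is invented. It simply shows how the same learned toxicity score can serve as a safety filter or, sorted the other way, as a means of surfacing the most dangerous candidates first.

```python
# Illustrative sketch only: a stand-in toxicity score used in two opposite ways.
from typing import Callable, List


def screen(candidates: List[str],
           toxicity: Callable[[str], float],
           threshold: float = 0.5) -> List[str]:
    """Intended use: discard molecules the model predicts to be toxic."""
    return [m for m in candidates if toxicity(m) < threshold]


def flip_the_switch(candidates: List[str],
                    toxicity: Callable[[str], float]) -> List[str]:
    """Misuse: the identical score, sorted in reverse, ranks the most
    toxic molecules at the top instead of removing them."""
    return sorted(candidates, key=toxicity, reverse=True)


if __name__ == "__main__":
    # Made-up molecules and scores; a real system would use a trained model.
    scores = {"molecule_A": 0.1, "molecule_B": 0.9, "molecule_C": 0.4}
    print(screen(list(scores), scores.get))           # ['molecule_A', 'molecule_C']
    print(flip_the_switch(list(scores), scores.get))  # ['molecule_B', 'molecule_C', 'molecule_A']
```

The point of the sketch is that nothing about the model itself changes between the two functions; only the intent of the person calling it does.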

When the researchers secretly tried it out to see if MegaSyn could generate a list of toxic chemicals, among the 40,000 candidates it produced was one that resembled VX, a nerve agent banned under the Chemical Weapons Convention because it is considered one of the most lethal chemical substances ever made.

Good or Bad?

One of the ideas we keep coming back to in this column is the fact that technology is amoral. As much as we might try to paint a given technology as good or bad, based on our own personal experience or anecdotal evidence, the reality more often than not is something else entirely.

MegaSyn was, by all accounts, a ‘good’ technology. It opened up new opportunities to identify cures for rare diseases—those that get the least attention from pharmaceutical companies because of the relatively small numbers of people afflicted by them. But even a technology like this could, in the wrong hands, be subverted for evil, creating the most dangerous and lethal chemical weapons, capable of either targeting their victims narrowly or decimating the entire population of a city.

Our instinctive reaction to the duality inherent in a powerful technology is to shut it down, believing that it would be far better for us to forgo the many benefits that it offers than risk the harms that could befall us. If this becomes our knee-jerk response to every new risk that technology poses, we will end up mindlessly stifling all innovation simply because of the harms that it might end up causing.

I believe we need to take a much more measured approach. Instead of fearing the worst of every new technology, we need to draw comfort from the fact that very rarely, in the course of the history of modern technology, have people chosen the path of harm. In the few instances that this has occurred (nuclear technology comes to mind), we have quickly corrected our mis-steps, often arriving at a hard-won global consensus to that effect.

We need to believe that this will hold true in the future as well—so that where the benefits of a new technology are worth pursuing, our fear of the harms it might cause does not hold us back, and we trust instead in our innate human ability to mitigate them.
