Even health AI can be risky, but innovation must win

One of my favorite podcasts is ‘Radiolab’, a show that asks deep questions and uses investigative journalism to answer them. In an episode last year, it told the story of an artificial intelligence system called MegaSyn that, although developed to find cures for disease, could be put to far more sinister uses.

Drug discovery has always been a difficult and time-consuming affair. Scientists must first identify the biological target (the specific protein or gene involved in the disease) and confirm its role. Then they need to find chemical compounds that will interact with that target in a way that cures the disease. It’s a process of trial and error, and although we’ve gotten better at it over the years, researchers today have little choice but to sift through vast libraries of potential compounds to figure out which molecule will work best, a process that can often take years.

This is where recent advances in computational technology have begun to make a significant difference. Today, we can use computer simulations to identify potential drug candidates with a high degree of accuracy, which allows us to prioritize a small subset of compounds for further testing in the laboratory. This has helped remove some of the delays that plagued the process in the past. However, while shortlisting potential candidates, we are constrained by our current knowledge. As a result, the list of compounds from which we can choose is limited. What if the cure we need involves a molecule we haven’t discovered yet?

This is the problem Collaborations Pharmaceuticals set out to solve with MegaSyn. The company was confident that machine-learning algorithms trained on chemistry and molecular engineering could identify new, never-before-seen compounds with a high potential to cure diseases that had no known treatment. It began by estimating that the algorithm would generate about a billion unique molecules (far more than the roughly 100 million compounds we know of), but when it was actually deployed, the number swelled to more than 350 billion.

Before shortlisting compounds that could be useful drug candidates, the company’s researchers felt the need to add a step to narrow down the risk. They needed to be sure that the chemicals they recommended were not harmful to humans, that their side effects were not worse than the disease they sought to cure. So they created a filter that performed an algorithmic assessment of toxicity, which could be applied to shortlisted candidates to exclude the harmful ones.

The trouble is, once such a facility has been designed, it is very easy to flip the switch, using the algorithm to design toxic chemicals instead of filtering them out. The researchers realized that, in the wrong hands, this could be disastrous. It was all anyone would need to create unimaginably lethal chemical weapons, ones that were not only more potent than the deadliest chemical agents in existence but, since the chemistry the algorithm suggested was previously unknown, had no known countermeasures.

When the researchers quietly ran an experiment to see if MegaSyn could generate a list of toxic chemicals, one of the 40,000 candidates on the list resembled VX, a nerve agent banned under the Chemical Weapons Convention because it is considered one of the deadliest chemical substances ever created.

One of the ideas we come across again and again in this column is that technology is ethically neutral. No matter how hard we try to describe a given technology as good or bad based on our personal experience or anecdotal evidence, the reality is often something else entirely.

MegaSyn was, in every sense, a ‘good’ technology. It opens up new opportunities to identify treatments for rare diseases, which get the least attention from drug companies because the number of people who suffer from them is relatively small. But even such a technology can, in the wrong hands, be used for evil, creating dangerous and deadly chemical weapons that could be used to eliminate either narrowly targeted victims or the entire population of a city.

Our instinctive response to the inherent duality of a powerful technology is to shut it down, believing that it would be far better to give up the many benefits it offers than to risk the harm it could cause. If this becomes our reflexive response to every new risk posed by technology, we will unthinkingly stifle all innovation simply because of the harm it might cause.

I believe we need to take a much more measured approach. Instead of fearing the worst with every new technology, we should take comfort in the fact that, in the history of modern technology, we have rarely chosen the path of harm. In the few instances when this has happened (nuclear technology comes to mind), we have quickly corrected our missteps, often arriving at a hard-won global consensus to that effect.

We need to believe that this will hold true in the future, so that where the benefits of a new technology are within reach, we trust our innate human ability to minimize harm rather than let fear hold us back.


Updated: July 05, 2023, 01:05 AM IST