The world shouldn’t wait for another Hiroshima to regulate AI

It is telling that last month’s G7 summit in Japan focused on three big issues, two of which were the Ukraine war and the rise of an assertive China. The third was a bit unusual: Artificial Intelligence (AI), and how to regulate the most powerful technology ever created by man. It was also significant that the meeting was held in Hiroshima, where the most destructive technology of the 20th century was unleashed. Hiroshima was a ‘never again’ moment in nuclear warfare, and it led directly to global regulation through the International Atomic Energy Agency (IAEA) and the Non-Proliferation Treaty. Like the atom, AI is a dual-use technology, promising amazing scientific breakthroughs like cancer cures, but also raising the specter of a destructive super-intelligence and gross abuse. As AI, especially generative AI, races ahead, regulation has been left far behind. In fact, I have often spoken about the need for humanity to tame this technology before we face AI’s own Hiroshima moment.

There has been an abundance of opinion on this, and the apprehension seems to have eclipsed the enthusiasm. AI is not easy to regulate: it is a borderless technology moving at lightning speed in a geopolitically fragmented world. One of the clearest descriptions of the problem I have found is by John Thornhill of the Financial Times, who wrote about the 4D challenge of regulating AI (bit.ly/3P3kx7a). The first D, in his view, is discrimination: the power of AI and machine learning lies in spotting outliers in data patterns. This is how one identifies defects on a production line, or cancer cells among normal cells; the same property, however, can also produce prejudice along racial, gender or nationalist lines, as the AI may treat some people as deviations from a pattern. The second D is disinformation, or propaganda: if ‘WhatsApp University’ has been the most efficient disseminator of misinformation, generative AI is the generator to match it. Thornhill’s third D is displacement, mainly of jobs, as powerful AI engines like ChatGPT intrude into tasks done by humans. The last is catastrophe: the fear that super-intelligent AI will, intentionally or otherwise, lead to the destruction of the human race.

So how do we control AI and manage these four Ds? Several options are being discussed. The first is licensing, proposed by no less than Sam Altman, CEO of OpenAI. Altman suggested to the US Congress that AI companies be required to hold a license to operate, and thus comply with regulatory criteria; unlicensed startups would not be able to build AI. Many see this as self-serving, protecting incumbents, including OpenAI, against open-source and new competitors. It is also unlikely that China or Russia would cooperate with a US-led licensing regime. The second is Food and Drug Administration (FDA)-style, use-case-led regulation: just as the FDA regulates new drugs and treatments in the US and demands proof of efficacy and of no harm, a body would regulate the use of AI in sensitive areas such as healthcare or aviation. To me, this proposal is dead on arrival; the time it would take to vet each use case, and the global cooperation it would require, make it impractical. The third is a CERN-like approach, where countries and companies come together and do all the research collaboratively, much as the Higgs boson, the so-called God particle, was discovered at CERN. A variation of this is the ‘isolated island’ approach, where all research takes place in a secure, air-gapped environment and is released into the wider world only after it has proven beneficial in this protected setting. Again, the idea is noble, but its practical efficacy seems questionable.

Another proposal is to use the IAEA as the model for a global regulatory body on AI. Altman, among other industry veterans, has been talking about this on his world tour (which included India). The IAEA is not perfect; it is toothless in many areas, and it has entrenched an unequal world of nuclear haves and have-nots. But to give credit where it is due, there has not been a nuclear war since Hiroshima. There are many differences between AI and nuclear power. For one, AI is far more democratized: any good software engineer can build new AI tools, with no need to invest in a nuclear reactor. The world is also not what it was then, and forging global consensus will be far harder this time. Still, this is the most viable model, and I hope it is the G7 summit, and nothing more catastrophic, that proves to be AI’s ‘Hiroshima moment’ and sets global regulation in motion.

Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and is currently pursuing a Master’s in AI and Ethics at the University of Cambridge.
