Thoughts on Artificial Intelligence as Friend or Foe

Artificial Intelligence (AI) is making headlines both for its victories and for the apprehension it evokes in many, including some of the best minds in AI. The Association for Computing Machinery issued a statement in October 2022, ‘Principles for Responsible Algorithmic Systems’, addressing a broad class of systems that includes AI systems. Many leading AI experts and thinkers have signed cautionary statements about AI issued by the Future of Life Institute, the Association for the Advancement of Artificial Intelligence, and the Center for AI Safety. There is deep concern about AI among many who know it well. What is behind it?

Scope of Use, Limitations and AGI

AI systems can exhibit superhuman performance on specific or “narrow” tasks, which has made news in chess, in Go (a game several orders of magnitude harder than chess), and in biochemistry for protein folding.

The performance and usefulness of AI systems improve as the task becomes narrower, making them valuable adjuncts to humans. Speech recognition, translation, and identifying common objects in photographs are just some of the tasks AI systems tackle today, surpassing human performance in some cases. Their performance and usefulness degrade on more “generic” or poorly defined tasks. They are weak at drawing inferences about situations using the kind of common sense that humans take for granted.

Artificial General Intelligence (AGI) refers to intelligence that is not limited to narrow tasks. Think of it as the human “common sense” that AI systems lack. Common sense helps a human recognise and escape a life-threatening situation that a robot would blunder through unperturbed. There have been no credible efforts yet towards building AGI. Many experts believe AGI can never be achieved by a machine; others believe it may happen in the distant future.

The release of ChatGPT in November 2022 was a big moment for AI. ChatGPT is a generative AI tool that uses a Large Language Model (LLM) to generate text. LLMs are large artificial neural networks that ingest enormous amounts of digital text to build a statistical “model” of language. Google, Meta, Amazon, and others have built their own LLMs. ChatGPT’s stupendous success in producing flawless paragraphs caught the world’s attention: writing could now be outsourced. Some experts even see a “spark of AGI” in GPT-4, and suggest that AGI may emerge from larger LLMs in the near future.

Other experts vehemently refute this, based on how LLMs work. At a basic level, an LLM simply predicts the most likely word to follow a given sequence of words, based on a learned statistical model. In this view, LLMs are mere “stochastic parrots” that attach no meaning to what they produce. They have famously “hallucinated” facts, confidently (and wrongly) awarding Nobel Prizes and attributing credible-looking citations to non-existent academic papers.
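
To make the “next word prediction” idea concrete, here is a minimal, purely illustrative sketch in Python of a toy bigram model. It counts which word follows which in a tiny made-up corpus, then “generates” text by repeatedly emitting the most likely next word. Real LLMs use neural networks with billions of parameters trained on vast corpora, but the underlying objective, predicting the next token from the preceding context, is the same. The corpus and all names here are invented for illustration.

    from collections import Counter, defaultdict

    # Toy "statistical model" of language: for each word, count how often
    # every other word immediately follows it in the training text.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    follows = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        follows[word][nxt] += 1

    def predict_next(word):
        """Return the follower of `word` seen most often during training."""
        candidates = follows.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    # "Generate" text greedily: always emit the single most likely next word.
    word, output = "the", ["the"]
    for _ in range(5):
        word = predict_next(word)
        if word is None:
            break
        output.append(word)

    print(" ".join(output))  # -> "the cat sat on the cat" (ties go to the word seen first)

The fluent-looking but meaningless output illustrates the “stochastic parrot” criticism: the model reproduces the statistics of its training text with no notion of truth or meaning, which is also why fabricated “facts” can come out just as confidently as real ones.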

If and when true AGI arrives, it will be a big deal. Machines today outperform humans at almost every physical task; AGI could lead to AI “machines” that outperform humans at many intellectual or mental tasks. Bleak scenarios of super-intelligent machines enslaving humans have been envisioned. AGI systems could amount to a superior “species”, created by humans outside of biological evolution. AGI would indeed be a momentous development for which the world should prepare seriously.

I believe that current LLMs and their immediate successors are not even close to AGI. Will AGI come someday? I reserve my judgement. However, the hype and panic about LLMs or AI directly leading to human extinction are unfounded. The chances of the successors of existing LLMs “taking over the world” are zero.

Where the dangers lie

Does this mean that we can live happily ever after without worrying about the effects of AI? I see three possible types of threats arising from AI.

Superhuman AI: The danger of a super-intelligent AI enslaving humans. I would not worry about such a highly unlikely scenario.

Malicious humans with powerful AI: AI tools are relatively easy to build, and even narrow AI tools can cause serious damage when paired with malicious intent. LLMs can generate credible untruths as fake news, and can cause deep mental anguish, even driving people to self-harm. Public opinion can be manipulated to influence democratic elections. AI tools operate globally, taking little cognizance of borders or constraints, so personal malice can instantly affect the whole world. Governments may sanction or support such actions against their “enemies”. We have no effective defense against malicious human behavior. Well-meaning experts have raised concerns about AI-powered “smart” weapons in the military, but unfortunately, calls for bans are not effective in such situations. I do not see any easy defense against the malicious use of AI.

Highly capable and intuitive AI: AI systems will continue to improve and will be employed to assist humans. They may inadvertently harm some groups more than others, despite the best intentions of their creators. These systems are built using machine learning over the world’s data, and can entrench the deficiencies of that data. They may behave asymmetrically, to the detriment of certain groups: camera-based face recognition systems, for instance, have been shown to be more accurate on fair-skinned men than on dark-skinned women. Such unintended and unknown biases can be disastrous in AI systems that drive autonomous cars or diagnose medical conditions. Privacy is another important concern, as algorithmic systems constantly watch the world; every person can be tracked at all times, violating the fundamental right to privacy.
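
To illustrate how such asymmetric behavior is detected in practice, here is a minimal sketch, with entirely hypothetical data, of the kind of per-group accuracy audit (in the spirit of studies such as Gender Shades) that exposed the face recognition disparities mentioned above. The group names, records, and numbers are made up for illustration.

    from collections import defaultdict

    # Hypothetical audit records: (demographic group, ground truth, model output).
    # In a real audit these would come from a labelled benchmark dataset.
    records = [
        ("lighter-skinned men",  "match", "match"),
        ("lighter-skinned men",  "match", "match"),
        ("lighter-skinned men",  "no-match", "no-match"),
        ("darker-skinned women", "match", "match"),
        ("darker-skinned women", "match", "no-match"),
        ("darker-skinned women", "no-match", "match"),
    ]

    def accuracy_by_group(records):
        """Fraction of correct predictions, computed separately per group."""
        correct, total = defaultdict(int), defaultdict(int)
        for group, truth, predicted in records:
            total[group] += 1
            correct[group] += (truth == predicted)
        return {group: correct[group] / total[group] for group in total}

    for group, acc in accuracy_by_group(records).items():
        print(f"{group}: {acc:.0%}")  # reveals any accuracy gap between groups

Simple disaggregated metrics of this kind are often the first step in internal or regulatory audits; a large gap between groups signals that the training data or the model needs attention before deployment.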

Another concern is who develops these technologies, and how. The most recent advances have come from companies with vast computational, data, and human resources. ChatGPT was developed by OpenAI, which started as a non-profit and turned into a for-profit entity. Other players in the AI game include Google, Meta, Microsoft, and Apple. Business organizations with no effective public oversight are the centers of the action. Do they have an incentive to get AI systems right?

Anything that affects humans at scale needs public oversight or regulation. AI systems can have serious, long-lasting negative effects on individuals, and yet they can be deployed quickly and at scale without any oversight. How can we introduce effective regulation without stifling creativity? Which parameters of AI systems need careful scrutiny, and how? There is little understanding of these issues today.

There have been many debates, on social media and elsewhere, about AI leading to our destruction. Solutions proposed for such doomsday scenarios, such as restricting or halting AI research and development, as many have suggested, are neither practical nor effective. Worse, they deflect attention from the serious issues that arise from insufficient scrutiny of AI. We need to talk more about the unintentional harm that AI can inflict on some or all of humanity. These problems are solvable, but solving them requires concerted effort.

India should be prepared

Awareness and debate on these issues are largely absent in India. Adoption of AI systems is low in the country, and those in use are mostly made in the West. We need systematic evaluation of their efficacy and shortcomings under Indian conditions, and mechanisms of checks and balances before AI systems are deployed at scale. AI has tremendous potential in fields such as public health, agriculture, transportation, and governance. We need more discussion on making AI systems responsible, fair, and just for our society even as India reaps their benefits. The European Union is on the verge of enacting an AI Act that proposes rules stratified by potential risk. India needs a framework of its own, keeping in mind how rules have in the past been heavily diluted or laxly enforced.

PJ Narayanan is a researcher in computer vision and is professor and (ex-officio) director of the International Institute of Information Technology (IIIT) Hyderabad. He was the President of the Association for Computing Machinery (ACM) India, and currently serves on ACM’s Global Technology Policy Council. Views expressed are personal