Don’t fear extinction at the hands of artificial general intelligence

I recently participated in a panel discussion aptly titled: ‘Is the Singularity Here? AI vs Human: What will be the new reality?’ Elon Musk has been contemplating the same singularity; he has said that “the advent of artificial general intelligence is called singularity because it is very difficult to predict what will happen after that.” He also predicted that it would give us an “Age of Abundance”, though it had “some potential” that it “destroys humanity”. Geoffrey Hinton, the widely acknowledged ‘godfather’ of AI, recently confessed to CBS News that general-purpose AI could be here in 20 years or less, not the 20-50 years he had previously thought.

With the advent of the AI and generative AI tidal wave, it has become fashionable to talk about the Singularity and AGI (or Artificial General Intelligence). The concept, however, is much older, and was probably coined by the brilliant polymath John von Neumann. In the 1950s, von Neumann talked about how “the ever accelerating progress of technology” might lead to “some essential singularity in the history of the race” (bit.ly/3Pja8UJ). In recent years, futurist and computer scientist Ray Kurzweil has championed the singularity. Kurzweil, along with Peter Diamandis, founded Singularity University in the US, which advocates the concepts of abundance, exponential technologies like AI and, of course, the singularity. (Disclaimer: I have attended SU’s Executive Program and am now an expert faculty member for it.) In 2005, Kurzweil wrote ‘The Singularity is Near’; now, perhaps encouraged by the generative AI tsunami, he is writing ‘The Singularity is Nearer’. For the record, in his previous book, Kurzweil stuck his neck out and declared that the singularity would arrive in or around 2045. Let’s see how near he considers it now.

But what is the singularity? Like artificial intelligence, it has no single definition. It is widely believed to be the moment when artificial intelligence exceeds human intelligence, and thus becomes smarter than us. AGI, or artificial general intelligence, has a similar description: when an AI agent can accomplish any intellectual feat that humans can. Opinion is divided on whether AGI or the singularity should be considered to occur when an AI agent becomes smarter than the average human, the most intelligent human (Kurzweil?), or all humans combined. The term is derived from space science and the Big Bang theory, which holds that about 14 billion years ago the universe emerged from a singularity: a single point of infinite density and gravity, before which space and time did not exist. Interestingly, Hinduism also refers to something similar to the singularity, with some ancient texts describing the universe and all consciousness as arising from a single point of origin: the primordial sound of Om.

Regardless of its origin and definition, what happens after the singularity is where there is a sharp difference of opinion. Optimists like Kurzweil, Sam Altman and many Big Tech leaders tout how AGI will solve the world’s biggest problems like global warming, crack nuclear fusion, eliminate drudgery at work and generally make the world a better place. On the other hand, the more cynical voices of Yuval Noah Harari, Musk and now Hinton worry about the uncontrolled race towards the singularity and the injustice, division and destruction it could bring. It was interesting, therefore, that both optimists and pessimists came together to sign a single-line open letter issued by the Center for AI Safety, which stated: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

So, will AI become superintelligent and destroy us all? I believe we are heading towards some sort of alternate intelligence. Recent studies of large language models such as GPT-4 have shown what are called ‘emergent’ capabilities: capabilities that were unexpected and went far beyond merely autocompleting text. Microsoft researchers revealed some surprising skills of GPT-4 in the now-famous ‘Sparks of AGI’ paper (bit.ly/3Pj6hHn). To me, it is inevitable that we will create highly intelligent AI. I am far less sure whether it will ever be sentient. How do we put a mind into a machine when we still do not understand our own brain and consciousness, the ‘hard problem’ of philosophy? As for super-intelligent AI destroying us, I am equally skeptical. It is humans I am afraid of; we have far more potential to destroy the human race. Just as AI will not take away our jobs but a human using AI might, AI will not kill us, but a human using AI wrongly could do that.

Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and is currently pursuing a Masters in AI and Ethics at the University of Cambridge.


Updated: June 22, 2023, 11:40 PM IST