How should humans respond to advances in artificial intelligence?

Ever since it moved from tech-savvy forums to our mobile phones in the form of ChatGPT, AI has suddenly become as ‘real’ as it can get. Within months of its launch, in addition to mass adoption, reams of digital paper have been spent documenting its seemingly supernatural uses, promoting it as a quick fix for everything requiring the one faculty that computers had not cracked yet: critical thinking. Technology, to date, could digitize and reproduce a myriad of books. Yet, until recently, the ability to skim through thousands of written words and produce an original formulation was the sole purview of humans. That barrier has been broken. The internet now has enough screenshots and videos to show the magic tricks AI can pull out of a hat: market analysis, decryption of ‘fedspeak’ (https://www.ft.com/3Hryx60), sentiment analysis, book summaries, financial planning, website building, economics research and more. It thinks, plans, decides on outcomes, creates subtasks and auto-downloads resources to complete them.

Amid this flurry, we must not forget that AI is man-made. Creativity, in its true essence, was given to us divinely. AI does what it does today by taking advantage of the output of human creativity over thousands of years. Words, music and art are all created by humans and learned by AI. Now, as we stand at a juncture where the winning entry in the Sony World Photography Contest (bit.ly/3VzCk7f), unknown to its judges, was AI-generated, we’re looking at a different creative future—one that’s hard to predict.

Man is definitely a survivor. AI is definitely massive computing power, but it is not an infallible predictor of the future. Even more so when, oftentimes, the emphasis on economic outcomes stifles discussion of social ones. The economic fear of job losses is not unfounded. As someone recently told us, “ChatGPT is like a smart intern.” Basic copy editing, content creation, website building and research tasks can be outsourced to AI even today, let alone after humanity has put its heads together to take it to the stars. Elon Musk already says that a ‘universal basic income’ (bit.ly/3LHnS9B) will have to become a reality as some jobs turn obsolete. In a consumerist, pro-profit market, with short deadlines and heavy workloads, AI is like manna to its users. This parallels an earlier chapter of the human story, when machines replaced manual work, eliminating many jobs but creating many more.

But this time, the survival instinct of man will be tested against the self-generating and ‘thinking’ power of machines. Until now, technology did our bidding. This time, we are not sure.

The use of machines freed humans to pursue more ‘thoughtful’ ideas. It sounded sweet, but look at us today. We generate more content than we can consume: mountains of it, much of poor quality. Already, a young adult’s attention span is a fleeting moment. We struggle to cultivate the one rare quality that will soon be in demand once our AI lords start producing drivel: conscience.

Unlike the technology of the past, AI has properties similar to bio-particles, à la the covid virus. It can make decisions for itself and self-propagate, which, in the absence of discretion, can lead to dire consequences. Meanwhile, humans will have plenty of free time, released from the need to think about even the basics. But is that good for humanity? Most children today have difficulty producing a beautifully written or thoughtfully analysed piece of work, let alone writing it by hand. With AI at their disposal, mandated by the labour-productivity demands of corporates, their learning curve can be packed in a suitcase and stowed away.

All good things take patience and attention: the works of Tolstoy, Valmiki and Shakespeare, or Rembrandt’s paintings and Newton’s calculus. This is the corpus that AI is learning from today. But 50 years from now, in the absence of an ecosystem that facilitates human creativity, would it do us any good to have AI learn from AI?

Humans and accountability have had an interesting relationship. If ‘distance from direct action’ is mapped to the x-axis of a graph and ‘accountability’ to the y-axis, then the further out humans are on the x-axis, the less directly accountable they feel. This was seen in the Nuremberg trials for war crimes, and it shows in the trolley problem (bit.ly/40R0ECq). We already know it from everyday experience: think of signing in ink versus a digital signature versus ticking an ‘I agree’ box. AI stretches that action-distance further. As AI makes decisions, we remove ourselves from the ethical constraints we used to place on decision-making. Say, under an ‘improve firm profitability’ directive, the AI’s subtasks eliminate 50 employees. Advanced intelligence without discretion is a potentially lethal weapon even in human hands, let alone when the AI is opaque (https://www.ft.com/3AEadKm) and its data corruptible.

We asked ChatGPT what it thinks about the pitfalls of AI. The top answer was “lack of creativity and intuition”, followed by “ethical concerns”. That should be a sign. AI is a great tool so long as humans with a conscience own it.

V. Anantha Nageswaran and Aparajita Tripathi are, respectively, Chief Economic Advisor to the Government of India and a director at KPMG. These are the personal views of the authors.
