Sympathy for the algorithm: it lacks originality and empathy

In its first week alone, ChatGPT attracted over one million users and was used to write computer programs, compose music, play games, and take the bar exam. Students found that it could write serviceable essays worthy of a ‘B’ academic grade – as did their teachers, albeit slowly and to their great dismay.

ChatGPT is not a perfect AI tool, any more than a B-quality student essay is a perfect essay. The information it provides is only as reliable as the information available to it, which comes from the internet. How it uses that information depends on its training, which involves supervised learning – in other words, questions asked and answered by humans.

The weights that ChatGPT attaches to its possible answers are derived from reinforcement learning, in which humans evaluate the bot’s responses: its millions of users are prompted to up-vote or down-vote its answers every time they ask a question. Just as helpful feedback from an instructor can sometimes teach a B-quality student to write an A-quality essay, it is not impossible that ChatGPT will eventually earn a better grade.
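
The mechanics can be made concrete with a toy sketch. The snippet below is purely illustrative – the answer IDs, vote tallies, and reward formula are assumptions for exposition, not OpenAI’s actual pipeline – but it shows how up-votes and down-votes could be aggregated into a reward signal that reinforcement learning then uses to re-weight a model’s answers.

```python
# Illustrative sketch only: turns user votes into a reward signal of the
# kind a reinforcement-learning step could use to favor better answers.
# Answer IDs and the scoring rule are hypothetical, not OpenAI's system.

from collections import defaultdict

# Running up/down vote tallies per candidate answer.
votes = defaultdict(lambda: {"up": 0, "down": 0})

def record_vote(answer_id: str, is_upvote: bool) -> None:
    """Log one user's up-vote or down-vote for a given answer."""
    key = "up" if is_upvote else "down"
    votes[answer_id][key] += 1

def reward(answer_id: str) -> float:
    """Map tallies to a reward in [-1, 1]; 0 if the answer has no votes."""
    v = votes[answer_id]
    total = v["up"] + v["down"]
    return 0.0 if total == 0 else (v["up"] - v["down"]) / total

# Example: users rate two candidate answers to the same question.
record_vote("answer-A", True)
record_vote("answer-A", True)
record_vote("answer-B", False)

# A learner would up-weight the higher-reward answer in future sampling.
print(reward("answer-A"))  # 1.0
print(reward("answer-B"))  # -1.0
```

In practice, systems of this kind train a separate reward model on human preference comparisons rather than raw vote counts, but the feedback loop is the same: human judgments shape which answers the model learns to prefer.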

This rudimentary artificial intelligence (AI) forces us to rethink which tasks can be performed with minimal human intervention. If an AI can pass the bar exam, is there any reason why it can’t draft legal briefs or give sound legal advice? If an AI can pass my wife’s medical-licensing exam, is there any reason why it can’t provide a diagnosis or give sound medical advice?

One clear implication is more rapid displacement of workers from their jobs than in previous waves of automation, and more rapid restructuring of the jobs that survive. And the jobs automated out of existence will no longer be limited to the low-skill and low-wage.

What is less clear is who will be protected from technological unemployment. What human characteristics, if any, will an AI be unable to emulate? Are those traits innate, or can they be taught?

The most secure jobs will be those that require empathy and originality.

Empathy is the ability to understand and share the feelings and emotions of others. It cultivates the mutual compassion and understanding that are fundamental to social relationships and emotional well-being, and it is especially valuable in times of difficulty. That’s why empathy is prized among religious leaders, caregivers, and grief counselors.

It is possible to imagine that, with the help of facial-recognition software, an AI could learn to recognize the emotions of its interlocutors (what is known as ‘cognitive empathy’). But it obviously cannot share their feelings (it cannot learn ‘affective empathy’) in the way that my wife, in her empathic moments, shares mine. Add this to the list of reasons why an AI cannot replace my wife, my doctor, or my rabbi.

There is no consensus about whether affective empathy can be cultivated and taught. Some argue that it is triggered by mirror neurons in the brain that cannot be artificially stimulated or controlled: empathy is simply something we experience, not something we can learn. It would follow that some of us are better wired than others to be caregivers, grief counselors, and so on.

Other researchers suggest that this emotional response can in fact be taught; there is even a company, Empathetics Inc., that provides empathy training for medical practitioners. If they are right, more people could move into automation-safe jobs that require emotional empathy.

But if humans can learn emotional empathy, why not algorithms?

The idea that tasks requiring affective empathy would be protected from automation assumes that people can distinguish true empathy from imitation.

In its original sense, originality means doing something that has not been done before – creating a painting, a composition, or a newspaper commentary that is unlike anything created previously. Note that originality is different from creativity, which involves combining already-existing elements in new ways.

Another OpenAI product, DALL-E, is capable of generating sophisticated images from text descriptions (“a painting of an apple”, “the Mona Lisa with a moustache”). This has caused some consternation among artists. But its responses are derived from a large dataset of text-and-image pairs drawn from existing artwork.

It is questionable whether they are original in the sense of depicting an aesthetically pleasing image never previously seen, as opposed to a combination of existing visual elements associated with existing text. Artists who trade on originality may have nothing to fear, assuming that viewers can distinguish original artwork from the rest.

Again, there is no consensus on whether originality is innate or can be taught. Most likely, it is a bit of both.

How worried should we be? Type “Write an 800-word comment on AI” into ChatGPT and decide for yourself. ©2023 Project Syndicate

Barry Eichengreen is Professor of Economics at the University of California, Berkeley, and author of the recent In Defense of Public Debt.

