Ignore ChatGPT and Bard’s ‘human characteristics’: AI has no consciousness

Artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology “becoming smarter than us”. His fear is that AI will one day succeed in “manipulating people to do what it wants”.

There are reasons to be concerned about AI. But we often talk about and treat AI systems as if they were human. Resisting this, and understanding what they really are, can help us maintain a fruitful relationship with the technology.

In a recent essay, American psychologist Gary Marcus advised us to stop treating AI models like people. By AI models, he means large language models (LLMs) such as ChatGPT and Bard, which are now used by millions of people on a daily basis.

He cites egregious examples of people “over-attributing” human-like cognitive abilities to AI, with a range of consequences. The most amusing was the US senator who claimed that ChatGPT had “taught itself chemistry”; the saddest was the report of a young Belgian man said to have taken his own life after a long conversation with an AI chatbot.

Marcus is right to say that we should stop treating AI like people – conscious moral agents with interests, hopes and desires. However, many will find this difficult, perhaps nearly impossible. This is because LLMs are designed – by people – to interact with us as if they were human, and we are designed – by biological evolution – to interact with them likewise.

Good mimics

Why can LLMs mimic human conversation so convincingly?
A profound insight came from computing pioneer Alan Turing, who realised that a computer does not need to understand an algorithm in order to run it. This means that while ChatGPT can produce paragraphs filled with emotive language, it does not understand any word in any sentence it generates.

The LLM designers successfully turned the problem of semantics – the arrangement of words to create meaning – into a statistical one, matching words based on the frequency of their prior use. Turing’s insight echoes Darwin’s theory of evolution, which explains how species adapt to their surroundings, becoming ever more complex, without needing to understand a thing about their environment or themselves.
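To make this concrete, here is a deliberately crude sketch of next-word prediction driven purely by the frequency of prior word pairs. Real LLMs use neural networks over learned token representations, not raw bigram counts, so this toy model is an illustration of the statistical principle only; the function names and corpus are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent follower of `word` - no meaning involved."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(predict_next(model, "on"))  # "the" follows "on" most often, so: the
```

The model “knows” nothing about cats or mats; it simply reproduces statistical regularities in its training text – competence without understanding, in miniature.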

Cognitive scientist and philosopher Daniel Dennett coined the phrase “competence without understanding”, which perfectly captures Darwin’s and Turing’s insight.
Another important contribution of Dennett’s is his “intentional stance”. It essentially states that in order to fully explain the behaviour of an object (human or non-human), we may treat it as a rational agent. This most often manifests in our tendency to anthropomorphise non-human species and other inanimate entities.

But it’s useful. For example, if we want to beat a computer at chess, the best strategy is to think of it as a rational agent that “wants” to beat us. We can then explain that the computer castled because “it wanted to protect its king from our attack”, without any contradiction.

We might speak of a tree in a forest as “wanting to grow” towards the light. But neither the tree nor the chess computer actually represents those “wants” or reasons to itself; it is only that the best way to explain their behaviour is to treat them as if they did.

Intentions and agency

Our evolutionary history has equipped us with mechanisms that predispose us to find intentions and agency everywhere. In prehistory, these mechanisms helped our ancestors avoid predators and develop altruism towards their closest kin. They are the same mechanisms that cause us to see faces in clouds and to anthropomorphise inanimate objects. Mistaking a tree for a bear does us no harm; mistaking a bear for a tree could be fatal.

Evolutionary psychology shows us how we are always trying to interpret anything that might be human as human. We unconsciously adopt the intentional stance and attribute all our cognitive abilities and emotions to the object.

Given the potential disruption caused by LLMs, we must recognise that they are merely probabilistic machines with no intentions and no concern for humans. We must be extra careful about the language we use when describing LLMs and human-like feats of AI more generally. Here are two examples.

The first was a recent study in which ChatGPT was found to be more empathetic and to give “higher quality” answers to patients’ questions than doctors did. Using emotive words like “empathy” for an AI imputes to it the capacity to think, reflect and feel genuine concern for others, which it does not have.

The second was when GPT-4 (the latest version of the ChatGPT technology) was launched last month, with greater skills in creativity and reasoning attributed to it. However, what we are seeing is only a scaling-up of “competence”: still no “understanding” (in Dennett’s sense) and certainly no intention, just pattern matching.

Staying safe

In his recent comments, Hinton raised the near-term threat of “bad actors” using AI to wreak havoc. We can easily imagine an unscrupulous regime or multinational deploying an AI trained on fake news and falsehoods to flood public discourse with misinformation and deepfakes. Fraudsters could also use AI to prey on vulnerable people in financial scams.

Last month, Gary Marcus, Elon Musk and others signed an open letter calling for an immediate pause on the further development of LLMs. Marcus has also called for an international agency to promote “safe, secure and peaceful AI technologies”, dubbing it “a CERN for AI”.

Furthermore, many have suggested that anything generated by AI should carry a watermark so that there is no doubt about whether we are interacting with a human or a chatbot.

Regulation lags behind innovation in AI, as it so often does in other walks of life. There are more problems than solutions, and the gap is likely to widen before it narrows. But in the meantime, repeating Dennett’s phrase “competence without understanding” may be the best antidote to our innate compulsion to treat AI as human.

Neil Saunders, Senior Lecturer in Mathematics, University of Greenwich

This article is republished from The Conversation under a Creative Commons licence. Read the original article.
