This is how ChatGPT responds to human prompts

In his autobiography, American author Mark Twain quotes—or perhaps misquotes—the former British Prime Minister Benjamin Disraeli: “There are three kinds of lies: lies, damned lies, and statistics.” In a remarkable leap forward, artificial intelligence combines all three in one tidy package.

ChatGPT, and other generative AI chatbots like it, are trained on vast datasets from across the internet to produce the statistically most likely response to a prompt. Its answers are not based on an understanding of what makes something funny, meaningful, or accurate, but rather on the phrasing, spelling, grammar, and even style of other webpages.
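As a rough illustration of that statistical principle, here is a toy Python sketch: a tiny bigram model, nothing like ChatGPT’s actual architecture, that picks the “most likely next word” purely from observed frequency, with no notion of meaning or accuracy.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast datasets" a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the statistically most likely continuation -- frequency
    only, with no understanding of what the words mean."""
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat", because it followed "the" most often
```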

It presents its responses through what is known as a “conversational interface”: it remembers what a user has said and can respond using contextual cues and clever conversational gambits. It’s statistical pastiche plus statistical panache, and that’s where the trouble lies.
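As a rough sketch of what “remembering what a user has said” amounts to, consider the following minimal Python loop; `generate_reply` here is a hypothetical stand-in for the model call, not any real API:

```python
# A minimal sketch (not ChatGPT's actual code) of a conversational
# interface: the program keeps a running transcript so every reply is
# generated in the context of what the user has already said.

def generate_reply(history: list[dict]) -> str:
    # Hypothetical placeholder; a real system would feed the entire
    # message history back into a language model.
    return f"(reply conditioned on {len(history)} prior turns)"

history: list[dict] = []

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)  # context = everything said so far
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("What is the capital of Malaysia?"))
print(chat("And how large is its population?"))  # "its" resolves only via the history
```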

Random but reassuring

When I talk with another human being, it cues a lifetime of my experience in dealing with other people. So when a program speaks like a person, it is very hard not to react as if someone were engaging in a real conversation — taking something in, thinking about it, responding in the context of both of our thoughts.

Yet that’s not at all what is happening with an AI interlocutor. It cannot think, and it has no understanding or comprehension of any kind.

Presenting information to us as a human does, in conversation, makes the AI more convincing than it should be. The software is pretending to be more reliable than it is, using human tricks of rhetoric to fake trustworthiness, competence, and understanding far beyond its actual capabilities.

There are two issues here: is the output correct, and do people think the output is correct? The interface side of the software is promising more than the algorithm side can deliver, and the developers know it. Sam Altman, CEO of OpenAI, the company behind ChatGPT, admits that ChatGPT is “incredibly limited, but good enough at some things to create a misleading impression of greatness.” That hasn’t stopped companies from rushing to integrate the early-stage tool into their user-facing products (including Microsoft’s Bing search) in an effort not to be left out.

Fact and fiction

Sometimes the AI is going to be wrong, but the conversational interface presents its output with the same confidence and polish as when it is correct. For example, as science-fiction author Ted Chiang points out, the tool makes errors when adding large numbers, because it doesn’t actually have any logic for doing the math.

It simply pattern-matches examples seen on the web that involve addition. And while it can find examples for more common math questions, it just hasn’t seen training text involving larger numbers.

It doesn’t “know” the rules of math that a 10-year-old child would be able to explicitly use. Yet the conversational interface delivers its response with confidence, no matter how wrong it is, as reflected in this exchange with ChatGPT:

User: What is the capital of Malaysia?
ChatGPT: The capital of Malaysia is Kuala Lumpur.

User: What is 27 × 7338?
ChatGPT: 27 × 7338 is 200,526.

It’s not.
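A calculator, or a single line of Python, applies the actual rule instead of pattern-matching text, and immediately exposes the error:

```python
# Arithmetic done by rule rather than by statistical pattern-matching.
print(27 * 7338)  # 198126 -- not the 200,526 ChatGPT confidently asserted
```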

Generative AI can blend real facts with made-up ones in a biography of a public figure, or cite plausible scientific references for papers that were never written.

It makes sense: statistically, webpages note that famous people have often won awards, and papers usually include references. ChatGPT is only doing what it was built to do: assembling content that is statistically likely, regardless of whether it is true.

Computer scientists call this an AI hallucination. The rest of us can call it a lie.

Intimidating outputs

When I teach my design students, I talk about the importance of matching the fidelity of the process to the fidelity of the output. If an idea is at the conceptual stage, it shouldn’t be presented in a way that makes it look more polished than it actually is – they shouldn’t render it in 3D or print it on glossy cardstock. A pencil sketch makes clear that the idea is preliminary, easy to change, and shouldn’t be expected to address every part of a problem.

The same is true of conversational interfaces: when technology “speaks” to us in a well-crafted, grammatically correct, or chatty tone, we interpret it as having much more thoughtfulness and reasoning than is actually present. It’s a trick a con artist should use, not a computer.

AI developers have a responsibility to manage user expectations, because we may already be primed to believe whatever the machine tells us. Mathematician Jordan Ellenberg describes a kind of “algebraic intimidation” that can overwhelm our better judgment merely by claiming that math is involved.

AI, with its hundreds of billions of parameters, can disarm us with a similar algorithmic intimidation.

While we tune the algorithms to produce better and better content, we need to make sure the interface itself doesn’t overpromise. Conversation in the tech world is already brimming with overconfidence and arrogance – perhaps AI could learn a little humility instead.

