Will machines think like humans with GPT-4?

OpenAI has released Generative Pre-trained Transformer 4 (GPT-4), its much-anticipated large language model. Some have suggested that GPT-4 will make machines sentient, or make them think like humans. Is that so? Mint explains.

Why is everyone excited about GPT-4?

It is the most advanced language model in Artificial Intelligence (AI) today. Geoffrey Hinton, the godfather of deep learning, tweeted: “Caterpillars extract nutrients which are then converted into butterflies. People have extracted billions of nuggets of understanding and GPT-4 is humanity’s butterfly.” Unlike GPT-3, which handles only text, GPT-4 can accept both text and images as input. It also outperforms ChatGPT. “GPT-4 is ready to apply to Stanford as a student now. Its reasoning ability is off the charts…,” said Jim Fan, an AI scientist at Nvidia. For now, however, only ChatGPT Plus subscribers have access to GPT-4.

How does it compare with previous versions?

GPT-4 passed a simulated bar exam with a score around the top 10% of test takers; GPT-3.5 (the model used to build ChatGPT) scored around the bottom 10%. GPT-4 is also more reliable, more creative, and able to handle more nuanced instructions, and it surpasses ChatGPT in advanced reasoning. According to OpenAI, in 24 of the 26 languages tested, GPT-4 outperformed the English-language performance of GPT-3.5 and other large language models such as Chinchilla and PaLM. OpenAI also claims GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.

Is GPT-4 already being used by companies?

Be My Eyes, a visual-assistance app, has built a GPT-4-powered ‘Virtual Volunteer’ for its blind and low-vision users. Morgan Stanley is using GPT-4 to assist with wealth management. Khan Academy has deployed it as a customized tutor. The government of Iceland is using GPT-4 to help preserve the Icelandic language. Kisan GPT, an AI chatbot for farmers, plans to up its game with GPT-4.

Are there any limitations of GPT-4?

OpenAI acknowledges that GPT-4 has the same limitations as earlier GPT models: it is still not fully reliable because it “hallucinates” (confidently responds with fabricated answers) and makes reasoning errors. GPT-4 also lacks knowledge of current events, as it was trained on data only up to September 2021, and it does not learn from experience. It can also fail to double-check its work even when errors are likely. That said, it hallucinates less than previous models.

So will machines start thinking like us?

Many speculated that GPT-4 would bring humanity closer to the ‘Singularity’, a hypothetical point at which artificial general intelligence (AGI) gives machines human-level intelligence. Much of this hype stemmed from the belief that GPT-4 would launch with 100 trillion parameters, roughly 500 times more than GPT-3. But OpenAI gave no such indication. In fact, OpenAI CEO Sam Altman had earlier tweeted: “We don’t have AGI, and people … are begging to be disappointed”.
