‘It’s a dead end’: researchers share their opinions on GPT-4

What do scientists think?

May not be useful for research

Researchers have pointed out that GPT-4’s human-like ability to write text and computer code has the potential to transform science, but few have yet been able to access the technology, its underlying code or information on how it was trained. Scientists say this raises concerns about the technology’s safety and makes it less useful for research.

Wonderful

An article published in the journal Nature quotes a scientist who has seen a demo of GPT-4 as saying, “We saw some videos in which they demonstrated the capabilities and it is amazing.” In one demo, GPT-4 took a hand-drawn sketch of a website and produced the computer code needed to build that website, a demonstration of its ability to handle images as input.

Frustration over secrecy

The research world is frustrated by the secrecy surrounding OpenAI’s GPT-4. “All these closed-source models, they are essentially dead ends in science,” Nature quotes Sasha Luccioni, a research scientist specializing in climate at Hugging Face, an open-source AI community. “They [OpenAI] can build on their research, but for the community at large, it’s a dead end.”

Wasn’t impressed at first, but…

Another researcher, Andrew White, a chemical engineer at the University of Rochester, was given access to GPT-4 as a ‘red-teamer’: a person paid by OpenAI to test the platform and try to make it do something bad.

He told Nature that he was not impressed at first. “Initially, I wasn’t really impressed,” says White.

However, when he gave GPT-4 access to scientific papers, things changed dramatically. “It made us realize that these models might not be that great on their own. But when you start connecting them to tools like retrosynthesis planners or calculators, or to the internet, all of a sudden new kinds of capabilities emerge.”
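The tool-connection pattern White describes can be illustrated with a minimal sketch. The code below is not OpenAI’s actual API: ask_model is a hypothetical stand-in for a call to a chat model, and the “TOOL:name:argument” convention is invented for this example. It only shows the basic loop of letting a model request an external tool (here, a calculator) and feeding the exact result back.

```python
# Minimal sketch of the "LLM + tools" pattern White describes.
# ask_model is a hypothetical stand-in for a real chat-model call,
# and the "TOOL:name:argument" convention is invented for illustration.
import re

def calculator(expression: str) -> str:
    """A trivial tool: evaluate a basic arithmetic expression."""
    # Whitelist characters so eval() only ever sees plain arithmetic.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        return "error: unsupported expression"
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def ask_model(prompt: str) -> str:
    """Hypothetical model call; a real system would query GPT-4 here.
    This stub pretends the model asked for arithmetic help."""
    return "TOOL:calculator:12 * (3 + 4)"

def answer(prompt: str) -> str:
    reply = ask_model(prompt)
    if reply.startswith("TOOL:"):
        # The model requested a tool: run it and return its output.
        _, name, argument = reply.split(":", 2)
        return f"{name} says: {TOOLS[name](argument)}"
    return reply

print(answer("What is 12 * (3 + 4)?"))  # -> calculator says: 84
```

Under this reading, the retrosynthesis planners White mentions would follow the same shape: the model emits a structured tool request, external software does the exact computation, and the result is handed back to the model.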

Fake facts

Giving wrong information is another problem. Luccioni says that models like GPT-4, which exist to predict the next word in a sentence, cannot be stopped from coming up with fake facts—known as hallucinations. “You can’t trust these types of models because there’s a lot of hallucination involved,” she says.

Without access to the data used for training, OpenAI’s assurances about safety fall short for Luccioni. “You don’t know what the data is, so you can’t improve it. I mean, it’s completely impossible to do science with a model like this,” she says.

The mystery of how GPT-4 was trained is also a concern for Claudi Bockting, a psychologist at the University of Amsterdam. “It’s very difficult as a human being to be accountable for something you cannot oversee,” she says. Without being able to access the code behind GPT-4, it is impossible to see where bias might arise, or to correct it, explains Luccioni.

Ethics discussion

Bockting and her Amsterdam colleague van Dis are also concerned that these AI systems are increasingly owned by big tech companies. They want to ensure that the technology is properly tested and validated by scientists. “It is also an opportunity, as collaboration with big tech can definitely speed up the process,” Bockting adds.

Van Dis, Bockting and their colleagues argued earlier this year for an urgent need to develop a set of ‘living’ guidelines to govern how AI tools such as GPT-4 are used and developed. They are concerned that any legislation around AI technologies will struggle to keep up with the pace of development. Bockting and van Dis have organized an invitational summit at the University of Amsterdam on 11 April to discuss these concerns, with representatives from organizations including UNESCO’s science-ethics committee, the Organisation for Economic Co-operation and Development and the World Economic Forum.

Despite the concerns, says White, GPT-4 and its future iterations will shake up science. “I think this is really going to be a major infrastructure change in science, almost like the internet was a major change,” he says. It won’t replace scientists, he says, but could help with some tasks. “I think we’re going to start to realize that we can connect the papers, the data programs, the libraries that we use with computational work or even robotic experiments.”
