ChatGPT vs Bard: Microsoft, Google’s growing chatbot rivalry and its hidden dangers

New Delhi: The race to build generative artificial intelligence (AI) tools has kept the two biggest tech giants, Microsoft and Google, busy ever since OpenAI, a startup, introduced ChatGPT (Generative Pre-trained Transformer) in November last year.

Now, Google has introduced its own version of a chatbot, called Bard. However, the announcement faltered right off the bat. During a live demo, Google’s Bard AI made a factual mistake that hurt investor confidence, resulting in a $100 billion loss in market value for Google’s parent company Alphabet on Wednesday. At the same time, Microsoft has announced that its search engine Bing and browser Edge will be powered by ChatGPT.

Although chatbots existed in the past, OpenAI’s ChatGPT was the first interactive tool, powered by AI, that was easily accessible. This “natural language processing” tool is capable of gathering information and generating responses in a single place, through a question-answer format, in near real time.

While the tool has generated substantial hype and excitement among venture capitalists and investors, it has also raised many concerns about inaccuracy, misinformation, and bias. The education sector, for example, is already facing problems with plagiarism, with multiple students using this tool to directly solve problems and complete assignments.

Chatbot rivalry

“The race starts today, and we’re going to go further and faster,” Microsoft CEO Satya Nadella said at the relaunch of the Bing search engine this week.

Bing will now be a tool with a combined experience of an AI chatbot and a search engine.

Google’s Bard is also an experimental conversational AI service with a similar chatbot-like mechanism that claims to provide factual input on complex topics. It is based on the company’s Language Model for Dialogue Applications, or LaMDA.

Even Chinese tech firm Baidu has ended its internal testing of ‘Ernie Bot’, a ChatGPT-style project.

Venture capitalists like Sequoia have shown interest in investing in generative AI tools like ChatGPT, saying the discovery will set a new paradigm for human and technological learning.

“As investors, when we see any superpower in town and people are able to harness it, big companies are formed, and we participate in them, invest in them, and make money from them. So we are absolutely curious about what this could become,” Anandamoy Raychaudhuri, Surge Partner, Sequoia Southeast Asia, told ThePrint.

“The reason everyone is so excited about ChatGPT and generative AI isn’t because it’s super smart or irrefutable; it’s our first real attempt at making something that isn’t human.”

“This technology learns how humans learn,” he said. “We’ve been able to discover a large language model that can basically learn from the Internet, learn from what we’ve done in the past, and build on our own knowledge of our own history.”

Although this technology is not new, Google claims that its 2017 “Transformer research project” was the basis for many of the generative AI applications seen today.

“We re-oriented the company around AI six years ago – and that is why we see it as the most important way to fulfil our mission: to organise the world’s information and make it universally accessible and useful,” Sundar Pichai, CEO of Alphabet Inc., said in a blog post.

“Advanced generative AI and large language models are capturing the imaginations of people around the world,” he added.

The Transformer research project explored how to translate, interpret and understand language from the vast amounts of text and information in the public domain.

Upasana Dash, founder of Jazbor Brand Consultancy, a communications and brand-building company, believes that generative AI has been around for a long time, but that easy access is the key differentiator this time.

“They’ve equalised the Internet by turning it into a form in which one can ask a question and get an answer at a basic level,” Dash explained. “I think that’s the biggest difference. It’s providing you with customised content and resources for any topic. It actually does the work of gathering the available analysis and insights.”

“The tool eliminates some of the steps for the user as we are used to finding answers from multiple platforms using the Google search engine,” she said.


Also Read: Budget 2023: Modi govt focuses on upgradation of skills and technology, 3 new AI centres coming up


Bias and accuracy

However, all the hype and excitement around AI tools comes with concerns over misuse of this tool, as well as questions about the tool’s accuracy, ability to be socially sensitive, and handling of personal data.

According to Akash Karmakar, Partner, Law Office of Panag & Babu, ChatGPT scours the internet for answers to questions. From a cyber security perspective, the algorithms that power AI-based natural language processing (AI-NLP) tools can potentially be used to create malicious code.

This, he explained, requires the creation of proportionate and appropriate end-use restrictions, which would, for example, prevent such tools from being used to collect personal data for cyberstalking, to exploit vulnerabilities in cyber security frameworks, or to scour the Internet for pirated content.

Karmakar went on to explain why there may be a need for regulation that addresses ‘prejudices’.

“With the increase in use cases for AI in risk assessment, creditworthiness determination and healthcare prioritisation, bias can creep in during training and adversely affect individuals,” said Karmakar. “This is where the regulation of AI becomes important, because biases or prejudices such as racism or sexism are often evident in the output of such tools.”

In the field of education, too, plagiarism is a growing concern.

According to Jaideep Kevalramani, Head of Employability Business and COO of TeamLease EdTech, there is a serious need for more policy support at the institutional level.

“In the education sector, because this is general AI, the applications are far and wide,” Kevalramani said. “The obvious ones have already been talked about. But what people are not talking about are two things: one, how do we now accept this as the new normal and address it at the policy level? And two, how do we integrate it into pedagogy at the curriculum and classroom levels?”

For example, he says, one option could be a policy that gives academic institutions the autonomy to determine their own AI reference frameworks. Educational institutions would then have to decide for themselves whether they want a complete ban or whether they want to use this tool to enhance learning, Kevalramani said.

However, there is also a perspective that these concerns are “alarmist” and an example of a “classic fear of change”.

“In any wave of technology, we see that some companies are serious and some companies use them as window dressing,” Raychaudhuri said. “Most of the alarmist stuff is overblown, a classic fear of change. The reality is different. In the history of human endeavour, a tool appears and is disruptive, whether it’s a hammer or a car or a computer or an airplane. We basically adapted to the tool and harnessed its power,” he explained.

“Some of it is good and some of it is bad, and then society has changed. Large language models will have a certain impact on white-collar work habits, and so on,” he said.

Still others are concerned about how the tool could be a game changer in employment and how it could reduce job roles, especially in the communications sector.

“I have been talking to stakeholders in our industry because there is concern that this could eliminate roles and manpower,” Dash said.

“You have to start thinking of it like a relay race. I don’t think that you, as companies or as human beings, can completely count on offloading certain aspects to the platform. We have to look at this as a tool that is not making anyone obsolete…the technology is not harmful, but there are use cases, and it is necessary to examine the specific use cases that can be harmful.”

(Edited by Geetalakshmi Ramanathan)


Also Read: There’s a new shrink in town, and it’s AI chatbots. Patients enjoy more privacy, no favoritism