Meta puts Facebook AI chatbot BlenderBot 3 to further testing as Google races ahead

Meta has found that language models perform better when they learn from conversations with people; feedback from users helps the company understand and improve the model

Facebook owner Meta on Friday opened its latest AI chatbot, BlenderBot 3, for public testing. The social media firm also released the code, model weights, datasets and model cards to the scientific community so researchers can study the model’s potential and limitations. The chatbot is currently open to users in the US.

BlenderBot 3 is said to be capable of searching the web and holding virtual chats on any topic. The conversational AI model is designed to improve its skills by learning from the feedback it receives through online conversations.

The third iteration of Facebook’s BlenderBot was trained with 175 billion parameters, putting it on par with OpenAI’s GPT-3, which was pre-trained with the same number of parameters as Facebook’s latest chatbot.

Meta has long understood that language models perform better when they learn from conversations with people, and that feedback from users helps the company understand how the model performs and improve it.

In the case of BlenderBot 3, Facebook found that its chat AI sometimes mimics and generates “unsafe, biased or offensive comments”. The social media firm said that despite the safety measures built into the model, it “can still make harsh or offensive comments, which is why we’re collecting feedback that will help improve future chatbots.”

Facebook has built a user feedback mechanism into the bot: one can click the thumbs-up or thumbs-down icon to indicate whether the model gave a good response. The thumbs-down button lets the user explain why they disliked the message: whether it was off-topic, redundant, rude, spam-like, or something else.
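As an illustration only, and not Meta’s actual interface or API, a feedback record of this kind could be captured with a simple data structure like the sketch below; the field names and reason categories are assumptions based on the description above.

```python
# Illustrative sketch only: a minimal way to record thumbs-up/thumbs-down
# feedback on a chatbot reply, with an optional reason for a thumbs-down.
# Field names and reason categories are assumptions, not Meta's implementation.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class DislikeReason(Enum):
    OFF_TOPIC = "off-topic"
    REDUNDANT = "redundant"
    RUDE = "rude"
    SPAM = "spam"
    OTHER = "other"


@dataclass
class MessageFeedback:
    message_id: str
    liked: bool                             # True = thumbs-up, False = thumbs-down
    reason: Optional[DislikeReason] = None  # only collected for thumbs-down


def record_feedback(message_id: str, liked: bool,
                    reason: Optional[DislikeReason] = None) -> MessageFeedback:
    """Validate and package a single piece of user feedback."""
    if liked and reason is not None:
        raise ValueError("A reason is only collected for thumbs-down feedback.")
    return MessageFeedback(message_id=message_id, liked=liked, reason=reason)


# Example: a user marks a reply as off-topic.
feedback = record_feedback("msg-42", liked=False, reason=DislikeReason.OFF_TOPIC)
print(feedback)
```

Feedback collected in this form is the kind of signal the article says Meta wants to use to improve future versions of the chatbot.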

The California-based company first made its conversational AI open source in April 2020, the culmination of years of research into natural language processing (NLP). The first BlenderBot was trained with up to 9.4 billion parameters. At the time, Google’s Meena model was the most advanced chatbot: the conversational agent had 2.6 billion parameters and was trained on 8.5 times the data OpenAI used to train its GPT-2 model.

Two years and three months later, Facebook lags behind Google’s language models. In December, the search giant introduced the Generalist Language Model (GLaM), which was trained on a dataset of 1.6 trillion tokens. Google claims that the model’s performance is comparable to GPT-3, with “significantly improved learning efficiency across 29 public NLP benchmarks in seven categories”.

The race to build ever-larger language models continues.