AI-Powered Deepfake Tools Challenge Internet Users in the Fight Against Misinformation

Artificial intelligence, deepfakes and social media: little understood by the masses, the combination of the three has become a mysterious obstacle for millions of internet users caught in the everyday battle of sifting the real from the fake.

The fight against misinformation has always been challenging, and since the development of AI-powered tools, detecting deepfakes on social media platforms has become even more difficult. AI’s unintended ability to create fake news – faster than it can stop it – has worrying consequences.

“In India’s rapidly changing information ecosystem, deepfakes have emerged as a new frontier of misinformation, making it difficult for people to differentiate between false and true information,” Syed Nazakat, Founder and CEO of DataLEADS, a digital media group that runs information literacy and infodemic management initiatives, told PTI. India is already battling a deluge of misinformation in various Indian languages, and the situation will get worse as various AI bots and tools spread deepfakes on the internet, he said.

“The next generation of AI models, called generative AI – for example, DALL-E, ChatGPT, Meta’s Make-A-Video, etc. – do not require source material to transform. Instead, they can generate an image, text or video based on prompts. These are still in the early stages of development, but one can see the potential for harm as we will have no original material to use as evidence,” said Azhar Machwe, who worked as an enterprise architect for AI at British Telecom.

What is a deepfake?

Deepfakes are photos and videos that realistically swap one person’s face for another’s. Many AI tools are available to internet users on their smartphones, almost for free. In its simplest form, AI can be explained as using computers to do things that normally require human intelligence. A notable example is the ongoing competition between Microsoft-backed ChatGPT and Google’s Bard.

While both AI tools automate the creation of human-level writing, the difference is that Bard uses Google’s Language Model for Dialogue Applications (LaMDA) and can respond with real-time, current information pulled from the internet. ChatGPT uses the Generative Pre-trained Transformer 3 (GPT-3) model, which is trained on data from before the end of 2021.

Recent examples

Two synthetic videos and a digitally altered screenshot of a Hindi newspaper report, shared last week on social media platforms including Twitter and Facebook, expose the unintended consequences of AI tools: morphed images and doctored videos carrying misleading or false claims.

Synthetic video is any video generated with AI without cameras, actors, and other physical elements.

A video of Microsoft co-founder Bill Gates being heckled by a journalist in an interview was shared as genuine and later found to be edited. A digitally altered video of US President Joe Biden calling for a national draft (compulsory enrollment of individuals in the armed forces) to fight the war in Ukraine was shared as authentic. In another instance, an edited photograph was widely circulated as a Hindi newspaper report to spread misinformation about migrant workers in Tamil Nadu.

All three examples – the two synthetic videos and the digitally altered screenshot of a Hindi newspaper report – were shared on social media platforms by thousands of internet users who believed they were genuine.

The issues escalated into stories on social media and mainstream media outlets, highlighting the unintended consequences of AI tools in creating altered photos and doctored videos with misleading or false claims.

PTI’s Fact Check team fact-checked the three claims and debunked them as ‘deepfakes’ and ‘digitally edited’ content created using AI-powered tools readily available on the internet.

AI and Fake News

A few years ago, the introduction of AI into journalism raised hopes of a revolutionary upheaval in the industry and in the way news is produced and distributed. It was also seen as an effective way to stop the spread of fake news and misinformation.

“A weakness of deepfakes has been that they require some of the original material to work. For example, the Bill Gates video covered the original audio with fake audio. If the original can be identified, these videos are relatively easy to debunk, but that takes time and the ability to discover the original content,” Azhar said.

He believes it is relatively easy to track the deepfakes shared on social media so far, but he is concerned that debunking such synthetic videos will become more challenging in the coming days.

“Transforming the original video may introduce defects (such as light/shadow mismatches) that AI models can be trained to detect. The resulting videos are often kept at low quality to hide these flaws from the algorithms (and from humans),” he explained.
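The light/shadow mismatch idea can be illustrated with a toy example. The sketch below is not a real detector – production systems train neural networks on large datasets – but it shows the underlying intuition: a face region pasted from different footage often has brightness statistics that disagree with the rest of the frame. The frame layout, region coordinates and threshold are all invented for illustration.

```python
# Toy illustration (not a production detector): flag a frame when the mean
# brightness of a candidate face region diverges sharply from the rest of
# the frame -- a crude stand-in for the light/shadow mismatches described above.

def region_mean(frame, rows, cols):
    """Mean pixel value of frame[r][c] over the given rows and columns."""
    values = [frame[r][c] for r in rows for c in cols]
    return sum(values) / len(values)

def lighting_mismatch(frame, face_rows, face_cols, threshold=40):
    """Return True if the face region's mean brightness deviates from the
    rest of the frame by more than `threshold` grey levels."""
    h, w = len(frame), len(frame[0])
    face = {(r, c) for r in face_rows for c in face_cols}
    rest = [frame[r][c] for r in range(h) for c in range(w) if (r, c) not in face]
    face_mean = region_mean(frame, face_rows, face_cols)
    rest_mean = sum(rest) / len(rest)
    return abs(face_mean - rest_mean) > threshold

# A 4x4 grayscale "frame": background around 100, pasted face region around 200.
frame = [
    [100, 100, 100, 100],
    [100, 200, 200, 100],
    [100, 200, 200, 100],
    [100, 100, 100, 100],
]
print(lighting_mismatch(frame, range(1, 3), range(1, 3)))  # True
```

Real detectors look at far subtler cues (blink rates, blending boundaries, compression artifacts), which is also why heavy re-compression of a deepfake video can wash those cues out.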

According to him, fake news floats around in many forms and deepfakes are created these days by very basic AI-powered tools. It is relatively easy to debunk these videos.

“But there cannot be 100 per cent accuracy. For example, Intel’s version promises 96 per cent accuracy, which means 4 out of 100 will still get through,” he said.

The way forward

Most social media platforms claim to curb the spread of misinformation at the source, using fake-news detection algorithms based on language patterns and crowd-sourcing. The aim is to stop misinformation before it spreads, rather than detect and remove it after the fact.
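The platforms’ actual algorithms are proprietary and far more sophisticated (typically trained language models plus user reports), but the basic idea of pattern-based pre-screening can be sketched with a toy keyword heuristic. The phrase list and threshold below are invented for illustration and do not come from any real system.

```python
# Toy sketch of language-pattern pre-screening: score a post against phrases
# that commonly appear in viral misinformation and hold it for human review
# if the score crosses a threshold, before the post spreads further.

SUSPICIOUS_PHRASES = [  # illustrative list, not from any real platform
    "doctors don't want you to know",
    "forward this before it gets deleted",
    "mainstream media won't report",
]

def misinformation_score(post):
    """Count how many suspicious phrases appear in the post (case-insensitive)."""
    text = post.lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

def hold_for_review(post, threshold=1):
    """Flag the post for human review instead of letting it circulate."""
    return misinformation_score(post) >= threshold

print(hold_for_review("Forward this before it gets deleted!"))  # True
print(hold_for_review("The weather is nice today."))            # False
```

Keyword heuristics like this produce both false positives and false negatives, which is why platforms pair automated screening with the human-in-the-loop review that Azhar describes below.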

While these examples of deepfakes highlight the potential dangers of AI in generating fake news, AI and machine learning have also given journalism a range of labour-saving tools, from voice-recognition transcription to automatic content generation.

“AI continues to help journalists focus their energy on developing quality content as technology ensures timely and quick content delivery. Human-in-the-loop will be required to check the consistency and veracity of the content shared in any format – text, image, video, audio etc,” Azhar said.

Deepfakes should be clearly labelled as ‘artificially generated’ in India, which had over 700 million smartphone users (aged two years and above) in 2021. A recent Nielsen report said rural India had over 425 million internet users, 44 per cent more than the 295 million internet users in urban India.

“Humans are prone to joining ‘echo chambers’ of others who think alike. We need to incorporate media literacy and critical thinking into the basic education curriculum to promote awareness and build a proactive approach that helps protect people from misinformation.”

“We need a multi-pronged, cross-sector approach across India to prepare people of all ages to be vigilant against deepfakes and disinformation in the complex digital landscape of today and tomorrow,” Nazakat said.

For a country as large as India, the changing information landscape has created an even greater need for information literacy skills in all languages. Every educational institution should prioritise information literacy over the next decade, he said.


(This story has not been edited by News18 staff and is published from a syndicated news agency feed)