Google Partners With Indian Govt To Combat ‘Synthetic’ Media: Here’s How


Google will address safety and security risks associated with ‘synthetic media’ during the upcoming Global Partnership on Artificial Intelligence (GPAI) Summit.

To combat deepfake videos, US-based tech giant Google says it is testing for a wide range of safety and security risks, including the emergence of new forms of AI-generated, photo-realistic synthetic audio or video content known as ‘synthetic media.’

“While this technology has useful applications, it raises concerns when used in disinformation campaigns and for other malicious purposes, through deep fakes. The potential for spreading false narratives and manipulated content can have negative implications,” the company said in a blog post.

In collaboration with the Indian government, the tech giant will address these risks during the upcoming Global Partnership on Artificial Intelligence (GPAI) Summit.

“Our collaboration with the Indian government for a multi-stakeholder discussion aligns with our commitment to addressing this challenge together and ensuring a responsible approach to AI. By embracing a multistakeholder approach and fostering responsible AI development, we can ensure that AI’s transformative potential continues to serve as a force for good in the world,” the company added.

In the fight against fake images, Google has introduced protective measures such as SynthID, an embedded watermarking and metadata-labeling solution designed to identify images created by Google’s text-to-image generator, Imagen.
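To illustrate the metadata-labeling side of this idea, the sketch below checks an image file for a declared AI-generation marker in its EXIF fields. This is purely an assumption-laden example, not SynthID’s actual detection API: SynthID embeds an imperceptible watermark in the pixel data itself, and the marker strings, tag choices, and function name here are hypothetical.

```python
# Illustrative sketch only: NOT SynthID's detection API.
# It checks declared EXIF metadata for a hypothetical AI-generation label.
from PIL import Image, ExifTags

AI_MARKERS = ("ai-generated", "imagen", "synthetic")  # hypothetical label strings


def has_declared_ai_label(path: str) -> bool:
    """Return True if the image's EXIF metadata declares it as AI-generated."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, str(tag_id))
        if tag in ("Software", "ImageDescription") and isinstance(value, str):
            if any(marker in value.lower() for marker in AI_MARKERS):
                return True
    return False


print(has_declared_ai_label("generated.png"))  # hypothetical file name
```

Metadata labels of this kind are easy to strip, which is why an embedded watermark such as SynthID is meant to complement, not replace, them.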

Alongside this, Google uses a combination of machine learning and human reviewers to quickly identify and remove content that violates its guidelines. This approach improves the accuracy of Google’s content moderation systems, enabling a more effective response to misleading or harmful visual content.
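A common way such hybrid pipelines are structured is to act automatically only on high-confidence model scores and to route borderline cases to human reviewers. The thresholds, names, and routing logic below are assumptions for illustration, not Google’s actual moderation system.

```python
# Hypothetical triage sketch for a "machine learning plus human reviewers" pipeline.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    action: str   # "remove", "human_review", or "allow"
    score: float  # model-estimated probability the content violates policy


def triage(violation_score: float) -> ModerationResult:
    """Route content based on classifier confidence (illustrative thresholds)."""
    if violation_score >= 0.95:   # clear violations are removed automatically
        return ModerationResult("remove", violation_score)
    if violation_score >= 0.50:   # borderline cases go to a human review queue
        return ModerationResult("human_review", violation_score)
    return ModerationResult("allow", violation_score)


print(triage(0.97))  # ModerationResult(action='remove', score=0.97)
```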

Google is contributing $1 million in grants to the Indian Institute of Technology, Madras, to establish a groundbreaking multidisciplinary center for Responsible AI.

This initiative aims to bring together researchers, domain experts, developers, community members, policymakers, and others to collaboratively ensure the responsible development and localization of AI in the Indian context.

For YouTube, Google is introducing disclosure requirements for creators using altered or AI-generated content. Creators will be required to inform viewers by adding labels to the description panel and the video player.

The platform is also developing a ‘privacy request process’ that lets users request the removal of content that uses AI to simulate an individual’s face or voice.