Meta works to comply with India’s data protection law

However, ensuring compliance poses operational complications, including aspects of Meta’s policies that are not fully aligned with Indian laws, Nick Clegg, president of global affairs at Meta, said in an interview.

Speaking about the potential timeframe that the ministry of electronics and information technology (Meity) is expected to offer Big Tech firms to comply with rules under the DPDP Act, Clegg said, “The devil is in the details here. Of course, we can implement existing solutions that we already have faster than creating bespoke India-specific solutions. Our India team is working with government stakeholders to implement the legislative recommendations under the DPDP Act as rapidly as we can, based on the expectations that have been set for us.”

However, compliance, Clegg said, “depends massively on each different aspect of the law.”

“Some aspects are easier to comply with than others, since we have already dealt with similar requirements in other geographies. Where our existing adaptations to the Indian law are not very extensive, those aspects will be more complicated,” he added.

On 14 September, Mint reported that Meity is expected to offer a shorter timeframe to Big Tech firms, as well as large corporations, to comply with India’s data protection law. While Meity has not decided the exact timeframe as yet, Big Tech companies such as Meta are expected to be offered six months to become compliant with the DPDP Act.

Meta’s platforms together make it the world’s largest social media company—data from market researcher Statista from January this year puts its cumulative monthly active users (MAUs) at 6.9 billion globally. India is Meta’s biggest market in terms of user base, with over 480 million WhatsApp MAUs and 230 million Instagram MAUs in the country, as per Statista data.

This makes compliance crucial for Meta, which has had regulatory run-ins with the Centre on multiple previous occasions. The company has also been questioned globally, over time, on its role in elections—something Clegg said Meta is already taking note of.

“We’re conscious of elections coming up next year in India, Europe, Britain, Mexico and even the US. For this, we’ll be publishing more information on how we’re developing new technology to go after misinformation and label it even more rapidly than we’ve done before. People often worry about what AI (artificial intelligence) will create and the misinformation it could lead to—they forget that AI is actually a sword and not a shield. It’ll allow us to track and label misinformation, even in multiple different languages—perhaps hundreds,” he said.

To do this, Meta is working on a watermarking mechanism for global coordination, which Clegg said will raise the need for a global AI framework—“because a lot of content will flow from one platform to another, and there’s no point in us watermarking our content if other content isn’t similarly marked.”

Speaking about the possibility of a global AI regulatory framework, Clegg said it is “both possible and desirable”.

“The technology is evolving fast and doesn’t respect borders. So, the sooner we get a common alignment, particularly in the largest techno-democracies globally, such as the US, EU and India, the better. There are many ways to achieve this—through OECD, G7, G20, then legislative processes in the US, the UK, India, and even Japan. What we’re all trying to do as an industry is to urge everyone to converge on a set of basic standards—particularly that of transparency,” he said.

He also called for “common standards on detectability and watermarking across the industry”—which can further boost the transparency of AI.

Speaking about India’s regulatory approach, Clegg said it has struck the right note—especially in the area of nascent technologies such as AI.

“It makes sense to try and regulate the outcomes of technologies like AI but not to legislate the technology itself. In India, neither the DPDP Act, the proposed Digital India Act, nor the upcoming Telecom Bill is trying to regulate the technology itself, which would be a futile thing to do since the tech is so fluid,” Clegg said.

To ensure that this approach is maintained, he said that industry consultations would be key. “Where India’s legislation seeks to affect generative AI is through outcomes, and the Digital India Act will be subject to extensive consultation once the draft bill is released—much like the DPDP Act. These consultations typically hugely improve legislation, as seen in the DPDP Act—particularly on things like cross-border data flows, pragmatism on defining age in technology, and so on. The DPDP Act will enable the success of the Digital India vision and its target of India becoming a $5 trillion economy by 2024,” he said.

On Wednesday, Meta announced multiple AI initiatives and features across its platforms, including AI assistants for search and image generation, watermarking to distinguish AI-generated images and, eventually, policies defining transparency of AI usage on mainstream platforms.

However, Clegg claimed the company’s initiatives were not a move to “win at search” against more prominent offerings such as Microsoft’s OpenAI-powered Bing search, and Google’s Search Generative Experience and Bard chatbot, both powered by its Pathways Language Model (PaLM) family of large language models (LLMs).

While Clegg said Meta’s goal is “to win at personalized use of AI,” the company, which has a chequered history when it comes to handling user data, is cautious about the use of personally identifiable information in its AI personalization offerings. “We’ll be publishing in full, in relation to AI, how we use or don’t use data. In the data we use to train our AI systems, we’ve been clear that this did not include people’s personal messages between friends and family—most of it is derived from publicly available data. We exclude websites and other data sources that include a lot of personally identifiable and sensitive data,” he said.

However, the rollout of the AI assistants will remain slow and limited to a beta in the US for now. “It’s very early days, and sometimes these assistants will come out with inaccurate and inappropriate responses. That is why we’re rolling out these products very slowly,” said Clegg, adding that successive iterations of the assistants, which are based on the company’s Llama 2 LLM, will help improve transparency, safety and integrity, and reduce bias.

An increasing number of top global executives have called for establishing industry-wide and international collaborations on a common framework for responsible development of AI. On 4 May, top chief executives Sam Altman of OpenAI, Dario Amodei of Anthropic, Satya Nadella of Microsoft and Sundar Pichai of Google’s parent company Alphabet met US vice-president Kamala Harris to discuss potential industry collaborations on AI development.

India’s G20 Leaders’ Declaration, published on 9 September, highlighted the consensus among member nations to “work together to promote international cooperation and further discussions on international governance for AI”. The declaration also committed to a “pro-innovation regulatory and governance approach that maximizes the benefits and takes into account the risks associated with the use of AI”.

On 28 August, Microsoft president Brad Smith told Mint in an interview that the safety of operations and global standardization could be key tenets of establishing a common global framework for the development of AI.

Clegg, in this regard, added that Meta is “taking steps” to address AI bias—factoring in more data and algorithms designed to catch harmful responses in order to improve its models.

Updated: 28 Sep 2023, 12:01 AM IST