Laying the foundation for a future-ready Digital India


The Ministry of Electronics and IT is conducting consultations on the proposed “Digital India Bill” to build consensus on a new law that will replace India’s 23-year-old Information Technology (IT) Act. The Bill aims to upgrade the existing legal regime to deal with emerging challenges such as user harm, competition and misinformation in the digital sphere. Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar has said that the first draft of the Bill should be out by the end of June. It is a much-awaited piece of legislation that is likely to redefine the framework for regulating technology not only in India but globally. Among the proposed changes is the classification of digital intermediaries into distinct categories such as e-commerce players, social media companies and search engines, so that different responsibilities and liabilities can be placed on each type.

Why is the present regime inadequate?

The current IT Act defines an “intermediary” broadly to include any entity that sits between a user and the Internet, and the IT Rules sub-classify intermediaries into three main categories: “social media intermediaries” (SMIs), “significant social media intermediaries” (SSMIs) and the recently notified “online gaming intermediaries”. SMIs are platforms that facilitate communication and information sharing among users, and SMIs with a very large user base (above a specified threshold) are designated as SSMIs. However, the definition of an SMI is so broad that it can cover services as varied as video communication tools, matrimonial websites, email and even online comment sections on websites. The Rules also impose strict obligations on most intermediaries, such as a 72-hour deadline for responding to law enforcement queries and for resolving ‘takedown’ requests. Unfortunately, this means that ISPs, websites, e-commerce platforms and cloud services are all treated alike.

Consider platforms such as Microsoft Teams or a customer management solution such as Zoho. Being licence-based services, these intermediaries have a closed user base and carry a lower risk of harm from content going viral. Treating them like traditional social media platforms not only increases their cost of doing business but also exposes them to greater liability, without meaningfully mitigating the risks that the Internet presents.

Only a few countries have so far taken a clear position on the proportionate regulation of intermediaries, so there are few precedents to draw upon. The European Union’s Digital Services Act is probably one of the most developed frameworks worth considering. It provides exemptions for some services and creates three tiers of intermediaries with increasing legal obligations: hosting services, online platforms and “very large online platforms”. Australia has created an eight-tier classification system, with separate industry-drafted codes governing categories such as social media platforms and search engines. Intermediaries there are required to conduct risk assessments based on the likelihood of exposure to harmful content such as child sexual abuse material (CSAM) or terrorist material.

Focus Areas for India

While a broad, product-specific classification could improve accountability and safety online, such an approach may not be future-proof. As technology evolves, the categories we define today may not remain relevant. What is needed, therefore, is a classification framework that creates a few defined categories, requires intermediaries to undertake risk assessments, and uses that information to place them in the relevant category. As far as possible, the goal should be to keep obligations on intermediaries to the minimum necessary and to ensure that regulatory demands are commensurate with their capacity and size.

One way of doing this would be to exempt micro and small enterprises, as well as caching and conduit services (the ‘pipes’ of the Internet), from most core obligations, and to clearly distinguish communication services (where end users interact with each other) from other forms of intermediaries (such as search engines and online marketplaces). Given the reduced risks they pose, the obligations imposed on intermediaries that are not communication services should be lighter, though they could still be required to appoint a grievance officer, cooperate with law enforcement, label advertising and take down problematic content within reasonable timeframes.

Intermediaries providing communication services could be asked to conduct a risk assessment based on their number of active users, the risk of harm and the likelihood of harmful content going viral. The largest communication services (platforms such as Twitter) could be required to comply with additional obligations, such as appointing India-based officers and setting up in-house grievance appellate mechanisms with independent external stakeholders to enhance trust in the complaints process. Alternative approaches to curbing virality, such as circuit breakers that slow down the spread of content, could also be considered.

For the proposed approach to be effective, the metrics and appropriate thresholds for risk assessment will have to be defined and reviewed periodically in consultation with industry. Overall, such a framework can help establish accountability and safety online while reducing the legal exposure of a large number of intermediaries. In doing so, it can create a regulatory environment that advances the government’s policy goal of a safe and trusted Internet ecosystem while also allowing businesses to thrive.

Rohit Kumar is a founding partner, and Kaushik Thanugonda is a senior analyst, at The Quantum Hub (TQH), a public policy firm.