India can blend EU and US model of AI regulation

During his visit to India, Sam Altman once again called upon governments around the world to regulate artificial intelligence (AI). At first glance this proposition may seem contradictory. Corporations generally prefer a laissez-faire approach from governments, favoring minimal regulation that allows businesses maximum freedom. After all, more regulation often translates into more bureaucratic oversight, increased compliance costs, and potential constraints on innovation. Inventors may be discouraged from exploring uncharted territory for fear of non-compliance or cumbersome regulatory procedures.

On the other hand, bureaucrats may welcome such advocacy from tech industry influencers like Altman. As Max Weber elaborated in Economy and Society: A Framework for Interpretive Sociology, bureaucracy, at its core, seeks to create order, uniformity, and predictability. A regulated AI landscape offers precisely the kind of rational, orderly environment in which traditional bureaucratic systems are comfortable operating.

As we stand on the cusp of an AI revolution, emerging technology enterprises including AI are grappling with the ‘liability of novelty’. The hurdles these enterprises face are formidable – from securing the resources needed to stay afloat, to establishing their legitimacy among various audiences, including consumers, regulators and government bodies. Creating a unique identity in a complex industry structure presents another battleground.

In addition, the opacity of the market structure, the struggle of smaller players against large established companies for a voice in policy-making, the looming threat of competition from powerful incumbents, and resistance from public-policy advocates all combine to create a veritable storm.

However, Altman’s stance, which could easily be misunderstood as counter-intuitive, can instead be seen as a forward-looking perspective. The presence of regulation, although potentially slowing the pace of innovation and raising high barriers to entry, provides the industry with a veil of legitimacy and calms uncertainties.

The powerful and unpredictable societal impact of AI calls for greater regulatory involvement. Governance systems must rapidly adapt to the pace of AI development in order to minimize unintended consequences. AI has a growing ability to manipulate language and influence society by changing beliefs, from politics to relationships. Given the potential misuse of AI for misinformation and the risk of perpetuating bias, effective regulation is critical. However, designing such rules is a complex and time-consuming task, with five inherent challenges.

First, the inherently fast-paced and ever-evolving nature of AI often outpaces legislative processes. With the rapid advancement of technologies, especially generative AI systems, regulatory bodies often struggle to catch up. The ability of these systems to generate plausible but fictitious text and images introduces new layers of complexity that make regulation even more challenging.

Second, even defining AI remains an elusive goal. It covers a broad spectrum, from straightforward automation algorithms to complex machine-learning models. The lack of a universally accepted definition complicates efforts to establish clear regulatory parameters, hindering the development of precise and relevant regulations.

Third, the diverse nature of AI defies a one-size-fits-all regulatory approach, creating the risk of over-regulation in some areas and under-regulation in others. For example, generative language models require a different regulatory touch than AI systems that could endanger infrastructure safety or human life.

Fourth, AI needs international consensus for effective regulation. However, achieving global standardization is no small feat: differing regulatory philosophies and varying stances between countries pose a significant obstacle. The EU and the US have contrasting approaches to AI regulation. The EU is proactive, aiming to create a strong, sustainable framework for AI that protects individual rights and data privacy, minimizes risks and promotes AI adoption. The EU approach blends specific and general regulations, addresses cyber-security concerns, and encourages innovation through a regulatory sandbox. Meanwhile, the US favors a decentralized approach, in which responsibilities are assigned to specific federal agencies to avoid over-regulation and foster innovation.

Fifth, we cannot allow AI players to self-regulate or become pseudo-regulators. AI startups involved in the creation of deepfakes are a case in point. Self-regulation raises issues of accountability and transparency. Although it can foster flexibility and innovation, it is not bulletproof: studies show it can create oversight gaps and allow unethical practices or biases to go unchecked.

India should consider a balanced approach to AI regulation, learning from the EU and US models. Like the European Union, India needs a robust framework that protects individual rights, mitigates risks and ensures data security without hindering AI development. Adopting decentralized aspects of the US model could help avoid overregulation, allow specific sectors to address AI-related issues based on their specific needs, and also foster innovation.



Updated: 06 July 2023, 10:58 PM IST