The EU’s approach to AI rules is too complicated

America innovates, Europe regulates. Just as the world is beginning to come to grips with OpenAI, whose boss Sam Altman has leapfrogged the competition and advocated for global regulations, the European Union has responded with the Artificial Intelligence Act, its own push for AI superpower status. The aim is to be first to set minimum standards, and a draft has been approved by the European Parliament. Still, we’re a long way from the deceptively simple world of Isaac Asimov’s robot stories, in which sentient machines ran on powerful “positronic brains” governed by just three rules: don’t harm humans, obey humans, and protect their own existence. AI is too important not to be regulated properly, but the EU should work to reduce the Act’s complexity while promoting innovation.

The AI Act has some good ideas on transparency and trust: chatbots must declare whether they were trained on copyrighted material, deep-fakes must be labeled as such, and new obligations are placed on generative-AI models. A major effort will be needed to catalog datasets and take responsibility for how they are used.

It’s a good idea to lift the lid on opaque machines that process vast amounts of human-produced content. As the legislation’s co-rapporteur Dragos Tudorache told me, it aims to foster “trust and confidence” in a technology that has attracted huge amounts of investment and enthusiasm, yet has also produced some glaring failures. Self-regulation is not an option, but neither is “running into the woods” and doing nothing out of fear that AI might one day wipe out humanity.

However, the Act is too complex, and runs the paradoxical risk of setting the bar too high to foster innovation yet not high enough to avoid unintended consequences. Its main approach is to classify AI applications into buckets of risk, from low (spam filters, video games) to high (workplace recruitment) to prohibited (real-time facial recognition).

This makes sense from a product-safety perspective, with providers of AI systems expected to meet rules and requirements before rolling out their products. Still, the range of high-risk applications is broad, and the downstream chain of responsibility in an application like ChatGPT shows how the technology can blur the product-safety framework. When a lawyer relies on AI to draft a brief that turns out to be filled with made-up case law, are they using or abusing the product?

It’s also unclear how exactly the Act will work alongside other data-privacy laws such as the EU’s GDPR, which Italy used as justification for a temporary block on ChatGPT. And while greater transparency about copyright-protected training data makes sense, it could conflict with copyright exceptions previously granted for data mining, back when AI was viewed less nervously by creative industries.

This means there is a real possibility that the actual outcome of the AI Act is increased EU dependence on large US tech firms, from Microsoft to Nvidia. European companies are already making too little effort to harness the potential productivity benefits of AI, and large incumbent providers will likely be best placed to handle the combination of upfront compliance costs, estimated at upwards of $3 billion, and fines for non-compliance of up to 7% of global revenue.

Adobe has already offered to legally compensate businesses if they are sued for copyright infringement over images created by its Firefly tool, according to Fast Company. Few companies can afford to avoid the EU altogether, though Alphabet Inc. has yet to make its chatbot Bard available there.

The EU has a lot to do as final negotiations begin on the AI Act, which may not enter into force until 2026. Simplifying compliance should be a priority, especially for small businesses, and Bloomberg Intelligence analyst Tamlin Bason sees a possible “middle ground” on sanctions. The EU should also take initiatives to promote new technological ideas, such as fostering an ecosystem that connects universities, startups and investors. And there should be greater global coordination at a time when concerns about AI are widespread: the G7’s new Hiroshima AI Process looks like a useful forum for discussing issues such as intellectual property rights.

Perhaps the one piece of good news is that AI is not about to destroy the jobs of human compliance officers and lawyers. Technology consultant Barry Scannell says companies will consider hiring AI officers and drafting AI impact assessments, much as happened after GDPR. These new robots will require more human brain power to rein in, a twist you won’t find in an Isaac Asimov story. ©Bloomberg

Lionel Laurent is a Bloomberg Opinion columnist covering digital currencies, the European Union and France.


Updated: June 16, 2023, 12:22 AM IST