The EU’s AI Act will stifle innovation and won’t become a global standard

February 5, 2024 – On February 2, the European Union’s ambassadors greenlit the Artificial Intelligence Act (AI Act). Next week, the Internal Market and Civil Liberties committees will decide its fate, and the European Parliament is expected to cast its vote in plenary session in either March or April.

The European Commission responded to widespread criticism that the AI Act could stifle innovation in the EU by presenting an AI Innovation package for startups and SMEs. It includes EU investment in supercomputers, commitments for the Horizon Europe and Digital Europe programs to invest up to €4 billion through 2027, and the establishment of a new coordination body, the AI Office, within the European Commission.

Egle Markeviciute, Head of Digital and Innovation Policies at the Consumer Choice Center, responds:

“Innovation requires not only good science, cooperation between business and academia, talent, regulatory predictability, and access to finance, but also one of its most motivating and special elements: room and tolerance for experimentation and risk. The AI Act is likely to stifle the private sector’s ability to innovate by shifting its focus to extensive compliance lists and allowing only ‘controlled innovation’ via regulatory sandboxes, which permit experimentation in a vacuum for up to six months,” said Markeviciute.

“Controlled innovation produces controlled results, or a lack thereof. It seems that instead of leaving regulatory space for innovation, the EU is once again focused on compensating for this loss in monetary form. There will never be enough money to compensate for the freedom to act and the freedom to innovate,” she added.

“The European Union’s AI Act will be considered a success only if it becomes a global standard. So far, it does not seem the world is planning on following in the EU’s footsteps.”

Yaël Ossowski, deputy director of the Consumer Choice Center, adds further context:

“Despite optimistic belief in the ‘Brussels effect’, the AI Act has not yet resonated with the world. South Korea will focus on the G7 Hiroshima process instead of the AI Act. Singapore, the Philippines, and the United Kingdom have openly expressed concern that prescriptive AI regulation at this stage could stifle innovation. US President Biden issued an executive order on the use of AI back in October 2023, yet the US approach appears less restrictive and relies on federal agency rules,” said Ossowski.

“Even China, a champion of state involvement in both individual and business affairs, has yet to finalize its AI Law in 2024 and is unlikely to be strict about AI companies’ compliance, given its ambitions in the global AI race. In this context, we have to acknowledge that the EU will have to adhere to already existing frameworks for AI regulation, not the other way around,” concluded Ossowski.

The CCC represents consumers in over 100 countries across the globe. We closely monitor regulatory trends in Ottawa, Washington, Brussels, Geneva, Lima, Brasilia, and other hotspots of regulation and inform and activate consumers to fight for #ConsumerChoice. Learn more at