Europe boasts ‘historic’ AI landmark

  • The European Commission’s AI Act is close to becoming law
  • It takes into account the emergence and use of ‘general purpose’ AI tools as well as more specific systems
  • It will be some time before the act comes into full force, which means it could be unfit for purpose by the time it is applicable 
  • But at least one experienced analyst says the act is ‘fairly realistic’ in terms of what can be achieved and is a more practical approach than the one taken by China

The European Commission (EC) and European Parliament have reached a “political agreement” on the Artificial Intelligence Act (AI Act), which Ursula von der Leyen, president of the European Commission, has described as “the first-ever comprehensive legal framework on artificial intelligence worldwide”.  

Following some marathon negotiation sessions, including one that lasted for 22 hours, there is now unanimity on the key elements that will form the basis of an “historic” new act to regulate artificial intelligence (AI) across the European Union’s 27 member states. As a result, this is something that all EU-based companies and organisations, including the many that are already making significant headway with AI in the telecom sector, will need to factor into their plans and operations. (To keep up to speed with AI developments in the telecom sector, check out our dedicated news channel.) 

The act is one of the first truly comprehensive attempts by a governmental body to codify how member states will govern AI, social media and search engines. The purpose is to control and superintend AI to ensure that it is safe and beneficial to humanity and human endeavour rather than unsafe, detrimental and dystopian. 

For that to happen, the trick will be to anticipate likely future developments in AI globally and to formulate regulatory responses to them quickly. At best, it will be an extremely difficult and expensive exercise in perpetual catch-up; at worst, it will simply be impossible. 

Indeed, the path to an agreed legal framework on which to regulate AI in the EU has already been long and tortuous. It started taking shape back in 2018, but the first draft of the legislation did not emerge until 2021 and was written well before the emergence of the now globally available “general-purpose” AI systems that are able to perform generally applicable functions, such as image/speech recognition, audio/video generation, pattern detection, question answering, translation and so on. General-purpose models are the foundation of today’s generative AI, exemplified by the likes of OpenAI’s ChatGPT. 

Those general-purpose AI systems are now accounted for in the act by “dedicated rules… that will ensure transparency along the value chain,” according to the EC. “For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practices developed by industry, the scientific community, civil society and other stakeholders, together with the commission,” it added. 

In general, the act deals with AI systems in different ways depending on what they do and how they are used. The EC notes that most AI systems, such as recommendation engines and digital message system spam filters, pose minimal risk and will not be subject to obligations, but AI systems identified as high risk “will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems,” noted the EC. 

Examples of high-risk AI systems include those used to manage critical infrastructures “for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk,” according to the EC. 

There will be an outright ban on AI systems “considered a clear threat to the fundamental rights of people”, such as those that “manipulate human behaviour to circumvent users’ free will,” and systems that allow ‘social scoring’ by governments or companies and certain applications of predictive policing. 

In addition, the EC has tried in the act to deal with transparency issues, especially those related to ‘deepfakes’, which, along with other AI-generated content, will need to be clearly identified as such, while users interacting with chatbots will need to be made aware that they are doing so. Here, the EC is clearly attempting to address the impact of AI systems being developed by the big tech community.  

The act’s rules will be applied at a national level by the relevant local authorities of the EU member states, while “the creation of a new European AI Office within the European Commission will ensure coordination at European level,” the EC noted in its announcement.

Companies not complying with the rules of the act will be financially penalised, with fines of €35m or 7% of global annual turnover (whichever is higher) for violations involving banned AI applications, €15m or 3% of turnover for violations of other obligations, and €7.5m or 1.5% of turnover for supplying incorrect information. Startups and small businesses are expected to be subject to lower financial penalties if they infringe the rules.  
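To illustrate how the “whichever is higher” rule works in practice, here is a minimal sketch in Python (the function name and example turnover figure are hypothetical; the tier amounts are the ones cited above):

```python
def ai_act_fine(tier_fixed_eur: float, tier_pct: float, global_turnover_eur: float) -> float:
    """Illustrative 'whichever is higher' penalty calculation.

    tier_fixed_eur: fixed amount for the violation tier (e.g. 35_000_000)
    tier_pct: share of global annual turnover for that tier (e.g. 0.07)
    """
    return max(tier_fixed_eur, tier_pct * global_turnover_eur)

# Banned-application tier: EUR 35m or 7% of turnover, whichever is higher.
# For a company with EUR 1bn in global turnover, 7% is EUR 70m, so the
# turnover-based figure applies rather than the fixed EUR 35m.
print(ai_act_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a smaller company, say one with €100m in turnover, 7% is only €7m, so the fixed €35m figure would apply instead — which is why startups and small businesses are expected to face adjusted, lower penalty levels.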

The political agreement is now subject to formal approval by the member states (which could take some time) and by the European Parliament and the EC. The act will come into force 20 days after publication in the EU’s Official Journal and would then become applicable two years after its entry into force, except for some specific provisions, which means it won’t be enforceable until late 2025 at the very earliest, by which time entire sections of it could well be obsolete. 

But despite that, none can deny that the EC is in the vanguard in attempting to rein in AI, and its leaders are more than happy to sing their own praises. 

“Artificial intelligence is already changing our everyday lives,” noted von der Leyen in the EC’s announcement. “And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome [the] political agreement by the European Parliament and the council on the Artificial Intelligence Act. The EU’s AI Act is the first-ever comprehensive legal framework on artificial intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, [the] agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and takeup of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.” 

And despite the obvious challenges the EC will face in applying the act in such a fast-moving R&D environment, at least one experienced technology industry analyst, Radio Free Mobile’s Richard Windsor, described the act as “far more realistic and business friendly than I had previously feared.”

He noted in his latest blog that the rules being applied to general-purpose AI systems are “fairly realistic in terms of what can and cannot be achieved and should also not be too onerous to comply with, meaning that small companies are not meaningfully disadvantaged.” Importantly, he added that, as the rules stand, generative AI models that are free and open source are exempt. As such, he believes the “real winner” from the AI Act is Meta Platforms, because its foundation models are all available in the open-source community. 

Windsor also noted that the EC has taken a more practical approach to AI regulation than the Chinese authorities, which have taken a “harder line” that will likely impede AI developments in China.  

This isn’t the end of the matter, of course. The big tech outfits are lobbying hard for self-regulation, some are even threatening to mount legal challenges to the proposed act and, as noted, it will be some time before the EC rules come into play. However, there’s no doubt this is a landmark moment in the development of the AI sector.

 - Martyn Warwick, Editor in Chief, TelecomTV
