After years in the making, the EU AI Act has received its final approvals and will shortly be enacted into law. Kubrick Head of Market Intelligence Simon Duncan has been following the development of the regulation closely. He explores what the Act coming into effect really means and reflects on its journey to reach this historic moment.

Last week (21/05/2024), the Council of the EU presented the world with a press release titled: ‘Artificial Intelligence (AI) act: Council gives final green light to the first worldwide rules on AI.’

What this means, in actuality, is that the presidents of the EU Parliament and Council have agreed to sign the Act, paving the way for its publication in the EU’s Official Journal ‘in the coming days’; it will enter into force 20 days after publication and, in general, will apply 24 months after that. What the press release represents, however, is a statement to the world that the EU is taking control as the leading authority on AI regulation.

The release puts emphasis on the Act’s aim to promote ‘safe and trustworthy’ AI systems that ‘ensure respect of fundamental rights of EU citizens’, while still driving investment and innovation. The crux of their commitment to AI safety is the approach of categorising AI systems by risk level and applying stricter regulation to higher-risk systems. The press release acknowledges the complexity and nuance of this approach, noting that there are ‘some exceptions for specific provisions’ to the general two-year application timeline. Specifically, Article 113 of the Act sets out detailed timelines for the application of individual chapters, including the provisions tied to the all-important Article 6, which defines risk classification, particularly for high-risk systems; these provisions are given 36 months to apply after the Act comes into force.

Another outlier to the enforcement timeline is the AI Liability Directive, which has still to be agreed. The Directive is intended to address the issues of causality, fault, and harm linked to AI systems. The pending content and sign-off of this Directive is a reminder of the journey this important Act has been on since it was first proposed in April 2021.

The issue of liability for publicly accessible AI systems was certainly one of the more controversial aspects of the Act, and debate over how to regulate it loomed large throughout the negotiations. It raises the question of whether liability will ultimately be placed on the provider of the AI system or on another link in the causal chain, notwithstanding the transparency and consumer protection principles so deeply intrinsic to EU legislation.

Although the legislative picture is not yet complete, what we do have is very much what the title of the release claims: ‘… the first worldwide rules on AI.’ This is not a Presidential Directive, nor is it a set of guidelines like the Bletchley Declaration; it is full-throated legislation that:

  • Mandates the identification and categorisation of AI systems and models by risk;
  • Respects the fundamental rights of the bloc’s citizens;
  • Contains onerous penalties to support enforcement;
  • Is not so restrictive as to suppress innovation; and
  • Outlines a governance architecture more extensive than that applied to GDPR.

This governance structure will establish an AI Office, a scientific panel of independent experts, an AI Board, and an advisory forum for stakeholders. It also implicitly recognises the EU’s ambition to build, within its borders, an infrastructure that will permit the confident exchange of data and model outputs between the public and private sectors. This will be a key aspect of the growth and success of this political consensus, allowing not just the emergence of EU-based AI commercial leaders but also a catch-up with the US on infrastructure and scale.

At Kubrick, we have followed and written about the EU AI Act as it has progressed through the various stages of the legislative process. This journey still has further to travel: we shall continue to provide insight and opinion on the regulation of AI systems, and, now that the Act will be in force, we shall also start writing about the interdependence between regulations around ESG, AI, AI governance, and ethics that will, we believe, provide the landscape for that 3rd Horizon of regulatory direction.

Kubrick is a technology consultancy that accelerates delivery and builds amazing teams, specialising in data, AI, and cloud solutions. With a key focus on realising lasting value from tech, its pioneering Next-Gen Consulting model challenges the status quo by embedding consultants directly into clients’ teams and empowering clients to retain their consultants as FTE employees.

Kubrick’s Data & AI Governance practice takes a specialist approach to enabling organisations to achieve regulatory compliance and to embed safe, robust, and trustworthy data and AI practices that drive innovation. To learn more about Kubrick’s capabilities in data and AI governance and ethics, get in touch: speaktous@kubrickgroup.com