In an era where AI is reshaping the global landscape, the European Union stands at the forefront of AI legislation. Its draft Artificial Intelligence Act (often abbreviated as the EU AI Act) could set the benchmark for AI regulation worldwide, establishing precedents in a rapidly evolving domain.
Balancing Act: Safeguarding Values & Fueling Innovation
The EU AI Act strikes a delicate balance: it protects society, the economy, fundamental rights, and European values from potential AI risks while still encouraging innovation. The act acknowledges AI's transformative potential while implementing safeguards to manage risks and protect critical infrastructure. Its development has been a complex journey since the proposal in April 2021, involving multiple amendments that reflect the dynamic discussions surrounding AI technology and its societal implications.
The law was thrust back into the spotlight when its unofficial text was leaked on January 22. While the final law may still be edited slightly, the leaked text gives a very detailed picture of what to expect.
Risk Levels Under the EU AI Act
The EU AI Act encompasses a broad definition of AI, covering technologies from simple chatbots to sophisticated models like ChatGPT. It adopts a risk-based approach, regulating AI systems based on their associated risk levels. This differentiation ensures that only those systems posing specific risks fall under regulatory scrutiny.
The EU AI Act categorizes AI systems into different risk levels:
- Unacceptable risk: totally prohibited under the new law. Examples include real-time facial recognition in public spaces and social scoring.
- High risk: subject to regular conformity assessments to ensure compliance. High-risk AI systems face stringent requirements, including quality data set standards, detailed technical documentation, event recording, user transparency, and human oversight. Examples include credit scoring systems and automated insurance claims processing.
- Limited risk: subject to transparency obligations, such as informing users that they are interacting with an AI system or viewing AI-generated content. Examples include chatbots and deepfakes.
- Minimal risk: encouraged to abide by a voluntary code of conduct. Examples include AI-based spam filters or AI features in video games.
This tiered approach dictates the regulatory requirements for each category, from outright prohibition to mandatory conformity assessments and transparency obligations.
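To make the tiered approach concrete, the four categories above can be sketched as a simple lookup table. This is purely illustrative: the tier names come from the Act, but the one-line obligation summaries are paraphrases of the article, not legal text.

```python
# Illustrative sketch of the EU AI Act's four risk tiers and their
# headline obligations, as summarized in this article. Not legal advice.

RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "mandatory conformity assessments, documentation, human oversight",
    "limited": "transparency obligations toward users",
    "minimal": "voluntary code of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(obligations_for("high"))
# → mandatory conformity assessments, documentation, human oversight
```

In practice, classifying a real system into one of these tiers requires legal analysis of its intended use, not a dictionary lookup; the table only captures the structure of the regime.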
Global Implications: Beyond the EU Borders
AI regulation is still relatively new. Multiple nations have proposed their own legislation with varying requirements. The EU AI Act shows the European Union’s determination to take the lead on global AI legislation, with influence that will extend far outside the EU.
A big part of this influence is the financial penalties outlined in the Act. Currently, GDPR violations can result in fines of €20 million (approximately $22 million USD) or 4% of global annual turnover, whichever is greater. The EU AI Act allows authorities to impose fines of €30 million (approximately $33 million USD) or 6% of global annual turnover, a 50% increase over the GDPR. This further demonstrates the EU's commitment to not only passing binding legislation, but putting real force behind it.
Preparing for the EU AI Act: 3 Essential Steps
As AI continues to transform the global landscape, organizations must proactively engage with AI governance, adopting a collaborative, multi-stakeholder approach to address the complex challenges and opportunities presented by AI.
With the EU AI Act expected to pass in the not-too-distant future, how can your company prepare now to ensure compliance from day one?
First, you can review the self-assessment guide provided by the CNIL (Commission Nationale de l'Informatique et des Libertés, France's data protection authority). Though the guide was written with the GDPR in mind, its recommendations can help bring your systems into compliance with the EU AI Act.
Second, you can download the AI and data protection risk toolkit provided by the ICO (the UK's Information Commissioner's Office). The toolkit contains practical help to reduce the risks that AI poses to your organization or your clients.
Third, you can implement privacy compliance software such as our flagship program, 4Comply. This will help you keep on top of changing privacy regulations and stay in compliance. To schedule a free demo or to learn more, contact us today.