Róbert Barcík

The EU AI Act Was Approved: What Every Data Professional Must Know

Updated: May 29

As a freelance trainer and consultant in data science and artificial intelligence, I have been closely following the development of the new EU-wide regulation, the AI Act. To the surprise of many, the European Parliament adopted it a few weeks ago (13.3.2024, link & link to adopted text), ahead of the anticipated mid-2024 timeline. The regulation is rather complex and, in several places, vague.


Before we delve deeper, a word to my corporate clients: if your organisation runs even the simplest machine learning model (like a linear model for risk calculation), this regulation likely concerns you—and its impact is imminent.


Below, I've distilled the most critical points into a digestible overview, ensuring you're quickly up to speed with the latest developments (TL;DR of sorts).




Is it Artificial Intelligence (AI)?

When you look at any “smart” system or algorithm around you, “Is it AI?” is the first question you should ask. From my perspective, the definition of what constitutes AI is still a bit blurry. I believe the key lies in the condition of “adaptiveness”, which can be interpreted in several ways. I expect any system that uses self-learning to fulfil this definition, and possibly also simpler rule-based (expert) systems.


Title I, Article 3:

‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
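As a thought experiment (not legal advice), here is a minimal Python sketch of how you might screen a system against the elements of this definition; the field names and the decision rule are my own simplification, not wording from the Act.

from dataclasses import dataclass

@dataclass
class AIDefinitionScreen:
    """Illustrative checklist mirroring the elements of Article 3 (my own field names)."""
    machine_based: bool               # runs as software/hardware, not a purely manual process
    some_autonomy: bool               # operates with varying levels of autonomy
    adaptive_after_deployment: bool   # may adapt after deployment (the blurry part)
    infers_outputs_from_inputs: bool  # generates predictions, content, recommendations or decisions
    influences_environment: bool      # outputs can influence physical or virtual environments

    def likely_in_scope(self) -> bool:
        # Adaptiveness is phrased as "may exhibit", so I deliberately do not require it here;
        # the remaining elements are treated as cumulative.
        return (self.machine_based and self.some_autonomy
                and self.infers_outputs_from_inputs and self.influences_environment)

# Example: a linear credit-risk model that is retrained monthly
credit_model = AIDefinitionScreen(True, True, True, True, True)
print(credit_model.likely_in_scope())  # True -> worth treating as an "AI system"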


Note: In the upcoming articles, I plan to closely elaborate on the topic of which systems might fulfil this definition.


Are you a provider, or a deployer?

The primary responsibilities are placed on the providers (developers) of high-risk AI systems. 'Deployers' (users) are natural or legal persons that use an AI system in a professional context, excluding the ultimate end-users. Deployers of high-risk AI systems bear certain responsibilities as well, albeit fewer than providers.
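To make the distinction concrete, here is a minimal sketch of an inventory entry recording which role your organisation plays for each system; the record structure and field names are my own, purely for illustration.

from dataclasses import dataclass
from typing import Literal, Optional

Role = Literal["provider", "deployer", "both"]

@dataclass
class AISystemRecord:
    name: str
    role: Role                # do we develop the system, use it professionally, or both?
    vendor: Optional[str]     # who is the provider if we only deploy it?
    outputs_used_in_eu: bool  # are the system's outputs used within the EU?

inventory = [
    AISystemRecord("credit-risk-scoring", role="provider", vendor=None, outputs_used_in_eu=True),
    AISystemRecord("cv-screening-saas", role="deployer", vendor="ExampleVendor", outputs_used_in_eu=True),
]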


Any AI that “touches” the EU is regulated

This regulation governs the placing on the market and putting into service of AI systems within the EU, regardless of where the provider is established. It also applies to providers in third countries whenever their system's outputs are used within the EU.


Classification: Banned, high-risk, limited risk

The AI Act classifies AI systems by their level of risk:

  • Banned: AI systems posing unacceptable risks (e.g., social scoring, behavior manipulation).

  • High-risk: Subject to heavy regulation.

  • Limited risk: Subject to basic transparency requirements.


Long story short: imagine traffic lights. Screen and audit your systems for any prohibited practices (red). If you find one, remove it immediately. If you discover a high-risk AI system (orange), expect a lot of obligations, both legal and technical. For casual AI applications (green), not much changes.
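If you want to run this triage systematically across your portfolio, a rough sketch could look like the following; the bucket names and the two boolean inputs are my own simplification of the Act's categories.

from enum import Enum

class RiskBucket(Enum):
    RED = "prohibited - stop immediately"
    ORANGE = "high-risk - heavy legal and technical obligations"
    GREEN = "limited/minimal risk - mostly transparency duties"

def triage(uses_prohibited_practice: bool, is_high_risk: bool) -> RiskBucket:
    # Order matters: prohibited practices trump everything else.
    if uses_prohibited_practice:
        return RiskBucket.RED
    if is_high_risk:
        return RiskBucket.ORANGE
    return RiskBucket.GREEN

print(triage(uses_prohibited_practice=False, is_high_risk=True))  # RiskBucket.ORANGE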



Red light: Banned AI Practices 

Be mindful that these are framed as prohibited practices, not solely as dedicated AI products or use cases that are prohibited. In my view, you should audit your AI systems for the possibility of banned practices within the first two months after the Act is finalised. The catch is that prohibited practices are primarily aimed at harmful intended purposes (Title II, Article 5: “...with the objective…”), but they also cover unintended harm (Title II, Article 5: “...or the effect of…”).

Hence my recommendation to audit whether any unintended harm that would fall under the prohibited practices is occurring. WARNING: Prohibited practices enter into force 6 months after the Act's finalisation, unlike the majority of the regulation, which applies after 24 months.


Banned practices, picking from Title II (paraphrased text):

  • Manipulative AI: Systems that deceptively alter behaviour, harming a person's ability to make informed decisions.

  • Exploiting Vulnerabilities: AI that targets individuals based on age, disability, or economic status for harmful purposes.

  • Intrusive Biometric Tracking: Deducing sensitive personal information (ethnicity, beliefs, sexual orientation, etc.) using biometrics, with limited exceptions for approved law enforcement use.

  • Social Scoring Systems: AI that judges individuals based on social behaviour, leading to negative consequences.

  • Profiling for Crime Prediction: Using AI solely to predict criminal behaviour without supporting evidence.

  • Indiscriminate Facial Recognition: Building facial recognition databases from untargeted internet or CCTV image collection.

  • Emotion Tracking Without Consent: Deducing emotional states in workplaces or schools without health/safety reasons.

  • Unrestricted 'Real-Time' Biometric Identification (RBI): The use of real-time RBI by law enforcement in publicly accessible spaces, with narrow exceptions.


Orange light: Is it high-risk AI?

Title III discusses the criteria and stipulations for AI systems deemed high-risk.


Classification rules (paraphrased text; a code sketch of this logic follows the list):

  • High-risk AI systems are identified as those that either function as a safety component of a product regulated by specific EU legislation listed in Annex II and are mandated to undergo an external conformity evaluation as per those Annex II regulations (e.g. Machinery, Medical devices, Toys, Lifts, Pressure equipment, Recreational watercraft, Civil aviation security); or

  • Are associated with the use cases outlined in Annex III (see below), with certain exceptions:

  • If the AI system carries out a specific procedural task;

  • Enhances the outcome of an activity previously completed by humans;

  • Detects decision-making patterns or deviations from prior decision-making patterns, and is not intended to replace or influence a previously completed human assessment without proper human review;

  • Executes a preliminary task for an evaluation significant to the purposes outlined in Annex III.

  • Providers of AI systems falling under Annex III who believe their system is not high-risk are obligated to document that assessment before placing it on the market or putting it into service.

  • AI systems that profile individuals, defined as the automated processing of personal data to evaluate various aspects of an individual's life such as work performance, financial status, health, personal interests, reliability, behaviour, locations, or movements, are invariably classified as high-risk.
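Reading these rules as simple decision logic helped me digest them. Below is a rough sketch of that reading; it is my own simplification of Title III and Annexes II/III (the exceptions are collapsed into a single flag), not a substitute for the legal text.

def is_high_risk(
    annex_ii_safety_component: bool,     # safety component of a product under Annex II legislation
    external_conformity_required: bool,  # that legislation mandates a third-party conformity assessment
    annex_iii_use_case: bool,            # falls under an Annex III use case
    narrow_exception_applies: bool,      # one of the exceptions above (must be documented beforehand)
    profiles_natural_persons: bool,      # automated profiling of individuals
) -> bool:
    # Profiling of natural persons under Annex III is always high-risk.
    if annex_iii_use_case and profiles_natural_persons:
        return True
    # Route 1: Annex II safety component requiring external conformity evaluation.
    if annex_ii_safety_component and external_conformity_required:
        return True
    # Route 2: Annex III use case, unless a documented narrow exception applies.
    if annex_iii_use_case and not narrow_exception_applies:
        return True
    return False

# Example: an AI tool that sets pricing for life insurance (Annex III), no exception claimed
print(is_high_risk(False, False, True, False, False))  # True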


Orange light: Annex III 

…outlines specific use cases for AI systems that are considered high-risk (but are not prohibited). These include (paraphrased text):

  • Creditworthiness Assessment: AI systems used to evaluate creditworthiness (excluding financial fraud detection).

  • Insurance Risk Assessment and Pricing: AI tools in calculating risks and setting pricing for health and life insurance policies.

  • Recruitment and Performance Monitoring: AI used in hiring (job ads, resume screening), task allocation, and employee performance evaluations.

  • Remote Biometric Identification: Systems that go beyond simple identity verification, posing potential privacy concerns.

  • Identifying Sensitive Attributes: Biometric systems inferring protected traits like ethnicity, beliefs, etc.

  • Emotion Detection Systems: AI used to analyze emotions, raising potential ethical issues.

  • Access to Essential Services: AI determining eligibility for public benefits, impacting individuals significantly.

  • Critical Infrastructure Management: AI in vital infrastructure like power, water, etc.

  • Education and Training: AI in student admissions, assessments, and behavior monitoring.

  • Law Enforcement and Migration: AI for crime prediction, polygraphs, asylum applications.

  • Administration of Justice and Democratic Processes: AI in legal decision-making and potentially influencing elections.


Orange light: If high-risk, then…

For providers of high-risk AI systems, Articles 8-25 detail the necessary requirements (paraphrased; a small engineering illustration follows the list):

  • Develop a comprehensive risk management system: Plan for potential harms throughout the entire AI system lifecycle.

  • Ensure high-quality data: Use relevant, accurate, and representative data sets for training and testing.

  • Maintain detailed documentation: Create records to prove compliance for potential audits.

  • Enable system monitoring: Log key events to identify risks and changes.

  • Provide clear instructions for users: Help those deploying your system understand its proper use and limitations.

  • Prioritise human oversight: Ensure humans remain in control and can intervene when needed.

  • Guarantee accuracy, robustness, and security: Meet high standards to minimise errors and prevent cyberattacks.

  • Implement a quality control system: Have processes in place to ensure ongoing compliance.
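Several of these obligations translate directly into engineering work. As one small illustration of the “enable system monitoring” point, here is a minimal sketch of structured event logging around a prediction call; the field names are my own, not prescribed by the Act.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system_audit")

def log_prediction_event(model_version: str, input_ref: str, output: dict, operator: str) -> None:
    """Record key events so that risks and behavioural changes can be traced later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "input_ref": input_ref,          # reference to the input record, not the raw personal data
        "output": output,                # the prediction/decision itself
        "operator": operator,            # who or what triggered the call (human oversight trail)
    }
    logger.info(json.dumps(event))

log_prediction_event("credit-risk-v1.3", "application-00042", {"score": 0.87}, "loan-officer-ui")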


I would like to give you a tangible sense of how much these obligations might cost for the AI use cases that fall into the high-risk category. Since we are this early in the life of the regulation, my best benchmark is a training course that I am preparing for practitioners. I currently offer the course as 3-5 full days, just to convey the necessary practices to participants. In my view, the actual implementation of these practices can easily be estimated at ten times that effort per use case (hence 30-50 person-days).
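In plain numbers, and purely as a back-of-the-envelope sketch based on my own estimate above (the portfolio size is hypothetical):

# Back-of-the-envelope estimate of implementation effort for high-risk obligations.
training_days_low, training_days_high = 3, 5   # course length just to convey the practices
effort_multiplier = 10                         # my estimate: implementation is roughly ten-fold

per_use_case = (training_days_low * effort_multiplier, training_days_high * effort_multiplier)  # (30, 50)

high_risk_use_cases = 4                        # hypothetical portfolio size
total = (per_use_case[0] * high_risk_use_cases, per_use_case[1] * high_risk_use_cases)
print(f"Roughly {total[0]}-{total[1]} person-days for {high_risk_use_cases} high-risk use cases")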


Timeline and next steps

After entry into force, the AI Act will apply in stages (a small date calculation follows this list):

  • after 6 months for prohibited AI practices;

  • after 24 months for high-risk AI systems;

  • (further detailed timelines exist)
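To make these deadlines concrete, here is a tiny sketch that derives the key dates from the entry-into-force date; the date used below is a placeholder, since the real date depends on publication in the Official Journal.

from datetime import date

def add_months(d: date, months: int) -> date:
    # Minimal month arithmetic using only the standard library.
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    return date(year, month, min(d.day, 28))  # clamp the day to keep the example simple

entry_into_force = date(2024, 8, 1)  # placeholder date
print("Prohibited practices apply from:", add_months(entry_into_force, 6))
print("High-risk obligations apply from:", add_months(entry_into_force, 24))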


Let's be practical now. It seems like plenty of time, doesn't it? That is what we all thought when the GDPR was coming. Let's take that regulation as an inspiration for what will happen with AI Act compliance. Below, you can see Google Trends data for GDPR. As you can see, most companies waited until the last moment to handle compliance, creating a frenzy that many of us vividly remember: at the beginning of 2018, all of a sudden “everyone” wanted to become GDPR compliant at once. Unfortunately, compliance can take longer than a few months, external specialists may already be booked out, and so on. My honest recommendation is: start preparing now and avoid the frenzy.



Actionable next steps for teams and companies

  • Focus on “prohibited AI practices”. Six months is a short time (e.g. decommissioning a use case that is in production might take longer than that). Run an informative session with your data/AI community on this topic specifically.

  • Map systems where you are a deployer and where you are a provider.

  • Keep informal documentation of everything that happens with regard to this topic.

  • Follow your national regulator. As soon as they form governing bodies in any form, such as “sandboxes”, get in touch. 

  • Dedicate multiple people from among developers and auditors to get thoroughly informed on this topic.

  • Don't get defensive, as in “we will avoid the AI Act by not using AI”. That would mean missing out on a huge competitive advantage.


 

Need help with EU AI Act compliance?


If you're looking for support, I offer training, consulting, and audits. You may send your inquiries directly to my email: robert@barcik.training.


If you are an individual who would simply like to learn more about this regulation, I have published a Udemy course, which I am committed to keeping up to date as the regulation progresses.

