
The EU AI Act: A Pioneering Regulation or a Milestone in Innovation?

Introduction

On April 21, 2021, the European Commission proposed a regulation known as the European Union Artificial Intelligence Act, or EU AI Act. On December 9, 2023, the Council presidency and negotiators from the European Parliament reached a provisional agreement on it. European Commissioner Thierry Breton called the agreement “historic” and added that the regulation was “much more than a rulebook.”

Before the agreed text can become EU law, both the European Parliament and the Council must formally adopt it, and it must be published in the Official Journal. Most of the obligations will then become binding after a two-year transition period, during which member states must implement the new rules domestically.

The rapid advancement of AI technology has brought with it a variety of risks and challenges, so the regulation adopts a risk-based approach. Writing fixed rules for a rapidly evolving AI ecosystem is inherently difficult; the regulation can therefore be seen as a first step that anticipates future requirements.

The highlights of the Act can be summarized as follows:

Terminology

The EU AI Act defines 44 terms, such as ‘artificial intelligence system’, ‘validation data’, and ‘serious incident’, to clarify concepts related to artificial intelligence systems. Taking into account criticisms raised since 2021, the regulation seeks to give a more precise definition of AI. However, some terms, including “AI regulatory sandbox,” remain undefined in the legislation.

Risk-based Approach

The regulation takes a risk-based strategy, distinguishing between three categories of AI use: (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. It outlaws AI systems judged to pose an unacceptable risk, imposes strict obligations on high-risk systems, and leaves low-risk systems (i.e., those not classified as high-risk) to voluntary regulation. This approach reflects the views of the majority of stakeholders, who favored risk-based rules. The regulation also requires that the list of high-risk AI systems, grouped under the eight areas listed below, be updated as needed. In this sense, the regulation does not cover every use of AI, but a much more limited set of uses.
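
To make the three-tier structure concrete, below is a minimal Python sketch of how a compliance team might model the Act’s risk categories internally. The enum, the use-case labels, and the default-to-high fallback are illustrative assumptions, not terminology or logic mandated by the regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the Act's three risk tiers."""
    UNACCEPTABLE = "unacceptable"    # banned outright (Title II)
    HIGH = "high"                    # strict obligations (Title III)
    LOW_OR_MINIMAL = "low"           # voluntary codes of conduct (Title IX)


# Hypothetical mapping from internal use-case labels to tiers, loosely
# based on the examples in this post; real classification requires a
# case-by-case legal assessment against the Act and its annexes.
USE_CASE_TIERS = {
    "social_scoring_by_public_authority": RiskTier.UNACCEPTABLE,
    "exam_scoring": RiskTier.HIGH,
    "spam_filtering": RiskTier.LOW_OR_MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the assumed tier; unknown use cases default to HIGH as a
    deliberately conservative choice for this sketch."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)


print(classify("exam_scoring"))  # RiskTier.HIGH
```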

Completely Prohibited AI Applications

Applications falling into the unacceptable-risk category face a complete ban, and violations may draw severe administrative fines of up to €30 million. Title II specifically prohibits the following AI practices:

    1. AI systems that manipulate human behavior in order to override the free will of end users.
    2. AI systems that exploit the vulnerabilities of a specific group of people, based on their age, disability, etc., in order to distort the behavior of a person belonging to that group in a way that causes harm.
    3. Systems used by public authorities to assess people’s trustworthiness based on their social behavior (social scoring).
    4. The use of ‘real-time’ remote biometric identification systems in public places, with exceptions for specific purposes such as the search for missing children by law enforcement agencies, the prevention of certain crimes, or the identification of perpetrators.

High-Risk AI Applications

Title III of the regulation sets out “high-risk AI systems” and states that the list may be expanded as technology develops. The list in Annex III covers a limited set of “high-risk AI systems,” namely:

AI systems that

    • perform real-time and remote biometric identification of individuals,
    • manage critical infrastructure (road traffic management; water, gas, heating, and electricity supply),
    • assess students in educational institutions and evaluate test takers,
    • are used in employment for recruitment, selection, promotion, and termination decisions, task allocation, and performance monitoring,
    • determine access to essential services, such as citizens’ eligibility for social benefits, assessment of creditworthiness, and dispatch of emergency services,
    • are used by law enforcement agencies for individual risk assessment, polygraphs, deep-fake detection, evidence reliability assessment, profiling, and crime analytics,
    • are applied in migration, asylum, and border control,
    • assist judicial authorities in interpreting and applying the law to cases.

Title III also includes a set of obligations for the use of “high-risk applications” in the EU. They can be summarized as follows (a minimal logging sketch follows the list):

    • Establishing a risk management system and identifying, monitoring, and analyzing foreseeable risks,
    • Ensuring that data sets (training, test, and validation data) meet quality criteria,
    • Preparing and updating technical documentation before the launch of the application,
    • Keeping log records to verify that the application is working as intended,
    • Providing transparency to end users (identities of application owners, purpose of the application, foreseeable risks, performance, etc.) and creating a user manual,
    • Designing the system so that it can be effectively overseen by real people while in use,
    • Ensuring cybersecurity,
    • Immediately reporting emerging risks to the member state in which the system is used,
    • Cooperating with the competent authorities.
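
Several of these obligations, log keeping in particular, translate directly into engineering work. Below is a minimal Python sketch of an append-only audit log for a high-risk system; the record fields, the JSON-lines file format, and all example values are assumptions made for illustration, not a schema prescribed by the Act.

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class InferenceLogRecord:
    """One audit-trail entry; fields are illustrative assumptions,
    not a schema prescribed by the Act."""
    timestamp: float
    model_version: str
    input_reference: str   # pointer to stored input, not raw personal data
    output_summary: str
    operator_id: str


def append_log(record: InferenceLogRecord, path: str = "audit_log.jsonl") -> None:
    """Append the record as one JSON line so the trail is easy to review;
    a real deployment would add integrity protection (e.g. hashing)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


append_log(InferenceLogRecord(
    timestamp=time.time(),
    model_version="credit-scorer-1.4.2",    # hypothetical model name
    input_reference="case-2023-00123",      # hypothetical pointer
    output_summary="score=0.72, decision=refer_to_human",
    operator_id="analyst-17",
))
```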

Low-Risk AI Applications

The regulation devotes more attention to “high-risk AI systems” than to other topics. In addition, Title IX provides that non-high-risk AI systems (low-risk applications) are encouraged to voluntarily apply the mandatory requirements set for high-risk AI systems. Companies offering “non-high-risk applications” can accordingly draw up and implement their own codes of conduct.

Transparency Obligation

Title IV of the regulation sets out transparency obligations for certain AI systems. AI systems that interact with humans, that classify social media content based on biometric and emotion recognition data, or that generate deepfakes must disclose this to their users. End users then have the information they need to decide whether to continue using the application. Administrative fines of up to €20 million may be imposed for failure to comply with this provision.
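
As a concrete illustration, a chatbot provider might implement the disclosure duty for systems that interact with humans along the following lines. The wording of the notice and the first-turn mechanism are assumptions; the Act requires disclosure but does not prescribe this exact form.

```python
def generate_answer(user_message: str) -> str:
    # Stand-in for a real model call; purely hypothetical for this sketch.
    return f"You said: {user_message}"


def chatbot_reply(user_message: str, first_turn: bool) -> str:
    """Prepend an AI disclosure on the first turn of a session, in the
    spirit of the Title IV transparency obligation."""
    disclosure = "Notice: you are chatting with an AI system, not a human.\n"
    answer = generate_answer(user_message)
    return disclosure + answer if first_turn else answer


print(chatbot_reply("Hello!", first_turn=True))
```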

Exclusive Database

Title VII requires the establishment of an EU database for “high-risk AI systems,” in which the data listed in Annex VIII must be registered. Failure to fulfill this obligation can result in administrative fines of up to €20 million. Some NGOs demand that all AI systems used in the public sector, regardless of risk level, be registered in this database; others, especially on social media, argue that the database will be of little practical use.
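
To give a feel for what registration could involve, the sketch below models a database entry as a small data structure. The field names are a simplified, assumed subset inspired by the kind of information Annex VIII asks for (provider identity, system name, intended purpose, market status); the actual annex is longer and more precise.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class HighRiskSystemEntry:
    """Assumed, simplified registration record for the EU database;
    the real Annex VIII field list is longer and more detailed."""
    provider_name: str
    system_trade_name: str
    intended_purpose: str
    member_states_available: list[str]
    status: str  # e.g. "on the market" or "withdrawn"


entry = HighRiskSystemEntry(
    provider_name="ExampleCorp BV",           # hypothetical provider
    system_trade_name="CreditScore Assist",   # hypothetical system
    intended_purpose="Creditworthiness assessment of loan applicants",
    member_states_available=["NL", "DE"],
    status="on the market",
)
print(json.dumps(asdict(entry), indent=2))
```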

Penalties

Title X sets out the penalties (a small sketch of how the fine ceilings scale follows the list). Accordingly, there are administrative fines of up to

    • €30 million (or 6% of global annual turnover) for non-compliance with the ban on prohibited AI practices or failure to fulfill the obligations set for high-risk AI systems,
    • €20 million (or 4% of global annual turnover) for non-compliance with other specified obligations (e.g., transparency and database registration obligations),
    • €10 million (or 2% of global annual turnover) for providing false, incomplete, or misleading information to notified bodies and national competent authorities.
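
To see how these ceilings scale with company size, here is a small Python sketch that computes the maximum possible fine per tier. It assumes the “whichever is higher” rule between the fixed amount and the turnover percentage, as in the proposal’s fining provisions.

```python
# Fine ceilings per tier: (fixed cap in euros, share of global annual turnover).
FINE_TIERS = {
    "prohibited_practice": (30_000_000, 0.06),
    "other_obligations": (20_000_000, 0.04),
    "misleading_information": (10_000_000, 0.02),
}


def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine for a company: the higher of the fixed cap
    and the turnover-based cap (assumed 'whichever is higher' rule)."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * global_turnover_eur)


# For a company with a 2 billion euro global turnover, the prohibited-practice
# ceiling is 6% of turnover (120 million euros), above the 30 million fixed cap.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 120,000,000
```

For small companies the fixed amounts dominate; for large ones the turnover-based caps do, which is the point of the dual ceiling.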

Conclusion

In conclusion, the EU AI Act stands as a pivotal and historic regulation, as it reflects the European Commission’s commitment to addressing the challenges posed by the rapid evolution of AI technology.

This regulatory framework not only sets a precedent for responsible AI development but also paves the way for continued collaboration and adaptation in the dynamic landscape of artificial intelligence.


Mysoly | Your partner in digital!

Serkan Kilic
Senior Data Engineer | AI Researcher