Is your organisation ready for the AI Act?

On Tuesday (21/5), the EU Council finally gave the green light for the new EU AI Act. This means that the act will be published in the Official Journal of the EU in the coming days and will enter into force 20 days after publication. Below is an introduction to the new EU AI Act!

What is the AI Act?

The EU’s new AI Act is considered the world’s first comprehensive regulatory framework for AI, developed with the aim of ensuring that the development and use of AI within the EU are safe and lawful while respecting individuals’ fundamental rights. The Act is also intended to promote the development of AI within the EU. The AI Act adopts a risk-based approach, categorising AI systems and general-purpose AI models based on risk: the higher the risk of causing harm, the stricter the obligations.

Who will be affected by the AI Act?

The Act introduces several obligations for various actors operating within the AI field, and it makes a clear distinction between, among others, providers, deployers, importers, and distributors of AI systems or AI models.

  • A provider refers to someone who develops an AI system or an AI model, and who puts it into service or places it on the Union market.
  • A deployer refers to someone who uses an AI system under its authority, except in the course of a personal non-professional activity.
  • An importer refers to someone who is located or established within the EU and places on the Union market an AI system bearing the name or trademark of someone established in a third country.
  • A distributor refers to someone, who is not a provider or an importer, who makes an AI system available on the Union market.

The Act applies to providers regardless of whether they are established within the Union or in a third country. Where the place of establishment is in a third country, it is sufficient that the output produced by the AI system is used in the Union for the Act to apply. The same output rule applies to deployers of AI systems, who are otherwise covered by the Act only if they have their place of establishment in the Union.

What are the exceptions from the scope of the AI Act?

The AI Act does not apply to, among others:

  • AI systems placed on the market, put into service, or used exclusively for military purposes, defence purposes, or purposes related to national security
  • AI systems developed and put into service solely for scientific research and development (R&D)
  • Research, testing, or development activities prior to placing on the market or putting into service
  • Purely personal non-professional activities of natural persons
  • AI systems released under free and open-source licences, subject to certain conditions

How is AI defined according to the AI Act?

Defining AI has been one of the most debated and perhaps most complex parts of the legislative process, but also one of the most important, as the definition of AI is crucial for the scope of the AI Act. Two key definitions incorporated into the Act are AI system and general-purpose AI model.

  • An AI system is defined as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
  • A general-purpose AI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”

It is important to distinguish between what constitutes an AI system and an AI model according to the Act. An AI model does not constitute an AI system in itself, but often forms an important component of an AI system by being integrated into the AI system. For an AI model to become an AI system, the addition of further components, e.g., a user interface, is required. AI models have a particular impact on the AI value chain as they can be found as underlying components in a range of downstream AI systems.

What does a risk-based approach mean?

The risk-based approach means that AI systems and AI models are categorised based on risk, and the extent of the requirements under the Act is determined by the category to which an AI system or general-purpose AI model belongs. AI systems are categorised as unacceptable risk, high risk, limited risk, or minimal risk; the last category is considered to pose such a small risk that those AI systems are not covered by the provisions of the AI Act. For the high-risk category, the Act contains classification rules for assessing whether an AI system constitutes a high-risk system. AI models are categorised as AI models with systemic risk and AI models without systemic risk.

What does the AI Act require of organisations?

AI Systems with Unacceptable Risk

These AI practices are prohibited due to their deemed unacceptable risk to individuals’ fundamental rights. The prohibition means that it is not permitted to place on the market, put into service, or use AI systems covered by the prohibition, regardless of whether the organisation is a provider, deployer, or other actor. The prohibition covers a number of specifically listed AI practices, including AI systems that exploit subliminal techniques or intentionally manipulative or misleading techniques, such as dark patterns, and AI systems for social scoring.

Providers of High-Risk AI Systems

Providers of high-risk AI systems are the category subject to the heaviest regulatory compliance burden under the AI Act. Providers are covered by both technical requirements, concerning the design and development of AI systems, and organisational requirements. The AI Act requires, among other things, that the provider establish comprehensive risk management and quality management systems, implement robust methods for data governance, data management, and cybersecurity, draw up technical documentation, ensure logging and human oversight, and provide instructions for use to ensure transparency in the AI value chain.

Deployers of High-Risk AI Systems

Deployers, i.e., organisations that use AI systems in their operations, are also subject to obligations under the Act. Deployers must take both technical and organisational measures to ensure that AI systems are used in accordance with the accompanying instructions for use. They must also monitor the system and report any problems, and, where the deployer has control over the input data, ensure that it is relevant and sufficiently representative. Public authorities and private entities providing public services must also, before using a high-risk AI system, conduct a fundamental rights impact assessment in accordance with certain criteria.

Providers and Deployers of AI Systems with Limited Risk

These AI systems constitute a special category specifically mentioned in the Act, which is subject to specific transparency obligations for both providers and deployers. These include ensuring transparency when humans interact with an AI system, such as a chatbot, and when content is artificially generated or manipulated, such as in the creation of deepfakes.

Providers of AI Models

In relation to providers of AI models, the Act introduces specific obligations and requirements, which differ depending on whether the model is an AI model without systemic risk or an AI model with systemic risk. All AI models are subject to certain basic requirements, such as drawing up technical documentation and providing a summary of the content used to train the model, while AI models with systemic risk are subject to additional requirements, such as conducting model evaluations, as these models are considered to pose particularly high risks.

What penalties apply according to the Act?

Non-compliance with the AI Act can lead to significant penalties, the size of which is determined by the nature of the violation.

  • Up to the higher of EUR 35 million or 7% of total worldwide annual turnover, for non-compliance with the prohibition of AI practices
  • Up to the higher of EUR 15 million or 3% of total worldwide annual turnover, for non-compliance with other obligations in the Act
  • Up to the higher of EUR 7.5 million or 1% of total worldwide annual turnover, for providing incorrect or misleading information
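
As a purely illustrative sketch, each ceiling above boils down to a “higher of” calculation between a fixed amount and a share of worldwide annual turnover. The tier labels and the helper function below are our own shorthand for illustration, not terminology from the Act:

    # Illustrative only: each penalty ceiling under the AI Act is the higher of
    # a fixed amount and a percentage of total worldwide annual turnover.
    # The tier labels and this helper are our own shorthand, not terms from the Act.
    FINE_TIERS_EUR = {
        "prohibited_practices": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
    }

    def fine_ceiling(tier: str, worldwide_annual_turnover_eur: float) -> float:
        """Return the maximum possible fine in EUR for a given violation tier."""
        fixed_amount, turnover_share = FINE_TIERS_EUR[tier]
        return max(fixed_amount, turnover_share * worldwide_annual_turnover_eur)

    # Example: an organisation with EUR 1 billion in turnover that violates a
    # prohibition faces a ceiling of max(35m, 70m) = EUR 70 million.
    print(fine_ceiling("prohibited_practices", 1_000_000_000))  # 70000000.0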

When will the AI Act begin to apply?

After the AI Act has entered into force, which is expected to happen in June, the obligations in the Act will begin to apply gradually. Six months after entry into force, the provisions on prohibited AI practices begin to apply. After 12 months, the provisions on AI models and on penalties begin to apply, and after 24 months, the requirements and obligations regarding providers of high-risk or limited-risk AI systems begin to apply, with certain high-risk AI systems following after 36 months. Obligations for deployers will begin to apply 24–36 months after entry into force.
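
The staggered application dates are all counted from the date of entry into force, so they can be sketched with simple date arithmetic. The entry-into-force date below is a placeholder assumption; the actual date depends on when the Act is published in the Official Journal:

    import calendar
    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Add whole calendar months to a date, clamping the day if needed."""
        total = d.year * 12 + (d.month - 1) + months
        year, month = total // 12, total % 12 + 1
        return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

    entry_into_force = date(2024, 6, 20)  # placeholder assumption, not the actual date

    milestones = [
        (6, "Prohibited AI practices"),
        (12, "AI models and penalties"),
        (24, "Providers of high-risk and limited-risk AI systems"),
        (36, "Certain remaining high-risk AI systems"),
    ]

    for months, label in milestones:
        print(f"{add_months(entry_into_force, months)}: {label}")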

Is your organisation ready for the AI Act?

Most organisations are currently evaluating the use and development of various AI applications in their operations. Succeeding with AI projects requires knowledge, and working methods that take the applicable regulatory framework into account. Do you need to update your knowledge of the legal aspects of AI, or legal support in your AI project? We are IT law specialists who work extensively with various AI legal assignments, and we are often hired as speakers in various contexts in the field of AI & Law.

Please do not hesitate to contact us if you have any questions related to the AI Act or AI Law in general.

Christina Wikström
Lawyer & Managing Partner
christina@wikstrompartners.se
+46 70 691 68 00

Anton Karlsson
Associate
anton@wikstrompartners.se
+46 70 148 00 09