
What Is The EU AI Act? Summarizing Takeaways In 5 Minutes

Publish Date: July 18, 2024


The European Union (EU) has introduced a comprehensive regulation on the use of artificial intelligence (AI), known as the EU AI Act.

It aims to help Europeans trust AI systems and general-purpose AI (GPAI) models that pose lower risk. This article summarizes the Act so that professionals and non-specialists alike can understand it.

Disclaimer: Chatsimple complies with the obligations of the EU AI Act but doesn’t offer legal advice. Consult a lawyer if you need advice related to the Act, or refer to the actual European Union Artificial Intelligence Act.

What is the EU AI Act?

The European Union Artificial Intelligence Act governs the development and use of AI. It applies different rules and obligations to AI systems based on the risk they pose.

It applies to providers, deployers, importers, distributors, and product manufacturers, covering all parties who develop, use, import, distribute, or manufacture AI systems. The Act isn’t limited to the boundaries of the EU: if a provider or deployer based outside the EU produces an output that is intended to be used in the EU, the Act still applies.

It’s a comprehensive regulatory framework for AI that imposes strict governance, risk management, and transparency requirements on the parties involved. Depending on the type of non-compliance, penalties range from EUR 7.5 million or 1.5% of worldwide annual turnover up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
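To make the penalty structure concrete, here is a minimal Python sketch of how such tiered caps combine a fixed amount with a share of worldwide turnover. The tier values simply mirror the figures above, the turnover figure is hypothetical, and the Act’s separate, lighter treatment of SMEs is not modeled.

```python
def penalty_cap_eur(fixed_cap: float, turnover_share: float, worldwide_turnover: float) -> float:
    """Maximum fine for a tier: the higher of a fixed amount or a share of worldwide annual turnover."""
    return max(fixed_cap, turnover_share * worldwide_turnover)

# Illustrative tiers mirroring the figures above (EUR, fraction of turnover).
TIERS = {
    "most serious non-compliance": (35_000_000, 0.07),
    "least serious non-compliance": (7_500_000, 0.015),
}

worldwide_turnover = 2_000_000_000  # hypothetical worldwide annual turnover in EUR

for tier, (fixed_cap, share) in TIERS.items():
    cap = penalty_cap_eur(fixed_cap, share, worldwide_turnover)
    print(f"{tier}: up to EUR {cap:,.0f}")
```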

Exploring the need for the EU Artificial Intelligence Act

Reverse engineering the predictions that AI makes is tricky: you might not be able to pinpoint exactly why a certain decision was made. Although most AI systems pose limited to no risk, some are risky enough that you can’t trust them with critical decisions. In a hiring decision, for example, it would be hard to understand the why behind selecting a particular candidate, and you won’t know for sure whether someone was unfairly disadvantaged.

The AI Act helps Europeans trust AI systems with minimal or limited risk and benefit from their contributions to human and machine operations. It clarifies which AI practices and systems are accepted and which are prohibited.

The AI Act helps businesses and institutions by:

  • Prohibiting AI practices that pose unacceptable risks
  • Identifying high-risk AI systems and setting precise requirements for them
  • Defining obligations for providers and deployers of high-risk AI systems
  • Requiring conformity assessment before an AI system is placed on the market or put into service
  • Establishing a governance structure at the national and European level

This governance helps Europeans trust AI systems and use them with confidence. At the European level, the European Artificial Intelligence Board will advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of the Regulation.

Who would be affected by the EU AI Act?

The EU AI Act’s obligations will be applicable to:

  • Providers that place an AI system or general-purpose AI (GPAI) model on the EU market or put one into service, whether they are established or located within the EU or in a third country.
  • Deployers of AI systems that have their place of establishment, or are located, in the EU.
  • Providers and deployers of AI systems established or located in a third country, where the output produced by the AI system is used in the EU.

What does the AI Act require?

The European Union Artificial Intelligence Act requires businesses and institutions to:

Understand the current state

Check whether you already have AI systems in use or in development, and whether you plan to procure any AI systems from third-party vendors. If so, add these systems to your model repository.

If you don’t have a model repository, consider creating one after understanding your exposure to AI systems. Even if you don’t use AI systems at present, you will likely use them in the near future given rising AI adoption, so it’s best to build the repository around both your present and potential exposure to AI systems.
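A model repository doesn’t have to be complicated to start with. Below is a minimal sketch of what one inventory entry might look like; the field names and example record are illustrative assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI model repository. Field names are illustrative, not mandated by the Act."""
    name: str
    vendor: str          # internal team or third-party provider
    purpose: str         # what the system is used for
    status: str          # e.g. "in use", "in development", "planned procurement"
    risk_tier: str = "unclassified"   # filled in after risk classification (next step)
    added_on: date = field(default_factory=date.today)

# A repository can start as a simple list of records.
repository: list[AISystemRecord] = [
    AISystemRecord(
        name="Website support chatbot",
        vendor="Third-party SaaS",
        purpose="Answer visitor questions",
        status="in use",
    ),
]
```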

Classify risks in your AI systems

Under the EU AI Act, there are four risk categories: Unacceptable risk, high risk, limited risk, and minimal risk.

[Image: The four risk categories of the EU AI Act. Source: EY]

Unacceptable risks

AI systems that pose unacceptable risks are prohibited. These include:

  • Remote biometric identification (RBI) in public spaces. These systems use AI to recognize people from their biometric data in publicly accessible spaces. Facial recognition is the most common example, but identification can also be based on other physical, physiological, or behavioral characteristics.
  • Exploitation of the vulnerabilities of a natural person. These systems exploit a person or a group because of their age, disability, or social or economic situation.
  • Subliminal influencing techniques. These systems use manipulative or deceptive techniques that operate below a person’s conscious awareness to distort their behavior and impair their decision-making.
  • Social scoring systems. These assign each citizen a score based on their positive and negative actions, where positive actions are desirable behaviors like paying taxes and negative actions include criminal offenses.

High risks

The EU AI Act permits high-risk AI systems, but they must comply with multiple requirements and undergo a conformity assessment. You must complete the assessment before placing the product on the market or putting it into service.

You’ll need to register such systems in an EU database. These systems can include:

  • Critical infrastructure that could put citizens’ lives and health at risk.
  • Educational or vocational training systems that determine people’s access to education and professional courses, such as exam scoring.
  • Products with AI-based safety components.
  • Systems used in employment or that control someone’s access to employment.
  • Essential public and private services, such as credit scoring systems that determine whether someone can get a loan.
  • Systems that manage the migration of individuals or their access to asylum.
  • Systems used in the administration of justice and democratic processes.

These systems are subject to strict obligations such as risk assessment and mitigation. They must be fed high-quality datasets and keep logs of their activity. When you’re working with high-risk AI systems, document everything and ensure a high level of security and accuracy.

Limited or minimal risk

For limited-risk systems, you need to maintain transparency with users: they should know that they’re interacting with AI. Common examples include chatbots that aren’t high-risk, which must tell users that they’re talking to an AI system.

When it comes to minimal risk, the EU AI Act allows its free use. These systems can include spam filters or AI-enabled games.
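To tie the four tiers together, here is a hedged sketch of how an organization might record risk tiers against the systems in its repository. The example mapping only echoes the examples mentioned in this article; classifying a real system requires proper legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment + EU database registration
    LIMITED = "limited"            # transparency duties, e.g. disclose that users are talking to AI
    MINIMAL = "minimal"            # free use

# Illustrative mapping that only echoes the examples in this article.
example_classification = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "credit scoring for loan decisions": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in example_classification.items():
    print(f"{system}: {tier.value}")
```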

Comply with the AI Act

If your system falls in the high-risk bracket, make sure your AI practices comply with the AI Act and follow the necessary requirements.

When does the EU AI Act take effect?

The European Parliament and the EU Member States approved the Act on April 22, 2024, and May 21, 2024, respectively. Once the law is published in the Official Journal of the EU, it enters into force 20 days later.

  • After six months, the prohibitions on systems posing unacceptable risk will take effect.
  • After 12 months, the rules for general-purpose AI (GPAI) models will take effect; GPAI models already on the market at that point will have 36 months from entry into force to comply.
  • After 24 months, the rules related to high-risk AI systems will take effect.
  • At the end of 36 months, AI systems that are products or safety components of products will become regulated under the EU AI Act.
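Because these deadlines are expressed as months after entry into force, you can sketch them from a single date. The script below assumes an entry-into-force date of August 1, 2024 as a placeholder (20 days after Official Journal publication); the exact legal dates may differ slightly from this simple month arithmetic.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (simplified; fine for a day-of-month of 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Placeholder entry-into-force date: 20 days after publication in the Official Journal.
entry_into_force = date(2024, 8, 1)

milestones = {
    "Prohibitions on unacceptable-risk systems": 6,
    "General-purpose AI (GPAI) rules": 12,
    "High-risk AI system rules": 24,
    "AI in products / safety components": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")
```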

Enforcing The EU AI Act

Different authorities enforce the AI Act. Every EU Member State establishes or designates at least one notifying authority and one market surveillance authority and ensures they have the resources to perform their duties.

The notifying authority impartially sets up and conducts the required assessment and designation procedures, while the market surveillance authority enforces the Act at the national level (for some high-risk AI systems, a different body acts as the market surveillance authority). When an AI system is non-compliant, the market surveillance authority reports annually to the Commission and relevant national authorities.

If an AI system doesn’t comply with the EU AI Act’s obligations, or a high-risk AI system complies with the Act but still poses a risk to people’s health and safety, the market surveillance authority can:

  • Require the relevant operator to take corrective measures so the AI system no longer presents such risks.
  • Have the AI system withdrawn from the market if the operator fails to do so.

At the EU level, an AI Office within the Commission enforces the regulations. The Commission is advised and assisted by the Member States to enforce the AI Act effectively, and the Commission and the AI Board receive technical expertise from an advisory forum of stakeholders. The national courts of EU Member States will apply the AI Liability Directive if non-contractual, fault-based civil law claims come before them.

National courts can also demand evidence on high-risk AI systems suspected of causing harm.

Using AI that complies with the Act

Chatsimple, an AI sales agent for your website, abides by the obligations of the EU AI Act. If you want to add AI’s sales capabilities to your website, try Chatsimple without worrying about compliance with the new AI Act in the EU region.

With Chatsimple’s AI sales agent, you can:

  • Engage every visitor with AI to help them find answers in seconds.
  • Reduce the bounce rate while building trust and credibility.
  • Increase lead generation and set meetings with qualified leads seamlessly.

Chatsimple AI is safe and effective under the EU’s AI Act.

Set aside 30 minutes to see how to make the most of it in your business.
