The EU AI Act Decoded: How to Get Your Business Ready

iGenius · September 17, 2024 · 5 min read

On August 1st, 2024, the European Union's Artificial Intelligence Act entered into force, marking a significant milestone as the first-ever comprehensive regulation of AI technologies. This legislation establishes a common regulatory and legal framework to ensure the safe, ethical, and transparent use of AI across Europe.

The Act will be fully enforced within 36 months and will apply to both public and private sectors, inside and outside the EU, as long as the AI system is available on the European market and/or impacts people based in the EU. 

As such, companies that develop AI systems and businesses that leverage AI in their day-to-day operations will be required to comply with the new rules.

The EU AI Act’s provisions strike a new balance between regulation and innovation. On one hand, it will lay the groundwork for the safe and responsible growth and development of the AI industry and will raise awareness of the systems deemed to pose risks.

On the other hand, it will enhance the uptake of AI, mainly in two ways. Firstly, the regulations will make the use of AI for businesses more trustworthy, which will encourage companies and institutions to leverage the technology.

Secondly, it will provide incentives for businesses and investors by offering a stable and safe environment in which to conduct research and foster innovation. By promoting responsible AI, the Act will contribute to a boost of the sector across Europe. 

This article delves into the key takeaways of the EU AI Act and explores how your business can prepare to implement and navigate the new regulatory framework over the next three years. 

What is the EU AI Act?

The Artificial Intelligence Act is a comprehensive EU regulation that addresses risks associated with AI systems while promoting innovation and ensuring that AI technologies are developed, deployed, and used responsibly.

According to the European Commission, the AI Act’s primary goal is to mitigate risks linked to health, safety, and fundamental rights, while also protecting democracy, the rule of law, and the environment.

The Act categorizes AI systems into different risk levels, establishing rules for each category. 

While the Act will apply to all organizations that develop, deploy, distribute, sell, or use AI, it will exclude systems used exclusively for military or defense purposes, as well as those used solely for research and innovation.

AI as Defined by the EU AI Act

According to the EU AI Act, an Artificial Intelligence System is defined as:

“Any machine-based system that operates autonomously and can generate output, such as predictions, recommendations, or decisions influencing physical or virtual environments.” 

EU member states have long championed the protection of fundamental rights and non-discrimination, and this Act takes that commitment one step further. By integrating accountability and transparency criteria for high-risk AI systems, it will make security breaches easier to track down and investigate, simplifying the process of determining system compliance.

Why is the EU AI Act Important?

The EU AI Act stipulates that Europeans should be able to trust what technology has to offer. While the majority of AI systems are safe and offer countless benefits, some may create risks that must be addressed to avoid harmful consequences.

As the use of AI technologies becomes more widespread in the workplace and in everyday life, certain AI-powered technologies could pose potential threats, such as privacy violations or biased decision-making. The AI Act aims to mitigate those risks and protect users, enhancing the reliability of AI for business usage.

Let’s take the example of an HR department using an AI tool to automatically analyze job applications and resumes, in order to streamline the selection process. To prevent discrimination, the Act will require a transparent and thorough assessment of algorithms used for HR purposes, limiting the biased use of AI in HR activities. 

Likewise, if a bank uses AI to analyze credit risks and assess loan approvals, certain algorithms could reflect biases based on historical data, which could unfairly penalize certain populations. The AI Act aims to prevent this discrimination by mandating rigorous audits and ethical evaluations to ensure the algorithm is fair and non-discriminatory.

In other words, the new regulations serve to raise the standards, safety, and reliability of systems created and used across the EU, prohibiting AI practices deemed too risky, and defining specific obligations for users and suppliers of AI systems. 

Objectives of the EU AI Act

The key objectives of the Act include:

A. Safeguarding security and fundamental rights

First and foremost, the Act aims to protect businesses and people from the threats that some AI systems might pose to their health, safety, and fundamental rights. These include decision-making fairness, data protection, and governance, with the aim of making AI systems more trustworthy and reliable for users.

B. Promoting innovation

Besides establishing rules for high-risk AI systems, the Act also seeks to provide a clear, regulatory framework to foster innovation and encourage investment in European AI technologies. Companies with a clear grasp of legal regulations will be able to prosper and innovate, being fully conscious of the related terms and conditions. 

C. Increasing transparency and accountability 

Provisions for increased transparency and accountability in the Act include documentation requirements, record-keeping, and clear information sharing with users about the AI system they use. For example, by being transparent about the involvement of AI to generate content or provide services (e.g. a chatbot), companies will improve their reputation and gain the trust of their customers. 

D. Reinforcing governance

Along with the regulations, the Act has established a governance structure at both the EU and national levels for all 27 member states, to oversee the correct implementation and enforcement of the regulations. This structure will also be responsible for ensuring that all systems on the market are compliant. As such, when an EU-based business decides to implement an AI tool, it will be required to select a system that conforms with the Act.

Types of Risk 

The EU AI Act has identified four types of risks, with different rules applicable to each AI system operating in Europe:

Unacceptable Risk

Any AI system defined as posing an “unacceptable risk” will be banned due to its potential to cause harm. These include systems used for predictive policing, real-time biometric identification, and the untargeted scraping of facial images from the internet or video surveillance footage to build facial recognition databases. This category includes a number of narrow exceptions, and many systems will be evaluated on a case-by-case basis.

High Risk 

While they aren’t banned, AI systems deemed to pose a high risk will be subject to rigorous regulations. Organizations that develop or deploy such systems should prepare for strict compliance reviews, including carrying out a fundamental rights risk assessment, training and supporting the staff responsible for monitoring high-risk AI systems, and keeping track of the results these systems generate. High-risk systems include those used in the areas of biometrics, critical infrastructure, education, and law enforcement.

Limited Risk

These include systems for image and video processing, as well as chatbots. The obligations for this category include informing users that they are interacting with an AI system, and labeling all audio, video, or photo content as “generated by AI”, for full transparency and accountability.

Minimal Risk

AI applications posing minimal risk, such as spam filters or video games, won’t be subject to any particular regulatory requirements.

How the EU AI Act will impact businesses

To continue enjoying the benefits of AI safely, businesses operating in the EU will be impacted in the following ways: 

Market access and opportunities

The good news is that the Act will improve market access for AI companies operating within the EU, as it will reduce uncertainty. Therefore, it will make it easier for companies to develop and deploy AI solutions on the market. At the same time, the Act could slow down innovation, especially around high-risk systems, making European products less attractive to foreign investors. This remains to be seen, as there is no clear evidence today.

Ethical considerations

As the Act strongly emphasizes the importance of ethics, companies will be encouraged to consider the ethical impact of their AI systems on their business sector and society at large. In particular, it raises ethical questions around bias, fundamental rights, and transparency. Companies that prioritize ethical practices may gain a competitive advantage by building trust with their audience and customers, and reputation with prospects.

Global implications

The EU AI Act is the first of its kind, but its impact will not be limited to Europe. It will likely become a reference and benchmark for the AI regulations that follow. Just as companies wishing to operate in Europe will need to meet the Act’s requirements, compliant companies will be well-positioned to attract investors eager to develop an ethical AI ecosystem.

Budget allocation 

Companies developing high-risk AI systems will need to dedicate a budget to meet the compliance requirements, for fees associated with risk assessment, documentation, and to ensure that the system meets the requirements of the EU AI Act. These costs will vary depending on the size of the company and its AI systems but will be essential to ensure compliance and prevent sanctions and fines. 

How your business should prepare for the EU AI Act

Following the entry into force of the AI Act, your company will have up to 36 months to prepare, adapt its infrastructure, and make sure its systems comply with the new regulations.

Here’s what you can do to prepare for the new regulations:

  1. Create a detailed inventory of the AI systems used across all departments, to determine whether they fall within the scope of the EU AI Act. Ask yourself: what is the purpose of this system? How often do we use it? How does it interact with users and stakeholders? (e.g. a chatbot for customer service, a Decision Intelligence tool for data analysis).
  2. Assess and determine the risk level of your AI systems and identify which compliance requirements apply. For example, if your company uses a high-risk system, such as an AI-based medical diagnostic tool that helps detect diseases, it will be subject to regular risk-assessment reviews. If your company uses AI to generate images, you will have to add a disclaimer to your content.
  3. Develop and execute a plan to ensure that the proper risk management, control systems, monitoring, and documentation are ready for when the Act becomes effective. Make a checklist of all additional steps or requirements and integrate them into your regular action plan, for a smooth and seamless implementation.
  4. Keep up to date with the evolution of the EU AI Act and AI-related updates. The implementation of the Act is a first step towards more regulation, but it won’t be the last. For example, the Act requires developers of large models to draft a self-regulatory code reaffirming their commitment to social and environmental sustainability and risk prevention, among other goals. By keeping up with industry updates, you will be better prepared to comply with the latest amendments and requirements.
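The inventory and risk-triage steps above can be sketched as a simple data structure. This is a minimal illustration only, not a legal assessment tool; the system names and risk-tier assignments are hypothetical examples, and real classification requires legal review against the Act itself.

```python
from dataclasses import dataclass

# The four risk tiers defined by the EU AI Act, from most to least regulated.
TIERS = ["unacceptable", "high", "limited", "minimal"]

@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # one of TIERS

# Hypothetical inventory of AI systems used across departments.
inventory = [
    AISystem("support-chatbot", "customer service", "limited"),
    AISystem("resume-screener", "HR candidate shortlisting", "high"),
    AISystem("spam-filter", "email filtering", "minimal"),
]

# Surface the systems needing the strictest compliance work first.
inventory.sort(key=lambda s: TIERS.index(s.risk_tier))
for system in inventory:
    print(f"{system.risk_tier:>12}: {system.name} ({system.purpose})")
```

Even a spreadsheet version of this table gives compliance teams a single place to track which obligations attach to which system.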
[Image: EU AI Act implementation timeline, divided into four phases.]

Europe is leading the way in AI regulation, setting a standard for responsible AI development. Yet, as technology evolves rapidly, it is essential to update the EU AI Act regularly to keep it relevant. Over the next three years, the document will be revised and updated with benchmarks, recommendations, and adjustments.

However, it is crucial for businesses to start preparing for these changes today.

The EU AI Act doesn’t just provide a set of rules but offers opportunities for AI developers and businesses. It provides best practices and ethical guidelines, such as fairness, transparency, and respect for fundamental rights, which can help shape responsible AI development.

Moreover, the Act could bring about a competitive advantage for the European industry. Clear and well-defined AI regulations can be a strength for the EU, attracting investments and fostering an ecosystem focused on ethical practices. This would position Europe as a leader in responsible AI innovation.

Finally, the EU AI Act is likely to create a more democratic AI landscape, benefiting everyone and promoting a more inclusive future. By encouraging the widespread development and adoption of AI, the Act will foster innovation and research, enabling companies to engage with this transformative technology with confidence, and ensuring that its benefits are distributed across industries and society as a whole.

***

Frequently Asked Questions
What is the AI Pact?
The AI Pact is a voluntary commitment initiated by the EU. It encourages major tech companies to adhere to ethical guidelines ahead of the formal adoption of the EU AI Act. The pact aims to strengthen collaboration between the tech industry and regulators, setting the groundwork for more responsible AI development and practices across Europe. 
What is a General Purpose AI?
General Purpose AI (GPAI) refers to AI systems designed to perform a wide range of tasks across several domains, instead of being limited to a single function. The advantage of GPAI is that it can adapt to various tasks, learning and applying knowledge in a more flexible way. Examples include ChatGPT, Google’s speech-to-text, AI-generated games, and recommendation engines that leverage user data to suggest products (e.g. on streaming platforms, social media, or e-commerce sites).
What are the sanctions for not complying with the EU AI Act? 
The Act includes different sanctions for companies that fail to comply with the regulations. These include fines – up to 35 million Euros or 7% of the company’s global annual turnover, whichever is higher – as well as market bans and restrictions on deploying systems, which would prevent them from being sold, distributed, or used.
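The "higher of two amounts" rule for the top penalty tier can be illustrated with a quick calculation. The turnover figures below are hypothetical examples, and actual fines depend on the violation and the regulator's assessment:

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70 million applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0

# A company with EUR 100 million turnover: 7% is only EUR 7 million,
# so the flat EUR 35 million ceiling applies instead.
print(max_fine_eur(100_000_000))  # 35000000
```

In other words, the percentage-based cap is what makes the penalty scale with company size, so large firms cannot treat the flat amount as a cost of doing business.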
