
EU AI Act comes into force in August 2024
Following the European Commission’s initial proposal in April 2021, the European Parliament adopted the EU AI Act, which was published in the Official Journal of the European Union in July 2024 and is available in all member state languages. The Act will officially enter into force in August 2024, with most of its provisions applying two years later, although some specific provisions have different deadlines. The Regulation sets out obligations for companies providing and/or using artificial intelligence systems in the European Union.
EU AI Act Timeline
The development and implementation of the EU AI Act follows this timeline:
April 2021: First proposal by the European Commission
March 2024: Adoption by the European Parliament
July 2024: Official publication in the Official Journal of the EU
August 2024: Law enters into force
August 2026: Most provisions become fully applicable across EU Member States
What is the European AI Act?
The European AI Act 2024 is a regulation introduced by the European Commission to ensure that AI systems are used in a “safe, transparent, traceable, non-discriminatory, and environmentally friendly” manner. It regulates how “providers” and “deployers” of AI systems must handle them, with obligations graded by the level of risk the AI system poses. Broadly speaking, being a “provider” means that your business develops or supplies an AI system under its own name or brand. “Deployers” are those who use AI technologies, so almost any business can fall into this group. The higher the risk posed by an AI system, the stricter the regulatory requirements.
Some key points of the EU AI Act
- Risk-based classification: The AI Act classifies AI systems into different risk levels, each with specific regulatory requirements to manage their potential impacts.
- Transparency requirements: High-risk AI systems must meet strict transparency and record-keeping requirements to ensure accountability and traceability.
- Human oversight: Certain AI systems must be overseen by humans to mitigate risks and maintain ethical standards.
Risk levels in the EU AI Act
The AI Act defines four risk levels for AI systems, each of which is associated with specific regulatory requirements:

[Figure: the four risk levels defined by the AI Act. Source: official publication of the European Commission on the AI Act]
Unacceptable risk
AI systems in this category pose a clear threat and are strictly prohibited. Examples include the manipulation of behavior through cognitive techniques, such as voice-controlled toys that encourage children to engage in dangerous behavior, and social scoring systems that categorize people based on their behavior or personal characteristics.
High-risk AI systems
AI systems in this category can have significant impacts on health, safety or fundamental rights. Examples include AI in critical infrastructure management, education, employment and law enforcement. All high-risk AI systems must be rigorously assessed before they are placed on the market and throughout their life cycle. Individuals have the right to report concerns about AI systems to the relevant national authorities.
Limited risk
These AI systems pose a low risk and are subject to transparency requirements. For example, users who interact with chatbots must be informed that they are talking to an AI. Providers must also ensure that AI-generated content, particularly on topics of public interest, is clearly labelled as artificially generated, regardless of whether it is text, audio or video.
Minimal or no risk
AI systems with minimal or no risk are not subject to additional regulatory requirements. Examples include AI-controlled video games and spam filters.
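Taken together, the tiers map fairly directly onto a simple lookup. The sketch below is a minimal, hypothetical Python illustration of how a compliance team might encode the four risk levels and their headline obligations; the `RiskLevel` names and the obligation summaries are our own shorthand, not terminology prescribed by the Act.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative one-line summaries; the Act itself spells out the
# detailed requirements for each tier.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "Prohibited - may not be placed on the EU market.",
    RiskLevel.HIGH: "Conformity assessment, documentation, logging, human oversight.",
    RiskLevel.LIMITED: "Transparency duties, e.g. disclose AI use and label AI-generated content.",
    RiskLevel.MINIMAL: "No additional obligations under the AI Act.",
}

def headline_obligation(level: RiskLevel) -> str:
    """Return the one-line obligation summary for a given risk tier."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    print(headline_obligation(RiskLevel.HIGH))
```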
Compliance and AI: What companies should do
As the European Council stated in a press release in May 2024, certain companies providing public services must assess the impact on fundamental rights before deploying a high-risk AI system.
There are a few points that providers of AI systems should consider:
- Conduct a risk assessment by determining the risk category of the AI system and implementing the necessary protective measures.
- Prepare technical documentation that demonstrates compliance with the regulation and make it available to the national authorities on request.
- Design the AI system to automatically log events so that risks and system changes can be detected (see the logging sketch after this list).
- Create usage guidelines for deployers to ensure the requirements are met.
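The event-logging point is the most directly technical of these steps. Below is a minimal sketch, assuming a Python-based system, of what automatic, structured event recording might look like; the `ai_audit` logger name and the event fields are illustrative assumptions rather than requirements spelled out in the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger; field names are our own, not mandated by the Act.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_ai_event(system_id: str, event_type: str, details: dict) -> None:
    """Record a timestamped, structured event so that risks and system
    changes can be traced afterwards."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "prediction", "model_update", "human_override"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: log a model update so the change remains traceable.
log_ai_event("credit-scoring-v2", "model_update",
             {"previous_version": "2.0", "new_version": "2.1"})
```

Structured records like these make it possible to reconstruct what the system did and when, which is the point of the record-keeping duties for high-risk systems.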
Although deployers do not have the same obligations as providers, the AI Act requires them to adhere to the providers’ usage guidelines, ensure organizational and technical compliance, and conduct a data protection impact assessment before deploying high-risk AI systems.
Failure to comply with the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover for the most serious violations, down to €7.5 million or 1.5% of turnover for lesser ones, depending on the severity of the violation and the size of the company.
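To make the arithmetic concrete: for a given violation tier, the applicable ceiling is the higher of the fixed amount and the turnover percentage, while for SMEs the Act caps fines at the lower of the two. The helper below sketches this calculation with a hypothetical turnover figure.

```python
def fine_ceiling(fixed_eur: float, pct_of_turnover: float,
                 turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine for a violation tier: the higher of the fixed amount
    and the share of worldwide annual turnover (for SMEs, the lower of
    the two applies)."""
    amounts = (fixed_eur, pct_of_turnover * turnover_eur)
    return min(amounts) if is_sme else max(amounts)

# Most serious violations: €35 million or 7% of global annual turnover.
# For a hypothetical company with €2 billion turnover, the ceiling is €140 million.
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```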
As Europe’s Digital Targets for 2030 and the European ‘Data Strategy’ aim to promote fair competition and increase transparency in online services, companies need to ensure that their processes support these data protection values. Recent legislation such as the Digital Services Act and the Digital Markets Act further emphasises the importance of fair competition and transparency. By reviewing and adapting internal processes early, both providers and deployers of AI systems can avoid fines and build consumer trust. Start strengthening your compliance processes today.