Step 2
Conduct a risk assessment
To meet the requirements of the AI Act, start by identifying the risk level associated with each AI system or model your organisation uses. This involves:
Determining applicable requirements
Identify the specific requirements of the AI Act that apply to each AI system based on its risk category and your organisation’s role - whether as a provider, deployer, importer, distributor, or product manufacturer of the AI technology.
Gap analysis and governance planning
Conducting a gap analysis will reveal areas where current practices fall short of compliance. This assessment will guide your organisation’s governance and risk management measures, which are addressed further in Step 4.
Prioritising high-risk areas
Given the potential complexity of AI Act compliance, focus first on high-risk areas to align with regulatory priorities. Addressing high-risk categories early on helps mitigate compliance risks effectively.
Understanding risk categories under the AI Act
The AI Act uses a risk-based approach, classifying AI systems into four categories, each with distinct regulatory obligations:

Unacceptable risk:
AI systems in this category, such as social scoring by governments, are outright prohibited due to their potential to harm safety or fundamental rights.
High risk:
High-risk systems include those that could impact safety or rights, such as biometric identification or healthcare applications. These systems must meet rigorous regulatory standards.
Limited risk:
These systems, including chatbots or biometric categorisation, require transparency measures but are not subject to the same level of scrutiny as high-risk systems.
Minimal or no risk:
The lowest-risk systems have minimal obligations and face fewer regulatory requirements.

The Act also sets out rules for General-Purpose AI (GPAI) models, often called foundation models (eg, GPT-4). These models, trained on extensive datasets and designed for versatile applications, may face specific obligations depending on their deployment context.
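When triaging an AI-system inventory, the four tiers above lend themselves to a simple ordering so that high-risk systems are addressed first, as recommended earlier. The sketch below is purely illustrative: the system names and their categorisations are hypothetical examples, and the tier ordering is an assumption based on the descriptions above, not a legal mapping.

```python
from enum import IntEnum

class RiskCategory(IntEnum):
    # Ordered so that a higher value means a stricter regulatory burden
    MINIMAL = 1       # minimal or no risk: few obligations
    LIMITED = 2       # transparency measures (e.g. chatbots)
    HIGH = 3          # rigorous standards (e.g. biometric ID, healthcare)
    UNACCEPTABLE = 4  # prohibited outright (e.g. government social scoring)

# Hypothetical inventory; real categorisation requires legal assessment
inventory = [
    ("customer-support chatbot", RiskCategory.LIMITED),
    ("spam filter", RiskCategory.MINIMAL),
    ("CV-screening tool", RiskCategory.HIGH),
]

# Triage: review the highest-risk systems first
for name, category in sorted(inventory, key=lambda item: item[1], reverse=True):
    print(f"{category.name}: {name}")
```

Sorting the inventory this way surfaces the systems that carry the heaviest compliance obligations, which mirrors the prioritisation advice in the gap-analysis step.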
Clarifying your company's role
The AI Act’s obligations differ based on an organisation’s specific role with respect to AI systems. Identifying your role is key to developing a targeted compliance strategy:
Providers
As developers or maintainers of AI systems, providers bear most of the Act’s compliance requirements, including testing, documentation, and reporting obligations.
Deployers (users)
Organisations using or implementing AI systems must meet transparency and usage obligations to ensure responsible deployment.
Importers and distributors
Entities bringing AI systems into the market or distributing them must confirm that these products comply with AI Act standards.
Product manufacturers and authorised representatives
Manufacturers incorporating AI into products and representatives acting for providers are responsible for verifying that these products meet applicable regulatory standards.
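The role distinctions above can be captured as a simple lookup table when scoping a compliance plan. The mapping below is a hypothetical sketch that summarises the role descriptions in this section; it is not the Act's full text, and the duty labels are assumptions made for illustration.

```python
# Hypothetical summary of the role-based duties described above
OBLIGATIONS = {
    "provider": ["testing", "documentation", "reporting"],
    "deployer": ["transparency", "responsible use"],
    "importer": ["confirm conformity before placing on the market"],
    "distributor": ["confirm conformity before distribution"],
    "product manufacturer": ["verify embedded AI meets applicable standards"],
    "authorised representative": ["verify compliance on the provider's behalf"],
}

def duties(role: str) -> list[str]:
    """Return the summarised duties for a given role (illustrative only)."""
    return OBLIGATIONS.get(role.lower(), [])

print(duties("Provider"))  # providers carry the heaviest compliance burden
```

An organisation can hold more than one role at once (for example, a company that both develops and deploys a system), in which case the duties of each role apply cumulatively.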