1. Comprehensive AI mapping
Objective
Identify all instances of AI usage within the organisation to understand what AI systems are currently in use, in development, or planned for future implementation.
Actions to take
- Conduct surveys or interviews across departments to gather information on all AI activities.
- Categorise the AI systems by purpose (e.g., customer service, data analytics, security).
- Document the type of data each system processes and any specific regulatory requirements associated with it (a minimal inventory sketch follows this list).
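To keep the inventory consistent across departments, each system can be captured as a structured record. The sketch below is a minimal illustration in Python; the field names (purpose, lifecycle_stage, data_categories) and the example entry are assumptions for illustration, not a format prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    """One entry in the organisation-wide AI inventory (illustrative fields)."""
    name: str                # internal system name
    owner: str               # team or department responsible
    purpose: str             # e.g. "customer service", "data analytics", "security"
    lifecycle_stage: str     # "in use", "in development", or "planned"
    data_categories: List[str] = field(default_factory=list)   # types of data processed
    regulatory_notes: List[str] = field(default_factory=list)  # e.g. GDPR, sector rules

# Hypothetical entry gathered from a departmental survey
inventory = [
    AISystemRecord(
        name="support-chatbot",
        owner="Customer Service",
        purpose="customer service",
        lifecycle_stage="in use",
        data_categories=["contact details", "chat transcripts"],
        regulatory_notes=["GDPR"],
    ),
]
```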
Benefit
Clearer visibility into which teams are using or developing AI, ensuring that responsibilities and compliance expectations are understood across departments.
2. Internal policy and responsibilities assessment
Objective
Review and assess existing policies regarding AI to determine if they meet the AI Act’s standards.
Actions to take
- Identify whether a comprehensive AI policy exists for development, deployment, and distribution.
- Evaluate if responsibilities for AI governance are clearly defined within the organisation.
- Determine if the policy aligns with AI Act requirements, such as ethical considerations, transparency, and risk management (a simple checklist sketch follows this list).
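One lightweight way to record the outcome of this assessment is a coverage checklist that maps each policy area to its current status and surfaces the gaps. The policy areas and statuses below are illustrative assumptions, not an exhaustive reading of the AI Act.

```python
# Hypothetical policy-coverage checklist; areas and statuses are illustrative.
policy_assessment = {
    "AI policy covers development, deployment and distribution": "partial",
    "AI governance responsibilities assigned to named roles": "missing",
    "Transparency obligations documented": "in place",
    "Risk management process defined": "partial",
    "Ethical review step before deployment": "missing",
}

# List the areas that still need work
gaps = [area for area, status in policy_assessment.items() if status != "in place"]
print(f"{len(gaps)} policy gaps to address:")
for area in gaps:
    print(f" - {area} ({policy_assessment[area]})")
```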
Benefit
Departments gain clarity on their roles and responsibilities in relation to AI, reducing confusion and ensuring that each team understands the compliance requirements relevant to their work.
3. Risk identification and assessment
Objective
Analyse and document potential risks associated with each AI system, particularly regarding health, safety, and fundamental rights.
Actions to take
- Conduct risk assessments for each AI application, evaluating possible impacts on privacy, safety, and user rights.
- Establish a risk matrix to prioritise high-risk AI systems, ensuring focused compliance efforts (see the scoring sketch after this list).
- Identify mitigation strategies or controls in areas with elevated risk.
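A common way to build such a risk matrix is to score each system's likelihood and impact of harm and rank systems by the product of the two. The sketch below assumes a 1–5 scale and an arbitrary "high risk" threshold; the example systems and scores are hypothetical, not classifications taken from the AI Act.

```python
# Minimal risk-matrix sketch: (likelihood, impact) scored on an assumed 1-5 scale.
systems = {
    "support-chatbot":   (2, 3),
    "cv-screening-tool": (4, 5),  # e.g. recruitment use may affect applicants' rights
    "fraud-detection":   (3, 4),
}

HIGH_RISK_THRESHOLD = 12  # illustrative cut-off, not defined by the AI Act

# Rank systems by likelihood x impact, highest first
scored = sorted(
    ((name, lik * imp) for name, (lik, imp) in systems.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, score in scored:
    label = "HIGH" if score >= HIGH_RISK_THRESHOLD else "standard"
    print(f"{name}: risk score {score} ({label})")
```

Ranking by a simple product keeps the prioritisation transparent and easy to challenge in review, which matters when mitigation budgets are allocated from the matrix.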
Benefit
This step helps teams understand the specific risks tied to their AI systems and apply safeguards, which builds confidence in responsible AI practices and compliance.
4. Security measures and compliance checks
Objective
Review the organisation’s existing security protocols and data protection measures to ensure they align with the AI Act’s requirements.
Actions to take
- Audit current security measures associated with AI systems, such as data encryption, access control, and regular monitoring.
- Evaluate whether these measures meet the standards set by the AI Act for safeguarding personal data and preventing unauthorised access.
- Update or implement additional security measures where necessary to close compliance gaps (see the audit sketch after this list).
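The audit can be recorded as a per-system checklist of controls so that gaps surface automatically. The control names below mirror the examples above (encryption, access control, monitoring), and the pass/fail values are purely illustrative.

```python
# Illustrative security-control audit per AI system; control names and results
# are assumptions for the sketch, not requirements quoted from the AI Act.
audit_results = {
    "support-chatbot": {
        "data_encrypted_at_rest": True,
        "role_based_access_control": True,
        "activity_monitoring_enabled": False,
    },
    "cv-screening-tool": {
        "data_encrypted_at_rest": True,
        "role_based_access_control": False,
        "activity_monitoring_enabled": False,
    },
}

# Report the compliance gaps that need remediation
for system, controls in audit_results.items():
    missing = [control for control, in_place in controls.items() if not in_place]
    if missing:
        print(f"{system}: remediate {', '.join(missing)}")
```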
Benefit
This step assures departments and stakeholders that robust security measures are in place to protect user data and mitigate security threats, which is particularly important for building trust with users and customers.
Which teams should be involved?
Compliance and legal teams: a roadmap for regulatory compliance, helping legal teams ensure that AI practices meet legal standards and reducing the risk of penalties.
AI development and engineering teams: guidance on technical and security requirements, helping developers build compliant and secure AI systems from the ground up.
Risk management and compliance officers: support for risk analysis and mitigation, aiding the identification of high-risk areas and ensuring that those systems meet AI Act requirements.
Data privacy and security teams: a clear emphasis on security and data protection measures, guiding privacy-focused teams in upholding the required safeguards.
Executive leadership: a clearer view of organisational responsibilities for compliance, ensuring alignment across departments for a coherent AI strategy.