Step 4

Implementation of an AI governance framework

A robust AI governance framework is essential for ensuring that AI systems are deployed responsibly and comply with regulatory standards.

Tailoring the framework to the level of AI risk and organisational role allows for effective oversight and minimises potential liabilities. Below is an overview of obligations that may apply based on the risk classification of AI systems, building on the assessments in steps 1 and 2.

1. High-risk AI systems: comprehensive governance and compliance obligations

For high-risk AI applications (those that could significantly impact individuals, society, or business operations), a rigorous set of governance measures is required to mitigate risk and maintain accountability. Key obligations for providers of high-risk AI systems may include:

Transparency obligations

Reporting and cooperation with authorities

Automatic recording of events

Quality management system

Technical documentation

Human oversight

Conformity assessment

Post-market monitoring

Risk management system

Data requirements

Registration

Cybersecurity, accuracy, and robustness
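
By way of illustration, the "automatic recording of events" obligation listed above is typically implemented as structured logging of each AI system interaction, so that records can be retained and produced for authorities on request. The Python sketch below is a minimal, hypothetical example; the field names and log destination are illustrative assumptions rather than regulatory requirements.

import json
import logging
from datetime import datetime, timezone

# Hypothetical event logger: records each AI system interaction as a JSON line.
# Field names (system_id, input_ref, output_ref, decision) are illustrative only.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO, format="%(message)s")

def record_event(system_id: str, input_ref: str, output_ref: str, decision: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,
        "output_ref": output_ref,
        "decision": decision,
    }
    logging.info(json.dumps(event))

# Example: record a single credit-scoring decision made by a high-risk system.
record_event("credit-scoring-v2", "application-8841", "score-0.73", "refer to human review")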

2. Deployers: less intensive obligations

For deployers, less intensive obligations apply, including AI literacy, human oversight, data governance, and transparency.

AI literacy


Train relevant staff to understand the basic principles, regulatory landscape, and ethical implications of AI, fostering a culture of responsible use.

Human oversight


Ensure a degree of human oversight to monitor AI outputs and provide intervention capabilities if needed, maintaining accountability.
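
As an illustration of what such intervention capability can look like in practice, the hypothetical Python sketch below routes low-confidence AI outputs to a named human reviewer instead of acting on them automatically. The confidence threshold and function names are assumptions chosen for illustration, not a prescribed approach.

from dataclasses import dataclass

@dataclass
class AIOutput:
    recommendation: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

# Hypothetical oversight gate: outputs below the confidence threshold are not
# acted on automatically but are escalated to a named human reviewer.
CONFIDENCE_THRESHOLD = 0.85

def apply_with_oversight(output: AIOutput, reviewer: str) -> str:
    if output.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {output.recommendation}"
    return f"escalated to {reviewer} for review: {output.recommendation}"

print(apply_with_oversight(AIOutput("approve claim", 0.62), reviewer="claims.officer@example.com"))

Keeping the escalation path explicit in code also creates a natural point at which to record who reviewed the output, supporting the accountability aim described above.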

Data governance


Establish clear data governance practices to ensure the accuracy, security, and fairness of data inputs, reducing potential bias or errors.
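
By way of example, the hypothetical Python sketch below applies simple input checks (completeness, plausible value ranges, and a crude representation check) before data reaches an AI system. The field names and thresholds are illustrative assumptions and would need to be tailored to the organisation's own data.

from collections import Counter

# Hypothetical input checks: completeness, value ranges, and a basic
# representation check across a demographic field. Thresholds and field
# names are illustrative assumptions only.
REQUIRED_FIELDS = {"age", "income", "region"}

def validate_record(record: dict) -> list[str]:
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "age" in record and not (0 <= record["age"] <= 120):
        issues.append("age out of plausible range")
    return issues

def check_representation(records: list[dict], field: str, min_share: float = 0.05) -> list[str]:
    counts = Counter(r.get(field, "unknown") for r in records)
    total = sum(counts.values())
    return [f"group '{group}' under-represented ({count / total:.1%})"
            for group, count in counts.items() if count / total < min_share]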

Transparency


Provide a basic level of transparency regarding AI functions and objectives, especially when engaging with end-users or customers.
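
One simple way to operationalise this, sketched below with assumed wording, is to attach a standard disclosure notice to every AI-generated response shown to an end-user; the notice text and helper name are hypothetical.

AI_DISCLOSURE = (
    "This response was generated with the assistance of an AI system. "
    "You can request human review of any decision it informs."
)

def present_to_user(ai_response: str) -> str:
    # Hypothetical helper: pair every AI-generated answer with a disclosure notice.
    return f"{ai_response}\n\n{AI_DISCLOSURE}"

print(present_to_user("Your estimated delivery date is 14 June."))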
