ICO AI Data Protection Toolkit

What are the Data Protection Risks with AI?

AI (Artificial Intelligence) tools are difficult to break down into discrete data protection risks, primarily because they can, in effect, process and release any information they have access to. For this reason, the ICO has produced an AI Data Protection Toolkit.

Also, there is the inherent risk of bias and discrimination as well as unexpected behaviour.

Examples of AI Processing Personal Data

Companies may use trained AI to measure and predict spending habits, extract information about people, objects and places from images, analyse financial data, generate marketing releases, and countless variations and combinations of these.

While an individual may have given consent at the outset, the routes that AI can take with their data need some form of control to mitigate data protection risks.

What is the AI Data Protection Assessment Toolkit?

The ICO has been laying the foundations of its data protection guidance on AI systems by publishing the AI and Data Protection Toolkit.

This toolkit breaks the AI lifecycle down into 5 stages, each grouped by a risk domain area, and then offers practical steps for assessing each area.

The 5 stages of the AI Assessment Toolkit, along with example risk statements, are below:

Business Requirements and Design

Risk Statement Examples:

Failure to take a risk-based approach to data protection law when developing and deploying AI systems because of an immature understanding of fundamental rights, risks and how to balance these and other interests. This may result in a contravention of individuals' rights and freedoms, and the principle of accountability.

Data Acquisition and Preparation

Risk Statement Examples:

Choosing to rely upon the same lawful basis for both the AI development and deployment stages because of a failure to distinguish the different purposes in each stage, may lead to unlawful processing and a contravention of the purpose limitation principle.
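One practical way to act on this risk is to record each lifecycle stage with its own stated purpose and lawful basis, so that development and deployment are never conflated. The sketch below is a minimal, illustrative example; the stage names, purposes and bases are assumptions, not values prescribed by the ICO.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingRecord:
    stage: str          # e.g. "development" or "deployment"
    purpose: str        # the specific purpose stated for this stage
    lawful_basis: str   # the UK GDPR Article 6 basis relied upon

def check_purpose_limitation(records):
    """Flag purposes reused across stages: each stage should state its
    own purpose rather than borrowing another stage's, before a lawful
    basis is chosen for it."""
    first_seen = {}
    issues = []
    for record in records:
        if record.purpose in first_seen and first_seen[record.purpose] != record.stage:
            issues.append(
                f"Purpose '{record.purpose}' is reused across stages "
                f"'{first_seen[record.purpose]}' and '{record.stage}' - "
                "distinguish the purposes for each stage"
            )
        first_seen.setdefault(record.purpose, record.stage)
    return issues

# Hypothetical records where both stages reuse one purpose.
records = [
    ProcessingRecord("development", "train a credit-scoring model", "legitimate interests"),
    ProcessingRecord("deployment", "train a credit-scoring model", "contract"),
]
print(check_purpose_limitation(records))
```

Running the check on the example records reports the shared purpose, prompting the team to articulate a distinct deployment purpose before selecting its lawful basis.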

Inaccurate outputs or decisions made by AI systems caused by insufficiently diverse training data, training data that reflects past discrimination, design architecture choices or another reason. This leads to adverse impacts on individuals such as discrimination, financial loss or other significant economic or social disadvantages.
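A first step towards spotting insufficiently diverse training data is to profile how well each group is represented before training begins. The sketch below is a minimal, illustrative example; the group labels and the 10% threshold are assumptions, not ICO-specified values.

```python
from collections import Counter

def underrepresented_groups(group_labels, threshold=0.10):
    """Return groups whose share of the training data falls below
    `threshold` - a crude early signal that the model's outputs may be
    less accurate for them. The threshold is an illustrative choice."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < threshold)

# Hypothetical demographic labels attached to training records.
training_groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
print(underrepresented_groups(training_groups))  # group C is under 10%
```

A check like this does not prove the data is fair, but it surfaces gaps that warrant further collection or testing before the system is trained.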

Training and testing the AI system

Risk Statement Examples:

Failure to take a risk-based approach to data protection law when developing and deploying AI systems because of an immature understanding of fundamental rights, risks and how to balance these and other interests. This may result in a contravention of individuals' rights and freedoms, and the principle of accountability.

Choosing to rely upon the same lawful basis for both the AI development and deployment stages because of a failure to distinguish the different purposes in each stage, may lead to unlawful processing and a contravention of the purpose limitation principle.

Deploying and monitoring the AI system

Risk Statement Examples:

Inaccurate outputs or decisions made by AI systems are caused by insufficiently diverse training data, training data that reflects past discrimination, design architecture choices or another reason. This leads to adverse impacts on individuals such as discrimination, financial loss or other significant economic or social disadvantages.

Failure to explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them where AI systems are difficult to interpret. This can lead to regulatory action, reputational damage and a disengaged public.
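Where a model is (or can be approximated by) a linear scorer, one common way to explain an individual decision is to rank each feature's contribution to the score. The sketch below is minimal and illustrative, assuming a hypothetical linear credit model; the weights and feature names are invented for the example.

```python
def explain_decision(weights, features):
    """Rank each feature's contribution (weight * value) to a linear
    score, largest magnitude first - a simple, per-individual
    explanation that can be shown to the person affected."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda item: -abs(item[1]))
    return score, ranked

# Hypothetical model weights and one applicant's feature values.
weights = {"income": 0.5, "missed_payments": -2.0, "account_age_years": 0.3}
applicant = {"income": 3.0, "missed_payments": 2.0, "account_age_years": 4.0}

score, ranked = explain_decision(weights, applicant)
print(f"score = {score:+.1f}")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.1f}")
```

For genuinely opaque models, more involved techniques (such as surrogate models or feature-attribution methods) serve the same goal: giving affected individuals a meaningful account of the decision.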

Procurement

Risk Statement Examples:

Inaccurate outputs or decisions made by AI systems are caused by insufficiently diverse training data, training data that reflects past discrimination, design architecture choices or another reason. This leads to adverse impacts on individuals such as discrimination, financial loss or other significant economic or social disadvantages.

Failure to explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them where AI systems are difficult to interpret. This can lead to regulatory action, reputational damage and a disengaged public.

How to use the AI Data Protection Toolkit

At this time, DPOs and technical architects can follow the steps detailed in the toolkit, produce the relevant documentation and perform the assessments in line with the guidance.

The ICO will be releasing the final version of the toolkit in 2021.