Data protection in the development and use of AI systems
These pages describe the requirements arising from data protection legislation that should be taken into account when artificial intelligence (AI) systems are developed and used. These guidelines are not exhaustive, and organisations developing or deploying AI systems must always assess the requirements laid down in legislation case by case. In addition, AI systems may be subject to other legislation, such as the EU AI Act.
On this page you will find information on the following topics:
- What is an AI system?
- How must data protection legislation be considered in AI systems?
- Assess risks and data protection impact
- Ensure the lawfulness of processing: when can personal data be used?
- Prohibited practices and high-risk AI systems
- Take data protection principles into account
- What must be taken into account in automated decision-making?
- Process personal data securely
- Respect people’s data protection rights
- Demonstrate compliance with data protection legislation
What is an AI system?
In these guidelines, ‘AI system’ means a system designed to analyse data, recognise patterns and use the data to produce decisions, content or predictions.
Data protection legislation does not include a definition for AI or AI systems. The EU AI Act defines ‘AI system’ as ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.
The European Commission has published guidance on the definition of AI systems (on the Commission's website).