- Harmful manipulation and deception. AI systems that use techniques a person cannot consciously detect, or that are otherwise intentionally manipulative or deceptive, with the objective or effect of distorting the behaviour of a person or a group. The distortion appreciably impairs the person’s ability to make decisions, leading them to make decisions that they would not otherwise have made or that are likely to cause significant harm to them or to another person.
- Exploiting vulnerabilities. AI systems that exploit vulnerabilities resulting from age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of such persons in a manner that is reasonably likely to cause significant harm.
- Social scoring. AI systems used for the evaluation or classification of persons or groups based on their social behaviour or personal characteristics, where the resulting social score leads to detrimental or unfavourable treatment. Such treatment is either unjustified or disproportionate to the evaluated behaviour, or it occurs in a context unrelated to the context in which the original data on the behaviour or characteristics was collected.
- Assessing and predicting whether a person will commit a crime. AI systems that assess or predict the risk of a natural person committing a criminal offence based solely on profiling or on an assessment of the person’s personality traits and characteristics. The prohibition does not apply to AI systems used to support a human assessment that is already based on objective and verifiable facts directly linked to a criminal activity.
- Untargeted scraping for the creation of facial recognition databases. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions. AI systems that infer the emotions of a person in the workplace or in education institutions. This prohibition does not apply to AI systems used for medical or safety purposes.
- Biometric categorisation. AI systems that categorise natural persons based on their biometric data to deduce their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. This prohibition does not apply to any labelling or filtering of lawfully acquired biometric datasets, such as images, for the purposes of law enforcement.
- Real-time biometric identification. Using ‘real-time’ remote biometric identification AI systems in publicly accessible spaces for the purposes of law enforcement. This prohibition does not apply where such identification is strictly necessary, such as the targeted search for specific victims, the prevention of specific threats (e.g. terrorist attacks), or the localisation of persons suspected of specific crimes, in compliance with the requirements laid down in law.
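
For teams screening internal AI use cases against this list, it can help to encode the categories as data. The sketch below is a minimal, purely illustrative Python helper: the enum names and trigger keywords are our own invention, not terms from the regulation, and a keyword match only flags a use case for legal review, since most categories carry explicit exceptions.

```python
from enum import Enum

class ProhibitedPractice(Enum):
    """The eight prohibited-practice categories summarised in the list above."""
    HARMFUL_MANIPULATION = "harmful manipulation and deception"
    EXPLOITING_VULNERABILITIES = "exploiting vulnerabilities"
    SOCIAL_SCORING = "social scoring"
    CRIME_PREDICTION = "predicting criminal offences from profiling alone"
    UNTARGETED_FACE_SCRAPING = "untargeted scraping for facial recognition databases"
    EMOTION_INFERENCE = "inferring emotions at work or in education"
    BIOMETRIC_CATEGORISATION = "biometric categorisation of sensitive attributes"
    REALTIME_BIOMETRIC_ID = "real-time remote biometric identification"

# Hypothetical keyword map for first-pass triage; the keywords are invented
# for illustration and carry no legal weight.
TRIAGE_KEYWORDS: dict[str, ProhibitedPractice] = {
    "subliminal": ProhibitedPractice.HARMFUL_MANIPULATION,
    "social score": ProhibitedPractice.SOCIAL_SCORING,
    "predict offending": ProhibitedPractice.CRIME_PREDICTION,
    "scrape facial images": ProhibitedPractice.UNTARGETED_FACE_SCRAPING,
    "emotion recognition": ProhibitedPractice.EMOTION_INFERENCE,
    "infer sexual orientation": ProhibitedPractice.BIOMETRIC_CATEGORISATION,
    "live facial recognition": ProhibitedPractice.REALTIME_BIOMETRIC_ID,
}

def flag_for_review(description: str) -> list[ProhibitedPractice]:
    """Return the categories whose trigger keywords appear in a use-case
    description. A match is a prompt for legal review, not a determination
    that the practice is prohibited (exceptions apply to several categories)."""
    text = description.lower()
    return [practice for keyword, practice in TRIAGE_KEYWORDS.items()
            if keyword in text]

print(flag_for_review("Live facial recognition at the stadium entrance"))
# [<ProhibitedPractice.REALTIME_BIOMETRIC_ID: 'real-time remote biometric identification'>]
```

A keyword approach like this is deliberately over-inclusive: it is cheaper to route a false positive to counsel than to miss a use case that falls under one of these categories.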