On 4 February 2025, the European Commission (EC) published the draft Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.
The AI Act, which aims to promote innovation while ensuring a high level of protection of health, safety, and fundamental rights, classifies AI systems into different risk categories, including prohibited practices, high-risk systems, and systems subject to transparency obligations. The guidelines aim to provide clarity on the implementation of the AI Act's provisions on prohibited AI practices.
AI Act Article 5(1)(b) prohibits the placing on the market or use of AI systems that exploit the vulnerabilities of persons due to their age, disability, or specific socio-economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes, or is reasonably likely to cause, that person or another person significant harm.
Regarding persons with disabilities, the guidelines clarify that the AI Act aims to “prevent AI systems from exploiting cognitive and other limitations and weaknesses in persons with disabilities and to protect them from harmful undue influence, manipulation, and exploitation.” The guidelines provide the following examples:
Additionally, the guidelines clarify that “significant harm” encompasses a range of serious adverse impacts, including physical, psychological, financial, and economic harm, and that persons with disabilities constitute a vulnerable group that exploitative and manipulative AI systems may significantly harm. The guidelines provide the following example:
As for next steps, having approved the draft guidelines, the EC must now formally adopt their final version.