European Commission publishes guidelines on prohibited AI practices

On 4 February 2025, the European Commission (EC) published the draft Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.

The AI Act, which aims to promote innovation while ensuring a high level of protection of health, safety, and fundamental rights, classifies AI systems into different risk categories, including prohibited practices, high-risk systems, and systems subject to transparency obligations. These guidelines aim to clarify the implementation of the AI Act's provisions on prohibited AI practices.

Article 5(1)(b) of the AI Act prohibits the placing on the market, the putting into service, or the use of AI systems that exploit the vulnerabilities of persons due to their age, disability, or a specific social or economic situation, with the objective, or the effect, of materially distorting their behaviour in a manner that causes, or is reasonably likely to cause, that person or another person significant harm.

Regarding persons with disabilities, the guidelines clarify that the AI Act aims to “prevent AI systems from exploiting cognitive and other limitations and weaknesses in persons with disabilities and to protect them from harmful undue influence, manipulation, and exploitation.” The guidelines provide the following examples:

  • A therapeutic chatbot intended to provide mental health support and coping strategies to persons with mental disabilities could exploit their limited intellectual capacities to influence them to buy expensive medical products or nudge them to behave in ways that are harmful to them or to other persons;
  • AI systems could identify women and young girls with disabilities online and target them with sexually abusive content and more effective grooming practices, thus exploiting the impairments and vulnerabilities that make them more susceptible to manipulation and abuse and less capable of protecting themselves.

Additionally, the guidelines clarify that “significant harm” encompasses a range of significant adverse impacts, including physical, psychological, financial, and economic harms, and that persons with disabilities are a vulnerable group that exploitative and manipulative AI systems may significantly harm. They provide the following example:

  • An AI system that uses emotion recognition to support persons with mental disabilities in their daily lives may also manipulate them into making harmful decisions, such as purchasing products that promise unrealistic mental health benefits. This is likely to worsen their condition and to exploit them financially through expensive and ineffective purchases, causing them significant psychological and financial harm.

As a next step, having approved the draft guidelines, the EC must now formally adopt their final version.