10.02.2025
AI Act: New obligations from 2 February 2025
Chapter I AI Act: AI competence must be ensured
From 2 February 2025, companies that provide and/or deploy AI systems must ensure that their employees involved in AI have sufficient AI competence (Article 4 of the AI Act).
In practice, this means that the employees concerned must be trained in three areas:
- Legal training on the AI Act: Employees must be familiar with their rights and obligations under the Act.
- Technical training: Employees must be familiar with the proper use of AI systems.
- Ethical training: Employees need to be aware of the social and ethical opportunities and risks of using AI.
It is particularly important that training is tailored to employees' prior knowledge and the way in which AI is used.
Actions needed to build AI competence
- Implement customised training
- Develop internal policies and standards
- Provide training and certification programmes
- Establish points of contact, such as an AI committee or AI officer
Chapter II AI Act: Ending prohibited practices
The second major obligation under the AI Act, to be implemented from 2 February 2025, is the cessation of prohibited practices.
Article 5 of the AI Act lists specific prohibited practices that are deemed to pose an unacceptable risk to the safety, rights and freedoms of individuals.
The eight prohibited AI practices under the AI Act are:
- Manipulative or deceptive AI systems: The use of AI systems that use subliminal or deliberately manipulative techniques to influence the behaviour of individuals to make decisions that may cause them significant harm.
- Exploitation of vulnerabilities: AI systems that exploit the vulnerabilities of particular groups of people based on age, disability, or social or economic status to influence their behaviour in a harmful way.
- Social scoring: Systems that score people over time based on their behaviour or personal characteristics, leading to unjustified or disproportionate disadvantage in different areas of life.
- Crime prediction: AI systems that rely solely on profiling to predict a person's risk of committing a crime, without objective and verifiable facts.
- Building facial recognition databases: The untargeted extraction of facial images from the internet or video surveillance footage to create or expand facial recognition databases.
- Emotion recognition in the workplace and education: The use of AI systems to infer the emotions of individuals in work or educational contexts, except for medical or security reasons.
- Biometric categorisation: AI systems that classify people into categories such as ethnicity, political opinions, religious beliefs or sexual orientation based on biometric data.
- Real-time remote biometric identification: The instant recognition and identification of people at a distance based on their biometric characteristics, such as facial features, by comparing them to reference databases without the active involvement of the person concerned.
Failure to comply with these prohibitions may result in fines of up to €35 million or, in the case of companies, up to 7% of the total worldwide annual turnover in the preceding financial year, whichever is the greater.
Our AI services at a glance
- Regulatory mapping: Identification of relevant regulatory requirements through detailed mapping against various national specifications and EU data regulations.
- Data & AI governance: Development and adaptation of governance structures, identification of requirements and preparation for the AI Act.
- Training courses: Workshops on the scope and implementation of the AI Act, and provision of AI competence in accordance with Article 4 of the AI Act for managers, product teams and developers.
- AI inventory: Assistance in creating an overview of all AI systems in the company, including determining whether a system qualifies as an AI system.
- Contract drafting: Drafting of contracts related to AI projects, such as development contracts and AI-as-a-Service (AIaaS) contracts.
- Advice on external AI applications: Advice and guidance on the use of external AI applications and review of third-party applications.
- Anonymisation & pseudonymisation: Design of, and advice on, anonymisation and pseudonymisation policies.
- Risk assessments: Advice on data protection impact assessments and fundamental rights impact assessments in relation to AI systems.
- Advice on copyright law: Advice on copyright implications of generative AI (e.g. rights to data input, protectability of prompts and output).
- Legally compliant data use: Advice on the legally compliant use of big data, machine learning and generative AI in the context of data protection law, trade secrets and database rights.
- AI development advice: Holistic advice on contract management, compliance and other legal aspects of AI development projects.