14.12.2023
News on the AI Act: Logbook on the planned EU Regulation
In order to be adequately prepared for the upcoming AI Act, it is advisable to keep an eye on the latest developments relating to the Regulation. We present the most important developments in the legislative process here. You’ll find the most recent post at the top.
Work on the EU’s forthcoming Regulation on AI – or AI Act – is almost complete. Once in force, the AI Act will regulate the development and use of AI across the EU. As an EU Regulation, it will apply directly in the Member States and does not need to be transposed into national law. The draft Regulation was tabled by the EU Commission in April 2021. The priority is to create a European legal framework for AI. But not only that: the AI Act should also set a global standard for the ethical use and development of AI technology.
Many of the prospective requirements are not expected to change significantly. As such, the basic regulatory framework is already foreseeable and organisations can prepare for it now. They can use our white paper (only available in German) on the upcoming Regulation to help them do just that.
Whitepaper: AI Act
The AI Act came into effect on 1 August 2024. We have therefore completely revised our whitepaper.
Get the latest version now!
The official trilogue between the EU legislative bodies has recently been completed. The EU Parliament and the EU Council of Ministers have finalised their compromise versions of the draft AI Act and agreed on the key content. Further details will be finalised in the near future and the process will be concluded with formal votes by the Parliament and the Council of Ministers. It is not yet possible to say with certainty when the law will come fully into force, but it is likely to be early 2024. Afterwards there will be an implementation period of two years for companies, although the first prohibitions, covering particularly sensitive areas of regulation, will take effect just six months after the law comes into force.
Check here for all news and updates on the planned AI Act
Following the final trilogue negotiations, which began on Wednesday, 6 December 2023, the EU Parliament, the Commission and the Council of Ministers have finally reached an agreement on the upcoming AI Act. The length of the negotiations was record-breaking. Initially scheduled for Wednesday only, the talks had to be suspended after 22 hours of tough negotiation, and the legislators finally agreed on the final version of the AI Act late on Friday evening, 8 December 2023.
An important goal was to find a balance between being innovation-friendly and protecting fundamental rights. According to statements by the parties involved, this has now been achieved. The basic structure of the regulation has remained true to the first draft. The risk-based approach with a focus on high-risk AI still applies. However, many details have changed compared to the original Commission version. The most controversial points in the final negotiations included the regulation of general-purpose AI and the ban on real-time biometric identification.
As a result, biometric real-time identification is generally prohibited by the AI Act. However, three exceptions apply: the technology should be permitted for recognising and preventing terrorist attacks, for prosecuting serious crimes and for searching for victims of serious crimes. Independent institutions are to monitor that states comply with this narrow scope of application.
The regulation of general-purpose AI (GPAI), meaning AI that can be used for a variety of purposes, has been a controversial point of the AI Act for some time. Most recently, it seemed possible that this contentious topic could prevent the regulation altogether. However, this did not happen. A two-stage approach was finally agreed upon on Friday. At the first stage, developers of all GPAI models must fulfil certain transparency obligations. At the second stage, GPAI models trained with a particularly high level of computing power, and therefore posing a particularly high risk, must additionally fulfil certain risk assessment and risk management requirements. This includes, for example, the current version of the ChatGPT language model. If such systems are used in the area of high-risk AI, the corresponding requirements will of course also apply to them.
The classification as high-risk AI and the resulting regulation have changed significantly since the original draft. In short, both the scope of application and the mandatory requirements have been expanded. You will soon be able to read about this in further detail in our updated white paper.
The final text of the AI Regulation is not yet available. The wording will first be fine-tuned, after which the EU Parliament and Council of Ministers will have to formally vote on the regulation. However, there will be no further significant changes. The law is expected to come into force at the beginning of 2024. The planned bans will take effect six months after that. After one year, the requirements for conformity assessment bodies and the governance provisions will apply. After two years, companies will have to fully comply with the AI Act. Companies need to start preparing now in order to achieve full compliance.
In the current trilogue negotiations, an agreement between the EU Parliament, the EU Council of Ministers and the EU Commission is drawing ever closer. The last negotiation meeting on 24 October 2023 once again brought significant progress in the work on the regulation. Progress was made with regard to the regulation of foundation models such as ChatGPT and the classification of high-risk AI. Regarding the classification, the co-legislators seem to agree that an exception should be made to Annex III if a system does not pose a significant risk to the health, safety or fundamental rights of a natural person. However, there is still disagreement on other issues. In particular, it has not yet been possible to finalise which AI systems should be banned and which exceptions should be made for law enforcement authorities. The next trilogue meeting is scheduled for 6 December 2023. It is considered likely that a political agreement on the final version of the AI Regulation will be reached on this date.
The EU Parliament today agreed on its final compromise proposal on the planned AI Act. This means that the formal trilogue between the EU Parliament, the Council of Ministers and the EU Commission can now begin. The trilogue will now work on a final draft of the AI Act. The first trilogue meeting is scheduled to take place later today, on Wednesday 14 June. Spain, which takes over the presidency of the EU Council for the second half of the year from next month, has already announced that the negotiations on the AI Act will be concluded during its presidency. As a result, work on the AI Act will gather pace and the Regulation is expected to come into force later this year.
Negotiations in the EU Parliament had recently stalled and the agreement previously reached between the political groups was considered to have failed. In today’s final vote, therefore, some new motions were tabled by political groups in the EU Parliament. However, none of these new motions were successful. According to one MEP, the main purpose was to signal that there was not 100 per cent agreement in the Parliament on the AI Act. As a result, there were no further amendments today.
An important aspect that distinguishes the Parliament’s compromise proposal from the original Commission version is the planned regulation of general-purpose AI. Such AI does not have a single specific purpose, but can serve as a basis for a wide range of specific applications. Especially relevant here are so-called foundation models, large language models on which other AI can be built. A prominent example of this is ChatGPT. According to the EU Parliament, such systems should be subject to a particular labelling obligation for the content they generate and to a disclosure obligation for the training data used. Importantly, the Parliament also wants to shorten the planned two-year implementation period for such AI, as such AI is already having a detrimental effect.
Other important amendments made by the Parliament concern the definition of AI, an expansion of prohibited systems and high-risk AI, an additional level for qualifying as high-risk AI and stricter obligations for such systems. A European AI Office is proposed to help implement the AI Act on a cross-border basis. In the long term, this office is to be expanded into a comprehensive EU digital agency.
Last week, MEPs reached agreement on the draft of the planned AI Act. According to one parliamentary source, the day before the agreement was the most tense day of negotiations so far. Among other things, the category of prohibited AI practices has been expanded. Furthermore, in addition to the existing requirements, an AI system should only be classified as high-risk where it poses a significant risk to health, safety or fundamental rights. The parliamentary proposal will be voted on in plenary in mid-June.
The responsible rapporteurs in the EU Parliament have presented a new compromise proposal on the planned AI Act. The proposal has been seen by the Euractiv media network. It addresses new prohibited AI practices and categories for high-risk AI.
With regard to the prohibitions already contained in the draft, the rapporteurs propose an extension of the ban on social scoring systems. According to this, the ban should no longer apply only to individuals, but also to entire groups, if social scoring leads to conclusions about personal characteristics and to disadvantages. AI that uses subliminal techniques beyond a person’s ability to perceive them will be added to the list of prohibited AI, except where used for therapeutic purposes or with explicit consent. In addition, AI that is intentionally manipulative or exploits a person’s vulnerability to influence their behaviour and cause significant physical or psychological harm will be banned.
The proposal significantly widens the scope of systems to be considered high-risk. One area affected by the proposed extensions is that of biometric identification. Here, real-time biometric identification in public spaces is to be completely prohibited, so that the high-risk category in this area will only cover systems used for subsequent identification. However, the use cases will now also include systems for recognising emotions. It is also proposed to add both live and ex-post identification to the list for privately accessible spaces.
The area of critical infrastructure is also to be expanded. According to the proposal, this should include any safety components for road, rail and air traffic.
The proposal also provides for an extension for the high-risk category of employment. This includes systems that make or support decisions relating to the initiation, establishment, performance or termination of an employment relationship, in particular the assignment of personalised tasks or the monitoring of compliance with regulations in the workplace.
Other areas to be expanded include education and access to public services.
As in the Council of Ministers’ version, the Parliament’s proposal also includes the high-risk category of AI systems in the insurance sector. These were not included in the Commission’s original draft.
Completely new high-risk areas have also been proposed. For example, systems used by vulnerable groups, especially those that could affect the development of minors, will now be included. It’s not hard to imagine that this could also apply to the recommendation algorithms used by social networks.
Systems that could influence people’s voting behaviour or that are involved in democratic processes such as vote counting are also to be included.
Finally, the category of high-risk AI is to include generative AI systems that, for example, can produce texts that could be mistaken for human-generated content, or audiovisual content that appears to show something that never happened. This should not apply to texts if they have been reviewed by a human being or if a human being is legally responsible for them; in this respect, the proposal has parallels with the provision of Art. 22 GDPR. Audiovisual content should be exempt if it is obviously a work of art. Popular tools like ChatGPT and DALL-E would fall into the high-risk category. Under previous drafts, chatbots such as ChatGPT would have been classified as low-risk AI and therefore only subject to transparency requirements.
According to Euractiv, the two rapporteurs in charge are aiming to conclude the negotiations on the parliamentary proposal for the AI Act within the next few days.
The Council’s final compromise proposal was adopted at the meeting of the EU Council of Ministers in the Transport, Telecommunications and Energy configuration. The co-legislator has thus succeeded in taking a first step towards the entry into force of the AI Act. Now only the EU Parliament’s proposal for a Regulation is pending. Afterwards, the Commission, Council and Parliament can start the official legislative trilogue. According to the European media network Euractiv, the Parliament’s version is expected in March 2023.
The Council proposal adopted today does not contain any substantive changes, but is the result of earlier compromises proposed by the Council of Ministers. The Council of Ministers has made some significant changes to the Commission’s original proposal. Among other things, the definition of AI has been narrowed, the proposal includes provisions for so-called general-purpose AI, and the requirements and obligations for high-risk AI have been modified. Which changes will ultimately be included in the text of the AI Act will only become clear during the formal legislative negotiations. Many of the proposals make sense and can be expected to become law as they are or in a similar form.
Under the Czech Presidency, the EU Council of Ministers presented its final compromise proposal. This was endorsed by the Committee of Permanent Representatives of the Member States (COREPER) on 18 November and is due for final adoption at the meeting of telecommunications ministers on 6 December. Once the EU Parliament’s proposal is ready, formal trilogue negotiations can begin.
The changes made to the previous proposals are very minor. It has been confirmed that “general-purpose AI” will fall under the regime of the AI Act and that the EU Commission will define the concrete requirements for these AI systems through a separate implementing act. In addition, the details of the exceptions for law enforcement authorities have been adapted. Under certain conditions, they will be allowed to operate high-risk AI systems that have not gone through the conformity assessment procedure. However, if the competent market surveillance authority subsequently refuses to grant an exemption, the new amendment requires all results and outputs from the system to be deleted.
In addition, Recital 37 now explicitly clarifies that AI systems that verify eligibility for public benefits and services are to be classified as high-risk AI systems. This amendment is purely declaratory as it is already covered by Annex III of the Regulation. Furthermore, the text of the recital has been adapted to reflect the previous inclusion of certain insurance services in Annex III.
With regard to the transparency obligations under the draft Regulation, the latest proposal states in Recital 70 that special consideration should be given to those groups that are vulnerable due to age or disability. However, the proposal does not specify how this should be done.
The EU Council of Ministers has presented a new compromise proposal on the planned AI Act. Now that Member States have another opportunity to comment on the draft, the Czech Presidency is aiming for a general agreement at the next ministerial meeting on 6 December 2022. The Council of Ministers is therefore almost at the end of its negotiations. Once the Council of Ministers has drafted a final amendment to the AI Act, the start of the formal trilogue depends only on the EU Parliament.
The current version from the Council of Ministers makes a small but meaningful change to the definition of AI. So far, the Commission’s draft has covered AI systems that “operate with a certain degree of autonomy”. The text has now been amended in the current compromise proposal so that the definition now includes such AI systems that “operate with autonomous elements”. This change is to be welcomed, as it was previously unclear what requirements should be placed on that “certain degree”.
Some other important changes concern prohibited AI systems. Now only biometric identification systems that are used remotely are covered once again. In an earlier compromise proposal, the word “remote” had been deleted, which significantly broadened the ban.
There are also some new developments in the area of high-risk AI. Among other things, providers of such AI systems will now be subject to a transparency obligation to include the expected results of using the AI in the AI’s instructions for use. Furthermore, Annex III of the AI Act has been amended. This lists AI systems that should be considered high-risk AI. For example, AI systems for risk assessment and pricing of insurance products, including life and health insurance, have been included again.
Other aspects of the proposed amendment concern, for example, the transparency obligations for low-risk AI systems or the penalties for violations of the Regulation.
On 28 September 2022, the EU Commission presented two proposals for directives to reform the liability rules for AI. On the one hand, the plan is to reform the European Product Liability Directive, which came into force way back in 1985. The Directive regulates the strict liability of manufacturers when their products cause personal injury or damage to property. The proposed modernisation is to adapt the Directive to include compensation claims for such damage where an AI system is part of a product and the product is rendered unsafe by the AI application. If consumers are harmed by such a product, and the product comes from a manufacturer outside the EU, they should be able to claim compensation from the importer or the manufacturer’s EU representative.
In addition, there are plans to introduce a Directive on AI liability. For the first time, it will include specific provisions for damage caused by AI. There are two main points in the proposal. The first is a presumption of causality: if an injured party can prove that an obligation relevant to the damage has been breached and that a causal link with the AI’s performance is reasonably probable, then it will be presumed that the breach of the obligation has caused the damage. However, by proving that the damage had another cause, the liable party can rebut this presumption.
Secondly, access to evidence should be made easier for injured parties. For example, in the case of harm caused by high-risk AI, they should be able to apply to the court for an order to disclose information about the AI system. This would allow injured parties to identify those who could be held liable and to find out what exactly led to the damage that occurred. On the whole, the Directive is intended to apply to all damage caused by AI, regardless of whether it is the result of high-risk or other AI systems.
According to the EU Commission, the AI Liability Package will accompany the AI Act by introducing a new standard for trust and redress around AI. This will create the legal certainty that will encourage companies to invest in artificial intelligence.
The Czech Presidency has presented the third complete compromise proposal on the AI Act from the EU Council of Ministers. It implements suggestions and comments from Member States. Compared to the Commission’s original draft, some important passages have been changed. Perhaps the most important change is in the way AI systems are defined. The definition had already been adapted in an earlier Council proposal and is much narrower than the EU Commission’s original definition. The Council’s definition is more in line with the classical understanding of AI. By contrast, the Commission’s definition would also cover simple computing applications like calculators. The Council of Ministers was responding to criticism from many quarters that the original definition was too broad and vague. In the third compromise proposal now on the table, the requirement that AI’s work goals be “human-defined” has been removed. According to the Council of Ministers, this reference was not essential for the purposes of the definition.
Another important aspect of the Council proposal is the regulation of so-called “general-purpose AI”. This classification for AI systems is not provided for in the Commission’s draft. It was included for the first time in the compromise proposal presented by the Council of Ministers in November 2021 under the Slovenian Presidency. The term refers to AI systems that are not designed for single, specific applications, but have a wide range of possible uses. As such, they can be used for a variety of tasks in different areas. In particular, such general algorithms can serve as the basis for more specialised AI systems. For example, a single “general-purpose AI” for general language processing can be used for a plethora of specialised applications, such as chatbots, ad generation systems or decision-making processes. In many cases, even the developers of general-purpose AI have no idea what it will be used for later on. If such algorithms are used as high-risk AI, or as a component of such AI, then most of the corresponding obligations for “providers” of AI systems will also apply to them, according to the Council proposal now on the table. However, small and medium-sized enterprises are to be exempted, provided that they are not partners of or affiliated with larger enterprises. According to the Council’s proposal, the specific requirements to be imposed on general-purpose AI systems themselves will be laid down by the Commission in a separate implementing act within one and a half years after the AI Act enters into force.
Other areas where the Council of Ministers has proposed changes, some of them rather marginal, are transparency obligations, measures to promote innovation, supervision and penalties for violations.