News on the AI Act: Logbook on the planned EU Regulation
To prepare for the upcoming AI Act, it’s wise to keep an eye on the latest news regarding this new EU Regulation. We present the most important developments in the legislative process here. You’ll find the most recent post at the top.
Work on the EU’s forthcoming Regulation on AI – or AI Act – is in full swing. Once in force, the AI Act will regulate the development and use of AI across the EU. As an EU Regulation, it will apply directly in the Member States and will not require transposition into national law. The draft Regulation was tabled by the EU Commission in April 2021. The priority is to create a European legal framework for AI. But not only that: the AI Act is also intended to set a global standard for the ethical use and development of AI technology.
Many of the prospective requirements are not expected to change significantly. As such, the basic regulatory framework is already foreseeable and organisations can prepare for it now. They can use our white paper (only available in German) on the upcoming Regulation to help them do just that.
Newsletter
Subscribe to our monthly newsletter with information on judgments, professional articles and events (currently only in German).
By clicking on “Subscribe”, you consent to receive our monthly newsletter (with information on judgments, professional articles and events) as well as to the aggregated usage analysis (measurement of the opening rate by means of pixels, measurement of clicks on links) in the e-mails. You will find an unsubscribe link in each newsletter and can use it to withdraw your consent. You can find more information in our privacy policy.
The EU Council of Ministers and the EU Parliament are in discussion about their positions on individual points of the Regulation. The EU Council of Ministers has already tabled several proposals for changes. The EU Parliament is expected to vote on its proposal in November 2022. This will be followed by the official trilogue between the EU’s legislative bodies. It is not yet possible to say with any degree of certainty when the new law will come into force. While it is conceivable that this could happen as early as 2023, it seems more likely that it will come into force in 2024. There will then be an implementation period for companies.
Check here for all news and updates on the planned AI Act
Negotiations in the EU Parliament had recently stalled and the agreement previously reached between the political groups was considered to have failed. In today’s final vote, therefore, some new motions were tabled by political groups in the EU Parliament. However, none of these new motions were successful. According to one MEP, the main purpose was to signal that there was not 100 per cent agreement in the Parliament on the AI Act. As a result, there were no further amendments today.
An important aspect that distinguishes the Parliament’s compromise proposal from the original Commission version is the planned regulation of general-purpose AI. Such AI has no single specific purpose but can serve as the basis for a wide range of specific applications. Especially relevant here are so-called foundation models: large language models on which other AI can be built, a prominent example being ChatGPT. According to the EU Parliament, such systems should be subject to a particular labelling obligation for the content they generate and to a disclosure obligation for the training data used. Importantly, the Parliament also wants to shorten the planned two-year implementation period for such AI, as it is already on the market and having a detrimental effect.
Other important amendments made by the Parliament concern the definition of AI, an expansion of prohibited systems and high-risk AI, an additional level for qualifying as high-risk AI and stricter obligations for such systems. A European AI Office is proposed to help implement the AI Act on a cross-border basis. In the long term, this office is to be expanded into a comprehensive EU digital agency.
With regard to the prohibitions already contained in the draft, the rapporteurs propose extending the ban on social scoring systems. Under this proposal, the ban would no longer apply only to individuals but also to entire groups, where social scoring leads to conclusions about personal characteristics and to disadvantages. AI that uses subliminal techniques beyond a person’s ability to perceive them will be added to the list of prohibited AI, except where used for therapeutic purposes or with explicit consent. In addition, AI that is intentionally manipulative or exploits a person’s vulnerability to influence their behaviour and cause significant physical or psychological harm will be banned.
The proposal significantly widens the scope of systems to be considered high-risk. One area affected by the proposed extensions is biometric identification. Here, real-time biometric identification in public spaces is to be banned outright, so that the high-risk category in this area will cover only systems used for subsequent identification. The use cases will now also include systems for recognising emotions. It is also proposed to add both live and ex-post identification in privately accessible spaces to the list.
The area of critical infrastructure is also to be expanded. According to the proposal, this should include any safety components for road, rail and air traffic.
The proposal also provides for an extension for the high-risk category of employment. This includes systems that make or support decisions relating to the initiation, establishment, performance or termination of an employment relationship, in particular the assignment of personalised tasks or the monitoring of compliance with regulations in the workplace.
Other areas to be expanded include education and access to public services.
As in the Council of Ministers’ version, the Parliament’s proposal also includes the high-risk category of AI systems in the insurance sector. These were not included in the Commission’s original draft.
Completely new high-risk areas have also been proposed. For example, systems used by vulnerable groups, especially those that could affect the development of minors, will now be included. It’s not hard to imagine that this could also apply to the recommendation algorithms used by social networks.
Systems that could influence people’s voting behaviour or that are involved in democratic processes such as vote counting are also to be included.
Finally, the category of high-risk AI is to include generative AI systems that, for example, can produce texts that could be mistaken for human-generated ones, or audiovisual content depicting something that never happened. This should not apply to texts if they have been reviewed by a human being or if a human being is legally responsible for them; in this respect, the proposal has parallels with the provision of Art. 22 GDPR. Audiovisual content should be exempt if it is obviously a work of art. Popular tools like ChatGPT and DALL-E would fall into the high-risk category. Under previous drafts, chatbots such as ChatGPT would have been classified as low-risk AI and therefore only subject to transparency requirements.
According to Euractiv, the two rapporteurs in charge are aiming to conclude the negotiations on the parliamentary proposal for the AI Act within the next few days.
The Council proposal adopted today does not contain any substantive changes, but is the result of earlier compromises proposed by the Council of Ministers. The Council of Ministers has made some significant changes to the Commission’s original proposal. Among other things, the definition of AI has been narrowed, the proposal includes provisions for so-called general-purpose AI, and the requirements and obligations for high-risk AI have been modified. What changes will ultimately be included in the text of the AI Act will only become clear during the formal legislative negotiations. Many of the proposals make sense and can be expected to become law as they are or in a similar form.
The changes made to the previous proposals are very minor. It has been confirmed that “general-purpose AI” will fall under the regime of the AI Act and that the EU Commission will define the concrete requirements for these AI systems through a separate implementing act. In addition, the details of the exceptions for law enforcement authorities have been adapted. Under certain conditions, they will be allowed to operate high-risk AI systems that have not gone through the conformity assessment procedure. However, if the competent market surveillance authority subsequently refuses to grant an exemption, the new amendment requires all results and outputs from the system to be deleted.
In addition, Recital 37 now explicitly clarifies that AI systems that verify eligibility for public benefits and services are to be classified as high-risk AI systems. This amendment is purely declaratory as it is already covered by Annex III of the Regulation. Furthermore, the text of the recital has been adapted to reflect the previous inclusion of certain insurance services in Annex III.
With regard to the transparency obligations under the draft Regulation, the latest proposal states in Recital 70 that special consideration should be given to those groups that are vulnerable due to age or disability. However, the proposal does not specify how this should be done.
The current version from the Council of Ministers makes a small but meaningful change to the definition of AI. So far, the Commission’s draft has covered AI systems that “operate with a certain degree of autonomy”. The text has now been amended in the current compromise proposal so that the definition now includes such AI systems that “operate with autonomous elements”. This change is to be welcomed, as it was previously unclear what requirements should be placed on that “certain degree”.
Some other important changes concern prohibited AI systems. Now only biometric identification systems that are used remotely are covered again. In an earlier compromise proposal, the word “remote” had been deleted, which significantly broadened the ban.
There are also some new developments in the area of high-risk AI. Among other things, providers of such AI systems will now be subject to a transparency obligation to include the expected results of using the AI in the AI’s instructions for use. Furthermore, Annex III of the AI Act, which lists the AI systems to be considered high-risk, has been amended. For example, AI systems for risk assessment and pricing of insurance products, including life and health insurance, have been included again.
Other aspects of the proposed amendment concern, for example, the transparency obligations for low-risk AI systems or the penalties for violations of the Regulation.
In addition, there are plans to introduce a Directive on AI liability. For the first time, it will include specific provisions for damage caused by AI. There are two main points in the proposal. The first is a presumption of causality: if an injured party can prove that an obligation relevant to the damage has been breached and that a causal link with the AI’s performance is reasonably probable, then it will be presumed that the breach of the obligation has caused the damage. However, by proving that the damage had another cause, the liable party can rebut this presumption.
Secondly, access to evidence should be made easier for injured parties. For example, in the case of harm caused by high-risk AI, they should be able to apply to the court for an order to disclose information about the AI system. This would allow injured parties to identify those who could be held liable and to find out what exactly led to the damage that occurred. On the whole, the Directive is intended to apply to all damage caused by AI, regardless of whether it is the result of high-risk or other AI systems.
According to the EU Commission, the AI Liability Package will accompany the AI Act by introducing a new standard for trust and redress around AI. This will create the legal certainty that will encourage companies to invest in artificial intelligence.
Another important aspect of the Council proposal is the regulation of so-called “general-purpose AI”. This classification for AI systems is not provided for in the Commission’s draft; it was included for the first time in the compromise proposal presented by the Council of Ministers in November 2021 under the Slovenian Presidency. The term covers AI systems that are not designed for a single, specific application but have a wide range of possible uses, so that they can be deployed for a variety of tasks in different areas. In particular, such general algorithms can serve as the basis for more specialised AI systems. For example, a single “general-purpose AI” for general language processing can be used for a plethora of specialised applications, such as chatbots, ad generation systems or decision-making processes. In many cases, even the developers of general-purpose AI have no idea what it will be used for later on. If such algorithms are used as high-risk AI, or as a component of such AI, then most of the corresponding obligations for “providers” of AI systems will also apply to them, according to the Council proposal now on the table. However, small and medium-sized enterprises are to be exempted, provided that they are not partners of or affiliated with larger enterprises. According to the Council’s proposal, the specific requirements for general-purpose AI systems themselves will be laid down by the Commission in a separate implementing act within one and a half years after the AI Act enters into force.
Other areas where the Council of Ministers has proposed changes, some of them rather marginal, are transparency obligations, measures to promote innovation, supervision and penalties for violations.