In the last decade, the topic of artificial intelligence (AI) has garnered increasing attention. Companies, law firms and government institutions are devoting more and more time to grappling with AI and asking themselves about its impact on different areas of life. Today we want to take a look at what opportunities and challenges AI holds for companies and law firms. This article provides you with an initial overview of current questions and issues concerning AI as well as legislative trends in the European Union.
The particular challenge of defining AI
Defining the term “artificial intelligence” has always proved difficult. Merely drawing a comparison with the ways humans think and behave is not clear-cut, and even the term “intelligence” is understood in very different ways. A workable definition rests on two central aspects: a certain degree of autonomous functioning and an adaptive processing procedure. In this respect, AI can be described as a phenomenon of self-learning software algorithms. These typically take over tasks traditionally performed by humans and serve as forecasting mechanisms based on correlations between old and new data patterns. The autonomous adaptation of processing is realised by various techniques, of which “machine learning” is currently the most popular: an algorithm is trained on a large number of example cases until it masters the task at hand. The structures that result from this training are so-called neural networks.
If machine learning is performed across multiple layers, this is called “deep learning”. It is based on “deep networks” capable of much finer differentiations and enables far more complex tasks than classic machine learning. Deep learning has become a dominant, highly powerful factor in the IT industry.
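To illustrate the idea of layers, here is a minimal sketch in Python: an input passes through two stacked layers of a tiny network, each layer computing weighted sums followed by a nonlinear activation. All weights and inputs are made up purely for illustration; real networks learn these values during training.

```python
import math

def layer(inputs, weights, biases):
    # One layer: for each neuron, a weighted sum of the inputs plus a bias,
    # passed through a sigmoid activation (squashes the result into (0, 1)).
    return [1 / (1 + math.exp(-(sum(w * i for w, i in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

# Made-up parameters for a small "deep" network: 2 inputs -> 3 hidden -> 1 output.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
hidden_b = [0.0, 0.1, -0.1]
out_w = [[1.2, -0.8, 0.5]]
out_b = [0.05]

x = [0.6, 0.4]
h = layer(x, hidden_w, hidden_b)  # first layer builds an intermediate representation
y = layer(h, out_w, out_b)        # second layer refines it into the final output
```

Each additional layer refines the representation produced by the one before it, which is what allows deep networks to capture subtler distinctions than a single-layer model.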
Deep learning has recently been used to achieve outstanding results in the areas of image recognition, language translation and stylistic adaptation. There are three main developments behind the increased use of AI:
- As a result of digitalisation, vast amounts of data are available that can be used to train AI and in turn constantly improve AI applications.
- More efficient training algorithms are available for the use of large and deep neural networks.
- Developers have harnessed the technology of high-performance graphics chips to develop “deep-learning software”. This has enabled quantum leaps in terms of performance.
Neural networks are trained by running millions of test runs and examples. In image recognition, for example, this works by showing the network countless images; in speech recognition, networks are trained by evaluating millions of audio recordings. Once the network has taken in enough training examples (input), it can predict the correct output even for previously unseen input.
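This input-to-output principle can be sketched in a few lines of Python. The example below is entirely illustrative, with made-up data: it trains a single artificial neuron – a perceptron, the simplest building block of a neural network – on labelled examples and then asks it to predict the label of inputs it has never seen.

```python
def predict(weights, bias, x):
    # Output 1 if the weighted sum of the inputs crosses the threshold, else 0.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge the weights toward the correct answer (the learning step).
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Training examples: the label is 1 whenever x + y > 0 (the rule to be learned).
samples = [((1, 1), 1), ((2, -1), 1), ((-1, 2), 1), ((0.5, 0.5), 1),
           ((-1, -1), 0), ((-2, 1), 0), ((1, -2), 0), ((-0.5, -0.5), 0)]

weights, bias = train(samples)

# Previously unseen inputs: the trained model generalises the learned rule.
print(predict(weights, bias, (3, 2)))    # prints 1
print(predict(weights, bias, (-3, -2)))  # prints 0
```

Real-world systems differ only in scale: instead of eight hand-written examples and two weights, they learn millions of parameters from millions of examples.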
Impact of AI on the real (legal) world
Digitalisation has led to fundamental technological, economic and social changes in many areas of society. Currently, AI applications are also making inroads into almost all areas of the world of work, integrated as components in software. For users, the way a software solution is used does not necessarily change; in many cases the output is presented exactly as before. Only the processing procedure that produces the result runs differently, and the specific output in turn feeds back into the algorithm and thus into future processing.
For example, AI components are already finding their way into a number of areas of e-commerce, such as mechanisms to prevent fraud, serve ads and display products on websites, right through to logistics processes. Many voice assistants, chatbots and data analysis tools already use AI. But the spread of AI is also changing internal processes in HR departments, for example in personnel development or recruiting.
Legal work in particular is undergoing extensive changes. Especially for drafting, negotiating and reviewing contracts, AI is becoming an essential factor – influencing predictions for case outcomes, suggestions for appropriate courses of action, organisational processes and far more besides. AI modules are currently used to run “intelligent” searches in comprehensive document sets or for text comparisons.
The use of AI in legal contexts is already a reality, even in contentious proceedings. A few years ago, for example, a statement of claim was prepared using AI (admittedly, though, it didn’t meet the requirements for a statement of claim due to its failure to sufficiently reference the individual case). There are special opportunities for competition authorities: AI could offer a standardised means of identifying irregularities in market pricing or positions of particular market power.
Developing and applying AI does not necessarily involve processing individual people’s data. To the extent that personal data is processed, however – for instance, in connection with the use of AI for job applications – the General Data Protection Regulation (GDPR) must be observed.
The EU Commission’s proposed AI Regulation
The European Commission has recognised the opportunities associated with AI, but also the accompanying need for regulation. On 21 April 2021, it presented a proposal for an AI Regulation, which would be the first of its kind in the world. The aim is to develop Europe into a global centre for AI that benefits society and complies with the law. The Council has already presented a modified proposal, which is currently being discussed in the European Parliament. The trilogue between the three legislative bodies is still pending. However, it is becoming apparent that the project is being pursued with great vigour. The Regulation may come into force as early as 2023, with a one-year transitional period before it becomes fully binding. As a European Regulation, the law will then be directly applicable in all Member States; there will be no need for implementation in the form of national laws.
Legal background of the Regulation
The proposal puts the protection of citizens at the centre. The Commission has already issued a variety of landmark legislation to facilitate the development of the data-driven economy.
This includes the Regulation on the free flow of non-personal data in the European Union, legislation on cybersecurity, the Directive on open data and the re-use of public sector information, and the GDPR.
In 2018, the Commission presented an AI strategy for the first time and agreed it with the Member States.
At the same time, the Commission set up a high-level expert group on AI. The latter presented its ethics guidelines for trustworthy AI in April 2019. These guidelines served as the basis for the White Paper on AI presented on 19 February 2020.
The scope of the AI Regulation
The territorial scope of the draft AI Regulation is very broad. Art. 2(1) of the AI Regulation proposal provides for extraterritorial effect. What matters, therefore, is whether an AI system – or the output produced by that system – is used within the EU, regardless of where the providers or users are located. This means that the scope would also apply to providers and users in third countries. The proposed AI Regulation echoes the marketplace principle from Art. 3(2) of the GDPR, which similarly extends the scope of that Regulation to controllers outside the Union.
The material scope of application is also very broad. Under the current Council draft of the AI Regulation, an AI system is defined as a software system developed through machine learning or logic- and knowledge-based or statistical approaches that generates outputs, in the form of predictions or evaluations, which influence the environments in which they are used. In line with the key elements of autonomous adaptation described above, the definition focuses primarily on the type of technology underlying the application.
Risk-based approach of the AI Regulation
Just like the GDPR, the AI Regulation is also intended to take a risk-based approach. Accordingly, the higher the risks associated with the use of AI, the higher the regulatory requirements for the AI system in question. One central obligation for companies that use or develop AI will therefore be that they expand their risk management systems. Under Art. 9 of the draft Regulation, the risks associated with the use of AI will have to be identified, assessed and monitored. Based on the risks identified, companies will have to take appropriate measures to mitigate those risks.
Differentiation of risk groups
The draft AI Regulation differentiates between four risk categories of AI systems. Each category is subject to different technical and organisational prerequisites and security requirements.
- Prohibited AI systems (Art. 5 of the draft AI Regulation)
- High-risk systems (Art. 6–51 of the draft AI Regulation, which can be described as the “heart” of the draft)
- Low-risk systems (Art. 52 of the draft AI Regulation)
- Systems with a minimal risk (Art. 69 of the draft AI Regulation).
Art. 5 of the draft AI Regulation standardises prohibited practices which, in the Commission’s view, pose an unacceptable risk to the rights and freedoms of individuals, as they violate EU values and are incompatible with the Union’s Charter of Fundamental Rights. Prohibited practices include placing on the market, putting into service or using AI systems which
- use subliminal, manipulative or exploitative techniques and inflict physical or psychological harm,
- use real-time remote biometric identification systems, such as facial recognition, in publicly accessible spaces for law enforcement purposes,
- are used by public authorities for the purpose of social scoring.
In contrast to the prohibited real-time identification of individuals, systems that can identify persons ex post are permissible under certain conditions. They must ensure accuracy, IT security and transparency in facial recognition and undergo a conformity assessment procedure.
High-risk AI systems have to meet high technical and organisational standards, which are regulated in Art. 8 et seq. of the draft AI Regulation. Since the “input” for AI systems – the data – has a significant impact on their overall functioning, Art. 10 of the draft sets comprehensive requirements for data governance. Concrete specifications also cover the testing processes. Several obligations relate to the documentation of AI deployment in the form of log files, the data basis, the design decisions or re-evaluation. Moreover, AI systems must be resilient against attempts by unauthorised parties to gain control over them (see Art. 15 of the draft AI Regulation).
In addition, Art. 16–29 of the draft AI Regulation provide for further obligations for providers, importers, distributors and users of high-risk AI systems. These obligations are part of IT and product safety law and – even if they are not explicitly addressed in the draft – have implications for contract and tort law, where they can serve as (interpretation) guidelines in the context of contractual and tortious liability.
Low-risk AI systems include AI chatbots, AI-enabled video and computer games, spam filters, inventory management systems, customer and market segmentation systems, etc. Transparency obligations apply to these systems, so that users can see that they are interacting with a machine. This serves to enable freedom of choice as to whether such interaction is desired (see Art. 52(1)–(4) of the draft Regulation).
AI systems without any particular risk fall under the general legal provisions, in particular those of the GDPR. They are excluded from the scope of application of the draft AI Regulation.
Effects on companies
Drastic sanctions are foreseen for violations of the AI Regulation (fines of up to 6% of total worldwide annual turnover), and Member States will have to create national supervisory authorities. However, sanctions should not be the main concern for companies looking to achieve comprehensive compliance; legal certainty in the development, purchase and use of AI systems is particularly important. It is therefore advisable for companies to initiate compliance projects now, monitoring the further course of the legislative process and ensuring that the future programme of obligations can be sensibly integrated into existing corporate processes. Anyone who adapts their documentation practices properly now will ultimately save themselves the trouble of retrofitting documentation later.
The AI Regulation represents an important step towards legal certainty and the regulation of AI systems. Given how widely used AI has become, it can be assumed that the impact on the European and global market of regulating AI in this way will be comparable to that of the GDPR before it. If you would like to implement an AI system in your company and need support with embedding the system in a manner that complies with data protection laws, our experts will be happy to advise you!