Artificial intelligence (AI) and its implications for law & business – a brief analysis

Abstract

For the past decade, every decision maker seems to have been talking about AI and how it will change the world – for better or worse. (I.) But what does AI really mean once you set aside the hype and the fears of a “Terminator”-style AI apocalypse spread by the likes of Elon Musk? (II.) Why is it important for all areas of law? (III.) What will it mean for lawyers, law firms and businesses in general? (IV.) And last but not least, what are the legislative trends in the EU?

I.

AI is a very broad term and can be described as the science of self-learning software algorithms that take over tasks traditionally performed by humans. Algorithms are forecasting mechanisms based on correlations between old and new data patterns. Today’s hype is focused on one subfield of AI known as “machine learning”. In fact, the hype is mostly centered on a special machine learning technique known as “deep learning”, in which software simulations of simple models of the human brain are trained to do things by showing them large numbers of examples. “Neural networks”, as these simulations are called, have been around for a while, but “deep networks”, which are more sophisticated and can be trained to recognize more subtle differences, have become far more capable in recent years.

In other words, deep learning is just one very specific subcategory of AI. The excitement around this subfield stems from its ability to handle a wide range of problems, from image recognition and language translation to speech transcription.

The recent rise of AI has three main causes: First, digitization has provided a large pool of data that can be used for training. Secondly, researchers developed more efficient training algorithms for larger, or rather deeper, neural networks. Thirdly, scientists figured out how to run deep learning software on graphics processing units (GPUs), which provided a significant boost in performance.

In each case, a neural network is trained by exposing it to millions of examples. For image recognition, this means training a network with millions of labeled images (e.g. “this is a dog, that is a cat”); for speech recognition, millions of sound clips are used, each tagged with the correct transcription. Once the network has absorbed enough examples (input), it can correctly predict the right output for a previously unseen input. This configuration of deep learning, called “supervised learning”, is widely used in the corporate world.
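This training loop can be illustrated in a few lines of code. The following is a minimal sketch, assuming Python with the scikit-learn library and its built-in digit images as toy data; the small network and its parameters are chosen for illustration, not as a production setup:

```python
# Minimal supervised-learning sketch: the model sees labeled examples
# (input) and learns to predict the label (output) for unseen inputs.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 8x8 pixel images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network stands in for a "deep network".
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # absorb the labeled examples

print("accuracy on unseen inputs:", model.score(X_test, y_test))
```

The same pattern scales up: replace the toy digits with millions of labeled photos or transcribed sound clips and the small network with a deep one, and you have the supervised learning used in the corporate world.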

II.

Almost all areas of social life are legally regulated; nearly every social field has a legal connection. If you like horses, you can specialize in equine law. If you like sports, you can practice sports law. Over the past years, there have been substantial technological, economic and social transformations in all social areas. Just as all companies now use computers and the internet, they will all end up using AI, too. In companies that focus on data processing, machine learning expertise will become a core competence, requiring teams of specialists. Other firms will initially adopt the technology incrementally and unconsciously, as AI features are added to the modern devices, software and services they already use, from smartphones to email systems to e-commerce engines. The implications of AI may vary depending on whether you advise on technical matters or on other fields of law. But as mentioned above, all fields of law will gradually develop connections to AI.

The implications of AI for practical legal work can be illustrated by the following examples:

Recently, a flawed AI pattern had deadly consequences for a Tesla driver whose autopiloted car crashed into a truck. The Tesla algorithm interpreted the white sidewall of the truck as the sky, in line with the trained pattern “white equals sky”. This case demonstrates how AI gives rise to new ethical and legal questions. Responsibility and liability issues in particular will gain new facets and importance through AI.

Another, for now philosophical, question is whether robots are capable of committing crimes. This would only be the case if we assumed a “self” for an AI system. If we deny this, then robots cannot commit crimes for the time being. If we can say with certainty that humans will always be responsible for the actions of robots, then existing laws can be adapted to cover new threats. Only if it becomes possible to give robots a soul, based on emotions and moral considerations, will the regulatory field change, and the legislator as well as lawyers will have to work harder.

Complex legal questions are closely interwoven with social issues. For instance, how do we make sure AI models are trained with trusted data that is not afflicted with bias? One prominent example is Apple’s “gender-biased” credit card algorithm: users noticed that it seemed to offer smaller lines of credit to women, which prompted an investigation by New York’s Department of Financial Services. This example shows that AI needs to be carefully audited to make sure bias hasn’t crept in. If the data is biased, the AI will perpetuate that bias. This could affect millions of people and expose companies to lawsuits. Algorithms should therefore ideally fulfil six characteristics: they must be explainable, dynamic, precise, autonomous, fair and reproducible. To this end, the German TÜV Association (TÜV focuses on safety tests for technical installations, machinery and motor vehicles) is currently working on a seal of approval for AI to ensure such standards.
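What such an audit might look like in practice can be sketched with a simple fairness check. The following is a hypothetical example of one common metric (comparing approval rates across groups, sometimes called demographic parity); the data, column names and the 0.8 threshold are assumptions for illustration only:

```python
# Hypothetical bias audit: compare credit-approval rates across groups.
# The data and the 0.8 ("four-fifths rule") threshold are illustrative.
import pandas as pd

decisions = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

rates = decisions.groupby("gender")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate-impact ratio

print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("warning: possible bias - audit the training data and the model")
```

A real audit would of course go further (statistical significance, proxy variables, explainability), but even a check of this kind makes bias visible before it affects millions of people.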

AI is now a key concern not just for technology companies around the world, but for any company that wants to remain competitive in its own market. This leads to an increased need for legal advice for every business that has interfaces to AI. Companies should consider AI an integral part of their compliance management system (CMS) in order to meet the challenges described above and to reduce possible liability risks.

III.

Algorithms can be used profitably by all businesses with repetitive tasks in a stable environment. Compared to a human brain, AI is better at finding the best place for stored goods in a central warehouse with thousands of individual products. Other areas in which AI is typically successful are the purchasing of consumer goods, the maintenance of machinery and the planning of commodity transportation.

However, AI doesn’t possess motivations and emotions like humans. AI is generally not able to do things it wasn’t designed to do. Regardless of the amount of data involved, the basis of an AI prediction always remains a mathematical correlation. It lacks causality.

An algorithm only predicts that I am interested in certain products, but not why. The quality of the prediction depends on the validity of the old and new data. The extracted pattern is not proven knowledge, but an assumption that may or may not be true. If it does not apply, it is a pseudo pattern. For example, if a customer buys a book about cats, he will receive advertising about cats even though he doesn’t like cats and the book was only intended as a gift for a friend. The purchase was therefore only associated with an interest in cats, not causally dependent on it. For those who draft or apply the law, this is an important aspect to bear in mind.
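The cat-book example can be made concrete with a toy recommender. The sketch below is purely illustrative; the purchase data and the naive “bought X, therefore interested in X” rule are assumptions, not any real vendor’s logic:

```python
# Toy recommender illustrating correlation without causation: a single
# purchase is treated as evidence of interest, even when the book was
# a gift. Purchase data and the inference rule are purely illustrative.
purchases = {
    "alice": ["cat_care_guide"],   # bought as a gift for a friend
    "bob":   ["grilling_cookbook"],
}

topic_of = {"cat_care_guide": "cats", "grilling_cookbook": "barbecue"}

def recommend_ads(customer: str) -> list[str]:
    # Pseudo pattern: purchase of an item => interest in its topic.
    return [f"ad for {topic_of[item]}" for item in purchases[customer]]

print(recommend_ads("alice"))  # ['ad for cats'] - correlation, not causation
```

The algorithm cannot distinguish Alice’s gift purchase from genuine interest; it only sees the correlation.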

Unique, complex entrepreneurial decisions are not comparable to AI results. Examples are the construction of a new factory abroad or the entry into a new business field. For such decisions, there is no database on which algorithms could operate; sensitive data from other companies is legally protected. These decisions are based on entrepreneurial imagination.

Accumulating high profits correlates with advantages over competitors. However, such advantages are only achievable and defensible if a company differentiates itself from its competitors. In this respect, when it comes to unique entrepreneurial decisions, it is better to break AI patterns and trigger serendipity.

Being an entrepreneur means seeing something that others do not see (yet). It would even be detrimental to rely on an AI pattern here. For instance, AI cannot predict trends in the acyclical fashion industry, because trends are volatile. Algorithms do not deliver imagination, vision and intuition. Yet these are precisely the central resources for trend-setting decisions and thus the exclusive domain of entrepreneurial decision-makers.

Deep learning’s dependence on vast amounts of training data explains why internet giants were its earliest and most enthusiastic adopters. They have access to tremendous amounts of data that can be used to train and feed systems. For companies that are used to processing large amounts of data – for example, TMT businesses or financial services – moving from data analysis to the adoption of machine learning is an obvious step.

For other firms, the adoption of AI techniques depends on first being able to gather, process and analyze internal data effectively. Companies with poor analytic capabilities or fragmentary data management will struggle. But the opportunity is clear: today, every company has processes that can be managed or optimized by AI.

AI is also a major factor changing the way legal work is conducted: in the legal (tech) world, AI is being used for contract drafting, negotiation and review, predicting case outcomes, suggesting courses of action, organizing legal research, time keeping, etc. Furthermore, algorithms can be used by competition authorities to detect irregularities in the pricing of certain markets, e.g. oligopolistic markets.

One example is AI-powered software that enhances the efficiency of document analysis, especially for the purposes of due diligence. It saves time and can produce results that can be statistically validated. Client questions such as “should I settle?” can be answered on a comparative basis, as AI has access to years of trial data.
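As a simplified illustration of machine-assisted document review, consider the sketch below. It is a deliberately naive keyword flagger, not how commercial legal-tech tools actually work (those rely on trained models); the documents and the term list are made up:

```python
# Naive due-diligence helper: flag contracts containing risk-relevant
# terms. Documents and term list are illustrative only; real legal-tech
# tools use trained models rather than keyword matching.
RISK_TERMS = ["change of control", "terminate", "exclusivity", "penalty"]

contracts = {
    "supply_agreement.txt": "Either party may terminate this agreement upon a change of control.",
    "nda.txt":              "The parties agree to keep all information confidential.",
}

for name, text in contracts.items():
    hits = [term for term in RISK_TERMS if term in text.lower()]
    status = "REVIEW: " + ", ".join(hits) if hits else "no flags"
    print(f"{name}: {status}")
```

Even this crude filter shows why such tools save time: the lawyer’s attention is directed to the documents that matter, while the legal assessment itself remains human work.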

With all these benefits, one must bear in mind that AI has to be applied reasonably and with a sense of proportion. A case before the Higher Regional Court in Cologne recently made bizarre headlines in the media: a law firm representing its client in the “dieselgate” scandal had used AI software to produce a nearly 150-page appeal document that was predominantly based on text modules unrelated to the case. The court rejected the appeal for lack of reference to the individual case. Here, the output of the AI should have been reviewed and tailored by a lawyer. Seen in a sober light, AI is often still “automated statistics” that learns to link enormous amounts of data. Things get difficult when a value-based assessment is required: values cannot be mathematically calculated.

AI is also increasingly finding its way into HR work and will play a central role in recruiting in the future. The more detailed the (complete) personality profile the AI creates of an applicant, the more difficult it becomes to comply with the requirements of data protection law. In the event of a violation of the GDPR, there is a risk of draconian fines with an explicitly deterrent intention (Art. 83 GDPR).

As in so many other industries, the crucial question is whether AI will have a negative impact on legal employment. Firms that don’t keep up with the times will hire less talent. Progressive firms, on the other hand, can cut fees, and clients will consult their lawyers more frequently. Clients are no longer willing to pay fees for purely repetitive work.

IV.

On 21 April 2021, the European Commission presented its proposal for an AI regulation (the Draft AI Regulation, hereinafter abbreviated: DAR). The DAR, globally the first of its kind, is intended to help make Europe the global centre for trustworthy AI. It will now pass through the European Parliament and the Council of the EU. Experience shows that it can take at least 18 to 24 months, and possibly much longer, until a regulation is adopted and enters into force. The AI Act can therefore be expected to apply in 2025 at the earliest.

Legislative background of the draft: Since 2014, the Commission has taken several steps to facilitate the development of a data-agile economy, such as the Regulation on the free flow of non-personal data, the Cybersecurity Act, the Open Data Directive and the General Data Protection Regulation. In 2018, the Commission presented an AI strategy for the first time and agreed on a coordinated plan with the Member States. The High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy AI in April 2019. These guidelines served as a basis for the White Paper on AI, “A European approach to excellence and trust”, presented on 19 February 2020.

The DAR does not only affect companies in the EU. The draft will have an extraterritorial impact (see Art. 2 I lit. a) and c) DAR): every provider or user providing AI output within the EU will be subject to the DAR, regardless of its location. The DAR thereby aligns itself with Art. 3 II of the GDPR, which extends the scope of application of the GDPR in a comparable manner to controllers located outside the EU.

The Commission’s proposal puts the citizen at the center of the regulation and underlines the protection of general interests, health, safety and fundamental rights.

Civil law matters regarding the use of AI, such as the attribution of declarations of intent, liability in tort or the creation of intellectual property, are not addressed by the Commission’s draft.

Like the GDPR, the AI regulation will also follow a risk-based approach – in short:

The higher the possible dangers in an area of application, the higher the regulatory requirements for the AI system. The proposed AI Act is intended to create legal certainty in order to facilitate investment and innovation in the field of AI. It is primarily a preventive prohibition act that bans the use of AI in certain application scenarios or makes the use of AI subject to technical and organizational preconditions and security requirements. The draft thus distinguishes between three risk groups:

Prohibited AI systems (black list), high-risk systems and low-risk systems, whereby the latter are also subject to special rules if certain characteristics are present.

Art. 5 DAR proscribes certain AI applications that the Commission considers to pose an unacceptable risk because they violate the Union’s values. Prohibited AI systems include (1) subliminal, manipulative or exploitative techniques that cause harm, (2) real-time remote biometric identification systems used in publicly accessible spaces for the purpose of law enforcement, e.g. facial recognition systems used to identify a person in a crowd, and (3) all forms of social scoring by public authorities or on their behalf. Social scoring is the practice of evaluating the trustworthiness of natural persons over a defined period based on their social behavior. Such a practice leads to unfavorable treatment of certain social groups. Private-sector providers, for example credit agencies such as the German “Schufa” or the Italian “CRIF”, are not addressed by the draft.

The DAR attaches particular importance to high-risk AI systems. The decisive factor for classification as a high-risk AI system is the expected negative impact of the respective system on European fundamental rights. According to the recitals of the DAR, both the probability of the occurrence of damage and its extent must be taken into account.

High-risk AI systems must meet high technical and organizational standards. Art. 6 DAR differentiates between two groups of high-risk AI systems:

(1) AI systems that are products or safety components of products (e.g. toys, machinery, medical devices) covered by the EU harmonization legislation listed in Annex II (Art. 6 I lit. a DAR). Annex II contains a list of European product safety regulations and directives (harmonization legislation). Furthermore, the product or a safety component of the product must be subject to a third-party ex-ante conformity assessment (Art. 6 I lit. b) DAR).

(2) Stand-alone high-risk AI systems listed in Annex III of the Draft AI Regulation (Art. 6 II DAR). This list may be updated by the EU at any time. It includes the AI systems that are most relevant for the private sector, such as systems that evaluate consumer creditworthiness, assist with recruiting or managing employees, perform biometric identification and categorization of natural persons, act as safety-critical systems (i.e. systems whose failure would put the health of citizens at risk) or are used in the administration of justice.

Facial recognition is one of the most sensitive issues of AI: it walks a fine line between freedom and security. The draft prohibits real-time remote biometric identification systems (black-listed). On the other hand, it allows people to be identified later via facial recognition, i.e. ex-post systems are permitted. However, such facial recognition systems must guarantee accuracy, IT security and transparency, and must undergo a conformity assessment procedure, because they are high-risk AI systems.

Providers, importers, distributors and users of high-risk AI systems have to meet various key obligations in order to comply with the rules for high-risk AI systems (see Art. 16–29 DAR). These obligations belong to IT and product safety law and have – even if not explicitly addressed in the draft – implications for contract and tort law: they can serve as (interpretation) guidelines in the context of contractual and tort liability.

Low-risk AI systems comprise AI chatbots, AI-enabled video and computer games, spam filters, inventory-management systems, customer- and market-segmentation systems and most other AI systems. They are subject to specific transparency obligations: users must be informed that they are interacting with a machine, so that they can decide whether to continue the interaction (see Art. 52 I–IV DAR).

AI systems without any particular risk, i.e. systems not covered by the requirements of the DAR, fall under the general legal provisions applicable to all AI systems, especially the requirements of the GDPR. This risk-free category may also regulate itself by implementing voluntary codes of conduct (see Art. 69 DAR).

Potential fines could be draconian (up to €30 million or 6 percent of global revenue, whichever is higher). The use of black-listed AI systems or the infringement of the data governance provisions for high-risk AI systems will entail the highest potential fines. The penalties are even more severe than those of the GDPR. All other violations are subject to a lower maximum of €20 million or 4 percent of the total worldwide annual turnover. Providing authorities with misleading information in the context of AI will carry a maximum penalty of €10 million or 2 percent of global revenue.
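For illustration, these caps follow a simple “higher of” rule, as in the GDPR. The sketch below computes the maximum fine per tier; the revenue figure is a made-up example:

```python
# Illustrative computation of the DAR fine caps: the higher of a fixed
# amount and a share of total worldwide annual turnover applies.
FINE_TIERS = {
    "black_list_or_data_governance": (30_000_000, 0.06),
    "other_violations":              (20_000_000, 0.04),
    "misleading_information":        (10_000_000, 0.02),
}

def max_fine(violation: str, annual_revenue: float) -> float:
    fixed_cap, revenue_share = FINE_TIERS[violation]
    return max(fixed_cap, revenue_share * annual_revenue)

# A company with EUR 2 bn global turnover using a black-listed system
# faces a cap of EUR 120 m, far above the EUR 30 m fixed amount:
print(max_fine("black_list_or_data_governance", 2_000_000_000))  # 120000000.0
```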

The DAR represents an important step towards regulation and legal certainty. Due to the wide scope of application of AI, it can be assumed that the AI regulation will have an influence on the European and global market comparable to that of the GDPR. This is ensured by the power of the authorities, which are entitled to impose fines for violations. The draft will also encourage organizations to implement a CMS with special emphasis on AI. Companies can use the risk-based stratification as a basis for developing their own internal AI-CMS.
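What the starting point of such an internal AI-CMS might look like can be sketched as a simple risk inventory. The tiers follow the draft’s categories; the inventory entries and their assignments are hypothetical examples, not legal advice:

```python
# Hypothetical starting point for an internal AI-CMS: classify every AI
# system in the company's inventory into the DAR's risk tiers. The
# inventory and the tier assignments are illustrative only.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Art. 5 DAR)"
    HIGH_RISK  = "high-risk (Art. 6 DAR, Annexes II/III)"
    LOW_RISK   = "low-risk, transparency duties (Art. 52 DAR)"
    MINIMAL    = "minimal risk, voluntary codes of conduct (Art. 69 DAR)"

inventory = {
    "credit-scoring model": RiskTier.HIGH_RISK,  # Annex III: creditworthiness
    "recruiting screener":  RiskTier.HIGH_RISK,  # Annex III: employment
    "support chatbot":      RiskTier.LOW_RISK,   # disclose machine interaction
    "spam filter":          RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```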

It should also be pointed out that in October 2020, the European Parliament published a draft on new liability rules for AI, as civil law matters are not covered by the DAR. The proposed liability rules are complex; Art. 4 of the EU draft establishes strict liability for high-risk AI systems.
