
Artificial Intelligence

Few technologies arouse as many hopes – and as many fears – in society as artificial intelligence (AI), and it is here to stay. From a business perspective, the benefits can be huge: AI can be used to optimise or streamline processes in IT, sales, customer service and product manufacturing. Above all, this type of AI-based process optimisation has the potential to reduce costs, increase capacity and ultimately generate more revenue.

Using AI: Potential risks and the regulatory requirements

Despite the many benefits, the use of AI always poses risks to the rights and freedoms of the individuals – known as data subjects – affected by automated processing. One consequence is that the General Data Protection Regulation (GDPR) takes a strict approach to fully automated decision-making processes. Other regulatory requirements, such as the AI Act currently being negotiated at EU level, pose further challenges.

Introducing AI in your organisation: We are experts in process optimisation and regulatory compliance

With our in-depth knowledge of this fast-moving market, we help organisations adopt and harness the power of AI. Discover the potential that artificial intelligence has to offer your business and start optimising your processes now! But beware: are you certain that you are in compliance with all regulatory requirements? We can help you adopt AI successfully whilst staying on the right side of the law.

We advise companies in a range of industries on AI issues, including:

  • E-commerce
  • Fintech
  • Insurance
  • Customer loyalty system providers
  • Gaming providers
  • Mobility
  • AdTech
  • Pharma and health
  • Finance

Our clients’ legal and technical challenges – and our solutions

We will show you what steps to take when adopting and using AI – and how we can help you with legal issues, particularly to reduce the burden of compliance.

What do you want to achieve with AI? Where and how do you intend to use it? Once these questions have been answered, we will discuss with you the necessary steps to ensure that your use of AI is within the law.

Depending on the application, different legal challenges are likely to arise. AI is used in a wide range of settings. These include:

  • Profiling, scoring
  • Facial recognition systems
  • Chatbots, digital assistants
  • Self-driving vehicles

Big data and machine learning: Technical delivery

At the heart of any AI system are the algorithms that underpin it. Some of these algorithms allow the system to learn from data and refine itself – this is the realm of machine learning. From a data protection perspective, deep learning systems fed with large amounts of data – whether to train the models or to apply them to large databases – are particularly relevant. These systems learn autonomously and, as the learning process progresses, become increasingly opaque, or no longer (fully) comprehensible, to human operators – a phenomenon referred to as a “black box”. With this in mind, in addition to the general data protection principles, AI developers must in particular take into account Art. 22 of the GDPR, which in principle grants individuals whose data is processed the right not to be subject to automated decision-making.

Data governance requirements and preparing for the AI Act

Since the “input” for AI systems – the data – has a significant impact on their overall functioning, the proposed AI Act will set out further comprehensive requirements for data governance. Although this EU regulation has not yet come into force, its key requirements are already foreseeable. This is why companies should start preparing for the regulation now. With regard to data handling, it should be noted that, for example, high-risk AI must be trained and tested with sufficiently representative, traceable and verifiable data sets. As the AI Act follows a risk-based approach, it is important to ensure that the technical implementation is accompanied by a risk management system from the outset.

Right of access under data protection law and conflict with trade secrets

The right of access under data protection law in Art. 15 of the GDPR also comes into play. It requires the controller to inform the data subject fully, in a clear and intelligible form, about the purposes of the processing and the data processed – and in particular about the logic involved and the implications and intended effects of such processing for the data subject. Providing this information is only possible if the controller is able to understand the data processing itself. The proposed AI Act also provides for obligations in this regard: high-risk AI systems should be designed in such a way that their operation is sufficiently transparent, and developers must therefore ensure that users can interpret and use the system’s output appropriately. The conflict between disclosure and a lack of transparency is exacerbated by companies’ own interest in not disclosing trade secrets – after all, proprietary algorithms can be a competitive advantage. In this regard, Recital 63, Sentence 5 of the GDPR states that the right of access must not adversely affect the rights and freedoms of others, including trade secrets.

Minimise compliance costs by applying pseudonymisation or anonymisation early on!

From a compliance perspective, the risk posed by a processing operation is directly related to the work involved in meeting the requirements of the GDPR. In general, the GDPR rewards measures that serve to reduce risks. The pseudonymisation of personal data has a number of advantages. For example, from the point of view of the controller, it leads to a more favourable balancing of interests within the meaning of Art. 6(1) Sentence 1(f) of the GDPR. It also makes it easier to reconcile any further processing with the original purpose of the processing, and it makes conducting a data protection impact assessment (DPIA) easier. Ultimately, the controller has the option of invoking the exception of Art. 11(2) of the GDPR. Given the extensive rights granted to data subjects, this is an attractive way forward.

Anonymisation and pseudonymisation of personal data to move outside the scope of the GDPR

Ideally, however, all relevant personal data should be anonymised so that it falls outside the scope of the GDPR. Pseudonymisation or anonymisation of personal data should already take place in the raw data storage environment, before it is transferred to any machine learning environment.
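As an illustration of this step – the field names, data model and key handling below are purely hypothetical assumptions, not a prescribed design – pseudonymising raw records before they are transferred to a machine learning environment might look like this, replacing direct identifiers with keyed tokens and coarsening quasi-identifiers:

```python
import hashlib
import hmac

# Hypothetical illustration: pseudonymise direct identifiers in raw records
# before they leave the raw data storage environment. The secret key is kept
# separate from the ML environment, so re-identification remains possible
# only for whoever holds the key (pseudonymisation, Art. 4(5) GDPR).
SECRET_KEY = b"stored-separately-from-the-ml-environment"

def pseudonymise(value: str) -> str:
    """Replace an identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_ml(record: dict) -> dict:
    """Strip or tokenise direct identifiers; keep only training features."""
    return {
        "customer_token": pseudonymise(record["email"]),  # stable join key
        "age_band": record["age"] // 10 * 10,             # coarsen quasi-identifier
        "purchases": record["purchases"],
    }

raw = {"email": "jane.doe@example.com", "age": 34, "purchases": 7}
print(prepare_for_ml(raw))
```

The key never enters the ML environment, which only ever sees tokens and coarsened attributes; whether this amounts to pseudonymisation or effective anonymisation in a given case is a legal assessment that depends on the residual re-identification risk.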

We can help you implement anonymisation and pseudonymisation measures that comply with data protection law, and conduct a data protection impact assessment.

Data protection impact assessments as a precautionary measure for GDPR-compliant AI applications

If it is not possible to move outside the scope of the GDPR, then our experience shows that a data protection impact assessment is necessary where artificial intelligence is concerned. This offers the great advantage that data protection aspects can be taken into account at the planning stage of a machine learning project. In this way, the controller can meet the requirements of Art. 25 of the GDPR – namely data protection by design and by default – in a more targeted manner.

Helping you comply with the proposed AI Act and classify AI applications according to risk

This is where our clients benefit from our experience, which can be incorporated when designing an AI application. In this context, there is also an opportunity to address the far-reaching challenges posed by the proposed AI Act. The first thing to clarify is whether your own algorithms are covered by the regulation at all. It will then be necessary to assign the risks to the various categories defined in the AI Act. From this, the requirements for the corresponding AI systems can be derived. Especially when it comes to so-called high-risk AI, the AI Act will present an extensive catalogue of obligations. When fulfilling these, it will be worthwhile to draw on expertise from data protection law. Generally speaking, it will be possible to address many of the challenges posed by the AI Act in synergy with those posed by the GDPR. A data protection impact assessment provides a platform to comply with some of the provisions of the future AI Act. With the right preparation, it will be possible to comply with the AI Act – and we are here to help you!

GDPR documentation and accountability obligations for AI systems

The principles of the GDPR include extensive documentation and accountability obligations. Fulfilment of these obligations requires some understanding of the algorithm used. The weighting of the criteria by which the AI learns and decides must be documented, as must the impact of various correlations on the output.

It is therefore necessary that changes in weightings resulting from the AI’s self-learning process can be detected (technical monitoring). In our view, this obligation can and should also be seen and used as an opportunity for the organisation concerned to retain control over key operational decisions.
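One way such technical monitoring could be sketched – the criteria, weights and threshold below are illustrative assumptions, not a prescribed method – is to snapshot the model's learned weightings after each retraining cycle and flag any criterion whose weight has shifted beyond a documented threshold:

```python
# Sketch of technical monitoring: compare weight snapshots taken before and
# after a self-learning/retraining cycle and flag criteria whose weighting
# has drifted beyond a documented threshold. All values are illustrative.
THRESHOLD = 0.05

def weight_drift(previous: dict, current: dict, threshold: float = THRESHOLD) -> dict:
    """Return the criteria whose learned weight changed by more than `threshold`."""
    return {
        name: round(current[name] - previous[name], 4)
        for name in previous
        if abs(current[name] - previous[name]) > threshold
    }

before = {"income": 0.40, "age": 0.25, "postcode": 0.10}
after = {"income": 0.43, "age": 0.12, "postcode": 0.18}
print(weight_drift(before, after))  # flags "age" and "postcode", not "income"
```

Flagged shifts can then feed into the documentation required under the GDPR's accountability principle and trigger a human review of the affected decisions.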

Black box tinkering as a way to make decision-making in AI systems transparent

Another way to better understand and document decision-making processes is through what is known as “black box tinkering” (operational monitoring). This is done by making the algorithm process raw data sets that have been changed in only one criterion, and comparing the output with the output based on the original data sets. This type of monitoring allows conclusions to be drawn about the impact of individual criteria or combinations of several criteria and enables controllers to better understand and document the logic of the AI.
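A minimal sketch of this approach, assuming a simple stand-in scoring function (a real deployment would call the production model instead):

```python
# Illustrative sketch of "black box tinkering" (operational monitoring):
# change one input criterion at a time and compare the model's output with
# the output for the original record.
def score(record: dict) -> float:
    # Placeholder black-box model, assumed purely for illustration.
    return (
        0.4 * record["income"] / 100_000
        + 0.3 * record["age"] / 100
        + (0.3 if record["region"] == "urban" else 0.0)
    )

def tinker(record: dict, variations: dict) -> dict:
    """For each criterion, vary it in isolation and record the output shift."""
    baseline = score(record)
    impact = {}
    for field, new_value in variations.items():
        changed = {**record, field: new_value}  # change exactly one criterion
        impact[field] = score(changed) - baseline
    return impact

original = {"income": 50_000, "age": 40, "region": "urban"}
print(tinker(original, {"income": 60_000, "region": "rural"}))
```

The per-criterion output shifts give the controller documented, reproducible evidence of how individual criteria influence the decision, even where the model's internals remain opaque.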

Requirements of the proposed AI Act for technical and organisational measures for AI systems

The EU’s planned AI regulation also places high demands on users and developers of AI systems when it comes to compliance with technical and organisational measures. The approaches differ with regard to the risk groups. These range from systems that are fundamentally prohibited, to high-risk systems that are subject to extensive obligations regarding documentation, design decisions and re-evaluation, to minimal and low-risk systems that are subject to simplified rules. We can help you qualify your system and set up a comprehensive risk management system to suit your needs.

Other problem areas under data protection law:

  • Right to erasure
  • Right to data portability
  • Division of responsibility (the manufacturer of an algorithm makes it available to a third party)

Our soft skills: Contract negotiations with service providers

When adopting AI, avoid becoming dependent on service providers

Companies working with artificial intelligence for the first time often turn to service providers that offer the necessary technologies. The danger is that they become heavily dependent on those providers, putting them at a disadvantage in the future. What’s more, any exchange of data outside the European Economic Area requires legal scrutiny. We advise our clients on the selection of service providers and conduct the sometimes difficult but necessary contract negotiations.

Special problem: Disclosure of the algorithm

A particular challenge is the obligation to disclose the “logic involved” while avoiding violating trade secrets. This is because algorithms themselves are usually protected intellectual property. The wording in Recital 63 Sentence 5 of the GDPR (“should not adversely affect”) makes it clear that not all access may be refused outright with a blanket reference to trade secrets. Rather, a balance must be struck between the controller’s interest in confidentiality and the data subject’s interest in access.

AI decision-making gaps in the GDPR: Lack of transparency remains a problem for data subjects

If the balance is tipped in favour of the data subject, there remains the problem of the lack of transparency of AI applications that independently write new algorithms – an issue that is not explicitly addressed by the GDPR. Ultimately, it may be appropriate to explain to the data subject, in simple and clear language, how the technology around the algorithm and its decision-making works – think of the technical and operational monitoring described above.

Recommended action

  1. Consider legal aspects when designing a system involving AI
  2. No personal data in the machine learning development environment
  3. Regular monitoring and evaluation of decisions
  4. Long-term data use and data protection policy as well as risk management system

Discover the potential that artificial intelligence has to offer your business and start optimising your processes now! We can help you implement your plans in compliance with data protection law.

Artificial intelligence and data protection are not mutually exclusive! Do you have questions about this issue? Our specialist lawyers will be happy to help you.

Contact us now to find out how we can help!


Subscribe to our monthly newsletter with information on judgments, professional articles and events (currently only available in German).

By clicking on "Subscribe", you consent to receive our monthly newsletter (with information on judgments, professional articles and events) as well as to the aggregated usage analysis (measurement of the opening rate by means of pixels, measurement of clicks on links) in the e-mails. You will find an unsubscribe link in each newsletter and can use it to withdraw your consent. You can find more information in our privacy policy.