AI & Privacy

Few technologies arouse as many hopes and fears in society as artificial intelligence (AI). From a business perspective, AI offers enormous advantages: processes in IT, sales, customer service and product manufacturing can be optimised and made more efficient. Above all, AI has the potential to reduce costs, free up capacity and generate more revenue. Despite these many potential benefits, however, data protection issues continue to arise, particularly where personal data is processed.

Request a non-binding introduction now!

Potential risks and legal requirements in the use of AI

In addition to the many benefits, the use of AI always poses risks to the rights and freedoms of individuals affected by automated processing. One consequence of this is the strict treatment of fully automated decision-making processes by the General Data Protection Regulation (GDPR). Other regulatory requirements, such as the AI Regulation currently being negotiated at EU level, also pose challenges.

Implementing AI in your business: Our expertise for optimised processes and regulatory compliance

We help organisations adopt and use AI with our extensive expertise in this rapidly evolving market. Discover the potential of artificial intelligence for your business and optimise your processes now! But be careful: are you meeting all the regulatory requirements? We can help you ensure a successful and compliant implementation.

We advise companies on AI issues in the following sectors, among others:

  • E-Commerce
  • FinTech
  • Insurance companies
  • Loyalty providers
  • Gaming providers
  • Mobility providers
  • Adtech
  • Pharma & Health
  • Financial Services

Our clients' legal and technical challenges - our solutions

We can show you what steps to take when implementing and using AI, and how we can help you with the legal issues - in particular, how to reduce compliance costs.

What do you want to achieve with AI? Where and how should it be used? Once these questions have been answered, we will discuss with you the steps required to use AI in a legally compliant manner.

Depending on the application, different legal challenges arise. For example, AI is used for:

  • Profiling, scoring
  • Facial recognition
  • Chatbots, digital assistants
  • Autonomous driving

Big data and machine learning - the technical implementation

The underlying algorithms are the linchpin of any AI. Some algorithms enable an AI to learn and evolve beyond its original programming - this is the realm of machine learning. From a data protection perspective, deep learning systems that are fed with large amounts of data, either to train the models or to apply them to large databases, are particularly relevant here. Such systems learn autonomously and, as the learning process progresses, become increasingly opaque and no longer (fully) comprehensible to those responsible - this is referred to as a 'black box'. Against this background, AI developers must take into account not only the general data protection principles but, in particular, Art. 22 GDPR, which grants data subjects the right not to be subject to a decision based solely on automated processing.

Data governance requirements and preparations for the AI Regulation

As the 'input' to AI systems in the form of data has a significant impact on how the system functions overall, the planned AI Regulation also imposes extensive data governance requirements. Even though the regulation has not yet entered into force, its main requirements are already foreseeable, and companies should start preparing for it now. In terms of data handling, for example, high-risk AI will need to be trained and tested with sufficiently representative, traceable and verifiable data sets. As the AI Regulation follows a risk-based approach, a risk management system must already be in place during the technical implementation.

Right of access under data protection law and conflict with trade secrets

The right of access under data protection law pursuant to Art. 15 GDPR also comes into play. In essence, it requires the controller to inform the data subject comprehensively, in clear and understandable language, about the purposes of the processing and the data processed, but in particular also about the logic involved and the scope and intended effects of such processing for the data subject. Such information can, however, only be provided if the data processing is comprehensible to the controller itself. The proposed AI Regulation also provides for obligations in this respect: high-risk AI systems should be designed in such a way that their operation is sufficiently transparent, and AI developers must therefore ensure that users can interpret and use the system's results appropriately. The conflict between disclosure and lack of transparency is exacerbated by companies' interest in not revealing trade secrets - after all, proprietary algorithms can be a competitive advantage. Recital 63 sentence 5 of the GDPR states that the right of access should not adversely affect the rights and freedoms of others, including trade secrets.

Pseudonymise or anonymise in time - minimise compliance costs!

From a compliance perspective, the cost of complying with GDPR requirements is directly related to the risk posed by processing. Measures that reduce risk are regularly rewarded by the GDPR. For example, the pseudonymisation of personal data leads to a more favourable balancing of interests from the perspective of the data controller within the meaning of Art. 6 para. 1 lit. f GDPR, to further processing that is more compatible with the original purpose of the processing, and to an easier implementation of a data protection impact assessment (DPIA). Last but not least, the controller may be able to rely on the exception under Art. 11(2) GDPR. Given the extensive rights of data subjects, this is a desirable approach.

Anonymisation and pseudonymisation of personal data to avoid GDPR requirements

Ideally, however, all relevant personal data should be anonymised so that the processing falls outside the scope of the GDPR altogether. Pseudonymisation or anonymisation should take place in the storage environment of the raw data, and thus before the data is transferred to the machine learning environment.
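As a minimal sketch of this approach, the following Python snippet pseudonymises a direct identifier with a keyed hash before a record leaves the raw-data store for the machine learning environment. All field names and the record layout are hypothetical; in practice the key management and the choice of fields would depend on your system.

```python
import hmac
import hashlib

# The secret key stays in the raw-data storage environment; without it,
# the pseudonyms cannot be reversed or re-linked to individuals.
SECRET_KEY = b"keep-this-key-out-of-the-ml-environment"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_ml(record: dict) -> dict:
    """Strip direct identifiers; keep only the fields the model needs."""
    return {
        "customer_id": pseudonymise(record["customer_id"]),  # pseudonymised key
        "age_band": record["age_band"],                      # generalised attribute
        "purchases": record["purchases"],
        # name, email, address etc. are deliberately not copied over
    }

raw = {"customer_id": "C-1001", "name": "Jane Doe", "age_band": "30-39", "purchases": 12}
safe = prepare_for_ml(raw)
```

Because the keyed hash is deterministic, records belonging to the same person remain linkable inside the ML environment, while the identity itself stays behind in the storage environment.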

We support you in the data protection-compliant implementation of anonymisation and pseudonymisation measures and in carrying out a data protection impact assessment.

Data protection impact assessment as a precaution for data protection-compliant AI applications under the GDPR

Where processing cannot be taken outside the scope of the GDPR, experience shows that a data protection impact assessment is required in the field of artificial intelligence. It has the great advantage that data protection aspects can be taken into account in the planning phase of a machine learning project. In this way, the data controller can meet the requirements of Art. 25 GDPR - data protection through technology design ("privacy by design") and through data protection-friendly default settings ("privacy by default") - in a more targeted manner.

Supporting the implementation of the planned AI regulation and the classification of AI applications into risk classes

Our clients benefit from our experience, which we incorporate into the design of an AI application. In this context, we can also assist in meeting the extensive challenges posed by the proposed AI Regulation. The first step is to clarify whether your own algorithms are covered by the regulation at all; the next is to classify them into the regulation's risk classes, which determine the requirements for the corresponding AI systems. Particularly for so-called high-risk AI, the AI Regulation will contain an extensive list of obligations, and when implementing these it will be worthwhile to draw on data protection expertise. To a large extent, the challenges of the AI Regulation can be addressed in synergy with those of the GDPR: the data protection impact assessment provides a platform for complying with some of the provisions of the future AI Regulation. With the right preparation, implementation of the AI Regulation will be a success, and we will be happy to support you.

Documentation and accountability obligations under the GDPR for AI systems

The principles of the GDPR include extensive documentation and accountability obligations. Meeting these obligations requires some understanding of the algorithm used. The weighting of the criteria by which the AI learns and decides must be documented, as well as the impact of different correlations on the results.

It is therefore necessary that changes to the weightings resulting from the AI's self-learning process can be detected (technical monitoring). In our view, this obligation can and should also be seen and used as an opportunity for the company concerned to retain control over decisions of operational importance.
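The technical monitoring described above can be sketched as follows. This is an illustrative example only, with hypothetical criterion names and a hypothetical threshold: the documented baseline weights are compared against the current weights after self-learning, and any drift is flagged and itemised for documentation.

```python
import math

def weight_drift(baseline: dict, current: dict) -> float:
    """Euclidean distance between two weight snapshots over shared criteria."""
    return math.sqrt(sum((current[k] - baseline[k]) ** 2 for k in baseline))

def check_for_drift(baseline: dict, current: dict, threshold: float = 0.1):
    """Return (drift detected?, per-criterion changes) for the documentation record."""
    changes = {k: round(current[k] - baseline[k], 4) for k in baseline}
    return weight_drift(baseline, current) > threshold, changes

# Hypothetical criterion weights: as documented, and after a retraining cycle.
documented = {"income": 0.40, "age": 0.25, "postcode": 0.35}
after_learning = {"income": 0.55, "age": 0.20, "postcode": 0.25}

drifted, changes = check_for_drift(documented, after_learning)
```

Logging `changes` whenever `drifted` is true gives the controller a running record of how the AI's decision criteria have shifted, supporting the accountability obligations discussed above.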

Black box tinkering as a way to make decision processes in AI systems traceable

Another way to better understand and document decision processes is black box tinkering (operational monitoring). This involves running the algorithm on raw datasets where only one criterion has been changed, and comparing the output with the results based on the original datasets. This type of monitoring allows conclusions to be drawn about the effects of individual criteria or combinations of criteria, and enables managers to better understand and document the logic of the AI.
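The procedure can be sketched in a few lines of Python. The scoring function below is a deliberately simple stand-in for a real black-box model (whose internals are assumed unknown), and the field names are hypothetical; the point is only the method: change exactly one criterion per record and compare the outputs with those for the originals.

```python
def black_box_score(record: dict) -> int:
    # Placeholder for an opaque model; in practice its internals are unknown.
    return 1 if record["income"] > 30000 and record["postcode"] != "X1" else 0

def tinker(records, criterion, new_value, model):
    """Re-run the model with one criterion changed; pair original and variant outputs."""
    effects = []
    for r in records:
        variant = {**r, criterion: new_value}  # copy with exactly one field changed
        effects.append((model(r), model(variant)))
    return effects

records = [
    {"income": 45000, "postcode": "A1"},
    {"income": 45000, "postcode": "X1"},
]
# Does the postcode alone flip the decision?
effects = tinker(records, "postcode", "A1", black_box_score)
```

Wherever the paired outputs differ, the changed criterion demonstrably influenced the decision for that record, which is exactly the kind of finding that can be documented to explain the AI's logic.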

Requirements of the proposed AI Regulation for technical and organisational measures for AI systems

The planned AI Regulation also places high demands on users and developers of AI systems with regard to compliance with technical and organisational measures. The approaches differ in terms of risk groups. These range from systems that are generally prohibited, to high-risk systems that are subject to extensive obligations regarding documentation, design decisions and re-evaluation, to systems with minimal and low risk that are subject to simplified regulations. We can help you qualify your system and put in place a comprehensive risk management system.

Other problem areas under data protection law:

  • Right to erasure
  • Right to data portability
  • Division of responsibility (manufacturer of an algorithm makes it available to a third party)

Don't become dependent on service providers when adopting AI

Companies that want to work with artificial intelligence for the first time tend to rely on service providers to deliver the necessary technologies. There is a risk that these companies will become too dependent on these service providers, which could put them at a disadvantage in the future. The exchange of data outside the European Economic Area also requires legal consideration. We also advise our clients on the selection of service providers and conduct the sometimes difficult but necessary contract negotiations.

A particular problem: disclosure of the algorithm

The tension between the obligation to disclose the "logic involved" and the need to avoid infringing trade secrets is a particular challenge, because algorithms are generally protected intellectual property. The wording of recital 63 sentence 5 GDPR ("should not adversely affect") makes it clear that not all information can be refused from the outset with a blanket reference to trade secrets. Rather, a balance must be struck between the controller's interest in confidentiality and the data subject's interest in receiving information.

GDPR gaps in AI decisions: Lack of traceability remains a concern for data subjects

If the balance tips in favour of the data subject, there remains the issue of the lack of traceability of AI applications that independently develop new algorithms - an issue the GDPR does not explicitly address. Ultimately, it will generally be appropriate to explain to the data subject, in simple and clear language, how the technology around the algorithm and its decision-making works - think of the technical and operational monitoring described above.

Recommendations for action

  1. Consider legal aspects when designing an AI system
  2. Keep the machine learning development environment free of personal data
  3. Monitor and assess decisions regularly
  4. Establish sustainable data use, a privacy policy and a risk management system

Unlock the potential of artificial intelligence for your business - we can help you with data protection implementation and compliance.

Non-binding initial consultation on AI topics

Do you have questions about the use of AI in your business? Our specialist lawyers can help you in the areas listed.

Request a non-binding introduction now!