02.05.2025

ChatGPT and its regulation under the AI Act

At the end of November 2022, the Californian company OpenAI launched ChatGPT to the public. This AI-based chatbot consists of a large language model and can solve a wide range of language-based tasks. The acronym 'GPT' stands for 'Generative Pretrained Transformer' and refers to a language model based on a neural network trained using deep learning. OpenAI, founded only in 2015, received an investment of one billion US dollars from Microsoft in 2019. ChatGPT is now set to be integrated into Microsoft's Bing search engine to compete with Google. As a result, search engine users can expect fluid, tailored answers instead of a long list of results with potentially helpful links. ChatGPT therefore has the potential to disrupt the internet and many different areas of life.

From a legal perspective, the main question is how such chatbots will be regulated under the forthcoming AI Act. The aim of the European Commission's draft regulation on artificial intelligence (AI Act) is to limit the risks associated with AI systems through regulation. If ChatGPT can write programs on its own, it can also write them for hackers, blackmailers and nation states. In the hands of the 'wrong' actors, the software poses a significant potential threat. The AI Act is also intended to counter this.

What ChatGPT can do today - and what it can't do yet

It has been a long time since the tech scene was this enthusiastic about a new technology. Five days after its launch, ChatGPT already had one million users, a considerable number compared with past tech hypes. The software is currently (still) available to anyone on the internet free of charge, and the results are astonishing: ChatGPT can answer far more than simple factual questions. The chatbot can write poems, weigh up arguments, create programming code, correct errors in that code, and much more. In other words, there are many tasks that would take a human many hours to complete, whereas ChatGPT needs only a few seconds. The new technology could therefore disrupt many professions. University teachers already fear that they will no longer be able to assess homework, as it will often be impossible to tell whether students have written it themselves or used ChatGPT.

ChatGPT's capabilities are impressive. However, ChatGPT is still a long way from being a 'strong AI' with almost the same intellectual abilities as a human, i.e. one that acts independently, flexibly and with foresight. On the one hand, the chatbot sounds particularly authentic because it presents information with full conviction even when it is uncertain. This sounds so believable that it can easily mislead many users. On the other hand, ChatGPT sometimes fails even at simple mathematical tasks. This is because the AI cannot distinguish right from wrong answers; it merely calculates, from a huge pool of information, which answer is probably the best. ChatGPT does not rely on knowledge, but reproduces patterns it recognises in the texts it is fed. As a result, the AI often produces incorrect facts: for example, ChatGPT was very sure that the elephant is the mammal that lays the largest eggs, although this is obviously not the case. OpenAI itself explains that ChatGPT's system is still in a research phase and not fully developed. When confronted with a false statement, the AI is sometimes able to change its mind. Occasionally, however, it will insist on its opinion. This seems very human.
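
To illustrate this purely statistical way of working, consider a toy sketch in Python: a tiny bigram model that, like a language model in miniature, predicts only the statistically most frequent next word rather than the factually correct one. The mini-corpus is invented for illustration.

```python
# A toy sketch of the core idea: the model does not "know" facts, it only
# estimates which next word is most probable given the words seen so far.
# The mini-corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = ("the elephant is a mammal . "
          "the ostrich lays the largest eggs .").split()

# Count bigram frequencies: which word tends to follow which.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def most_probable_next(word: str) -> str:
    # Pick the statistically most frequent continuation (ties broken by
    # insertion order), not the "true" one.
    return follows[word].most_common(1)[0][0]

print(most_probable_next("the"))  # whichever word most often follows "the"
```

A real language model works with billions of parameters rather than a frequency table, but the principle is the same: it produces the most plausible continuation, which is not necessarily the correct one.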

The AI therefore still has some way to go before ChatGPT becomes a reliable assistant, but the foundations have already been laid. The successor GPT-4 is expected to be released in February of this year and is rumoured to work with 100 trillion parameters, around 500 times more than its predecessor.

How are chatbots designed?

In order to accurately categorise chatbots in general, and ChatGPT in particular, from a regulatory perspective, it is necessary to have a basic understanding of the underlying technology. The nature of the technology determines whether it falls under a regulatory regime and how strictly it is regulated.

Modern chatbots are based on machine learning (ML). This involves automatically learning knowledge from data, usually in relation to a specific problem, and capturing it in a suitable model. At the end of the learning phase, the knowledge should generalise, i.e. be applicable to new data. To achieve this transfer, it is important that the learning process recognises patterns, relationships and regularities in the data.
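
As a minimal sketch of this two-step idea, assuming the scikit-learn library and an invented mini-dataset, the following example captures patterns in labelled sentences during a learning phase and then generalises them to a sentence the model has never seen:

```python
# A minimal sketch of machine learning: the model learns patterns from
# labelled example sentences and generalises them to unseen inputs.
# The tiny dataset is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_sentences = [
    "I love this product", "Great service, thank you",
    "This is terrible", "Very disappointing experience",
]
labels = ["positive", "positive", "negative", "negative"]

# Learning phase: capture regularities in the data with a suitable model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_sentences, labels)

# Transfer phase: apply the learned patterns to data the model has never seen.
print(model.predict(["What a great experience"]))  # expected: ['positive']
```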

Chatbots based on ML algorithms can be divided into information-retrieval models and generative models. Similar to a web search, the former compare the user's query with an index of answers to provide the most appropriate response. The latter use the algorithm to generate the answer word by word: with generative models, the algorithm does not extract a pool of answers, but only vocabulary, syntax and so on. The latest generation of chatbots is additionally based on the transformer model, in which the algorithm weights parts of the input according to their relevance (the attention mechanism) to produce more accurate results. ChatGPT is such a generative transformer.
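
The information-retrieval approach can be sketched in a few lines of Python: the user's query is compared against an index of canned answers and the closest match is returned. The question-answer index below is invented for illustration, and the similarity measure (TF-IDF with cosine similarity) is one common choice among several.

```python
# A minimal sketch of an information-retrieval chatbot: the user's query is
# compared against an index of canned answers, and the best match is returned.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Index of question-answer pairs (invented for illustration).
answer_index = {
    "What are your opening hours?": "We are open 9am to 5pm, Monday to Friday.",
    "How do I reset my password?": "Click 'Forgot password' on the login page.",
    "Where is your office located?": "Our office is at Example Street 1.",
}
questions = list(answer_index)

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def retrieve_answer(query: str) -> str:
    # Rank the indexed questions by similarity to the query, as in a web
    # search, and return the stored answer for the best match.
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, question_vectors)[0]
    return answer_index[questions[scores.argmax()]]

print(retrieve_answer("When are you open?"))
```

A generative model, by contrast, would not select from this fixed pool at all but would compose the answer word by word, as in the bigram sketch above.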

To reduce the model's bias and prevent it from spreading hate messages, OpenAI's developers also subjected ChatGPT to human review, a process called 'Reinforcement Learning from Human Feedback' (RLHF). For its RLHF, OpenAI started from its predecessor InstructGPT, a pre-trained language model; hence the term 'pre-trained'. On this basis, ChatGPT answered inputs itself and produced several outputs for each. Humans then rated these outputs, ranking them from best to worst. ChatGPT took this feedback into account and tried to optimise its output.
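
The ranking step can be made concrete with a highly simplified sketch: the human ordering of several outputs is expanded into pairwise preferences, and a score is penalised whenever the supposedly better output is not rated above the worse one. The reward values below are invented stand-ins for a learned reward model.

```python
# A highly simplified sketch of the ranking step in RLHF: a human orders
# several model outputs from best to worst, the ranking is expanded into
# pairwise preferences, and the reward scores are evaluated with a
# Bradley-Terry-style loss. The scores are invented stand-ins for a real
# reward model.
import math
from itertools import combinations

# One prompt's outputs, already ranked by a human annotator from best to worst.
ranked_outputs = ["helpful answer", "vague answer", "misleading answer"]

# Invented stand-in for a learned reward model's scalar scores.
reward = {"helpful answer": 1.2, "vague answer": 0.1, "misleading answer": -0.9}

def preference_loss(ranking, reward):
    # Expand the ranking into pairs (better, worse) and apply the loss
    # -log(sigmoid(r_better - r_worse)) to each pair.
    pairs = list(combinations(ranking, 2))  # order preserved: first is better
    loss = sum(
        -math.log(1.0 / (1.0 + math.exp(-(reward[better] - reward[worse]))))
        for better, worse in pairs
    )
    return loss / len(pairs)

print(f"Average pairwise loss: {preference_loss(ranked_outputs, reward):.3f}")
```

Minimising this loss across many prompts teaches the reward model the human ranking, and the language model is then optimised against that reward.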

ChatGPT and the AI Act

Almost simultaneously with the current technological development around ChatGPT, the legal assessment of AI systems at European level has gained momentum. On 25 November 2022, the Council presented its final position on the planned AI Act; discussions are now continuing in the European Parliament. The Act aims to ensure that AI systems placed on the EU market and used in the EU are safe and respect EU fundamental rights and values. The draft AI Act takes a horizontal approach: it is intended to regulate the placing on the market, putting into service and use of AI systems in all sectors. Once the Council and the European Parliament have each reached an internal agreement, the trilogue negotiations will start, in which the Commission, the Parliament and the Council must reach a final agreement, expected by mid-2023.

Chatbots such as ChatGPT, which are based on ML, would in principle be classified as AI systems under the AI Act, both under the original definition developed by the Commission and under the Council's new, narrower definition. The Council defines an AI system as one that is designed to operate with elements of autonomy and that, based on data and inputs provided by machines or humans, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions which influence the environments with which the AI system interacts. In essence, it must be an autonomous and adaptive system. Strictly rule-based chatbots are certainly conceivable: free of any ML, they merely provide a dialogue-based user interface on top of a keyword search, as the sketch below illustrates. Such chatbots would fall outside the definition, and therefore outside the regulatory regime of the AI Act, from the outset.
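
For contrast, such a purely rule-based chatbot can be written without any learning component at all; the keywords and replies below are invented for illustration.

```python
# A purely rule-based chatbot: a hard-coded keyword lookup with no machine
# learning, no adaptation and no autonomy. Keywords and replies are invented
# for illustration.
RULES = {
    "price": "Our basic plan costs 10 euros per month.",
    "contact": "You can reach us at info@example.com.",
    "hours": "We are available from 9am to 5pm.",
}

def rule_based_reply(message: str) -> str:
    # Return the canned reply for the first keyword found in the message.
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I did not understand that. Please rephrase your question."

print(rule_based_reply("What are your hours?"))
```

Because such a system neither learns from data nor acts with any autonomy, it lacks the adaptive element that the Council's definition requires.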

The AI Act further classifies AI systems according to a risk-based approach into categories with graduated requirements. There are four risk classes: 'prohibited AI practices', 'high-risk AI systems' and other AI systems with 'low' or 'minimal' risk.

Firstly, it should be noted that the use of ChatGPT can generally fall into all four risk classes, depending on the intended purpose.

Prohibited AI systems will be of practical significance to very few companies. They include particularly critical systems that pose unacceptable risks to the Union's fundamental rights and values. For example, ChatGPT would fall into this category if it attempted to drive people to suicide through targeted communication.

High-risk AI systems are those considered to pose a particularly high risk to the health and safety or fundamental rights of EU citizens. The high-risk categorisation is limited to the systems listed in Annex III of the draft AI Act; human-machine interactions (e.g. cobots, algorithmic management, etc.) are not included per se. If, in the future, ChatGPT were integrated into an online application process, for example, so that it communicates and interacts with potential applicants, the bot would be subject to strict regulation as a high-risk AI system, as Annex III No. 4 a) of the draft AI Act explicitly covers this case. The bot would then have to undergo a conformity assessment for high-risk AI pursuant to Art. 43 et seq. of the draft AI Act and, in addition, fulfil the transparency obligation towards applicants under Art. 52 (1) of the draft AI Act. This example clearly shows that the regulation of chatbots under the draft AI Act varies: depending on the design and intended use of a chatbot, different regulatory requirements will apply.

According to Art. 52 of the draft AI Act, specific transparency obligations apply to low-risk AI systems. Minimal-risk AI systems, by contrast, are not subject to Art. 52; for these systems, the draft AI Act merely recommends voluntary codes of conduct in Art. 69.

Art. 52 of the draft AI Act regulates transparency obligations, as the AI systems mentioned therein present a 'particular risk of identity fraud or deception' (Recital 70 of the draft AI Act). The Commission identifies this particular risk of manipulation for three groups: AI systems intended to interact with natural persons (Art. 52 (1) of the draft AI Act), AI systems for emotion recognition or biometric categorisation (Art. 52 (2) of the draft AI Act), and AI systems that generate image, audio or video content resembling real persons or objects, so-called deepfakes (Art. 52 (3) of the draft AI Act). In each of these three cases, it must be made clear that the content has been generated using AI.

ChatGPT is flexible and versatile. With this comes a high potential for abuse, which the draft AI Act seeks to channel. ChatGPT can obviously fall into all three of these groups. If ChatGPT is integrated into other programmes in the future so that the AI interacts with natural persons, the AI system must be designed in such a way that those persons are informed that they are dealing with an AI system. Likewise, if ChatGPT is integrated into other programmes for emotion recognition or used to create deepfakes, users must fulfil special transparency obligations.

ChatGPT as General Purpose AI?

The final Council draft of 25 November 2022 includes a specific section on so-called General Purpose AI (GPAI), which was missing from the original Commission version. The provisions on GPAI are also relevant for ChatGPT.

According to Art. 3 (1b) of the final Council draft, General Purpose AI (GPAI) systems are AI systems that, irrespective of how they are placed on the market or put into service, can perform generally applicable functions such as image and speech recognition, audio and video generation, pattern recognition, question answering and translation, and that can be integrated into other AI systems.

GPAI systems are therefore versatile and characterised by the application of transfer knowledge. ChatGPT, DALL-E and all similar AI systems are likely to be classified as GPAI under this definition. ChatGPT is first and foremost a chatbot, but because of its design it is also capable of writing and debugging programming code, and it can be integrated into other systems via APIs, as the sketch below illustrates.
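
Such an API integration typically looks like the following minimal sketch, assuming the official openai Python package; the API key and model name are placeholders, and the surrounding application logic is omitted.

```python
# A minimal sketch of embedding ChatGPT into another application via the
# provider's API, assuming the official `openai` Python package.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder; use a real key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarise the AI Act in one sentence."},
    ],
)
print(response.choices[0].message.content)
```

It is precisely this ease of embedding that makes the regulatory classification difficult: the same underlying model can power anything from a harmless writing aid to a high-risk recruitment tool.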

The background to the proposed GPAI provisions, which first found their way into the text under the Slovenian Presidency and were further elaborated and refined under the Czech Presidency, was, among other things, the wish to curb the specific potential for instrumentalising GPAI. One example is the deep-learning text-to-image generator 'Stable Diffusion', which has already been misused to create child pornography. In addition, those involved in the legislative process had realised that GPAI can essentially only be developed by tech giants or by companies backed by these giants. Without explicit regulation, there was a risk that these companies could find loopholes in the Commission's draft text to escape the Act. It is, for instance, theoretically possible to circumvent the high-risk regime if a provider is no longer covered by Annexes II and III in conjunction with Art. 6 of the AI Act. Last but not least, various studies and lobby groups have raised the profile of the issue and pushed it forward: Tech Monitor, for example, predicts that the market volume for GPAI applications will double by 2025 (especially in America and China). This is another reason why tendencies to protect the EU more strongly against such rapid developments have found their way into the negotiations on the final AI Act.

Furthermore, as pointed out above, ChatGPT sometimes disseminates unverified information owing to the way its algorithm works. This means that the programme could be used deliberately as a vehicle to spread post-factual information, hate speech and propaganda. It can therefore be said that GPAI poses an inherently higher risk to consumers than narrow AI programmed for specific purposes.

Title IA of the final Council draft is specifically dedicated to GPAI. According to Art. 4b, GPAI used as a high-risk AI system will also be subject to the strict requirements of the high-risk regime. Art. 4c makes an exception where the provider has excluded all high-risk uses of its GPAI. On the risk scale of the AI Act, GPAI could thus be classified as a sub-category of high-risk AI systems. Rather than the GPAI rules applying directly, the EU Commission is to be tasked with an implementing act in which the obligations for general purpose AI can be adapted.

Summary and conclusion

ChatGPT will revolutionise search and knowledge management and automate large areas of knowledge work. Rapid technological progress, which has produced vastly improved processors and immense amounts of data, now makes it possible to build such huge language models. This is good timing for a European AI Act that sets clear limits on the use of such technologies. It remains to be seen whether systems such as ChatGPT will be regulated as GPAI in the final version of the AI Act. In any case, it is clear that even under the Commission's current version of the draft AI Act, chatbots such as ChatGPT will be regulated differently depending on their intended use and will be subject to specific transparency obligations.
