The European Union's Regulation on Artificial Intelligence represents a fundamental step towards regulating this rapidly evolving technology. Regulation (EU) 2024/1689 of the European Parliament and of the Council establishes uniform rules for the development, placing on the market and operation of artificial intelligence (AI) systems, with the aim of ensuring their safety, transparency and trustworthiness. The Regulation seeks to protect fundamental rights, prevent discrimination and define the responsibilities of companies that develop and use AI systems.
The Regulation applies to a wide range of entities. It will primarily affect providers of AI, that is, companies and other organisations that develop and supply AI systems on the EU market. It also covers deployers, meaning institutions and companies that actively use AI systems, as well as importers and distributors who place these technologies on the European market. Last but not least, public institutions will also have to meet the new requirements if they use AI systems, for example in healthcare, transport or law enforcement. The Regulation applies not only to entities established in the EU but also to entities established or located outside it whose AI systems, or the output of those systems, are used within the EU.
A key element of the Regulation is its risk-based approach to individual AI systems. Systems posing unacceptable risk, such as social scoring of the kind associated with China, are prohibited outright. The category of high-risk AI systems covers technologies used in areas with a significant impact on citizens' lives, such as healthcare, transport, employment or creditworthiness assessment; these systems will have to meet strict regulatory criteria. Low-risk AI, such as chatbots, will be subject only to transparency obligations. One important aspect of the Regulation is the general obligation to label AI-generated content: regardless of the risk level of the AI system, developers will be obliged to ensure that users are clearly informed that they are interacting with artificial intelligence.
The Regulation further introduces several key requirements for companies deploying AI systems. Companies deploying high-risk AI systems will be obliged to carry out risk assessments and establish mechanisms for monitoring and reporting incidents. They will also have to appoint responsible persons to ensure that their AI systems comply with the new rules. Companies using low-risk AI systems, by contrast, do not face such strict requirements; however, if those systems could deceive users, for example by generating deepfake content or AI-assisted news articles, specific obligations to label such content and inform users will apply. In addition, deployers will be obliged, regardless of the system's risk level, to ensure that their employees are sufficiently trained in AI, according to their role and responsibilities when working with AI systems.
Sanctions for breaches of these obligations are severe: in most cases they may reach up to 3% of the company's worldwide annual turnover or EUR 15 million, whichever is higher. In particularly serious cases, sanctions may reach up to 7% of worldwide turnover or EUR 35 million. Fines may be imposed, for example, for unlabelled AI-generated materials or for failure to meet transparency obligations.
The new Regulation brings a number of challenges. Chief among the open questions are how the rules will be applied in practice and whether the entities concerned will manage to adapt to the new requirements in time. Another topic under discussion is whether the regulation will restrict innovation and the competitiveness of European enterprises on the global AI market. Enforcing the rules against multinational corporations that operate AI systems outside the EU may also prove problematic.
Despite these challenges, the new Regulation has the potential to become a model for other regions and to create a framework for the responsible use of artificial intelligence. By establishing clear rules for the development and use of AI, the EU can foster a safer and more ethical approach to these technologies. A regulated environment may strengthen public trust in artificial intelligence and encourage investment in responsible AI systems. With this step, the European Union intends to protect its citizens while creating favourable conditions for the long-term, sustainable development of AI technologies at the global level. The worldwide discussion on the ethical and legal aspects of AI will likely intensify, and it will be interesting to observe how the Regulation proves itself in practice and how it influences the future development of artificial intelligence, not only in Europe but worldwide.
Jan Příhoda
This text was translated from Czech to English using an AI translator.