Navigating the Future of AI: The EU AI Act

17 April 2024

In a major step forward for responsible AI development, the European Parliament approved the Artificial Intelligence Act on 13 March 2024. This legislation, forged through negotiations with member states in December 2023, prioritises both safety and innovation. The Act ensures that AI systems comply with fundamental rights while simultaneously fostering a thriving environment for responsible AI development.

The EU AI Act rests on four main pillars to regulate the development and use of AI. Read on for a closer look at the definition of an AI system, the obligations imposed on high-risk AI systems, and the four pillars designed to ensure trustworthy AI throughout the European Union, as well as how we can support your seamless adaptation to these changes.

Distinction between Regular AI Systems and High-Risk AI Systems

The Act introduces a new legal definition of an “AI system”. It remains to be enacted, but so far the definition is a functional one, encompassing machine-learning approaches (supervised, unsupervised and reinforcement learning, including deep learning) as well as logic- and knowledge-based approaches such as inference and deductive engines. The definition is therefore expected to be wide-ranging.

The AI Act goes further by distinguishing between regular AI systems and high-risk AI systems. High-risk AI systems will be subject to more stringent compliance standards, such as the establishment and implementation of a risk management system. Personnel will likely be required to undergo specific training on data governance so that these principles are embedded in the entity’s management and governance. Many further obligations will fall on the entity operating a high-risk AI system.

Pillar 1: General Purpose Safeguards

The EU AI Act establishes transparency requirements for general-purpose AI (GPAI) systems and their underlying models. These requirements focus on ensuring responsible development and mitigating potential risks. Developers must comply with EU copyright law when using training data and publish summaries detailing the content used for training. Additionally, the most powerful GPAI models face further scrutiny: developers must conduct model evaluations, assess and mitigate potential systemic risks (e.g., bias), and report any incidents encountered during operation. Finally, the Act mandates clear labelling of artificial or manipulated images, audio or video content (deepfakes) to prevent misuse.

Pillar 2: Biometric Identification Systems

The Act takes a strong stance against certain remote biometric identification (RBI) systems to protect privacy rights. Untargeted scraping of facial images from CCTV footage or internet data to create facial recognition databases is strictly prohibited. Law enforcement is granted exemptions only under tightly controlled circumstances: “real-time” RBI systems can be deployed only with prior judicial or administrative authorisation, and with limitations on time and geographic scope (e.g., finding a missing person). Similarly, “post-remote” RBI for criminal investigations requires judicial authorisation.

Pillar 3: Social Scoring & AI used to Exploit User Vulnerabilities

The EU AI Act seeks to safeguard individuals from unethical AI practices that exploit vulnerabilities or manipulate behaviour. Several applications are explicitly banned, including social scoring systems that judge individuals based on their social behaviour or personal characteristics, emotion recognition technology used in workplaces or schools, and predictive policing based solely on profiling an individual. More broadly, the Act prohibits AI systems designed to manipulate human behaviour or exploit vulnerabilities.

Pillar 4: Right of Consumers to Launch Complaints and Receive Explanations

The EU AI Act empowers consumers by granting them the right to receive clear explanations for decisions made by high-risk AI systems that impact their rights. Additionally, consumers have the right to submit complaints about AI systems if they believe they have been negatively affected. This fosters accountability and transparency within the development and use of high-risk AI.

Next Steps

Following a thorough legal and linguistic review, the EU AI Act is expected to be formally adopted imminently.

Once adopted, the Act will follow a staged implementation timeline. It will enter into force twenty days after its publication in the Official Journal. Bans on prohibited AI practices will take effect swiftly, just six months after entry into force. To assist developers in navigating the new regulations, codes of practice will be established within nine months. The rules governing powerful general-purpose AI (GPAI) models and their governance will apply after twelve months, and the Act will apply in full 24 months after entry into force. Finally, to allow for proper adaptation, obligations for certain high-risk AI systems will come into effect after a more extended period of 36 months.

How AE Business Advisors Can Help

At AE Business Advisors, our team possesses in-depth knowledge of the latest developments in the AI sector and their intricate interplay with national and international law. Whether you are already using AI in your business or considering its integration in the future, we can help you understand and adapt seamlessly to the regulatory landscape.

Key Contacts

Kris Scicluna

kscicluna@ae.com.mt