The EU's Historic AI Act: Pioneering Regulation for Artificial Intelligence

The European Union has taken a groundbreaking step in the regulation of artificial intelligence (AI) with the introduction of the AI Act. As the world's first comprehensive AI law, this legislation aims to provide protection and structure to the ever-expanding use of AI technologies. Here's a closer look at how the AI Act will safeguard individuals and ensure responsible AI development.

A Digital Strategy for AI Regulation

As part of its digital strategy, the EU wants to regulate AI to ensure its responsible deployment and to create favorable conditions for its development. AI technology holds immense potential, offering benefits such as improved healthcare, safer transportation, more efficient manufacturing, and more sustainable energy.

In April 2021, the European Commission proposed the first-ever regulatory framework for AI within the EU. This framework categorizes AI systems based on their risk profiles, with varying degrees of regulation depending on the level of risk they present. Once implemented, these rules will set a global precedent for AI governance.

Key Objectives of AI Legislation

The European Parliament's top priority is to ensure that AI systems deployed within the EU adhere to the following principles:

  • Safety: AI systems must be safe for users.
  • Transparency: The inner workings of AI systems should be understandable and transparent.
  • Traceability: AI decisions and actions must be traceable, allowing for accountability.
  • Non-Discrimination: AI systems should not perpetuate discrimination.
  • Environmental Responsibility: AI should have a minimal environmental impact, promoting sustainability.

Additionally, the EU aims to establish a uniform definition for AI that remains technology-neutral and applicable to future AI systems.

AI Act: Tailored Rules for Different Risk Levels

The AI Act introduces a tiered approach to regulation, customizing obligations for providers and users based on the risk posed by the AI technology in question.

Unacceptable Risk: AI systems classified as posing an "unacceptable risk" will be banned. These include systems that engage in cognitive behavioral manipulation, social scoring, and real-time remote biometric identification, such as facial recognition.

An exception may be made for "post" remote biometric identification systems, where identification occurs only after a significant delay; these may be used to prosecute serious crimes, but only with court approval.

High Risk: AI systems with the potential to negatively impact safety or fundamental rights fall into the "high-risk" category and will be further divided into two subcategories:

  • AI systems used in products covered by the EU's product safety legislation (e.g., toys, aviation, medical devices).
  • AI systems in eight specific domains that must be registered in an EU database: biometric identification, critical infrastructure management, education, employment, access to essential services, law enforcement, migration control, and legal interpretation.

All high-risk AI systems will be thoroughly assessed before being placed on the market and monitored throughout their lifecycle.

Generative AI and Limited Risk AI

Generative AI models, such as ChatGPT, would have to comply with additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.

AI systems categorized as "limited risk" should comply with minimal transparency requirements that enable users to make informed decisions. Users should also be made aware when interacting with AI systems that generate or manipulate image, audio, or video content, such as deepfakes.
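To make the tiered approach more concrete, the sketch below shows one purely illustrative way a compliance checklist might model the categories described above. It is a minimal simplification of this article's summary, not of the legal text; the class names, tier labels, obligation strings, and the example system are all hypothetical.

    from dataclasses import dataclass
    from enum import Enum


    class RiskTier(Enum):
        """Risk tiers as summarized in this article (illustrative only)."""
        UNACCEPTABLE = "unacceptable"  # banned outright, e.g. social scoring
        HIGH = "high"                  # assessed before market entry, monitored afterwards
        LIMITED = "limited"            # minimal transparency obligations


    @dataclass
    class AISystem:
        name: str
        tier: RiskTier


    def obligations(system: AISystem) -> list[str]:
        """Map a risk tier to the simplified obligations described in this article."""
        if system.tier is RiskTier.UNACCEPTABLE:
            return ["prohibited from the EU market"]
        if system.tier is RiskTier.HIGH:
            return [
                "thorough assessment before being placed on the market",
                "registration in the EU database (for the eight listed domains)",
                "continued monitoring throughout the lifecycle",
            ]
        return ["inform users they are interacting with AI-generated or manipulated content"]


    # Hypothetical usage: an AI tool used in hiring would fall under the
    # employment domain and thus the high-risk tier described above.
    print(obligations(AISystem("candidate-screening tool", RiskTier.HIGH)))

In practice, classification under the Act depends on the system's intended purpose and context of use, which is why the prose above describes domains rather than specific products.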

Next Steps

On June 14, 2023, Members of the European Parliament (MEPs) adopted Parliament's negotiating position on the AI Act. Negotiations with EU member states in the Council will now commence to finalize the legislation. The aim is to reach an agreement by the end of the year.

The AI Act marks a significant milestone in global AI governance, demonstrating the EU's commitment to fostering responsible AI development while prioritizing the safety, transparency, and ethical use of AI technologies.
