The European Union has taken a monumental step in regulating artificial intelligence (AI) by introducing the world's first comprehensive AI regulation—the AI Act (Regulation (EU) 2024/1689).

Published on 12 July 2024 and entering into force on 1 August 2024, the Act is set to reshape how organizations develop and deploy AI technologies.

For professionals working with digital experiences, understanding the implications of the AI Act is crucial for steering your offerings and organizations into a compliant and innovative future.

Regulations with Global Impact

While the AI Act is an EU regulation, its influence extends far beyond European borders. Organizations operating outside the EU, including those in the United States, may find themselves subject to its requirements if their AI systems affect individuals within EU member states.

This global reach means that even without a physical presence in Europe, companies must align their AI practices with the Act’s stipulations to avoid significant penalties.

Definition of an AI System

At the heart of the AI Act is the definition of an “AI system”, a term that has undergone several refinements to keep pace with technological advancements like generative AI and large language models. According to Article 3 of the Act, an AI system is:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

For digital experiences, this means that not all technologies traditionally labeled as AI fall under the Act's jurisdiction. It’s essential to evaluate whether your digital tools meet this specific definition to determine compliance obligations.

Categorizing Risks in AI Systems

The AI Act introduces a risk-based approach to regulation, classifying AI systems into four categories:

  1. Unacceptable Risk: AI practices that pose significant harm to individuals’ health, safety, or fundamental rights. These are prohibited entirely.
  2. High Risk: Systems that could significantly impact individuals in critical areas like employment, healthcare, or law enforcement. These require strict compliance measures.
  3. Limited Risk: AI systems that pose transparency-related risks but are not high risk, such as chatbots. These require certain disclosures to users.
  4. Minimal Risk: Systems with minimal impact, like spam filters, which are largely unregulated by the AI Act but may fall under other laws.

Understanding where your AI-powered digital experiences fall within this framework is important for compliance and risk management.
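To make this taxonomy concrete, here is a minimal TypeScript sketch of how an internal inventory might tag each system with one of the Act’s four risk tiers. The example systems and their classifications are illustrative assumptions, not legal determinations.

```typescript
// Illustrative sketch: tagging an internal AI inventory with the Act's
// four risk tiers. The example systems and their classifications are
// assumptions for demonstration only; real classification needs legal review.

type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface AiSystemRecord {
  name: string;       // internal identifier (hypothetical)
  purpose: string;    // what the system does
  riskTier: RiskTier; // classification under the AI Act
}

const inventory: AiSystemRecord[] = [
  { name: "support-chatbot", purpose: "Customer support dialogue", riskTier: "limited" },
  { name: "cv-screener", purpose: "Ranking job applicants", riskTier: "high" },
  { name: "spam-filter", purpose: "Filtering inbound email", riskTier: "minimal" },
];

// Surface the systems that carry explicit obligations under the Act.
const regulated = inventory.filter((s) => s.riskTier === "high" || s.riskTier === "limited");
console.log(regulated.map((s) => s.name)); // ["support-chatbot", "cv-screener"]
```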

Obligations for Providers and Deployers

The Act distinguishes between “Providers” (those who develop AI systems) and “Deployers” (those who use them). Obligations vary accordingly:

  • High-Risk AI Systems: Providers must ensure accuracy and robustness and perform the required impact assessments. Deployers need to monitor usage and report serious incidents.
  • Limited-Risk AI Systems: Primarily transparency obligations, such as informing users they are interacting with an AI system.

With this in mind, content editors must, for instance, ensure that content delivered through AI complies with transparency requirements and ethical guidelines; a minimal disclosure sketch follows below.
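As a rough illustration of the transparency obligation, this TypeScript sketch prepends an AI disclosure to the first response of a chatbot. The reply() function and the disclosure wording are hypothetical placeholders; the Act requires that users be informed, not any particular phrasing.

```typescript
// Minimal sketch of a chatbot transparency notice. reply() is a
// hypothetical stand-in for the actual model call; the disclosure text
// is illustrative, not prescribed wording from the Act.

const AI_DISCLOSURE = "You are chatting with an AI assistant, not a human.";

function reply(userMessage: string): string {
  // Placeholder for the real AI backend (assumption).
  return `Echo: ${userMessage}`;
}

function respond(userMessage: string, isFirstTurn: boolean): string {
  const answer = reply(userMessage);
  // Disclose the AI up front, on the first turn of the conversation.
  return isFirstTurn ? `${AI_DISCLOSURE}\n\n${answer}` : answer;
}

console.log(respond("What are your opening hours?", true));
```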

General-Purpose AI and Systemic Risks

The Act pays special attention to general-purpose AI models (GPAIM) and systems (GPAIS), especially those posing systemic risks due to their broad applicability and potential impact. Additional obligations for these include:

  • Performing comprehensive risk assessments.
  • Ensuring compliance with intellectual property laws.
  • Maintaining detailed documentation and summaries of training data.

If your organization utilizes or develops such models, particularly those akin to large language models, these provisions are directly relevant.
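As a loose illustration of the documentation point, the sketch below models a public training-data summary as a structured record. The field names are assumptions chosen for this example; the Act mandates a sufficiently detailed summary of training content but does not prescribe a schema.

```typescript
// Hedged sketch: one way to structure the public summary of training
// data for a general-purpose AI model. All field names are assumptions;
// the Act defines the obligation, not the format.

interface TrainingDataSummary {
  modelName: string;
  dataSources: string[];    // e.g. licensed corpora, public-domain texts
  collectionPeriod: string; // time span the data covers
  copyrightPolicy: string;  // how IP rights and opt-outs were handled
  lastUpdated: string;      // ISO date of this summary
}

const summary: TrainingDataSummary = {
  modelName: "example-gpaim-1",
  dataSources: ["Licensed news archive", "Public-domain books"],
  collectionPeriod: "2010-2023",
  copyrightPolicy: "Honored machine-readable text-and-data-mining opt-outs",
  lastUpdated: "2024-08-01",
};

console.log(JSON.stringify(summary, null, 2));
```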

Timeline for Compliance

Key dates to note:

  • 1 August 2024: The AI Act enters into force.
  • 2 February 2025 (6 months in): Prohibited AI practices must cease.
  • 2 August 2026 (24 months in): Most obligations related to high-risk AI systems apply.

Early action is advisable to align your AI strategies with these timelines.

Implications for Digital Experiences

The AI Act's focus on safeguarding individual rights and promoting transparency directly affects how digital experiences are crafted and delivered. Considerations include:

  • User Transparency: Users must be informed when they are interacting with AI; this applies, for example, to chatbots and virtual assistants.
  • Content Integrity: AI-generated content must comply with copyright and intellectual property laws, affecting content curation and generation.
  • System Design: Architects must incorporate compliance measures into system design, potentially affecting development timelines and costs.

For content editors, this could mean revising content strategies to ensure all AI-generated content is compliant. Solution architects may need to redesign systems to include necessary compliance features.

Steps to Prepare for Compliance

  1. Audit Your AI Systems: Identify all AI systems in use and classify them according to the Act’s risk categories (see the sketch after this list).
  2. Update Documentation: Ensure all AI systems have comprehensive technical documentation as required.
  3. Enhance Transparency: Implement user notifications where AI is used, in line with transparency obligations.
  4. Perform Impact Assessments: For high-risk systems, conduct thorough assessments to evaluate potential risks to fundamental rights.
  5. Train Your Teams: Educate your colleagues about the AI Act’s requirements to foster a culture of compliance.
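
To make the audit and assessment steps concrete, here is a hedged TypeScript sketch that derives a compliance checklist from a system’s risk tier. The action lists are a simplification for illustration, not an exhaustive legal mapping.

```typescript
// Illustrative sketch: derive a compliance checklist from a risk tier.
// The actions are a simplified reading of the Act's obligations.

type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

function complianceActions(tier: RiskTier): string[] {
  switch (tier) {
    case "unacceptable":
      return ["Decommission: the practice is prohibited under the Act"];
    case "high":
      return [
        "Maintain comprehensive technical documentation",
        "Assess potential risks to fundamental rights",
        "Monitor usage and report serious incidents",
      ];
    case "limited":
      return ["Inform users they are interacting with an AI system"];
    case "minimal":
      return ["No AI Act-specific action; check other applicable laws"];
  }
}

console.log(complianceActions("high"));
```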

By taking these steps, organizations can mitigate risks and continue to innovate within the regulatory framework.
