The AI Act from the EU—What It Means for Digital Experiences
How will the new regulation for generative artificial intelligence impact your digital organization?
Written by Morten Eriksen on
The European Union has taken a monumental step in regulating artificial intelligence (AI) by introducing the world's first comprehensive AI regulation—the AI Act (Regulation (EU) 2024/1689).
Published on 12 July 2024 and entering into force on 1 August 2024, the Act is set to reshape how organizations develop and deploy AI technologies.
For professionals working with digital experiences, understanding the implications of the AI Act is crucial for steering your offerings and organizations into a compliant and innovative future.
While the AI Act is an EU regulation, its influence extends far beyond European borders. Organizations operating outside the EU, including those in the United States, may find themselves subject to its requirements if their AI systems affect individuals within EU member states.
This global reach means that even without a physical presence in Europe, companies must align their AI practices with the Act’s stipulations to avoid significant penalties.
At the heart of the AI Act is the definition of an “AI System”, a term that has undergone several refinements to keep pace with technological advancements like generative AI and large language models. According to the Act, an AI System is:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
For digital experiences, this means that not all technologies traditionally labeled as AI fall under the Act's jurisdiction. It’s essential to evaluate whether your digital tools meet this specific definition to determine compliance obligations.
The AI Act introduces a risk-based approach to regulation, classifying AI systems into four categories: unacceptable risk (prohibited practices, such as social scoring), high risk (subject to strict requirements, such as AI used in recruitment or credit scoring), limited risk (subject to transparency obligations, such as chatbots), and minimal risk (no new obligations, such as spam filters).
Understanding where your AI-powered digital experiences fall within this framework is important for compliance and risk management.
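As an illustration only, an internal compliance register might record each system alongside its assessed tier. The tier names below track the Act's four categories, but the helper function and its field names are hypothetical, not anything prescribed by the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. AI in recruitment or credit scoring
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters; no new obligations

def audit_entry(system_name: str, tier: RiskTier) -> dict:
    """Record a system and its assessed tier in a hypothetical compliance register."""
    return {
        "system": system_name,
        "risk_tier": tier.value,
        "needs_conformity_assessment": tier is RiskTier.HIGH,
        "transparency_required": tier in (RiskTier.HIGH, RiskTier.LIMITED),
    }

# A customer-facing chatbot would typically land in the limited-risk tier.
entry = audit_entry("site-search-chatbot", RiskTier.LIMITED)
```

Mapping each system to a tier up front makes it easier to see which obligations, if any, attach to it.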
The Act distinguishes between “Providers” (those who develop AI systems or place them on the market) and “Deployers” (those who use them in a professional capacity). Obligations vary accordingly: providers carry the bulk of the duties, such as conformity assessments, technical documentation, and registration for high-risk systems, while deployers must use systems in accordance with the provider's instructions, ensure human oversight, and monitor operation.
With this in mind, content editors must, for instance, ensure that content delivered through AI complies with transparency requirements and ethical guidelines.
The Act pays special attention to general-purpose AI models (GPAIM) and systems (GPAIS), especially those posing systemic risks due to their broad applicability and potential impact. Additional obligations for these include maintaining technical documentation, providing information to downstream providers, putting in place a policy to comply with EU copyright law, and publishing a summary of the content used for training. Models deemed to pose systemic risk face further duties, such as model evaluations, adversarial testing, serious-incident reporting, and adequate cybersecurity protection.
If your organization utilizes or develops such models, particularly those akin to large language models, these provisions are directly relevant.
Key dates to note:
- 1 August 2024: the Act enters into force.
- 2 February 2025: prohibitions on unacceptable-risk practices apply.
- 2 August 2025: obligations for general-purpose AI models apply.
- 2 August 2026: most remaining provisions, including those for high-risk systems, apply.
- 2 August 2027: extended transition period ends for high-risk AI embedded in regulated products.
Early action is advisable to align your AI strategies with these timelines.
The AI Act's focus on safeguarding individual rights and promoting transparency directly affects how digital experiences are crafted and delivered. Considerations include informing users when they are interacting with an AI system, labeling AI-generated or manipulated content such as deepfakes, ensuring human oversight of automated decisions, and maintaining sound data governance.
For content editors, this could mean revising content strategies to ensure all AI-generated content is compliant. Solution architects may need to redesign systems to include necessary compliance features.
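One small building block for such a content strategy could be attaching a disclosure to AI-generated items in the publishing pipeline. This is a minimal sketch under assumed requirements, not an implementation of any specific provision; the class, its fields, and the disclosure wording are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PublishedContent:
    """Hypothetical content item flowing through a publishing pipeline."""
    body: str
    ai_generated: bool
    disclosure: str = field(init=False, default="")

    def __post_init__(self):
        # Attach a human-readable disclosure to AI-generated items, in the
        # spirit of the Act's transparency obligations for synthetic content.
        if self.ai_generated:
            self.disclosure = "This content was generated with AI assistance."

# An AI-assisted article picks up the disclosure automatically.
article = PublishedContent(body="Draft produced by an LLM...", ai_generated=True)
```

Centralizing the disclosure in the content model, rather than in individual templates, keeps the labeling consistent across channels.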
By taking these steps, organizations can mitigate risks and continue to innovate within the regulatory framework.