
EU AI Act Enforces New Rules on General-Purpose AI from August 2025

On August 2, 2025, the European Union’s AI Act ushered in binding obligations for general-purpose AI (GPAI) models—marking a significant milestone in global AI regulation. These rules confront the pressing challenges of transparency, copyright, systemic risk, and accountability, setting out a robust compliance framework for developers, vendors, and users of advanced AI technologies.

What Is a General-Purpose AI Model?

General-purpose AI (GPAI) models are powerful machine learning systems—like large language models or other broad foundation models—that:

- are trained on large amounts of data, typically using more than 10²³ FLOPs of compute;
- display significant generality; and
- can competently perform a wide range of distinct tasks, such as generating language.

GPAI models with systemic risk—those with even higher compute (10²⁵ FLOPs or more) or significant impact on society—face enhanced regulatory scrutiny.
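The two compute thresholds can be illustrated with a small classification helper. This is a sketch based only on the 10²³ and 10²⁵ FLOP figures cited in this article; the function name and return strings are illustrative, and compute is just one indicator alongside capability and societal impact:

```python
def classify_model(training_flops: float) -> str:
    """Illustrative classification of a model by training compute,
    using the thresholds cited for the EU AI Act's GPAI rules:
    - >= 1e23 FLOPs: presumed general-purpose AI (GPAI)
    - >= 1e25 FLOPs: presumed GPAI with systemic risk
    """
    if training_flops >= 1e25:
        return "GPAI with systemic risk"
    if training_flops >= 1e23:
        return "GPAI"
    return "below GPAI presumption threshold"

# Example: a frontier-scale model trained with ~5e25 FLOPs
print(classify_model(5e25))  # GPAI with systemic risk
```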

The Core EU GPAI Obligations

1. Transparency

Providers must:

- draw up and maintain technical documentation for the model;
- supply information and documentation to downstream providers who integrate the model into their own systems; and
- publish a sufficiently detailed summary of the model's training data, using the Commission's official template.

2. Copyright Compliance

Providers are required to:

- put in place a policy to comply with EU copyright law; and
- identify and respect rights reservations (text-and-data-mining opt-outs) expressed by rightsholders.

3. Systemic Risk Mitigation (For High-Impact Models)

Models that meet the systemic risk threshold must:

- undergo state-of-the-art model evaluations, including adversarial testing;
- assess and mitigate possible systemic risks arising from the model;
- report serious incidents to the relevant authorities; and
- ensure an adequate level of cybersecurity protection for the model and its infrastructure.

4. Clear Definitions and Lifecycle Obligations

The Act also clarifies who counts as a provider—substantially modifying or retraining another party's model can shift provider obligations to you—and applies its requirements across the model lifecycle, from placement on the market through ongoing operation and updates.

Compliance Tools: Guidelines, Code of Practice & Documentation Template

To smooth the path for providers, the European Commission offers:

- guidelines clarifying the scope of the GPAI rules and key definitions;
- a voluntary Code of Practice that streamlines demonstrating compliance; and
- an official template for publishing training data summaries.

The New Compliance Landscape

Organizations that develop, sell, or use GPAI models in the EU must:

- determine whether they qualify as providers under the Act;
- prepare and maintain the required documentation; and
- track their obligations over time as models are updated or modified.

This paradigm shift is driving new demand for expertise in AI governance, copyright compliance, risk assessment, and technical documentation.

How to Demonstrate Compliance

Providers can:

- sign the voluntary Code of Practice for greater legal certainty;
- publish training data summaries using the Commission's official template; and
- evidence compliance through alternative, well-documented mechanisms of their own.

The EU’s approach is already influencing regulatory debates in the U.S., UK, and beyond, setting a high bar for responsible AI. By enshrining transparency, accountability, and risk mitigation at the heart of the AI lifecycle, the EU AI Act seeks to balance innovation with fundamental rights, fostering trust among users, businesses, and regulators worldwide.

The EU AI Act represents a watershed moment for AI governance, requiring a proactive, lifecycle-based approach to compliance. Organizations should act now to adapt to this new standard, leveraging available guidelines and tools to ensure ethical, lawful, and accountable AI in Europe and beyond.
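The phased timeline—new models in scope from August 2, 2025, models already on the market given until August 2, 2027—can be encoded in a small helper. This is a sketch keyed only to the two dates stated in this article; real deadline determination depends on legal specifics beyond the market-entry date:

```python
from datetime import date

# Dates from the EU AI Act GPAI timeline: new models must comply from
# 2 August 2025; models already on the market before that date have
# until 2 August 2027. Illustrative only, not legal advice.
GPAI_RULES_START = date(2025, 8, 2)
LEGACY_DEADLINE = date(2027, 8, 2)

def gpai_compliance_deadline(placed_on_market: date) -> date:
    """Return the applicable compliance date for a GPAI model,
    based solely on when it was placed on the EU market."""
    if placed_on_market < GPAI_RULES_START:
        return LEGACY_DEADLINE   # existing model: transition period
    return placed_on_market      # new model: in scope from day one

print(gpai_compliance_deadline(date(2024, 3, 1)))   # 2027-08-02
print(gpai_compliance_deadline(date(2026, 1, 15)))  # 2026-01-15
```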


Frequently Asked Questions (FAQ)

Q1: What are the key criteria for classifying GPAI models?
A: Models trained with over 10²³ FLOPs and capable of a wide range of tasks, especially those generating language, are classified as GPAI; higher-capacity models (over 10²⁵ FLOPs) may be deemed systemic risk models.

Q2: If I use a GPAI model from another provider, am I a provider under the AI Act?
A: It depends. Integrating an unmodified GPAI into your product doesn’t make you a provider, but significant modifications—like retraining—can shift that responsibility to you.

Q3: Do open-source GPAI models fall under these rules?
A: Open-source GPAI models may still be subject to the obligations, though certain exceptions exist, particularly for models not placed on the market or under active development.

Q4: Is signing the Code of Practice mandatory?
A: No, it’s voluntary, but doing so greatly improves legal certainty and streamlines the compliance process. Providers may alternatively use other mechanisms to evidence compliance.

Q5: How is training data compliance checked?
A: Providers must use the EU’s official template to publish summaries of model training data. Authorities may request further detail or conduct audits for verification.

Q6: What is the timeline for compliance?
A: For new GPAI models, obligations began on August 2, 2025. Existing models on the market before this date have until August 2, 2027, to demonstrate compliance.

Q7: What happens if I fail to comply?
A: Non-compliance can lead to enforcement actions, including fines, orders to withdraw models from the EU market, or other regulatory penalties as detailed in the AI Act.

