EU Simplifies AI Rules to Support Small Businesses and Mid-Caps

MEPs agree on proposals to simplify AI rules: The European Parliament adopted its position on an AI Act simplification proposal with 569 votes in favour, 45 against and 23 abstentions. Notably, the proposal delays rules for high-risk AI systems to allow implementation guidance and standards preparation. MEPs introduce fixed application dates: 2 December 2027 for high-risk systems and 2 August 2028 for systems covered by EU sectoral safety legislation. Additionally, MEPs grant providers until 2 November 2026 for watermarking compliance. 

They also introduced a ban on “nudifier” systems that create or manipulate sexually explicit images of identifiable people without consent; systems with effective safeguards are excluded. Furthermore, MEPs permit personal data processing to detect and correct biases in AI systems, provided strict necessity safeguards are in place. They back extending SME support measures to small mid-cap enterprises. MEPs argue that AI Act obligations can be less stringent for products already regulated under sectoral laws. Following the Parliament’s adoption, negotiations with the Council on the law’s final form can now begin.

Analyses

Beyond “US Innovates, Europe Regulates”: Julia Tréhu, Program Manager and Fellow, and Adrienne Goldstein, Senior Program Coordinator, at the German Marshall Fund of the United States, reported on a December 2025 study tour to Paris and Brussels by a bipartisan delegation of US state lawmakers. The delegation met European Parliament members, Commission staff, civil society, innovators and researchers to examine AI governance approaches.

The study identified three key findings: 1) children’s safety and high-risk AI redlines represent the clearest areas for near-term transatlantic policy alignment, warranting an AI dialogue; 2) structural barriers to AI competitiveness extend beyond regulation across the entire AI value chain; and 3) legislators require cross-sectoral AI expertise to address whole-of-society challenges.

The research challenges the “US innovates, Europe regulates” stereotype, finding shared priorities on citizen protection and competitiveness despite transatlantic tensions. The study tour identified bloc-wide competitiveness obstacles beyond regulation: fractured capital markets and venture ecosystems, high energy costs, talent retention, and access to compute and datasets. Notably, the EU’s lag in AI is unlikely to be explained meaningfully by the AI Act itself, as many of its provisions haven’t yet taken effect. US companies already cooperate with EU enforcement mechanisms on data portability and interoperability.

Overview of enforcement of Chapter V: Eliška Andrš, a Policy Researcher at the Future of Life Institute, wrote an overview of the AI Act’s enforcement provisions concerning the obligations imposed on providers of general-purpose AI (GPAI) models. Under the AI Act, GPAI model providers have procedural and substantive obligations. While these obligations have applied since 2 August 2025, the Commission’s supervision and enforcement powers against GPAI model providers commence on 2 August 2026.

These powers include requesting documentation and information, conducting evaluations, requesting measures concerning compliance, risk mitigation, market restriction, recall and withdrawal, and imposing fines. Beyond the Commission, multiple actors play a role in enforcing the AI Act against GPAI model providers. National market surveillance authorities can request that the Commission exercise its enforcement powers. Downstream providers can also lodge complaints against GPAI model providers. Furthermore, the scientific panel can alert the AI Office to any systemic or concrete identifiable risks posed by GPAI models.

Simplification will roll back our rights in order to feed AI: Amnesty International published a critical analysis arguing that the Commission’s “Digital Omnibus” proposals constitute an unprecedented rollback of digital protections, framed by advocates as “simplification” but functioning as deregulation benefiting business interests. The article states that Big Tech companies, which spend heavily on lobbying (Amazon’s annual budget alone is €7 million), have pushed against regulation.

The analysis contends that the AI Omnibus poses a significant threat to the AI Act, which is considered one of the world’s most ambitious efforts to safeguard people from AI-related harms, primarily by delaying implementation of the rules for high-risk AI systems. The article highlights concerns about the “grandfathering” clause permitting early deployment without full compliance obligations. The analysis suggests a planned “digital fitness check” of existing digital laws could further support deregulation.

The EU must mainstream gender in AI policy: Viktoria Henkemeier, a Junior Policy Analyst, and Samuel Goodger, a Policy Analyst at the European Policy Centre, argue that legislators are beginning to treat AI-generated gender violence as a design and governance issue, not merely a content moderation one. Gender-based digital violence, such as harassment, stalking, and doxing, predates widespread AI and is generally covered by existing legislation. Crucially, AI-generated harmful content differs in quality: it can manufacture harm at scale and lower the barriers to real-world violence, exploiting gaps that current tools can’t address. Notably, the AI Act mentions gender equality but does not acknowledge the gendered power structures influencing AI design, training, and deployment, or AI’s societal implications. The Code of Practice on Transparency does not mention gender, and the Code of Practice on General-Purpose AI fails to classify gender-based discrimination and violence as systemic risks.

Recommendations for Digital Omnibus trilogue: A report by Marcel Mir Teijeiro and Koen Holtman at the AI Standards Lab analyses the positions of the Council and European Parliament in the AI Act Omnibus trilogue. Their main recommendation is to prioritise health, safety, and fundamental rights protections alongside reducing administrative burden. Specifically, they welcome the reinstatement of Article 6(4) registration requirements, support Parliament’s proposed Article 64(2a) legally requiring AI Office resourcing, and welcome the Article 75 proposals for AI Office oversight powers.

Furthermore, they prefer the Council’s approach of maintaining current Annex I.A obligations over Parliament’s exclusions and support Parliament’s extension of value chain obligations to GPAI model providers under Article 25, provided these extensions exclude providers clearly stating non-high-risk intentions. Finally, the report recommends reinforcing GDPR safeguards in Article 4a, adopting Parliament’s cleaner formulation for Article 5 prohibitions on non-consensual intimate imagery and child sexual abuse material, and prioritising Parliament’s template-based post-market monitoring guidance with an earlier February 2027 deadline for Article 72(3).