European Union Eases AI Compliance Rules

Negotiators from the European Parliament and the Council of the European Union have reached a provisional agreement to simplify key elements of the bloc’s artificial intelligence regulations while strengthening protections against harmful AI-generated material.

The deal, announced Thursday, forms part of the European Union’s broader effort to streamline regulatory procedures linked to the implementation of the EU AI Act.

Under the proposed changes, the enforcement timeline for certain high-risk AI systems will be postponed by up to 16 months, giving regulators and companies additional time to develop the technical standards and compliance tools required under the law.

According to the agreement, obligations for stand-alone high-risk AI systems will begin on Dec. 2, 2027. Rules covering high-risk AI integrated into consumer products will take effect later, on Aug. 2, 2028.

The revised framework also introduces stricter measures targeting abusive AI content. Lawmakers agreed to ban AI applications used to create non-consensual intimate material and child sexual abuse content.

In addition, the agreement extends selected regulatory exemptions, previously reserved for smaller firms, to small mid-cap companies and pushes the deadline for national AI regulatory sandbox programs to August 2027.

EU negotiators also moved to accelerate transparency requirements for AI-generated content. Providers would be required to implement disclosure measures within three months, rather than the six-month period previously proposed.

The provisional accord still requires formal approval from both the European Parliament and the Council before undergoing legal and linguistic review ahead of final adoption in the coming weeks.