Meta to Train AI Using Public EU Data Despite Privacy Concerns

Facebook and Instagram users in Europe to receive data usage notices as regulators watch closely

BRUSSELS/LONDON: Meta Platforms has confirmed it will begin training its AI models using public posts and interactions from adult users in the European Union, despite mounting privacy concerns and ongoing regulatory scrutiny.

This move follows the postponement of Meta AI's European launch, originally planned for June 2024. While Meta introduced AI features across the United States in 2023, the EU rollout was delayed after pushback from data protection authorities, including Ireland's Data Protection Commission (DPC).

Now, Meta—parent company of Facebook and Instagram—has announced it will proceed with a user-informed approach. European users will soon receive in-app notifications outlining how their public content and interactions with Meta AI (such as prompts and queries) may be used to train machine learning models.

Importantly, Meta has emphasized that private messages and data from users under 18 will be excluded. The company is also providing an opt-out form allowing users to object to the use of their public data in AI training.

The European Commission has declined to comment on the update. Meanwhile, privacy advocacy group NOYB (None of Your Business) has urged national regulators to intervene, labeling Meta’s plan a violation of user rights and consent laws.

This development comes as the DPC continues broader investigations into AI practices at other tech giants. Elon Musk’s X (formerly Twitter) is under scrutiny for alleged use of personal EU data to train its AI chatbot, Grok, while Google is being probed for how it handled user data ahead of deploying AI systems.

Meta’s decision signals a broader shift toward public-data-based AI training in Europe, raising fresh questions about transparency, GDPR compliance, and ethical data use in the age of artificial intelligence.
NEWS DESK