
The main concerns raised by the Vienna-based group NOYB (None Of Your Business), led by privacy activist Max Schrems, regarding Meta's AI training plans are as follows:
Consent: NOYB disputes Meta's interpretation of consent, arguing that EU law requires explicit opt-in consent for the use of personal data, which it says Meta is not obtaining for its AI training.
Legitimate Interests: Meta claims a legitimate interest in using user data for AI training that overrides users' fundamental rights. NOYB counters that the European Court of Justice has already held that Meta has no legitimate interest that overrides users' right to data protection in the advertising context, and argues that the same reasoning should apply to AI training.
Lack of Transparency: NOYB claims that Meta's privacy policy is broad and vague about the specific AI purposes for which user data will be used. This lack of transparency makes it difficult for users to understand how their data will be utilized.
Data Removal: Once data has been used for AI training, users appear to have no way to have it removed from the system, which NOYB says violates the "right to be forgotten" under the GDPR.
Unlimited Data Usage: Meta's new privacy policy appears to allow the company to use all public and non-public user data collected since 2007 for any undefined current or future AI technology, which NOYB considers a clear violation of the GDPR.
Responsibility Shift: Meta shifts the burden of data protection onto users by directing them to an objection (opt-out) form that they must fill out if they do not want Meta to use their data. NOYB argues that the law requires Meta to obtain opt-in consent, not to provide a hidden and misleading opt-out form.
Inclusion of Sensitive Data: Meta claims that it cannot distinguish between sensitive data under Article 9 GDPR, such as ethnicity, political opinions, religious beliefs (for which the "legitimate interest" argument is not available under the law), and other data for which a "legitimate interest" could theoretically be claimed. This could lead to the use of sensitive personal data in AI training without proper consent.
Overall, NOYB believes that Meta's AI training plans violate at least ten Articles of the GDPR and is urging national privacy watchdogs to stop the company from implementing these plans.

Meta intends to use public data from European users to train its AI models. This includes public posts or comments shared by users on its platforms such as Facebook, Instagram, and WhatsApp. The company aims to use this data to better understand regional languages, cultures, and trending topics on social media in Europe. However, Meta has stated that it will not use private messages to friends and family, nor content from European users who are under 18, for AI training purposes.

Meta justifies training its AI models on data from European users by stating that it needs to better reflect the "languages, geography, and cultural references" of its users in Europe. According to Stefano Fratta, Global Engagement Director of Meta's Privacy Policy, if the company does not train its models on the public content that Europeans share on its services, such as public posts or comments, then the AI models and the features they power will not accurately understand important regional languages, cultures, or trending topics on social media. Fratta also argues that Europeans would be ill-served by AI models that are not informed by Europe's rich cultural, social, and historical contributions.