Euroconsumers members say stop to Meta’s Generative AI training data grab

Euroconsumers’ members Testachats and OCU say Meta’s new privacy policy, which will use vast amounts of consumer data to train its generative AI systems, breaks EU law

Meta has announced it will change the terms of its privacy policy so that people’s data on Instagram, Threads and Facebook can feed into and train its generative AI models. 

In practice, this means every photo, every post, every comment, every like, every share – every word (bar private messages) on our social media pages will be fed into training programmes for Meta’s generative AI projects. Options to deny permission are available to users, but are only open for a limited time and are neither simple nor intuitive. 

Euroconsumers’ members Testachats/Testaankoop in Belgium and OCU in Spain believe Meta is breaking EU law and are asking their respective authorities to act. Let’s look at what Meta is planning, what actions our members are taking and how My Data Is Mine is helping them do so.

What are Meta’s Generative AI plans?

Meta is one of many large tech firms looking to entrench their lead in generative AI, hyped as the future for digital technology.  

Generative AI is a branch of AI that uses vast amounts of existing data to train models that can generate outputs like speech, music, text, images and videos. Previously, the content of books, albums, artwork, photographs and voices was fed into AI training models to enable them to generate their own outputs. Now, AI training models are thirsty for vast swathes of people’s personal and behavioural information to create new outputs. 

Yet Meta’s statements shed little light on what new outputs it might have in mind, using fuzzy phrases like ‘helps solve complex problems, sparks imaginations, and brings new creations to life’. 

Meta goes on to say that it can provide ‘real time answers in a discussion’ and ‘help with the organization and planning of vacations’. This lack of clarity makes informed agreement or refusal impossible, and fails to tell consumers about any risks. 

Euroconsumers’ webinars on generative AI brought together multiple perspectives to explore some of the challenges of this new technology. Paramount is the potential for incredibly convincing content that is in fact inaccurate, misleading or just plain fake. 


“These vague notions are obscuring a training data grab that has no clear objective other than what might just be to use people’s data for whatever they want. People are being asked to give over their information for purposes which they have no sense of or control over”


Els Bruggeman, Head of Policy and Enforcement, Euroconsumers

OCU and Testachats issue urgent Data Protection complaints

Not opposing this change will give Meta the ability to use your information in any way it wants to train generative AI. And timing is everything – Meta has rushed this through, giving just one month’s notice of the changes, which are due to come in on June 26th 2024.

However, opposing it is not simple. Users have a right in law to object to the use of their data in this way, but such a right is useless if it’s not easy to understand and exercise. The basis of our Spanish and Belgian members’ complaint to their national data protection authorities is that this significant update to data use terms is being carried out in a way that is non-transparent and harmful for users, in breach of the EU General Data Protection Regulation.

Here’s the main basis of the complaints: 

Insufficient right of information: the means of exercising the right to object to data use is hidden. This breaches Article 12 of the GDPR, which imposes an obligation of transparency in information and communications and in the modalities for exercising the rights of the data subject:

Information that the change was happening was low-key to say the least. Users received only an in-app notification that is easy to ignore; they didn’t get an email, and there was no information campaign by Meta.

Opposing it is difficult. It requires several steps through different forms that are neither simple nor intuitive. For example, in the Facebook app there are a minimum of seven steps plus a form to fill out, including being sent out of the app and back again. When our members tested it they found bugs and glitches that make the process hard to follow.

Insufficient right of objection, in violation of Article 21 of the GDPR, which gives users a right to object, allowing them to refuse the use of their personal data:

Meta reserves the right to refuse user objections. Not only are people obliged to justify why they object to the sharing of their personal data, but their justification can then be refused by Meta.

Meta states that it can still “process information about you to develop and improve AI at Meta, even if you object to its use or do not use our products and services.” So, even if the user exercises the right to object, Meta’s AI will still be able to exploit that person’s data. This is because the model derives information from everyone who appears in an analysed image, without distinguishing between profiles that have accepted the new Meta privacy policy and those that have not.

The whole situation created by Meta’s new policy and objection process is confusing and counter to the principles and rights of user control enshrined in the GDPR. 

We’re left with a bizarre situation: even where users have jumped through all the hoops and glitches to register an objection, Meta can grant the objection but then continue to use their data anyway. 

My Data is Mine in times of Generative AI

To be honest, Meta is not the only one guilty of this. In the race to conquer AI market share, other big players like X and LinkedIn are doing the same. That doesn’t make it any less troubling. It forces us to ask where this leaves consumers, and where it leaves the idea that My Data Is Mine.

Euroconsumers’ My Data Is Mine declaration is grounded in the firm belief that consumers should be in control of their own data in the digital economy, and be able to use that power to push for innovation that really meets their needs. It was written when AI was on the scene but not yet on everyone’s minds and lips. Today we are seeing fast-paced, large-scale development of generative AI that feeds on scraped consumer data. It’s fair to say the idea of My Data Is Mine has been taken to challenging new heights. 

Companies need not only to be transparent and empowering about their data practices, but also to be held accountable for misusing data. We want a data economy that works for people, not one that grows vast data sets for their own sake to train models with nothing of value to add.

While tech companies massively use consumer data to develop AI tools they intend to commercialize, we need to ask ourselves: how do we ensure consumers are in charge of their data, and how do we guarantee they get valuable services in return? 

Is Meta using consumers’ data to train AI models that can detect hate speech or deliver consumer-friendly innovation, or to develop the next AI frivolity that can be monetized?  

More than ever, we need empowered consumers to improve the market, and more than ever they are in need of a trustworthy and enforced framework to do this. In 2024, My Data is Mine is more relevant than ever.


Euroconsumers knows there is value in automating many routine consumer tasks, and its manifesto called for AI to be used to answer the tangible needs of people, guided by ethical and human-centred principles. 

Meta’s AI plans couldn’t happen without the data created by consumers, who we believe have the right to access, share and control that data, and to get value from how it is used.