Spain Launches Criminal Probe Into X, Meta, and TikTok Over AI Deepfakes
The Spanish government has requested a criminal investigation into X, Meta, and TikTok following the viral spread of 3 million AI-generated nude images in just two weeks. Prime Minister Pedro Sanchez and Minister Elma Saiz signaled a crackdown on algorithmic amplification of deepfakes targeting minors.
Key Intelligence
Key Facts
- Spain's government requested a criminal investigation into X, Meta, and TikTok for potential crimes against minors.
- Approximately 3 million AI-generated nude images were detected online in just under two weeks.
- The probe focuses on potential offenses including child pornography and degrading treatment of minors.
- Prime Minister Pedro Sanchez stated the platforms are undermining the 'mental health, dignity, and rights' of children.
- Meta and TikTok claim to have robust safeguards, while X did not immediately respond to the specific probe announcement.
- The investigation targets the 'algorithmic amplification' of harmful AI-generated content.
Analysis
The Spanish government's decision to initiate a criminal investigation into X, Meta Platforms, and TikTok marks a watershed moment in the global regulatory battle against AI-generated non-consensual intimate imagery (NCII). By involving the public prosecutor's office, Madrid is shifting the conversation from civil platform liability to potential criminal offenses, including child pornography and the degrading treatment of minors. This escalation follows a staggering report that approximately 3 million AI-generated nude images, many depicting minors, proliferated across these platforms in a period of less than 14 days. The move signals that European regulators are no longer satisfied with self-regulation or standard content moderation reports, especially as generative AI tools lower the barrier to creating high-fidelity, harmful content.
At the heart of the investigation is the concept of algorithmic amplification. Spanish Minister of Inclusion Elma Saiz explicitly stated that the government cannot allow harmful content to be emboldened by the very algorithms designed to maximize engagement. This strikes at the core business model of social media giants: the recommendation engines that surface content to users. While platforms like Meta and TikTok have long maintained that they have robust systems to detect and remove child sexual abuse material (CSAM), the sheer volume of AI-generated content—3 million images in two weeks—suggests a systemic failure to contain the rapid output of generative models. The investigation will likely scrutinize whether these platforms' safety filters were bypassed or if their automated detection systems are simply unequipped to handle the scale of AI-generated deepfakes.
The role of specific AI products, such as X's Grok, is also under the microscope. While X has previously said it restricts image generation and maintains a zero-tolerance policy on non-consensual nudity, the platform has faced consistent criticism for its leaner moderation teams and more permissive content policies under Elon Musk's ownership. Meta, by contrast, has emphasized that its AI tools are trained to reject requests for nude imagery and that it proactively removes NCII when detected. The Spanish government's probe, however, suggests that even these safeguards may be insufficient against sophisticated users who 'jailbreak' or otherwise manipulate AI models into producing illicit content.
This criminal probe could have far-reaching implications for the tech industry, particularly in how companies develop and deploy generative AI tools. If the Spanish prosecutor finds evidence of 'willful blindness' or criminal negligence, it could lead to unprecedented fines or even operational restrictions within the country. Furthermore, this action aligns with a broader European trend toward stricter AI governance, exemplified by the EU AI Act and the Digital Services Act (DSA). Other member states may follow Spain's lead, creating a fragmented but increasingly hostile legal environment for platforms that fail to police AI-generated harms effectively.
Looking forward, the industry should expect a push for more transparent 'Know Your Customer' (KYC) protocols for AI image generators and mandatory digital watermarking that social media platforms must recognize and filter. The outcome of this investigation will likely determine whether the responsibility for AI safety remains with the tool developers or shifts more heavily toward the platforms that facilitate the viral spread of their outputs. For now, the Spanish government has made it clear that the safety of minors and the protection of human dignity will take precedence over the rapid, unchecked expansion of generative AI features.