A prominent Spanish feminist activist, Carla Galeote, is intensifying calls for stricter online regulation after she was targeted with AI-generated fake nude images and faced ongoing digital abuse — experiences she says expose serious gaps in current laws and enforcement.
At just 25, Galeote combines legal training with a strong online presence advocating for women’s rights. But she says what she encountered online wasn’t mere harassment — it was a form of digital violence, enabled by AI, that existing regulations struggle to address.
Spain’s government has recently moved toward regulating social media and AI-generated content, including proposals to:
Ban under-16s from accessing social platforms
Hold platform executives criminally liable if illegal or hateful content isn’t removed
Mandate age verification and enforce better content consent rules
But Galeote argues these steps aren’t enough.
Her key demands:
Traceable identities behind all online accounts — to end impunity for digital violence
Protection that applies to all users, not just minors
Stronger accountability mechanisms for platforms that repeatedly allow harmful AI content
She slammed under-16 bans as “paternalistic,” saying adults are just as vulnerable to deepfake abuse and deserve robust protections too.
Her stance comes amid wider European momentum to regulate AI-driven harms. Governments from France to Poland are debating similar measures, and the European Union aims to require member states to criminalize non-consensual sexual deepfakes by 2027.
Spain has also previously approved draft legislation targeting AI deepfakes and tightening consent rules on image use, showing the issue is gaining political traction.
Critics of stringent regulation, including some tech executives, argue that new rules could threaten free speech. Galeote disagrees, saying freedom of expression and online safety can coexist — but only if platforms are held accountable and the anonymity that shields abusers is limited.
Her case underscores a growing reality in the digital age: as AI content tools proliferate, lawmakers are scrambling to catch up, and victims of AI abuse are demanding that legal protections actually reflect the scale of harm people experience online.