Non-consensual deepfakes disproportionately target women and girls

Synthetic child sexual abuse material should be treated as harm-producing even absent an identifiable victim.

By: Abarna Kamalakumaran

Across Europe, a new chapter in digital justice is unfolding as authorities take bold action against harmful AI systems and the platforms that host abusive content, holding powerful tech companies to account.

In France, prosecutors have raided the offices of X and summoned its owner, Elon Musk, as part of a widening criminal investigation into the platform’s alleged role in facilitating sexually explicit deepfakes, distributing child sexual abuse material, and enabling algorithmic abuses. It is a clear signal that no company is above the law when it comes to protecting citizens from exploitation and illegal content.

At the same time in Spain, the government has opened legal proceedings against X, alongside other major social networks, over the spread of AI-generated child sexual abuse imagery, and is pushing for tougher accountability measures to safeguard children’s dignity and rights online.

With these prosecutions, justice wins again, reinforcing that non-consensual sexual deepfakes, child exploitation and online grooming will be met with serious legal consequences, not impunity.