X’s Grok AI is at the center of a global backlash after reports that it generated non-consensual deepfakes of individuals.
Several countries and regulators are taking action, demanding accountability and tighter controls over AI-generated deepfake content.
Critics argue that Grok lacks adequate consent safeguards, allowing people's images and likenesses to be misused.
Some regions have initiated investigations or legal actions targeting X and its moderation policies.
The scandal is fueling broader global concerns about AI ethics, privacy, and accountability in generative models.
Experts are urging platforms to adopt stronger filters, verification systems, and user consent mechanisms to prevent misuse.