Taylor Swift deepfake pornography controversy
In late January 2024, sexually explicit AI-generated deepfake images of American musician Taylor Swift proliferated on the social media platforms 4chan and X (formerly Twitter). Several artificial images of Swift of a sexual or violent nature spread quickly, with one post reportedly viewed over 47 million times before its eventual removal. The images led Microsoft to enhance Microsoft Designer's text-to-image model to prevent future abuse. They also prompted responses from anti-sexual assault advocacy groups, US politicians, Swifties, and Microsoft CEO Satya Nadella, among others, and it has been suggested that Swift's influence could result in new legislation regarding the creation of deepfake pornography.
A similar controversy emerged in August 2025, when The Verge reported that the AI image and video tool Grok Imagine had generated sexually explicit images and videos of Swift from an otherwise innocuous text prompt.