FOLLOWING the surge of artificial intelligence (AI)-generated images of Taylor Swift on X (formerly Twitter) on Jan 24, the Elon Musk-owned social media platform began clamping down on the graphic content within three days.
Initially, X’s safety team began “actively removing” all the images, and by Jan 27, the platform had blocked attempts to find them through its search function, with any search for “Taylor Swift” returning an error message.
However, Variety reports that other variations, such as “Taylor AI Swift”, would still produce search results. “This is a temporary action and done with an abundance of caution as we prioritise safety on this issue,” X business operations head Joe Benarroch said in a statement to the BBC.
Underscoring the images’ alarming reach, NBC News reported that they had generated more than 27 million views in 19 hours before the account that originally posted them was suspended.
The episode also prompted X’s safety team to reiterate the platform’s stance on “non-consensual nudity”. “Posting non-consensual nudity images is strictly prohibited on X and we have a zero-tolerance policy towards such content,” the team posted.
According to the Daily Mail, a source close to Swift claims the pop star is considering legal action.