The AI-generated images were reportedly viewed more than 47 million times before the account that posted them on X was suspended
The sexually explicit AI-generated images of pop singer Taylor Swift that went viral on social media platforms earlier this week are “worrying,” White House press secretary Karine Jean-Pierre told reporters during a news conference on Friday, promising that measures against non-consensual pornography are forthcoming.
“We will do what we can to deal with this issue,” Jean-Pierre said, adding that Congress must pass legislation and social media platforms must crack down on the sharing and dissemination of such images.
“While social media companies make their own independent decisions about content moderation, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual intimate images of real people,” she said.
One photo posted on X was viewed more than 47 million times before the account was suspended, according to the New York Times. X said it was working to remove the images and suspended several accounts that posted them. The platform's terms of service prohibit the sharing of AI-generated images of real people, pornographic or otherwise.
A search for “Taylor Swift” on X returned an error on Saturday afternoon. One Swift fan on the platform said the search term was “forbidden.” However, it was still possible to include “Taylor Swift” in a search as long as another word or words came first, such as “protect” or “AI-generated.”
Some of the images were originally posted on Thursday in a Telegram group dedicated to “sexual images of women created by artificial intelligence without their consent,” according to the technology blog 404 Media. Others had circulated on 4chan and other forums for weeks before the move to mainstream social media saw them spread widely.
Many of the images were created using Microsoft's Designer, a commercially available AI text-to-image generator. The Telegram group explains to newcomers how to circumvent Microsoft's safeguards against celebrity deepfakes and pornography, detailing how, while the software will not generate an image in response to the prompt “Taylor Swift,” it will comply with “singer Taylor Swift.”
Microsoft told the blog it was “investigating these reports and… taking appropriate action to address them,” pointing out that its terms of service prohibit using the software to create “adult or non-consensual intimate content.”
US lawmakers introduced a bill in the House of Representatives earlier this month aimed at federally regulating the use of artificial intelligence in audio and video deepfakes. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act) is reportedly based on a similar Senate bill, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act), which would allow celebrities to sue anyone who creates “digital replicas” of their image or voice without permission.