White House extracts voluntary commitments from AI vendors to combat deepfake nudes
Kyle Wiggers
8:14 AM PDT • September 12, 2024

The White House announced that several prominent AI vendors have pledged to take action against nonconsensual deepfakes and child sexual abuse material. Adobe, Cohere, Microsoft, Anthropic, and OpenAI, along with data provider Common Crawl, have committed to responsibly sourcing and safeguarding the datasets they use to train AI models, ensuring those datasets are free of image-based sexual abuse content.

These organizations have also vowed to build “feedback loops” and other strategies into their development processes to prevent their AI from generating sexual abuse imagery, and have agreed to remove nude images from their training datasets “when appropriate and depending on the purpose of the model.” The commitments are entirely voluntary and self-policed, however, and several AI vendors, including Midjourney and Stability AI, chose not to participate.

OpenAI’s pledge in particular raises questions, given that CEO Sam Altman said in May that the company would explore how to “responsibly” generate AI porn.

Despite these caveats, the White House framed the commitments as a meaningful step in its ongoing effort to curb the harm caused by deepfake nudes.

“These voluntary commitments represent a proactive approach by major AI providers to ensure their technologies are not misused, especially concerning image-based sexual abuse.”

© Singularity Chamber of Commerce (SChamber) All Rights Reserved.