A landmark federal law targeting the rise of non-consensual, explicit deepfakes is set to take effect, offering long-awaited justice and protections for victims. President Donald Trump is expected to sign the Take It Down Act during a White House ceremony on Monday, signaling a strong bipartisan response to the growing misuse of AI-generated sexual imagery.
The law criminalizes the online sharing of non-consensual, sexually explicit images — whether real or digitally manipulated — and mandates that tech platforms remove such content within 48 hours of notification. This includes deepfakes, where a person’s face is digitally placed on a nude body, a form of harassment that has impacted high-profile figures like Taylor Swift and Rep. Alexandria Ocasio-Cortez, as well as countless teen girls across the U.S.
Previously, federal protections only applied to explicit AI-generated images of minors, leaving adult victims vulnerable under a patchwork of state laws. The Take It Down Act addresses that gap and sets a nationwide standard, giving law enforcement clearer guidelines for prosecuting offenders.
The bill — introduced by Sen. Ted Cruz (R-TX) and Sen. Amy Klobuchar (D-MN) — passed Congress with overwhelming support. Major tech companies including Meta, Google, and TikTok, as well as over 100 advocacy organizations, backed the legislation.
One of the most vocal supporters is Elliston Berry, a Texas high schooler whose AI-manipulated image was circulated on Snapchat. “Every day I’ve had to live in fear of those images coming back,” she said. “Now, I can finally breathe knowing there will be consequences.”
First Lady Melania Trump also championed the bill, lobbying lawmakers and hosting Berry during the president’s March address to Congress.
While tech platforms like Google, Snapchat, and Meta already offer tools for victims to request removal of such content, enforcement has been inconsistent. The law strengthens accountability by making removal a legal requirement rather than a voluntary policy, and it complements services like StopNCII.org and NCMEC's Take It Down, which help users remove harmful images across multiple platforms.
Despite industry improvements, harmful AI services continue to spread, and some platforms remain unregulated. The new law aims to close that enforcement gap.
“Non-consensual deepfakes are a clear harm with zero societal benefit,” said Ilana Beller of advocacy group Public Citizen. “This law sends a powerful message — that this kind of abuse is not acceptable and will not be tolerated.”
As AI technology evolves, the Take It Down Act stands as one of the first major U.S. federal laws addressing its darker consequences, particularly in protecting women and teens from digital sexual exploitation.