
With the growth of artificial intelligence (AI) and technological advances, there are more ways to harm others. Deepfake media, a term that combines the phrases “deep learning” and “fake,” uses advanced machine learning algorithms to create highly realistic digitally altered images, videos, or audio recordings. These manipulated media employ facial mapping and voice synthesis techniques to swap one person’s face or voice with another’s, resulting in content that appears authentic but is entirely fabricated. 

Specifically, in deepfake pornography, AI and related computer technology are used to map an individual’s face onto sexually explicit videos or images. Initially, deepfake pornography primarily focused on celebrities, but it has now expanded to include non-consenting individuals such as acquaintances, friends, or classmates.

Deepfake pornography falls under the broader category of image-based sexual abuse, which encompasses creating and disseminating non-consensual intimate images. This form of synthetic porn has evolved into a commercial industry, with many websites catering to a vast network of paying members who consume and distribute digitally created content involving sexual abuse. The availability and prevalence of deepfake pornography have raised significant concerns regarding privacy, consent, and the potential for severe emotional and psychological harm to the victims. But how can victims of deepfake pornography seek justice?

The Growing Prevalence

The growing prevalence of deepfake pornography has raised significant concerns. While deepfake technology has diverse applications, it has predominantly been used to create non-consensual, sexually explicit, and harmful images, with women being the primary targets. A study revealed that a staggering 96% of deepfakes are pornographic. 

One of the major concerns associated with deepfake pornography is the misunderstanding of consent in online spaces. Online public figures, particularly women, are often treated as disconnected from, and dehumanized relative to, the real individuals behind them. This contributes to a lack of acknowledgment of the boundaries of consent, perpetuating the idea that their online presence somehow grants permission for their images to be manipulated and exploited.

Additionally, as the underlying AI technology advances and becomes more accessible, there has been a disconcerting trickle-down effect. Initially, deepfake porn predominantly targeted celebrities, streamers, and popular TikTok stars. However, with increasing accessibility, victimization has extended to any woman, regardless of fame or public status. This alarming trend highlights the urgency in addressing the legal implications of deepfake pornography and the dire need to protect individuals, particularly women, from its damaging consequences.

Potential Legal Actions and Their Pitfalls

Legal actions available to victims of deepfake pornography are varied, but they often encounter challenges due to the complex nature of the issue and existing legal frameworks. Some websites have taken limited steps to address deepfake content, such as Reddit banning the deepfakes subreddit and platforms like Discord, Pornhub, and Twitter implementing bans on deepfake videos. The legal landscape, however, remains complex. 

Under section 230 of the Communications Decency Act (CDA 230), websites are generally not liable for third-party content posted on their platforms. This means that platforms hosting deepfake content cannot be legally compelled to remove it, and any injunction obtained by a victim would typically only apply to the individual who shared the content rather than the platform itself. As a result, the quick removal of deepfake pornography is often challenging, and many non-consensual videos and images continue to circulate online.

Currently, no specific laws in Oregon directly address deepfake pornography, and the existing legal avenues are not adequate to tackle its complexities. One of the better legal approaches is pursuing a non-consensual dissemination claim, commonly called “revenge porn.” Prosecuting the creators of deepfake porn under a revenge porn theory will likely present challenges, since such claims typically require that viewers recognize the individual depicted in the deepfake. Proving harm to the victim, the individual whose face has been used, without evidence of viewer recognition can complicate cases under a revenge porn approach.

Another insufficient legal avenue is a defamation lawsuit. Since the person depicted in the image or video is not actually in it, a defamation claim might have some appeal for victims of deepfakes. However, pursuing defamation claims against international or anonymous content publishers triggers other challenges, including jurisdictional issues. Additionally, some argue that defamation laws are insufficient to tackle deepfakes because they are too narrow and ignore the core issue of consent.

Copyright-related claims are another existing legal route, especially if the deepfake content uses copyrighted images or videos. Civil claims such as misappropriation or copyright infringement might be viable options. However, applying existing laws to deepfake pornography cases requires creative interpretation and is not always straightforward due to the evolving nature of the issue.

While some states, such as California and Virginia, have passed legislation directly targeting deepfake pornography, no federal or uniform state law addresses this growing issue. Victims of deepfake pornography deserve comprehensive legal options to obtain justice. As AI and technology continue to advance, this issue will grow with them. Our laws should protect victims of non-consensual dissemination, even when the disseminated images are fabricated.

If you have been the victim of non-consensual pornography, contact our team of licensed, caring professionals today to learn about your legal rights. Call today for a free, confidential consultation at: 1-888-407-0224 or use our confidential submission form. We will treat you with dignity and respect. 

You are not alone. We are here to help.

Madeline Russell

Madeline is a law clerk at Crew Janci and a law student at Lewis & Clark Law School, where she will be entering her final year as a J.D. candidate this fall. As a law clerk, Madeline supports Crew Janci attorneys with legal research and writing largely regarding sex abuse cases. At Lewis & Clark, she is involved with Women’s Law Caucus and International Law Society while pursuing a certificate in International Law. Madeline hopes to pursue a legal career advocating for victims. In her free time, Madeline likes to read fiction, travel, and dance.