Recognizing the Threat of AI-Generated Intimate Content

While freedom of sexual expression is important for relationship intimacy as well as body image, it is critical that adults depicted in intimate content have consented to its creation and/or sharing. Non-consensual creation or sharing of intimate content, or the threat of either, is called image-based sexual abuse. The increasing capabilities of generative AI, and broadening access to those capabilities, make it easier to create synthetic intimate content of others without their consent. Left unchecked, this threat can normalize the sexual abuse of children, undermine internet safety, and make it harder to identify and protect real victims.

The Cyber Civil Rights Initiative explains that “Image-based sexual abuse (IBSA) can inflict serious, immediate, and often irreparable harm on victims and survivors, including mental, physical, financial, academic, social, and reputational harm. Targets of IBSA may be threatened with physical and sexual assault; stalked online and at their homes and workplaces; and harassed both online and offline. They are subject to financial risks as career opportunities diminish due to missed workdays, job termination, or lost opportunities for new employment or promotions. They may also suffer health burdens such as anxiety, depression, and suicidal ideation.”

President Biden’s October 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence specifically calls for “preventing generative AI from producing … non-consensual intimate imagery of real individuals” in Section 4.5(a)(iv). The following month, Dr. Elissa Redmiles served as an expert at a White House roundtable that brought together experts from the U.S. and U.K., global civil society advocates, survivors, and researchers to offer insights into next steps for preventing AI-facilitated image-based sexual abuse.

Dr. Redmiles highlighted three key open questions in this space, developed in collaboration with two other PRISM members, Yoshi Kohno (UW) and Lucy Qin (Brown University): (a) How can we block content misuse? (b) How can we detect abuse? (c) How can we deter perpetration? Dr. Redmiles, Dr. Kohno, and Dr. Qin, along with their coauthors, recently highlighted these and additional challenges in preventing synthetic non-consensual intimate imagery (NCII) in a public comment responding to a request for information from the U.S. National Institute of Standards and Technology (NIST) regarding NIST’s assignment to carry out responsibilities under Section 4.5 of the executive order.

Dr. Redmiles’s remarks at the roundtable and in the comment to NIST highlight the need for technical mechanisms that could prevent the collection and misuse of content posted online. Much as physical copy machines recognize secure watermarks and refuse to make unauthorized copies, such mechanisms could include image-protection (“data poisoning”) tools like Glaze and PhotoGuard, which perturb images so that AI systems cannot effectively use the protected content as input.
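
To make the idea concrete, the sketch below illustrates (in rough form) the principle behind such protection tools: an imperceptibly small adversarial perturbation is added to an image so that a generative model’s encoder can no longer represent it faithfully. This is an illustrative toy example only, not the actual Glaze or PhotoGuard code; ToyEncoder is a hypothetical stand-in for a real model’s image encoder.

```python
# Toy sketch of "immunizing" an image against AI reuse via an adversarial
# perturbation. NOT the real Glaze/PhotoGuard implementation; ToyEncoder is a
# hypothetical stand-in for a generative model's image encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for a generative model's image encoder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def immunize(image, encoder, epsilon=8 / 255, steps=50, step_size=1 / 255):
    """Find a small perturbation that pushes the image's latent representation
    away from the original latent, degrading downstream AI use of the image."""
    original_latent = encoder(image).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Distance between the perturbed image's latent and the original latent.
        distance = F.mse_loss(encoder(image + delta), original_latent)
        distance.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()  # ascend: increase latent distance
            delta.clamp_(-epsilon, epsilon)         # keep the change visually imperceptible
            delta.grad = None
    return (image + delta).detach().clamp(0, 1)

if __name__ == "__main__":
    encoder = ToyEncoder().eval()
    img = torch.rand(1, 3, 64, 64)          # placeholder image tensor in [0, 1]
    protected = immunize(img, encoder)
    print("max pixel change:", (protected - img).abs().max().item())
```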

A major difficulty in detecting NCII is distinguishing acceptable content from malicious content. Dr. Redmiles has raised a key question in this space: is it even possible for AI to produce generic intimate content that does not infringe on the likeness of any real person, or does all AI-generated intimate content so closely resemble a real person’s likeness that it constitutes NCII? Empirical research is needed to estimate what portion of generated images of unclothed humans could be mistaken for a real person who would be harmed by the creation of that content.
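
One way such empirical research might begin to quantify resemblance is to compare face embeddings of generated images against a reference set of real people’s faces. The sketch below is a hypothetical measurement pipeline, not a method drawn from the NIST comment; it assumes the open-source face_recognition package and local image directories (reference_real_faces/ and generated_images/) that exist only for illustration.

```python
# Rough sketch: estimate how often generated faces fall within "same person"
# distance of a reference set of real faces. Hypothetical pipeline and paths.
import glob
import face_recognition

MATCH_THRESHOLD = 0.6  # commonly used face_recognition distance for "same person"

def load_encodings(pattern):
    """Return one face encoding per image that contains a detectable face."""
    encodings = []
    for path in glob.glob(pattern):
        faces = face_recognition.face_encodings(face_recognition.load_image_file(path))
        if faces:
            encodings.append(faces[0])
    return encodings

real_faces = load_encodings("reference_real_faces/*.jpg")   # hypothetical reference set
generated = load_encodings("generated_images/*.jpg")        # hypothetical generated set

# Count generated faces whose nearest real reference face is within the threshold.
matches = sum(
    1 for g in generated
    if min(face_recognition.face_distance(real_faces, g)) < MATCH_THRESHOLD
)
print(f"{matches}/{len(generated)} generated faces resemble someone in the reference set")
```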

Moving forward in preventing NCII will require more than technical innovation: social norms about the acceptability of creating synthetic NCII, and of seeking it out, will need to change. Techniques to attribute specific AI outputs to their creators, together with messaging that dissuades both creators and viewers from abusive behavior, will be needed to establish the norm that creating and viewing intimate content of others without their consent is not acceptable.
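
As a toy illustration of output attribution, the sketch below embeds a creator or model identifier into an image’s least-significant bits and reads it back. Deployed attribution schemes rely on far more robust techniques (statistical watermarks, signed provenance metadata); this example, including its hypothetical identifier string, only conveys the basic idea of tying an output to its origin.

```python
# Toy least-significant-bit "watermark" for attribution. Real attribution
# systems are far more robust; this only illustrates the concept.
import numpy as np

def embed_id(image: np.ndarray, identifier: str) -> np.ndarray:
    """Write the identifier's bits into the lowest bit of the first pixel values."""
    bits = np.unpackbits(np.frombuffer(identifier.encode(), dtype=np.uint8))
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the lowest bit
    return flat.reshape(image.shape)

def extract_id(image: np.ndarray, length: int) -> str:
    """Recover a `length`-byte identifier from the least-significant bits."""
    bits = image.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode()

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    tag = "model-xyz:user-123"                 # hypothetical identifier
    tagged = embed_id(img, tag)
    print(extract_id(tagged, len(tag)))        # prints "model-xyz:user-123"
```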

The challenges Dr. Redmiles presented build on broader PRISM work focused on other forms of image-based sexual abuse, including a recent white paper written with a marginalized and vulnerable (M&V) community group that is frequently subject to this form of technology-facilitated abuse. “The report, which relies on interviews with more than 50 adults who have created or shared intimate images, combines an understanding of how users share sensitive images via tech platforms with the technical expertise needed to create systems that prevent abuse” [Georgetown College article: https://college.georgetown.edu/news-story/redmiles-report/]. Dr. Redmiles is engaged in several ongoing projects in this space, including with PRISM members Kohno and Butler. Dr. Redmiles also recently co-authored a technical blueprint outlining 10 research directions for computer scientists to tackle digital intimacy issues that disproportionately harm M&V populations: https://ieeexplore.ieee.org/document/10313957, https://elissaredmiles.com/research/sexworkSPMag2024.pdf

Dr. Redmiles works with the Center for Privacy and Security of Marginalized and Vulnerable Populations (PRISM), a National Science Foundation project that aims to transform how the security community addresses the specific cybersecurity needs of M&V populations by developing tools and methods that center those needs at the core of cybersecurity research and technology design.