X-Ray Vision: How the Latest Deepfake Privacy Invasion Is Revealing the Dark Side of AI Technology

Recent news reports revealed that since July 2020, non-consensual fake nude photos of more than 100,000 women have been generated, and very few of these women likely have any idea the photos exist. The concern is not “who” is responsible for making these images, but “what”: a bot operating on the messaging app Telegram.

Deepfake detection company Sensity researched the impact of this artificial intelligence (AI) tool, which “removes” clothing from non-nude photos and synthesizes female body parts onto the final image. Sensity published its findings to urge the services hosting this content to take action.

The term “deepfake” (a combination of “deep learning” and “fake”) covers videos, images, and audio files generated or altered with the help of AI, with the intent of deceiving an audience about their authenticity. Originating in 2017 with an anonymous Reddit user, deepfakes have from the start been used to create fake pornographic images, initially of celebrities. The malicious and benign uses of deepfakes are well documented, but what has changed is the ease of access to the code needed to create and use these tools. “These images require no technical knowledge to create. The process is automated and can be used by anyone – it’s as simple as uploading an image to any messaging service.”

So, what can be done about a technology that has run rampant, despite attempts to rein it in?

There are a variety of stakeholders who can take steps to help.

  • Terms and Conditions of Service Providers
    • Telegram’s Terms of Service state that no one should “post illegal pornographic content on publicly viewable Telegram channels, bots, etc.” Telegram will process takedown requests for this illegal content as they are submitted.
    • The open-source code behind DeepNude, a similar technology exposed as an “invasion of sexual privacy” back in 2019, remained on GitHub even after the app itself was taken down. When asked about this, GitHub explained that it does not moderate user-uploaded content unless it receives complaints.
    • Having service providers place the onus on users to report violations of their policies is not an effective way to fight pornographic deepfakes, given that most of the women targeted likely do not know the images exist. A better industry standard is needed, one that clarifies the responsibility service providers have to verify the authenticity of content shared on their platforms. This could be supported by industry-wide adoption of technology that automatically detects deepfakes, as sketched below.

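On the technical side, such automated screening could sit inside a platform’s upload pipeline: rather than waiting for victims to file complaints, the service runs every submitted image through a deepfake classifier before it becomes publicly visible. The sketch below is purely illustrative and assumes a hypothetical detector; `screen_upload`, the `detector` callable, and the thresholds are placeholders, not any platform’s actual moderation API.

```python
# Minimal sketch (hypothetical): gate image uploads through an automated
# deepfake detector before they become publicly visible. The detector itself
# is a stand-in; no real model or platform API is assumed.

from dataclasses import dataclass
from typing import Callable


@dataclass
class ScreeningResult:
    synthetic_score: float  # 0.0 = likely authentic, 1.0 = likely synthetic
    action: str             # "publish", "hold_for_review", or "block"


def screen_upload(
    image_bytes: bytes,
    detector: Callable[[bytes], float],
    review_threshold: float = 0.5,
    block_threshold: float = 0.9,
) -> ScreeningResult:
    """Score an uploaded image with a deepfake classifier and decide its fate."""
    score = detector(image_bytes)  # assumed to return a probability in [0, 1]
    if score >= block_threshold:
        action = "block"            # near-certain synthetic content: reject outright
    elif score >= review_threshold:
        action = "hold_for_review"  # ambiguous: queue for human moderation
    else:
        action = "publish"
    return ScreeningResult(synthetic_score=score, action=action)


if __name__ == "__main__":
    # Stand-in detector for demonstration only; a real deployment would call
    # a trained model (e.g. a classifier fine-tuned on known synthetic imagery).
    def fake_detector(image: bytes) -> float:
        return 0.73

    print(screen_upload(b"<image bytes>", fake_detector))
```

The two thresholds reflect a moderation policy rather than an engineering constraint: clear-cut synthetic content is blocked outright, while ambiguous scores are held for human review instead of being published by default.
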
None of these options works as a single solution. Detection technologies cannot blunt the impact of fake media the way human awareness can, and legislative reform cannot work in a silo without collaboration from service providers. The most effective response will draw on multidisciplinary expertise: service providers investing in automated tools to monitor their content (which can also help them avoid new penalties that hold both platforms and the creators of deepfake generators accountable), while governments and social media platforms share content that improves media literacy.

Written by Summer Lewis, a third-year JD candidate at Osgoode Hall Law School, enrolled in Professors D’Agostino and Vaver’s 2020/2021 IP & Technology Law Intensive Program. As part of the course requirements, students were asked to write a blog post on a topic of their choice.