Recent news reports revealed that, since July 2020, non-consensual fake nude photos of more than 100,000 women have been generated, and most of those women likely have no idea the photos exist. The concern is not “who” is responsible for making these images, but “what”: a bot operating on the messaging app Telegram.
Deepfake detection company Sensity researched the impact of this artificial intelligence (AI) tool, which “removes” clothing from non-nude photos and synthesizes female body parts onto the final image. Sensity published its findings to urge the services hosting this content to take action. The term “deepfake” (a combination of “deep learning” and “fake”) covers videos, images, and audio files generated or altered with the help of AI and meant to deceive an audience about their authenticity. Originating in 2017 with an anonymous user on Reddit, deepfakes have a history of being used to create fake pornographic images, initially of celebrities. The malicious and benign uses of deepfakes are well documented; what has changed is how easy it is to access the code needed to create and use these AI tools: “These images require no technical knowledge to create. The process is automated and can be used by anyone – it’s as simple as uploading an image to any messaging service.”
So, what can be done about a technology that has run rampant, despite attempts to rein it in?
There are a variety of stakeholders who can take steps to help.
- Terms and Conditions of Service Providers
- Telegram’s Terms of Service state that no one should “post illegal pornographic content on publicly viewable Telegram channels, bots, etc.” Telegram will process takedown requests for this illegal content as they are submitted.
- The open-source code from DeepNude, a similar technology exposed as an “invasion of sexual privacy” back in 2019, existed on GitHub even after the app was taken down. When asked about this, GitHub explained that it does not moderate user-uploaded content unless it receives complaints.
- Having service providers place the onus on users to report policy violations is not an effective way to fight pornographic deepfakes, since most of the women depicted likely do not know the images exist. There needs to be a clearer industry standard setting out service providers’ responsibility to verify the authenticity of content shared on their platforms. One route is industry-wide adoption of the technology solutions described below, which can automatically detect deepfakes.
- The Technology Solution
- The same AI technology that is used to make deepfakes could potentially be used to identify them. Research teams are investigating how to automatically detect deepfakes, including the US Defense Advanced Research Projects Agency (DARPA)’s Media Forensics (MediFor) program and the University at Albany.
- The IPilogue previously featured expertise from lawyer Maya Medeiros on creating a legal framework for AI. In relation to deepfakes, she wrote about Canadian company Dessa, which built a detector to help identify audio deepfakes.
- Larger companies, like Microsoft, have also announced tools that can help authenticate videos. A simplified sketch of how such detection tools are commonly framed appears after this list.
- Increasing Media Literacy
- It is also necessary to educate social media users on how to spot fake media, since “studies have shown that disinformation travels faster and further among online communities simply because it is fake.” Siwei Lyu from the Computer Vision and Machine Learning Lab at the University at Albany shared that “user education and media literacy is the biggest issue here [since] a lot of people can be fooled by these fake videos because they simply don’t know that videos can be manufactured and manipulated in this way.”
- Legislative Reform
- Bill C-13, the Protecting Canadians from Online Crime Act, amended the Canadian Criminal Code in 2015, making it an offence under s 162.1 to share non-consensual pornography (also known as “revenge porn”: the explicit portrayal of an individual without their consent). It remains to be seen whether this offence could be applied to images falsified through deepfake technology. Ontario courts have recognized “public disclosure of private facts” as a common law tort, which is viewed as a parallel branch of the “intrusion upon seclusion” privacy tort. This tort may be used in cases like Doe 464533 v ND, 2016 ONSC 541, where revenge porn is shared between former intimate partners. In relation to deepfakes, privacy torts and privacy legislation, such as the Personal Information Protection and Electronic Documents Act, may not be helpful since the images are not true depictions of an individual and do not technically reveal personal information about an identifiable individual. However, the Civil Code of Québec contemplates that it is an invasion of privacy to use a person’s “name, image, likeness or voice for a purpose other than the legitimate information of the public”, which may fill this gap at the provincial level.
- Ultimately, it will be important to watch other jurisdictions to see if there is an opportunity to target deepfakes more narrowly in legislation. For example, pending U.S. legislation, the Deepfakes Accountability Act, may serve as a model for how countries can address the harms of deepfakes. Unlike the existing avenues in Canada, the bill proposes labelling fake content with a watermark or a statement disclosing that it has been altered, and it attaches clear criminal and civil penalties to a failure to make that disclosure. It also places an onus on software manufacturers who believe their products could be used to produce deepfakes: they must ensure the software allows such disclosures to be made and update their terms of use to include this obligation. The proposed legislation also contemplates creating a task force to research and develop deepfake detection technologies.
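To make the “technology solution” above more concrete, below is a minimal sketch of how deepfake detection is commonly framed: a neural network trained as a binary classifier that labels an image as authentic or synthesized. This is an illustration only, not the method used by Sensity, Dessa, DARPA, or Microsoft; the pretrained ResNet backbone, the preprocessing steps, and the file name are assumptions chosen for demonstration, and a real detector would need to be trained on large labelled sets of authentic and manipulated media before its predictions meant anything.

```python
# Minimal sketch (assumptions only) of a deepfake image detector framed as a
# binary classifier. Requires PyTorch, torchvision, and Pillow.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Start from a generic pretrained image backbone and replace the final layer
# with a two-class head (authentic vs. synthesized). Note: this head is
# untrained here, so its outputs are meaningless until the model is trained
# on labelled examples of real and fake images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

# Standard preprocessing for the backbone above.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image_path: str) -> str:
    """Label a single image; illustrates the input/output shape of the approach."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)                   # shape: (1, 2)
        prob_fake = torch.softmax(logits, dim=1)[0, 1].item()
    return "likely synthesized" if prob_fake > 0.5 else "likely authentic"

# Example usage (hypothetical file name):
# print(classify("suspect_photo.jpg"))
```

In practice, the scaffolding above is the easy part; the difficulty lies in assembling representative training data and keeping detectors current as generation techniques evolve, which is why the research programs mentioned earlier treat detection as an ongoing effort rather than a solved problem.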
None of these options can operate as a single solution. Detection technologies cannot blunt the impact of fake media the way human awareness can, and legislative reform cannot work in a silo without collaboration from service providers. The most effective response will draw on multidisciplinary expertise: service providers investing in automated tools to monitor their content, which also helps them avoid new penalties designed to hold platforms and the creators of deepfake generators accountable, while governments and social media platforms share material that improves media literacy.
Written by Summer Lewis, a third-year JD candidate at Osgoode Hall Law School, enrolled in Professors D’Agostino and Vaver’s 2020/2021 IP & Technology Law Intensive Program. As part of the course requirements, students were asked to write a blog post on a topic of their choice.