Privacy advocates may be breathing a sigh of relief. While the world grapples with an unprecedented pandemic, wearing a mask to prevent the spread of COVID-19 has had an unforeseen side effect: masks can hamper the effectiveness of facial recognition technology, which had begun to permeate nearly every aspect of our society.
Facial recognition technology creates a template of a person’s face by measuring precise characteristics, such as the distance between the eyes. The template is then compared against templates already stored in a database to find a possible match. According to experts, putting on a mask blocks access to the part of our biometrics that most uniquely sets us apart from others: the central portion of the face, from just above the brow line down to the chin.
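To make the template-and-compare idea concrete, here is a minimal sketch using the open-source face_recognition library (the image file names are placeholders assumed for illustration); commercial systems rely on proprietary models, but the principle is the same: extract a numeric template from each face, then measure the distance between templates.

```python
# A minimal sketch of template-based face matching, assuming the open-source
# `face_recognition` library (pip install face_recognition). Real systems use
# proprietary pipelines, but the template-and-compare principle is the same.
import face_recognition

# Step 1: build a numeric "template" (a 128-dimensional encoding) of each face.
# The file names here are placeholders.
enrolled_image = face_recognition.load_image_file("enrolled_photo.jpg")
probe_image = face_recognition.load_image_file("new_photo.jpg")

enrolled_encodings = face_recognition.face_encodings(enrolled_image)
probe_encodings = face_recognition.face_encodings(probe_image)

# Step 2: compare templates; a smaller distance means a likelier match.
if enrolled_encodings and probe_encodings:
    distance = face_recognition.face_distance(
        [enrolled_encodings[0]], probe_encodings[0]
    )[0]
    # 0.6 is the library's default decision threshold.
    print(f"distance = {distance:.3f} -> {'match' if distance < 0.6 else 'no match'}")
else:
    # A mask covering the nose and mouth can defeat face detection entirely,
    # in which case no template is produced at all.
    print("no face detected in at least one image")
```

As the fallback branch suggests, a mask does not merely degrade the comparison step; it can prevent the system from extracting a template in the first place.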
Still, this relief for privacy advocates may only be momentary, as companies are devising ways to identify individuals even when they are masked. For example, researchers published the COVID19 Mask Image Dataset to GitHub to help artificial intelligence adapt. The dataset was created by scouring Instagram for publicly shared selfies tagged “mask”. The dataset’s creators assert that because the selfies were public, they did not need permission to use the photos.
This is not the first time a facial recognition company has asserted that building a database from publicly available photos does not violate privacy rights. Previously, I wrote about the privacy implications of the Toronto Police using Clearview AI, a facial recognition application that has scraped over 3 billion images of people from social media websites such as Facebook, Twitter, and Venmo. By uploading a photo of a person into the database, you can find other identifying information, such as their name, address, and occupation.
As a result of a series of journalistic investigations, Clearview AI is facing a class-action lawsuit. Residents of Illinois allege the company violated the Illinois Biometric Information Privacy Act (BIPA), legislation that requires entities to obtain consent before collecting and storing biometric information. BIPA has been somewhat successful in protecting consumers’ facial and biometric data. In January of this year, Facebook agreed to pay $550 million to settle a class-action lawsuit over its use of facial recognition technology. Claimants in Illinois alleged that Facebook collected facial recognition data from images for its “Tag Suggestions” feature without providing adequate disclosure.
In response to the Illinois class action, and perhaps because of Facebook’s settlement, Clearview AI stated in a legal filing that it had terminated its relationships with non-law-enforcement entities. However, at the heart of the discussion around Clearview AI and other facial recognition technologies is the idea that publicly available images are up for grabs.
When a person makes their picture public, do they retain some expectation of privacy, or do they forgo all expectations of privacy? Should we consider the intention behind allowing the public to view a picture? Privacy lawyers suggest that while people may not expect complete privacy, they do have certain expectations about how their images may be used. They may not consent to their picture aiding the development of a database used by local law enforcement, or for government surveillance in countries known for committing human rights abuses.
Realistically, in today’s society, it isn’t easy to make every picture of your face private. Even if you make your account private, keeping your profile picture visible to the general public makes it easier for friends and acquaintances to identify you. And building a brand, whether for an individual or a company, necessitates strategically sharing information on social media.
From a legal perspective, Canadian courts have not yet had to answer whether an individual forgoes every right to privacy once they make a picture publicly available. A 2018 Ontario Court of Justice decision held that an individual does not have a reasonable expectation of privacy in the information they submit when applying for a driver’s licence. In that case, the police had run facial recognition technology against the Ministry of Transportation of Ontario database to find an individual they believed to be the holder of fraudulent licences.
Last year, the Supreme Court of Canada clarified in the case of R v Jarvis that privacy is “not an all or nothing” right. Simply because an individual is in public does not automatically negate all expectations of privacy. It is possible that, in the future, a similar perspective will guide the analysis of online pictures.
So, what does this mean for the everyday person running their errands in a mask? Do they need to worry about these companies collecting images even if the results are inaccurate? The answer is likely yes. The inaccuracy of facial recognition technology is well documented, yet many sectors of society rely on the tool. Testing of algorithms on mask-wearing faces has been delayed, and companies are unlikely to stop trying to adapt. So, for the time being, you may want to keep your mask selfie private, or not share it at all.
Nikita Munjal is an IPilogue Editor, Clinic Fellow with the Innovation Clinic, and a JD/MBA Candidate at Osgoode Hall Law School.