Shannon Flynn is a Guest Writer and the Managing Editor of Rehack Magazine
Although facial recognition may have begun as a useful tool for the masses, as with many things, it has become something that can be used against them. When paired with artificial intelligence, facial recognition software can sort through millions of photos to identify a single face or even a fragment of one.
The problem lies in the sourcing of these photos. Why is AI-driven facial recognition problematic, and what regulations and restrictions are in place in Canada to prevent its abuse?
The Problem With Clearview
Clearview is probably one of the best-known facial recognition programs in the world. Its AI is designed to help detect and prevent crime. By itself, this doesn't sound like a bad thing. Ideally, AI programs can sort through many times the data a human investigator could manage, finding connections and identifying people even when they only have partial images to work with.
The problem does not lie in the algorithm itself, but rather in where it sources the images it sorts through. Clearview’s AI crawls the internet and can access, download, and store any image uploaded to social media. That means Clearview considers anything posted on Facebook, Twitter, Instagram, or other sites to be fair game. The company has also been accused of using photos from users’ Flickr albums to train the AI’s algorithm.
Several major technology companies, including Google, Facebook, and Twitter, have accused Clearview of using their users' images without authorization. It is important to note that the authorization at issue here is not the individual user's, but the platform's own. Instagram's terms of service, for example, grant the platform a non-exclusive, limited license to use anything individuals post on the site, but that does not allow AI programs like Clearview to swoop in and take what they need.
Even under the best circumstances, allowing a program like Clearview to sort through social media imagery — even in public posts — could be considered a violation of privacy. The average user should not have to worry that corporations or government entities are watching everything they post online. Indeed, there is an emerging legal precedent in favor of stronger consumer protections where data gathering is concerned.
Regulations and Restrictions
In June 2021, the Office of the Privacy Commissioner of Canada (OPC) submitted a special report to Parliament about the Royal Canadian Mounted Police (RCMP) and its use of facial recognition technology. Again, Clearview AI was in the crosshairs for improper use of private user data scraped from various social media sites across the internet.
Billions of people, in Canada and around the world, have suddenly found themselves in a "24/7 police lineup," as the report states, without even the courtesy of due process.
As a result of this report, new policy guidelines have been drafted that clarify when and where the use of facial recognition technologies is appropriate. These guidelines focus on four key points: accuracy, data minimization, accountability, and transparency. Accuracy is one of the biggest concerns because AI-powered facial recognition tends to be far less accurate than human investigators completing the same task. Law enforcement officials shouldn't take any matches discovered by facial recognition at face value, and should always double-check the results before making an arrest or pursuing legal action.
Data minimization ensures large swaths of the population are not included in a search. It also helps limit the damage if a data breach occurs, an event that grows more common every year. Accountability is essential so everyone involved knows what data is being collected and how. Accountability also encompasses information security.
Finally, transparency helps keep innocent people out of digital lineups simply because they share demographic traits with a suspect.
Looking Toward the Future
It may seem as if an individual's information is fair game because it's available in a public post, but this is not the case. Facial recognition technology can be valuable for preventing and detecting crime, but only if those in power are not allowed to abuse it.
The new policy guidelines being embraced in Canada are just one piece of the puzzle. Every government that utilizes facial recognition should follow suit, adopting key principles like accountability, transparency, accuracy, and data minimization to ensure the technology is used properly.
The fine line between law enforcement and tyranny should not be crossed, regardless of how easily one could click a button and find the "bad guy."