Four of Canada’s privacy commissioners have launched a joint investigation into whether a U.S. company’s facial recognition technology, which scrapes images from the internet for comparative purposes, violates privacy laws here.
The federal privacy commissioner and the commissioners in Alberta, British Columbia and Quebec said Friday they are investigating whether Clearview AI is collecting and using personal information without consent.
“Media reports have stated that Clearview AI is using its technology to collect images and make facial recognition available to law enforcement for the purposes of identifying individuals,” the commissioners said in a joint statement. “The company has also claimed to be providing its services to financial institutions.”
UPDATE: In a statement, Tor Ekeland, attorney for Clearview AI, said: “Clearview only accesses publicly available data from the public internet. It is strictly an after-the-fact investigative tool for law enforcement, and is used to solve crimes including murder, rape and child exploitation. We’ve received the letter and look forward to a productive dialogue with Canadian officials.”
It isn’t clear how widespread the use of Clearview is in Canada. A unit of the Toronto police had been using it until last week, apparently without the chief’s knowledge; when its use was discovered, the chief ordered the activity stopped. According to the Toronto Star, Durham, Peel and Halton Regional Police have also tested Clearview AI.
Aside from the controversy over the accuracy of facial recognition software and evidence suggesting it’s less accurate against images of people of colour, there is the question of where application makers get the base of images for comparison and learning. On its website Clearview says it “searches the open web” for images. “Clearview does not and cannot search any private or protected info, including in your private social media accounts.”
However, Clearview AI claims its solution has an advantage because it has copied more than 3 billion images from the internet, including from social media platforms like Facebook, Instagram, Twitter and YouTube. Police forces use Clearview AI to compare images of unknown people — usually suspects — to this database for identification.
In addition to questions about the legality of photo scraping for a commercial entity, Clearview AI apparently keeps images in its database even if someone deletes their image from a website.
Since news broke about its strategy, Twitter, Google and Facebook have issued cease-and-desist letters to the company in the U.S. In response, on January 27 Clearview AI published what it calls a code of conduct.
“Clearview AI’s search engine is available only for law enforcement agencies and select security professionals to use as an investigative tool, and its results contain only public information,” it says. “Nonetheless, we recognize that powerful tools always have the potential to be abused, regardless of who is using them, and we take the threat very seriously. Accordingly, the Clearview app has built-in safeguards to ensure these trained professionals only use it for its intended purpose: to help identify the perpetrators and victims of crimes.
“Moreover, Clearview’s User Code of Conduct mandates that investigators use our technology in a safe and ethical manner, and for legitimate law enforcement and security purposes only. They are required to obtain permission from a supervisor at their organizations before creating their Clearview accounts, and may only use the app for purposes authorized by their supervisors. We strictly enforce our Code of Conduct, and suspend or terminate users who violate it.”
The site also says that Clearview “is an after-the-fact research tool. Clearview is not a surveillance system and is not built like one. For example, analysts upload images from crime scenes and compare them to publicly available images.”
The three provinces that are part of the investigation have their own privacy legislation. The remaining provinces are covered by the federal Personal Information Protection and Electronic Documents Act (PIPEDA).
In 2018 the four commissioners issued guidance to the private sector on obtaining meaningful consent from consumers and customers for the collection and use of personal data. One section says “if there is a use or disclosure a user would not reasonably expect to be occurring, such as certain sharing of information with a third party, the downloading of photos or contact lists, or the tracking of location, express consent would likely be required.”
In January, the federal privacy commissioner also issued guidance to marketers on the scraping of email addresses. That guidance doesn’t deal with photos, but the advice may indicate which way the commissioners will lean.
“With very limited exceptions, PIPEDA prohibits address harvesting. This prohibition is highly relevant to organizations of all sizes in all sectors,” the report says. “If an organization engages in address harvesting or obtains and uses a list that has been compiled through address harvesting, it runs a real risk of being in contravention of the obligation to obtain meaningful consent under PIPEDA.”
However, in an interview, Toronto privacy lawyer Barry Sookman of the McCarthy Tetrault firm noted that the guidance may not be instructive here because PIPEDA has a specific section on email address harvesting. He also noted that law enforcement agencies have certain exemptions from PIPEDA’s data collection rules.
Sookman also pointed out that Clearview AI is headquartered in the U.S., where it presumably did its photo scraping. Canadian privacy law could apply only if the company collected images of Canadians.
Few countries have laws governing the use of facial recognition technology. Sookman noted that the CEO of Alphabet (parent company of Google) wrote an opinion piece for the Financial Times calling for the regulation of facial recognition, and in 2018 Microsoft president Brad Smith did the same.