Canada should stop using facial recognition at border crossings, says legal clinic

A Canadian technology law clinic has called for a moratorium on the use of facial recognition systems at the country’s borders, as well as a publicly transparent reassessment of existing systems in Canada.

The report, released Wednesday by the University of Ottawa’s Canadian Internet Policy and Public Interest Clinic, says facial recognition carries “significant intrusive potential that threatens anonymity, substantive equality, privacy and human rights more broadly.”

While the technology has advanced, “accuracy challenges persist, especially when employed at scale,” says the report. “Overconfidence in the technology can lead to serious consequences for individuals. More problematically, many recognition algorithms remain plagued by deep racial biases, resulting in a situation where the benefits and harms of facial recognition technology errors are often unevenly distributed while their discriminatory impact compounds historical prejudices and stereotypes.

“In some border control contexts, errors and racial biases in facial recognition systems can have a devastating impact on individuals.”

The report is the latest in a long series of critical analyses of facial recognition, particularly over evidence of the technology’s imprecision. Under public pressure, a number of U.S. cities, including San Francisco, Boston and Oakland, have banned their administrations from using the technology. Portland, Ore., forbids both the city and private companies from using it in public areas. In June, IBM said it would close its “general-purpose” facial recognition business, calling for public debate on whether the technology should be used by law enforcement.

Meanwhile, in February, four Canadian privacy commissioners began a joint investigation into whether a U.S. company’s facial recognition technology, which scrapes images from the internet for comparative purposes, violates privacy laws here.

Kiosk
This photo and document ID machine is used at airports in Vancouver, Halifax, Montreal, Quebec City, and Winnipeg. Source: Government of Canada.

Automated border clearance kiosks using what the government calls facial comparison technology are in place at airports in Toronto, Vancouver, Ottawa, Halifax, Quebec City, Montreal and Calgary.

In a statement, Public Safety Minister Bill Blair’s office said the Canada Border Services Agency’s technology compares the photo of a traveler taken at a kiosk to a specific photograph in their travel documents, such as a passport, for the purpose of confirming identification.

“Facial recognition is a different technology that is used to compare a person’s photograph with a database of multiple photographs for the purpose of determining identification,” says the statement.

At the main kiosks, facial verification technology compares a traveler’s photo taken at the kiosk with the photo stored on the traveler’s ePassport chip (an image-to-image comparison) for identity confirmation purposes.

At NEXUS kiosks for frequent Canada-U.S. travelers, the technology compares the member’s photo taken at the kiosk with the member’s passport photo, says the statement. “With the NEXUS member’s consent, this photo is then kept in the member’s file for future crossings. On subsequent crossings, the member will scan their NEXUS card and the NEXUS kiosk will compare the photo taken at the terminal with the photo in the member’s file.”

“The Government of Canada has no greater responsibility than to keep its citizens safe,” says the statement. “Strong privacy laws are critical in that effort. We are committed to protecting the rights of Canadians, including the right to privacy.”
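The distinction the minister’s office draws is essentially between one-to-one matching (verifying a live photo against the single photo on a travel document) and one-to-many matching (searching a database of many enrolled photos). A minimal sketch of that distinction, assuming a generic embedding-and-threshold approach, might look like the following; the function names, similarity measure and threshold are hypothetical placeholders, not a description of CBSA’s actual system:

```python
# Illustrative sketch only -- not the CBSA implementation. It contrasts
# 1:1 "facial comparison" (verification against a single passport photo)
# with 1:N facial recognition (search of a database of enrolled photos).
# Embeddings, similarity measure and threshold are hypothetical.
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def verify(kiosk_embedding: list[float],
           epassport_embedding: list[float],
           threshold: float = 0.8) -> bool:
    """1:1 comparison: does the kiosk photo match the one photo on the chip?"""
    return cosine_similarity(kiosk_embedding, epassport_embedding) >= threshold


def identify(probe_embedding: list[float],
             gallery: dict[str, list[float]],
             threshold: float = 0.8):
    """1:N recognition: return the best-matching identity in a database, if any."""
    best_id, best_score = None, threshold
    for person_id, enrolled_embedding in gallery.items():
        score = cosine_similarity(probe_embedding, enrolled_embedding)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

In this framing, the kiosks described above perform only the verify step, while the “database of multiple photographs” scenario the statement describes corresponds to identify.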

In an email, report author Tamir Israel acknowledged that, compared to what other countries around the world have been doing recently, Canada’s adoption of facial recognition technology has to date been less intrusive.

“I would say the biggest challenge with the Canada Border Services Agency’s Primary Inspection Kiosks is that CBSA has refused to be transparent about their error rates and levels of racial bias. There is no operational justification for withholding this information; simply knowing the error and racial bias rates does not undermine the integrity of the inspection kiosks.”

Part of the broader problem in Canada is the lack of a meaningful legal framework for facial recognition in general, he added. This means that current practices can rapidly change, as they are changing in jurisdictions around the world, including the United States, Australia and the United Kingdom.

“Canada is already exploring some of these more intrusive options,” he said. “For example, pre-pandemic, Canada was piloting an additional program that, while operating on an opt-in basis, raises serious implications in terms of its intended scope and its reliance on travelers’ mobile devices to manage digital identity at the border.

“I would also add that in 2017, CBSA piloted another program that involved screening all travelers at CBSA-controlled areas of airports against watch lists using facial recognition. This type of capability would rely on identification (a type of facial recognition that, as noted by PSC, is more intrusive than the verification currently in active use in Canadian airports).

“Without a meaningful legal framework in place, CBSA is not under any obligation to be transparent with its existing programs, while these new and emerging initiatives could be rapidly adopted.”

Generally, says the clinic report, facial recognition “can surreptitiously identify individuals from a distance, and based on any live or historical image, posing a serious threat to real-world and online anonymity. It can be used to locate enrolled individuals, or to track individual movements through live CCTV camera feeds, identify individuals participating in sensitive activities such as political protests, or to find online social media and other pseudonymous profiles of known individuals, all without the knowledge or participation of those being surveilled. Facial recognition can also provide a convenient mechanism for mapping digital functionality to the physical world, eroding privacy.”

Facial recognition systems were first justified for border control, the report says, but governments now also justify them as a crime investigation tool, an administrative and corporate identity assurance mechanism, a customer service enhancement, or the backstop to a comprehensive digital identity management capability.

But, the report says, while the technology at borders may lead to some efficiency gains, “the threat posed by facial recognition systems to privacy and other human rights is both tangible and insidious.”

In some countries facial recognition is not only used at the border to confirm the identity of people at customs and immigration kiosks, but it’s also used at security triage gates and for automated baggage checks. “The goal,” the report says, “is for facial recognition to displace other travel documents—your face will be your passport.”

What worries the report’s author is that facial recognition will be used in conjunction with algorithmic decision-making tools, including risk assessment mechanisms and rich digital profiling, to make automated decisions about people. Given the inaccuracy of facial recognition and the need for high-speed processing at borders, thousands of travelers could be impacted daily by wrong decisions, the report argues.
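To see why scale matters, a rough back-of-envelope calculation helps; the volumes and error rates below are assumptions chosen for illustration, not CBSA figures:

```python
# Hypothetical figures for illustration only -- not CBSA statistics.
daily_travelers = 100_000      # assumed daily kiosk volume across airports
false_non_match_rate = 0.01    # assumed 1% of genuine travelers wrongly rejected
false_match_rate = 0.001       # assumed 0.1% wrongly matched to someone else

wrongly_rejected = daily_travelers * false_non_match_rate   # 1,000 per day
wrongly_matched = daily_travelers * false_match_rate        # 100 per day
print(f"At these assumed rates: ~{wrongly_rejected:.0f} false rejections "
      f"and ~{wrongly_matched:.0f} false matches per day")
```

Even modest per-traveler error rates, in other words, would translate into a steady stream of misidentified people at border volumes.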

In particular, facial recognition remains “prone to deep racial biases,” and therefore its errors will fall disproportionately on certain groups.

“Canada’s adoption of facial recognition systems in border control contexts to date has been characterized by excessive secrecy and few safeguards to prevent repurposing,” says the report. “While many border control facial recognition systems have been accompanied by regulatory or legislative frameworks, these frameworks are silent on the need for periodic and transparent evaluation of the more pernicious potential of facial recognition technologies.

“Some evidence suggests that Canadian border control agencies appear to have been unaware of the racial biases inherent in these systems, and what little public information is available suggests that while these capabilities may have been assessed for general levels of inaccuracy, they have not been assessed for racial bias.”

(This story has been updated from the original to include comments from report author Tamir Israel)

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
