The use of facial recognition technology by companies and government agencies should be strictly controlled, says a parliamentary committee.
In a report this week presented to Parliament, the House of Commons ethics and privacy committee made 19 recommendations, including one for the creation of a legal framework for facial recognition and artificial intelligence.
Until one is created, the government should impose “a national pause” on the use of facial recognition technology, the committee said, particularly for federal police services.
Going further, the committee also recommended that the government forbid companies from automatically collecting any biometric information — such as photos of people in a building or mall — unless people opt in. It also recommended prohibiting companies from making the provision of goods or services contingent on providing biometric information.
Federal privacy commissioner Philippe Dufresne welcomed the report. In a statement he said it “confirms and reiterates the pressing necessity of ensuring the appropriate regulation of privacy-impactful technologies such as facial recognition and artificial intelligence in a way that protects and promotes Canadians’ fundamental right to privacy.”
In May, Canadian privacy commissioners said Parliament should limit Canadian police use of facial recognition technology to closely defined circumstances such as the investigation of serious crimes.
Among its major recommendations, the committee said Ottawa should:
–impose a federal moratorium on the use of facial recognition technology by federal police agencies and firms unless implemented in confirmed consultation with the Office of the Privacy Commissioner or through judicial authorization;
–actively develop a regulatory framework concerning uses, prohibitions, oversight, and privacy of facial recognition technology. That oversight should include proactive engagement measures, program-level authorization or advance notification before use, and powers to audit and make orders. The framework should also “set out clear penalties for violations by police”;
Quebec is the only jurisdiction to enact a law that specifically addresses biometrics, which includes facial recognition technologies. It requires organizations to notify the provincial Commission d’accès à l’information before implementing a biometrics database.
–ensure that airports and industries publicly disclose the use of facial recognition technology, including signage prominently displayed in areas under observation and on the travel.gc.ca website;
–refer the use of facial recognition technology in military or intelligence operations, or when other uses of facial recognition technology by the state have national security implications, to the National Security and Intelligence Committee of Parliamentarians for study;
–amend federal procurement policies to require government institutions that acquire facial recognition technology or other algorithmic tools — including free trials — to make that acquisition public, subject to national security concerns.
This comes after at least one Canadian police department admitted it was testing the Clearview AI facial recognition application without the knowledge of superiors.
–create a public AI registry in which all algorithmic tools used by any entity operating in Canada are listed, subject to national security concerns;
–ensure the full and transparent disclosure of racial, age, or other unconscious biases that may exist in facial recognition technology used by the government, as soon as the bias is found in the context of testing scenarios or live applications of the technology, subject to national security concerns;
–update the Canadian Human Rights Act to ensure that it applies to discrimination caused by the use of facial recognition technology and other artificial intelligence technologies;
–create a right to erasure (also called a right to be forgotten) by requiring service providers, social media platforms, and other online entities operating in Canada to delete all users’ personal information after a set period following users’ termination of use, including but not limited to uploaded photographs, payment information, address and contact information, posts, and survey entries;
Facial recognition issues
Facial recognition is the process of identifying a face by comparing digital images through machine learning. It has been used in a number of ways around the world:
–Border agencies use it to identify people forbidden from entering a country.
–Police agencies use it to identify a suspect. Toronto Police, for example, told the ethics committee it takes images from existing traffic, business or home video cameras and compares them to a suspect’s photo.
–Transportation companies may use it to reduce congestion.
–Hospitals, the ethics committee was told, use it to monitor patients and make sure their condition does not change.
–Private companies may use it to keep banned people out of buildings, allow people into sensitive areas like data centres, or feed tailored ads to shoppers. Canadian real estate developer Cadillac Fairview used it for marketing, but the federal privacy commissioner said those images were captured without people’s consent.
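The comparison step described above — matching a captured image against known photos — is, in most modern systems, a similarity search over numeric “embedding” vectors that a trained neural network derives from each face image. The vectors, names, and threshold below are invented purely for illustration; this is a minimal sketch of the matching logic, not any vendor’s actual method.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical database of known faces, each reduced to a short vector.
# Real embeddings have hundreds of dimensions and come from a neural network.
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

probe = [0.88, 0.15, 0.28]  # embedding of the face being checked

# Find the closest known face, then accept the match only above a threshold.
# Where that threshold is set trades false matches against missed matches.
best = max(database, key=lambda name: cosine_similarity(probe, database[name]))
score = cosine_similarity(probe, database[best])
THRESHOLD = 0.95
print(best, round(score, 3), score >= THRESHOLD)
```

If the training data underrepresents some groups, the embeddings cluster poorly for those faces and the threshold produces more false matches for them — which is one mechanism behind the misidentification disparities discussed later in this article.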
The use of facial recognition technology isn’t new. The Insurance Corporation of British Columbia began using it over 20 years ago to help stamp out the fraudulent acquisition and use of drivers’ licences and provincial ID cards. But when the corporation offered in 2011 to lend the technology to help Vancouver police identify Stanley Cup rioters the provincial privacy commissioner said that wasn’t allowed under B.C.’s privacy statute. (Not only that, the commissioner added, the insurance corporation hadn’t fully satisfied all of the legal requirements when it implemented facial recognition.)
The federal Liberal Party has used it in British Columbia to verify members voting online at candidate nomination meetings.
And, of course, many smartphone and computer owners can use facial recognition to unlock their devices.
The use of facial recognition technology by police agencies became controversial when experts noted it is less accurate on images of people of colour. The bias can depend on the data a system uses for training. The ethics committee was told that researchers have found facial recognition is up to 100 times more likely to misidentify Black and Asian individuals, and that it misidentifies more than one in three darker-skinned women.
One expert told the ethics committee that facial recognition has a fatal flaw: it assumes that social constructs, like race and gender, are machine-readable in a person’s face.
Meanwhile Clearview AI was criticized for scooping up images of people from the internet to populate its comparative database. The federal privacy commissioner called it “mass surveillance.” Company officials argued images on the web aren’t private, a claim rejected by the federal privacy commissioner.
To clear up the issue, the ethics committee specifically asked the government to amend the federal Personal Information Protection and Electronic Documents Act (PIPEDA) to prohibit the capture of images of Canadians from the internet or public spaces for the purpose of creating facial recognition databases or artificial intelligence algorithms.
Last year, a number of Canadians filed a class action lawsuit under PIPEDA against Clearview AI, demanding a declaration from the Federal Court that Clearview illegally collected, copied, stored, used, and disclosed their personal information in violation of their privacy rights. In response, Clearview is challenging the constitutionality of portions of PIPEDA. The class action hasn’t been certified yet, nor has the court approved hearing the constitutional challenge.
Earlier this year, the U.K. information commissioner fined Clearview AI for violating that country’s privacy law.
The committee report doesn’t propose a ban on facial recognition technologies. In fact, it quotes former federal privacy commissioner Daniel Therrien saying facial recognition “can, if used responsibly, offer significant benefits to society.”
The report also noted that Therrien said “it can also be extremely intrusive, enable widespread surveillance, provide biased results and erode human rights, including the right to participate freely, without surveillance, in democratic life.”
One concern the ethics committee heard is that biometric databases created by the public or private sector for one purpose may be used for other purposes without an individual’s knowledge.