
Canadian privacy experts praise San Francisco ban on facial recognition software

Image by Metamorworks via GettyImages.ca

Canadian privacy experts say this week’s decision by San Francisco to forbid county and municipal departments from using facial recognition technology should be a model for governments seeking to protect privacy in public spaces.

“It’s a big deal,” said Ann Cavoukian, former Ontario privacy commissioner and now head of Ryerson University’s Privacy by Design Centre of Excellence.

“This is the first time that I’m aware of that a major city in the world banned facial recognition technology. I think it’s unbelievable, and such a strong measure in support of privacy and personal liberties.”

“I think they’re acknowledging the fact that facial recognition is not the panacea that law enforcement believes it can be. It results in so many false positives, and those individuals who are falsely accused … it takes forever to clear your name.”

If police need facial recognition technology for a particular case they should get a judicial warrant, she added.

Brenda McPhail, director of the Canadian Civil Liberties Association’s privacy, technology and surveillance project, said she was elated. “I hope many more cities, including our own, follow suit.”

On Tuesday, San Francisco’s Board of Supervisors voted to prohibit the merged city and county’s 53 departments from using facial recognition technology as part of a broader anti-surveillance ordinance. Departments will have to get the board’s approval before purchasing surveillance technology, and those that get approval will have to publicly disclose its intended use. Agencies also can’t actively solicit information they know comes from a facial recognition system run by a private-sector firm or a resident.

San Francisco police wear body cameras to record incidents, but that system doesn’t use facial recognition software. Municipally owned pole-mounted cameras used to monitor traffic will remain in place.

The ordinance, which comes into effect next month, won’t apply to federally controlled facilities such as the city’s international airport and its port.

Several U.S. cities are also pondering similar rules.

The Electronic Frontier Foundation, which has for some time been urging San Francisco’s mayor to support the ordinance, called the decision “encouraging.”
“Face recognition technology is a particularly pernicious form of surveillance, given its disparate propensity to misidentify women and people of color,” the EFF said in a statement. “However, even if those failures were addressed, we are at a precipice where this technology could soon be used to track people in real-time. This would place entire communities of law-abiding residents into a perpetual line-up, as they attend worship, spend time with romantic partners, attend protests, or simply go about their daily lives.”

Facial recognition is increasingly being used by the private and public sectors, thanks in part to improvements in pattern-detecting machine learning. Among those making big bets on the technology is Amazon, whose Rekognition API lets organizations add people and object identification to image capture systems. Amazon says it can be used for a range of purposes, from helping social media companies screen for objectionable content to face-based user authentication.

Law enforcement agencies say the technology will help find criminal suspects in public spaces watched by street cameras.

However, in addition to civil rights advocates who complain about the privacy implications, security researchers say the technology is still flawed. In an article this week about the San Francisco decision, Wired.com notes that studies by researchers at MIT and Georgetown University have found the technology is less accurate at identifying people of colour and could automate biases already common among police.

Privacy advocates see banning facial recognition as a unique opportunity to prevent the technology from getting too entrenched, the article adds.

The tech industry is fighting back. The Washington-based Information Technology and Innovation Foundation, which counts representatives of Apple and Intuit on its board, issued a statement criticizing the MIT study for reporting false positives without including the confidence level of each prediction, as other evaluations do. In an op-ed in April, the foundation’s vice-president said policymakers should create rules to prevent inappropriate uses while allowing government to adopt the technology where it is useful.

Both Cavoukian and McPhail dismissed suggestions that if technology lowers its rate of false positives it could justify wide deployment.

“That’s a dangerous road to go down from a rights perspective,” said McPhail. “If the idea is that everyone is perpetually under surveillance, making the technology better just means they will be watched more effectively. That might address discrimination, but not the fundamental problem, which is that people shouldn’t be watched all the time as they move through public spaces.”

The point, said Cavoukian, is that “you should be able to go about in public without fear of your facial image being captured and used inappropriately.”
