Due to Canadian regulatory and legal requirements, privacy officers — and their lawyers — spend hours crafting policies that outline how their organizations handle personally identifiable information. But the text of those statements can be lengthy, full of legalese, and hard to understand.
Small wonder few read them — which raises the question of whether people are giving informed consent to the use of their data.
“It seems clear that reading privacy policies could be a full-time pursuit with untold hours of overtime,” federal privacy commissioner Daniel Therrien told a privacy conference in Toronto on Wednesday. “It is no longer entirely clear who is processing our data and for what purposes – creating challenges for meaningful consent.”
That’s why his office has launched a consultation with chief privacy officers and other executives, researchers, and the public on whether the consent model — largely instituted by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) — should be improved, or whether there should be more focus on accountability and the ethical use of personal information by organizations, which would place the responsibility for oversight on regulators.
“We are at a critical point in which action is needed on a few fronts,” said Therrien.
A discussion paper issued Wednesday notes that organizations are required under PIPEDA to obtain individuals’ consent to lawfully collect, use and disclose personal information. However, technology and business models have changed so significantly since the law was drafted that many are calling into question the feasibility of obtaining meaningful consent.
Research suggests, for example, that it would take 244 hours – roughly equivalent to 33 work days – to read all of the privacy policies and related legalese that the average Internet user encounters online each year. Not surprisingly, Therrien says, many people simply click “I accept” rather than read the full terms and conditions governing the use of a site.
“In order for consent to be considered meaningful under PIPEDA, individuals should have a clear understanding of what will be collected, how their personal information will be used, and with whom it will be shared,” the discussion paper notes. “Consent is only valid if it is reasonable to expect that an individual to whom the organization’s activities are directed would understand the nature, purpose and consequences of the collection, use or disclosure of the personal information to which they are consenting.”
While in the early years of personal computing people generally knew the identity of the organizations they were dealing with, the information being collected, and how that information would be used, the environment of cloud computing, big data and the Internet of Things (IoT) is radically different, the paper says. Big data repositories not only encourage organizations to hold onto data longer than ever; they also raise the risk that personal data will be used in new ways to which individuals did not originally consent. Data collection by IoT devices is so widespread, and so often occurs without human involvement, that the European Commission’s Article 29 Data Protection Working Party has suggested IoT data should automatically be treated as personal data.
In addition, traditional point-to-point transfers of data are being replaced with data flows through distributed systems, the report notes, making it difficult for individuals to know which organizations are processing their data and for what purposes.
Then there’s human behavior: many studies show that people who say they care about privacy may nonetheless disclose vast amounts of personal information online, the report notes, which undermines the consent model.
Some proposed solutions involve making privacy information more accessible to consumers, giving them the ability to manage privacy preferences across different devices, and ensuring privacy is not an afterthought but is instead “baked” into products and services.
Other possible solutions seek to ban certain collections and uses of personal information outright, while placing restrictions on others. Under another possible scenario, certain information could be allowed to be collected and used without consent under limited and justifiable circumstances, as long as there is adequate oversight. Industry codes of practice and tougher enforcement measures for regulators are some of the other possible solutions discussed in the paper.
The goal of the consultation is to identify improvements to the current model and bring clearer definition to the roles and responsibilities of the various players who could implement them.
Comments, which should answer at least one of the four questions posed in the consultation paper, can be sent to Therrien’s office by July 13. They will be posted online. In the fall, the commissioner’s office will meet with stakeholders to discuss the issues before it crafts recommendations.