Global regulatory policies are ‘10 innovation cycles’ behind, says expert

Richard Wunderlich, far left, director of smart grid initiatives for Siemens Canada, participates in a panel discussion about connected economies. Joining him, from left to right, are Jason Falbo, chief technology officer at Mircom Group of Companies; Abhinav Tiwari, head of advanced planning at Alectra Utilities; and Amar Varma, co-founder of Autonomic (of Ford Smart Mobility). Photo by Alex Coop.

The Big Data and AI Conference in Toronto this week featured dozens of businesses vying for attention and speakers talking about the potential implications of a connected economy. It also revealed organizations have a long way to go when it comes to changing the way they think about privacy in their products and solutions.

Richard Wunderlich, director of smart grid initiatives for Siemens Canada, tried to put a number on it during a panel discussion about connected economies.

“Our company is seeing this exponential growth for innovation, and accompanying it, an exponential growth of security issues,” he said. “The regulatory environment that’s supposed to enable the innovation and security is probably 10 innovation cycles late.”

It’s why many projects leaning into the “connected economy” concept – broadly defined as the value created through technology-enabled links between people, machines, and organizations – are failing to take off.

“We’re not ready yet to be as connected as we can be to optimize the living environment,” indicated Wunderlich.

Helene Beauchemin, legal counsel for Quebec’s research lab Stradigi AI, says organizations need to get much better at defining what type of AI is in their products and its relationship with users and their data. Photo by Alex Coop.

And unfortunately, Canadian startups steeped in AI and other technological talent don’t have many good examples to look up to as they try to navigate these complicated waters, according to Helene Beauchemin, legal counsel for Quebec-based AI solutions company and research lab Stradigi AI.

“It’s a delicate balance – you want to be compliant but you also want to be user-friendly,” Beauchemin told IT World Canada. “We need to change our privacy practices, but we’re still not sure how to do it in a way that makes sense for the products of tomorrow.”

Corporations like Facebook and Google are finding out the hard way that people actually do care about how their data is being used. And while they continue to stumble on their way to integrating better privacy practices, the topic of privacy is no longer something they can brush aside without the public noticing.

Growing businesses and startups leaning on AI and other emerging technologies don’t have to repeat those mistakes, but in order to sidestep them, they need an actionable guide to privacy culture, a topic that was the focus of Beauchemin’s session this week at the event.

She outlined five steps organizations need to follow, but after the session, pointed to one more. It should be the very first thing an organization does, before the actual privacy discussion happens: define the AI capabilities in your product or solution.

“AI is a broad term. So are we working with traditional machine learning? Deep learning? Image and video recognition? Biometrics? Or are we a mining company that’s trying to disseminate unstructured data from sensors that has nothing to do with personal information?” she said.

Organizations need to be able to identify the impacts that their products and solutions may or may not have on people. Beauchemin pointed to a tool called COMPAS — the Correctional Offender Management Profiling for Alternative Sanctions — which is used to predict a defendant’s risk of committing another crime. It works through a proprietary algorithm developed by a private company called Equivant that considers answers to a lengthy questionnaire. Studies have questioned its effectiveness and the algorithm behind it, and the debate over the tool continues today. “That [AI] has an enormous impact on human rights,” she said.

Once the AI and its relationship with people and their data are defined, Beauchemin’s five steps toward establishing a culture steeped in sound privacy practices are as follows:
