IBM unveils Web privacy work

Researchers at IBM Corp.’s Privacy Institute are working on software that automatically scrambles Web visitors’ personal information – so consumers perhaps won’t feel compelled to lie just to protect their privacy.

It’s no secret that online visitors often provide false personal data to avoid any repercussions should the data be misused or shared with multiple sources. For merchants, that means the customer data they painstakingly track with customer relationship management software – and often rely on when making product development and marketing decisions – can be flawed from the start.

To help solve this problem, researchers Dr. Rakesh Agrawal and Dr. Ramakrishnan Srikant are developing what IBM calls “privacy-preserving data mining.” The duo’s research, which IBM announced May 30, relies on the notion that a Web visitor’s personal data can be protected if it is scrambled, or randomized, before it gets to the merchant. Once the data is transferred to the merchant’s systems, the IBM software applies algorithms to compensate for the data scrambling. With this technology, a retailer could still generate accurate data models and extract useful demographic information, but without ever seeing personal consumer data, IBM says.

“Our research institutionalizes the notion of fibbing on the Internet, and does so to preserve the overall reality behind the data,” Agrawal says.

When a Web user enters a piece of personal data, such as age, salary or weight, the IBM software immediately scrambles that number by adding to or subtracting from it a random value. This randomization step is performed independently for every user, IBM says. This means a 30-year-old's age may be changed to 42, while a 34-year-old's age may become 28.

The merchant determines the range of the randomization – plus or minus 1 to 12 years, for example – which then remains constant. Once the scrambled data is collected for a large number of users, IBM's data-mining software determines what the true data might have looked like and uses that reconstruction to build a data-mining model, IBM says.
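The randomize-then-reconstruct idea can be sketched in a few lines. This is a minimal illustration, not IBM's implementation: it assumes uniform additive noise and shows only the simplest reconstruction, recovering an aggregate statistic (the mean) from scrambled values. IBM's software reconstructs the full data distribution with more sophisticated algorithms.

```python
import random

def randomize(value, spread=12):
    """Scramble one value by adding a uniform random offset in [-spread, +spread].

    Performed independently for each user, so a 30-year-old might
    report 42 while a 34-year-old reports 28.
    """
    return value + random.randint(-spread, spread)

# Simulated true ages that never leave the users' machines unscrambled.
true_ages = [random.randint(18, 65) for _ in range(100_000)]

# Each value is perturbed before it reaches the merchant.
scrambled = [randomize(age) for age in true_ages]

# Because the added noise averages to zero, aggregate statistics survive
# even though no individual record is accurate.
true_mean = sum(true_ages) / len(true_ages)
estimated_mean = sum(scrambled) / len(scrambled)
```

With 100,000 users, the estimated mean lands within a fraction of a year of the true mean, even though any single scrambled age may be off by up to 12 years – the trade-off between per-record privacy and aggregate accuracy the researchers describe.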

The greater the range of number-scrambling that is allowed, the more consumers' private data is obscured. However, as the randomization parameters increase, the accuracy of the resulting data-mining models decreases. According to Agrawal, it's a trade-off. IBM says that in its experiments, after compensating for the data scrambling, it found only a 5 to 10 per cent loss in accuracy, even with 100 per cent randomization allowances.

The research project is underway at IBM’s Privacy Institute; beta trials will begin soon. It’s the first project announced by the Almaden, Calif., group, which was formed in November 2001.

These days, Internet privacy is a hot topic, most recently making headlines when U.S. Senator Fritz Hollings introduced a controversial bill designed to safeguard Internet users' privacy – one that opponents suggest will hamper online commerce.
