Metadata has been in the news since last summer, when NSA whistleblower Edward Snowden began leaking documents showing how much of it American, Canadian and British electronic spy agencies are capturing.

You can argue whether government agencies should be swallowing up so much of it without (or even with) a search warrant. But if you want to know how easy it is for governments, or for organizations you give permission to collect your metadata, such as Google, to learn about you, a piece by Ars Technica senior business editor Cyrus Farivar shows what can be gleaned from it.

Normally the Web site doesn’t keep extensive login data, but for the test it recorded what Farivar was doing over several days in February. Briefly, the raw CSV data was sent to a security researcher, who used a Python script to convert it into readable text. The data could then be mapped.
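To give a sense of what that conversion step might look like, here is a minimal sketch of turning raw CSV login records into readable lines. The column layout assumed here (epoch timestamp, IP address, URL) is hypothetical, not the actual format of the logs the researcher worked with.

```python
# Hypothetical example: convert raw CSV login metadata into readable text.
# The column layout (epoch timestamp, IP, URL) is an assumed format.
import csv
import io
from datetime import datetime, timezone

RAW_CSV = """\
1392213600,203.0.113.7,https://example.com/article/1
1392217260,203.0.113.7,https://example.com/article/2
"""

def rows_to_text(raw: str) -> list[str]:
    """Turn raw CSV login records into human-readable lines."""
    lines = []
    for epoch, ip, url in csv.reader(io.StringIO(raw)):
        # Convert the Unix timestamp into a readable UTC date and time.
        ts = datetime.fromtimestamp(int(epoch), tz=timezone.utc)
        lines.append(f"{ts:%Y-%m-%d %H:%M} UTC  {ip}  {url}")
    return lines

for line in rows_to_text(RAW_CSV):
    print(line)
```

Even this toy version shows the point: a few plain columns become a timeline of who was where, when, and reading what.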

It showed when he’d logged into the Web site, where he was (with an educated guess) and what URLs he was reading while online.

And, he points out, the data isn’t very big, so it wouldn’t be expensive to store a lot of it on everyone in the U.S.

The piece has less to say about the morality of government spying and more about why legitimate Web sites like Google and Facebook are so hungry for metadata.

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
