Syndicated

Metadata has been in the news since last summer, when NSA whistleblower Edward Snowden started leaking documents showing how American, Canadian and British electronic spy agencies are capturing so much of it.

You can argue whether government agencies should be swallowing up so much of it without (or even with) a search warrant. But if you want to know how easy it is for governments, or for organizations you give permission to collect your metadata, like Google, to learn about you, a piece by Ars Technica senior business editor Cyrus Farivar points out what can be gleaned.

Normally the Web site doesn’t keep extensive login data, but for a test it logged what Farivar was doing over several days in February. Briefly, the raw CSV data was sent to a security researcher, who used a Python script to convert it into readable text. The data could then be mapped.
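The researcher’s actual script isn’t published, but the conversion step might look something like this sketch. The column names, timestamps and values here are invented for illustration; the real CSV schema isn’t public:

```python
import csv
import io
from datetime import datetime, timezone

# Hypothetical sample of raw login metadata. The real Ars Technica
# log schema is not public, so these columns are assumptions.
RAW_CSV = """timestamp,ip_address,url
1392902400,203.0.113.7,/security/2014/02/some-article/
1392906000,203.0.113.7,/tech-policy/2014/02/another-article/
"""

def summarize_logins(raw_csv):
    """Turn raw CSV login rows into human-readable lines:
    a UTC time, the client IP, and the URL requested."""
    lines = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Unix epoch seconds -> readable UTC timestamp
        when = datetime.fromtimestamp(int(row["timestamp"]), tz=timezone.utc)
        lines.append(f"{when:%Y-%m-%d %H:%M} UTC  {row['ip_address']}  {row['url']}")
    return lines

for line in summarize_logins(RAW_CSV):
    print(line)
```

From there, a geolocation lookup on the IP addresses is all it takes to put those readable rows on a map, which is the point of the article: the work involved is trivial.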

It showed when he’d logged into the Web site, where he was (with an educated guess), and what URLs he was reading while online.

And, he points out, the data isn’t very big — it wouldn’t be expensive to store a lot of it on everyone in the U.S.

The piece has less to say about the morality of government spying and more about why legal Web sites like Google and Facebook are so hungry for metadata.
