In the wake of some of the biggest privacy scandals in years, it is an odd question to ask, but is our attempt to increase our privacy misguided? Jennifer Lawrence and co might disagree (and now that I’m not on Blogger any more I don’t have to worry about stupid visits coming to the site every time I mention a celebrity), but I hope they’ll bear with me for a few moments.
Without wanting to sound like a proper politician on this, when something like this happens we quite often start making changes that don’t address the problem that caused it in the first place. We don’t really know whether the leaked photos came from iCloud or not, but that hasn’t stopped Apple releasing a new version of iOS with encrypted data.
Equally, our response to perceived (or real – let’s not get into that here) terrorist activity by ISIS, ISIL, the artist formerly known as ISIS or whatever they call themselves these days has been varied. Our Home Secretary announced today that if they’re in government next they’ll resurrect the so-called ‘snoopers’ charter’; in Australia the Abbott government wants to do the same thing. Tim Berners-Lee, the inventor of the World Wide Web, thinks that the opposite should happen and that our privacy should be embedded in law.
All these people are missing the point – the world already holds enough data about people to be able to break their privacy. At MeasureCamp a couple of weeks ago I hosted a session to work out what analytics vendors and agencies should do to make life better. Jim Sterne said in that session (and I apologise for paraphrasing):
“It takes a vanishingly small number of data points to get personal information about someone”
This was echoed at a Single Customer View session I went to, hosted by DMPG, where Damian Blackden from Device 9 and Joe Reid from Krux talked about how their tools make it incredibly easy to do cross-device stitching using fingerprinting (this isn’t new, but it is getting much more advanced than ever before). A vanishingly small number of data points is all that is needed about a user to get personal information about them.
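To illustrate the idea (this is a toy sketch of my own, not how any of the vendors mentioned actually implement it), fingerprinting boils down to combining a handful of individually innocuous attributes into one stable identifier:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine a few non-identifying browser attributes into one
    stable identifier. Keys are sorted so the result doesn't depend
    on the order the attributes were collected in."""
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# The same few data points produce the same identifier on any site,
# so two visits can be stitched together without any cookie at all.
visit_a = fingerprint({"ua": "Mozilla/5.0 (Macintosh)", "screen": "1440x900",
                       "tz": "Europe/London", "lang": "en-GB"})
visit_b = fingerprint({"lang": "en-GB", "tz": "Europe/London",
                       "screen": "1440x900", "ua": "Mozilla/5.0 (Macintosh)"})
assert visit_a == visit_b
```

None of the attributes on its own identifies anyone, but four of them together are already close to unique – which is exactly the point Jim Sterne was making.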
So if, with modern computing power and ‘big data’, it is relatively easy to derive personally identifiable information from a small number of non-personally identifiable metadata points stitched together, why are we worrying about what we are allowed to collect?
The ICO and the European Union have got it badly wrong on cookies. Restricting data collection doesn’t work. Thinking that you aren’t being tracked (even with ‘Do Not Track’ or blocking third party cookies à la Safari) is an illusion of privacy. We’re far enough advanced now that we can link data points together and recover the personally identifiable information, so we are wasting our time limiting what people can collect.
Blocking third party cookies by default is an even bigger red herring: instead of removing the collection of data, you’re simply handing it over to the big ad network companies – Google, Facebook and Twitter. Theirs are third party cookies on this website, but you already have their cookies in your browser because you’ve been to their websites and got them as first party cookies.
And therein lies the problem – whether a cookie is first or third party is determined by the context in which it was set, not by the context in which it is sent. I’ve got a Facebook button on this site, and that means Facebook knows you are visiting this page.
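A simplified sketch makes the distinction concrete (real browsers consult the public suffix list and various path and security rules; this toy classifier only compares domains):

```python
def cookie_party(cookie_domain: str, page_domain: str) -> str:
    """Classify a cookie as first- or third-party relative to the
    page currently being viewed. A cookie is first-party when its
    domain matches (or is a parent of) the page's domain."""
    cookie = cookie_domain.lstrip(".")
    page = page_domain.lstrip(".")
    if page == cookie or page.endswith("." + cookie):
        return "first-party"
    return "third-party"

# The very same facebook.com cookie changes status with context:
cookie_party("facebook.com", "www.facebook.com")  # first-party when set
cookie_party("facebook.com", "example.com")       # third-party when sent
                                                  # via an embedded button
```

The cookie itself never changes – only the page it is being sent from does, which is why blocking “third party” cookies misses cookies that were originally set first-party.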
A far better solution would be increased transparency about what data we are collecting. By forcing websites to explicitly say what they collect, users can decide whether they want to use that site or not. Of course this would need to be very up front, and users would have to pay attention to it. Opt-in doesn’t work (as shown by those media websites that ran an advert-less paid subscription alongside free, ad-supported content – nobody wants to pay for something they can get for free). This way, you will only collect data up to the level where it doesn’t hurt your business.
Data Processing and Access
What we can do is control what sort of processing companies do and who has access to what level of data.
I said that 2013 was going to be the year of the data protection officer. I don’t think I was quite right, but we’re not far off. Data protection is going to be increasingly important in companies, especially those that deal with sensitive data. This person’s job is not to decide what should and shouldn’t be collected – we’ve already worked that one out.
This person’s job will be to decide how data should be processed and who should be allowed access to it. Long gone will be the days of live databases where, if you know the right developer, you can get access to an entire company’s customer base’s credit card details. The data will have to be stored in ways that make it impossible for the truly sensitive data to be accessed.
Personally identifiable data will have to be stored in a way that makes it impossible for someone outside the systems to stitch it together with data from another system. Access to databases should only be available to those who need it. People who have left the company should have their access revoked as soon as they leave the organisation, or after a period of inactivity (but for the love of god, don’t have passwords expiring while the account is active!).
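One way to make cross-system stitching impossible without the keys (a sketch of my own, assuming each system holds its own secret; the `pseudonymise` name and keys are illustrative) is to store a keyed pseudonym instead of the raw identifier:

```python
import hashlib
import hmac

def pseudonymise(customer_id: str, system_key: bytes) -> str:
    """Derive a per-system pseudonym for a customer identifier using
    an HMAC. Each system keys the HMAC with its own secret, so records
    from two systems cannot be joined on the pseudonym unless an
    attacker obtains both keys."""
    return hmac.new(system_key, customer_id.lower().encode(),
                    hashlib.sha256).hexdigest()

crm_id = pseudonymise("jane@example.com", b"crm-system-secret")
web_id = pseudonymise("jane@example.com", b"analytics-system-secret")
assert crm_id != web_id  # same person, unlinkable identifiers
```

Within one system the pseudonym is stable, so normal reporting still works; it is only the join across systems that is broken by design.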
More importantly, we should not be storing usernames and passwords in plain text. If we did this one simple thing, the blushes of Ms Lawrence and others would probably have been spared, as their accounts were most likely hacked using a stolen third party database of usernames and passwords reused across different accounts.
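The standard fix is cheap to implement: never store the password at all, only a salted, slow hash of it. A minimal sketch using Python’s standard library (the iteration count here is illustrative; pick it to suit your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Hash a password with a random per-user salt using PBKDF2.
    Only (salt, digest) is stored; the password itself never is."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare it
    in constant time against the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

A stolen database of salts and digests gives an attacker no password to reuse against other accounts, which is exactly the reuse attack described above.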