Do We Even Need Cookies?


A new law came into force on 26th May 2011 in the UK and Europe that affects websites and how they go about saving cookies to a visitor's browser.

Previously, websites could use cookies as much as they liked and were limited only by the browsers of the people visiting them, but as of late May the Privacy and Electronic Communications (EC Directive) Amendment Regulations 2011 require that certain information be given to a site's visitors and that each user give his or her consent before cookies are placed.

In plain English, that means a site must ask every user for permission before saving a cookie to their browser.

If you think that’s not a big deal, remember that the web is what’s called ‘stateless’. Without a way of tracking each user between page requests or form submissions, every request may as well come from a completely new visitor about whom the site knows nothing.

If that still doesn’t sound like a big deal, imagine no logging in to a website, no shopping baskets, no Facebook, Twitter or MySpace.

More recently, the law’s implementation deadline has been deferred by 12 months. Most believe it’s because the web industry really isn’t sure how to deal with the new requirement of asking for permission to set a cookie, and I’d have to agree. We’ve taken cookies for granted for so long that asking to set one feels as strange as asking for permission to use JavaScript. And remember that we can’t store anything about the user without their consent, so we can’t know for certain whether we’ve already asked them. That could mean posting a prominent message on every page of a website, for every visitor who hasn’t yet agreed, asking for permission to save a cookie to their browser.

So enough about the problems of the new law; let’s consider a workaround. When a visitor requests a page of a website, plenty of information spills from the browser. The information usually available to a website includes (but is certainly not limited to): the IP address (it may not be unique, but bear with me), the operating system, the browser and its version, whether cookies are enabled (remember, we’re not setting one permanently), whether JavaScript is enabled (we don’t need it to be; it’s just another data point), and the HTTP headers sent as part of the page request (a long string of data that varies considerably from visitor to visitor).

All in all, the average browser already sends a sizeable chunk of information that a website can use to build a unique footprint. That footprint can then be tracked across page views, giving a system almost as accurate as a tracking cookie, without the need to ask the user’s permission to set one.
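As a rough illustration of the idea, here is a minimal sketch in Python of combining those data points into a single footprint. The function name, the choice of headers, and the example values are all my own assumptions, not a prescription; any stable subset of request data would do, and hashing is just a convenient way to turn it into a fixed-length key.

```python
import hashlib

def fingerprint(ip, headers):
    """Build a best-effort visitor footprint from request data.

    `ip` is the remote address; `headers` is a mapping of HTTP
    request headers. The fields below are common, relatively
    stable ones, but this selection is illustrative only.
    """
    parts = [
        ip,
        headers.get("User-Agent", ""),        # browser, version, OS
        headers.get("Accept", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    # Join with a separator unlikely to appear in the values,
    # then hash to get a fixed-length, comparable key.
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

# Hypothetical example request:
fp = fingerprint("203.0.113.7", {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/4.0",
    "Accept-Language": "en-GB,en;q=0.8",
})
```

The same visitor, sending the same headers, will hash to the same footprint on every page view, which is all we need to follow them through a session.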

Because this workaround will never be 100% accurate (very occasionally two users will share exactly the same footprint), I would never suggest a website use it to track logged-in users, and I’d strongly urge against trusting it for anything important. Logging in can be the point where you request permission to set a cookie anyway. But for visitors you’d like to track who are not logged in, we may be on to a winner.

If you’d like to learn more, it turns out Panopticlick had the same idea. They’ve written up some interesting thoughts and even have a test to see whether your browser is unique among their visitors. Check Panopticlick out.

In summary: In most cases we don’t need cookies to track visitors who are not logged in and we can include requesting permission as part of the normal login user flow.