The NHS Choices website is a cornerstone of the government’s drive for health service efficiency and for moving service delivery online. Users can log on to find out more about NHS services, and to use a symptom checker to understand what might be wrong with them and (hopefully) seek medical attention where appropriate, or save a doctor’s time if their condition turns out to be nothing more than a cold. The site has made an effort to engage with social networking sites, for example by integrating the Facebook ‘Like’ button. And as Mischa Tuffield of Garlik has spotted, this is where we get a big privacy FAIL.
Mischa points out that a visit to an NHS Choices conditions page calls on four external service providers: Google Analytics, Webtrends, Facebook and Addthiscdn.
Two of these – Google Analytics and Webtrends – are used to monitor web traffic. In theory the privacy implications are relatively minor, although in certain scenarios an individual user could be identified if the traffic data were combined with other information. It’s odd that the NHS has chosen to use third-party analytics services rather than implementing its own. This problem has been explored in detail elsewhere, so I won’t dwell on it here.
However, the Facebook and Addthiscdn links are there to drive the Facebook ‘Like’ service, and this is where our problems begin. If a user visits the page from a browser that they’ve previously used to access Facebook, then Facebook automatically learns that they’ve been to that particular conditions page. So if someone is concerned about a particular condition – let’s say testicular cancer – Facebook gets to find out about that interest. Not good. And it gets worse: if the user feels they’ve received useful information and clicks the ‘Like’ button (or does so accidentally), the page shows up on their Facebook profile, and that’s really not good at all. Imagine being worried you have a serious illness that you don’t want to alarm your spouse about, and accidentally clicking ‘Like’ – they get to find out. So does a potential or current employer checking your profile. The consequences could be very significant indeed.
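The mechanics are worth spelling out. Here’s a minimal sketch in Python of what the browser transmits when it fetches an embedded widget – note that the endpoint behaviour is simplified and the cookie names are invented for illustration, not Facebook’s actual implementation:

```python
# Sketch: what a browser sends when a page embeds a third-party widget.
# Cookie names here are invented for illustration only.

def widget_request(page_url, stored_cookies):
    """Build the request a browser makes to fetch an embedded 'Like'
    button while the user is viewing page_url."""
    headers = {
        # The Referer header tells the widget host exactly which
        # page the user is reading...
        "Referer": page_url,
    }
    # ...and any cookies previously set by the widget's domain are
    # sent along automatically, tying the visit to a known account.
    fb_cookies = {k: v for k, v in stored_cookies.items()
                  if k.startswith("fb_")}
    return headers, fb_cookies

headers, cookies = widget_request(
    "https://www.nhs.uk/conditions/testicular-cancer",
    {"fb_session": "abc123", "unrelated_site_cookie": "x"},
)
# The third party now knows *who* (the session cookie) visited *what*
# (the Referer) -- before any button is ever clicked.
```

The key point is that no click is required: merely rendering the page triggers the request, complete with identifying cookies.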
I’d like to hope that Mischa’s research will force the NHS to modify the website, and that at the very least the functionality will be suspended until the privacy issues have been properly investigated.
[Thanks to Ian for pointing this one out]
The Information Commissioner’s Office has made the first use of new powers to fine organisations for data breaches. Two organisations – Hertfordshire County Council, and employment services firm A4e – were fined £100,000 and £60,000 respectively for failures to comply with the Data Protection Act.
There is little doubt that the offences were, in each case, serious: Hertfordshire Council twice faxed information about a child sex abuse case to the wrong recipient, and an A4e employee was burgled, losing a laptop containing the unencrypted sensitive personal information of approximately 24,000 individuals. The Hertfordshire case is particularly inexcusable, since the two incidents were two weeks apart, during which remedial actions could have been taken to prevent a recurrence.
On the face of it, this looks like a win for the Data Protection Act and a welcome return to form for the Information Commissioner’s Office. However, I personally think that these fines are ill-judged and inappropriate, and that the consequence is equally likely to be a degradation in Data Protection practices in other organisations.
Firstly, both organisations voluntarily notified the Commissioner and the affected data subjects of the breaches. Serious corporate governance mistakes were made, but once these became apparent, the organisations took remedial action. What message does this send to other organisations that have yet to put their Data Protection processes in order? Yes, it makes clear that Data Protection has to be taken seriously. But it will also encourage organisations and individuals to try to cover up data breaches – after all, why bother notifying the ICO if the response will be a fine? Why not just keep quiet, since if the breach subsequently comes to light the fines will kick in anyway? Remember that even in an organisation with a culture of integrity, an employee who has made a silly mistake and lost some data would be highly incentivised to protect his or her job by trying to hush up the incident. (In the case of A4e, the laptop concerned was scheduled for encryption under a rolling programme of security upgrades – although that doesn’t excuse putting 24,000 records on it.)
And that brings me on to the second problem: what is the point of fining public authorities for data breaches? At an organisational level, the only consequence is that Hertfordshire County Council now has £100,000 less to spend on legal services for child protection (I’ll bet nobody at an executive level has lost their job or suffered some other penalty). How does that help anyone? A far more appropriate solution would be to identify the culpable individual – even if that is the Chief Executive – and take action against them. That would focus their attention without the effect of penalising service recipients.
All this comes in an environment where the ICO faces widespread criticism for failing to tackle technically complicated cases, or those involving major organisations. There is an ongoing sense of “kick ’em while they’re down” arising from the ICO’s taste for penalising small enterprises and public bodies whilst – in the eyes of privacy activists – failing to deal with the local consequences of global breaches such as those committed by Google. It suggests a continued lack of appetite within the Commissioner’s office to face down a data controller who might actually win a case, and an inability to take on technically complicated opponents.
What the ICO does next is critical for the credibility of these new powers. It’s time to pick on a hard target, one which has the appetite and resources to fight back against a penalty in a situation that is technically complicated. Doing that will silence many of the critics. Picking on a ‘soft’ victim will only make things worse for all of us.
Over the next few days I’m going to focus on a number of important stories that relate to freedom of speech, and the unfortunate consequences of using that freedom to say something silly.
How many of us had heard of Fitwatch last week? Certainly a lot fewer than have heard of them now. If you’re not aware, Fitwatch is an organisation that follows police Forward Intelligence Teams (FITs) – officers who use cameras to record individuals engaged in demonstrations, for identification and intelligence purposes. When protests turn ugly and unlawful, as they did at Millbank last week, there is obviously a legitimate need to monitor, identify and arrest the miscreants. However, the FITs have come in for considerable criticism from civil liberties protestors, particularly for recording individuals engaged in legal and peaceful protests, which are, after all, within their fundamental rights. There have been allegations of FIT officers refusing to reveal their identity numbers, and of harassment of journalists engaged in their lawful work. The attitude and delivery smack of the surveillance state, since the inevitability of ending up on a database somewhere will deter many people from exercising their legal right to protest.
So, Fitwatch is there to turn the cameras on the FITs using sousveillance, and to peacefully obstruct their cameras in whatever way it can. It’s a commendable mission, and one which forces reflection on the policy of treating anyone who protests as a dangerous activist. But after the Millbank fiasco, Fitwatch went too far, and provided explicit advice for protestors:
If you fear you may be arrested as a result of identification by CCTV, FIT or press photography;
DONT panic. Press photos are not necessarily conclusive evidence, and just because the police have a photo of you doesn’t mean they know who you are.
DONT hand yourself in. The police often use the psychological pressure of knowing they have your picture to persuade you to ‘come forward’. Unless you have a very pressing reason to do otherwise, let them come and find you, if they know who you are.
DO get rid of your clothes. There is no chance of suggesting the bloke in the video is not you if the clothes he is wearing have been found in your wardrobe. Get rid of ALL clothes you were wearing at the demo, including YOUR SHOES, your bag, and any distinctive jewellery you were wearing at the time. Yes, this is difficult, especially if it is your only warm coat or decent pair of boots. But it will be harder still if finding these clothes in your flat gets you convicted of violent disorder.
DO think about changing your appearance. Perhaps now is a good time for a make-over. Get a haircut and colour, grow a beard, wear glasses. It isn’t a guarantee, but may help throw them off the scent.
DO keep your house clean. Get rid of spray cans, demo related stuff, and dodgy texts / photos on your phone. Don’t make life easy for them by having drugs, weapons or anything illegal in the house.
Personally I think that’s all rather silly, partly because it is providing advice to individuals on how to hide from the consequences of criminal acts, and partly because anyone stupid enough not to cover their tracks frankly deserves what’s coming to them with a little extra thrown in for good measure.
Students – love ’em or hate ’em, you’re not allowed to hit ’em with a spade…
(c) Geoffrey Willans and Ronald Searle
But enough of that. The Metropolitan Police clearly didn’t like this, so CO11 got on to Fitwatch’s hosting provider and demanded that they take down the site and IP, which they did. Just like that. On the say-so of a Detective Inspector, the site was gone. No court order, no independent review, no scrutiny. And that’s just plain wrong in a country that once prided itself on both freedom of speech and freedom to protest. A few hours later, the police released images of the individuals they wish to interview, many of which were doubtless captured by the FIT team.
However, CO11 might be up to the task of taking down a website, but clearly someone there doesn’t quite get how the Internet works. Far from silencing Fitwatch, the effect has been to drive traffic to the numerous mirrors, caches and aligned sites that replicate all the offending content. Their Twitter followers are rising rapidly, there’s a busy Fitwatch meme on the go, political bloggers are all over the story, and the Fitwatch site will probably be back up by the time I finish typing this article. This time it’ll be hosted in a country that has a greater respect for freedom of speech than the UK, and then it’ll be pretty much impossible for the Met to shut it down. At the end of this process, many more people will know about the FITs, Fitwatch, and what happens if you upset the Metropolitan Police. In fact I’ll bet that the Fitwatch folk are delighted with this turn of events, and hopefully they’ll refrain from trying to assist offenders in future, and instead stick to doing what they do best – and good luck to them in that endeavour.
This incident demonstrates the futility of trying to silence free speech. Nobody is going to rally round for the right to incite violence or distribute child pornography, but when the authorities try to stamp on freedom of expression, even if that expression is rather misguided, then they’re going to feel the wrath of the Internet. Someone in CO11 needs to go back to skool [sic] to understand that if they really are hell-bent on intelligence gathering, then shutting down activist websites is not the way to go about it.
PS – For the avoidance of doubt, I wholeheartedly applaud every person who protests peacefully for their principles, but condemn the small minority who were engaged in the storming of Millbank tower. It only gives the State an excuse to suppress the rights of protest of others.
PPS – Please excuse the numerous Wikipedia references, but it seemed appropriate that in an article about students I shouldn’t bother to look for other sources 
A hidden gem on the BBC website points us to what has to be the very best helpdesk call that has ever been made: US presidential nuclear codes ‘lost for months’
It would appear that some time around 2000, one of Bill Clinton’s aides misplaced the launch codes for the US nuclear arsenal, and that the mistake was covered up for a number of months until the periodic code change came around and the problem was rectified.
I’m left with a vivid image of a call to the helpdesk: “We can arrange a password reset, Mr President. Please could you give us the month and year of your birth, and the first two letters of your mother’s maiden name?”
[HT Caspar for pointing this one out]
So, to Lille today to attend IMRG’s inaugural EbizEU conference. The theme is e-commerce strategy, with a number of speakers looking to the future of cross-border e-retail.
The keynote pitch was delivered by Nils Muller of TrendONE, a ‘futurologist’ who distinguished himself from others who claim that title by delivering a fast-paced and technically demanding presentation about the future of the Internet, supported by a toy box of gizmos from various development labs to demonstrate the points he was making.
Nils walked the audience through his view on the coming iterations of the Internet, including:
– Web 2.0: social media and user generated content
– Web 3.0: immersion and augmented reality
– Web 4.0: internet of things
– Web 5.0: web of thoughts
There were a number of apparent non sequiturs in the pitch, where the speaker seemed to assume that a magic wand would change the way things are, but overall Nils painted a fascinating vision of the next 10 years. Setting aside the sci-fi vision of a neural lace to deliver Web 5.0, his predictions seemed to be built upon a statement he made very early in the session: that by 2016 we will see the death of privacy. He assumes that we will relinquish control of our personal data and move into a completely transparent environment where we have little ability to establish boundaries over how that data is used.
At the core of this is his argument that a number of ambient technologies, and facial recognition in particular, will turn each of us into a physical hyperlink to our own online data. Augmented reality systems will be able to draw down information about an individual simply by recognizing their image (think Google Goggles on steroids – it’s pretty much achievable today). Other hyperlink identifiers, whether biometric or token-based (for example, distance-readable contactless cards) will inevitably emerge to support this vision.
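To make the ‘physical hyperlink’ idea concrete, here’s a toy sketch. Real systems use high-dimensional learned embeddings; the two-dimensional vectors, threshold and URLs below are entirely invented for illustration:

```python
import math

# Toy gallery mapping enrolled face feature vectors to profile URLs.
# Real systems use learned embeddings; these 2-D vectors are invented.
gallery = {
    (0.1, 0.9): "https://example.com/profiles/alice",
    (0.8, 0.2): "https://example.com/profiles/bob",
}

def lookup(face, threshold=0.3):
    """Return the profile URL of the nearest enrolled face within
    the match threshold, or None if nobody matches."""
    best_url, best_dist = None, threshold
    for vec, url in gallery.items():
        dist = math.dist(face, vec)   # Euclidean distance
        if dist < best_dist:
            best_url, best_dist = url, dist
    return best_url

# A camera frame close to Alice's enrolled vector resolves straight
# to her online data -- the face *is* the hyperlink.
print(lookup((0.12, 0.88)))
```

The privacy problem is visible even in the toy: the subject takes no action at all, and need not even know the gallery exists, for the link to resolve.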
Whilst the hyperlink concept is already a reality, I’m rather sceptical about Nils’ view of the future of privacy. The idea that we will give up trying to control what’s available online about us seems a little nihilistic (although it’s probably a meme that will appeal to the many marketeers at the event).
The defence of privacy in a near future where non-consensual identification is pervasive (think Minority Report) isn’t a futile battle. Quite the opposite, in fact.
I believe that over the next few years, individuals will increasingly and collectively demand respect for their personal information in ways that are not currently available to them. The single most important change will be a shift of ownership away from organizations and across to the individuals themselves: the individual will become the single trusted source of their own personal data.
In this environment, the only trustworthy data is that which has come directly from the data subject. By volunteering information to be held in a network of distributed Personal Data Stores, individuals will be able to grant access to personal information on a ‘need to know’ basis, and retain rights-managed control over how that data is used.
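A minimal sketch of that ‘need to know’ grant model might look like this – the class and method names are my own invention for illustration, not the API of Mydex or any other product:

```python
class PersonalDataStore:
    """Toy Personal Data Store: the individual holds the data and
    grants field-level, revocable access to named parties."""

    def __init__(self, data):
        self.data = data      # e.g. {"name": ..., "dob": ...}
        self.grants = {}      # party -> set of permitted fields

    def grant(self, party, fields):
        """The data subject consents to a party seeing named fields."""
        self.grants.setdefault(party, set()).update(fields)

    def revoke(self, party):
        """Consent withdrawn: the party loses all access."""
        self.grants.pop(party, None)

    def read(self, party, field):
        """Release a field only if a live grant covers it."""
        if field not in self.grants.get(party, set()):
            raise PermissionError(f"{party} has no grant for {field}")
        return self.data[field]

pds = PersonalDataStore({"name": "A. Subject", "dob": "1970-01-01"})
pds.grant("insurer", {"dob"})
print(pds.read("insurer", "dob"))   # permitted while the grant stands
pds.revoke("insurer")               # consent withdrawn; reads now fail
```

The design point is that the organisation never holds a copy it controls: every read goes back to the subject’s store, so revocation actually means something.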
This isn’t a ‘futurology’ vision, but something that is happening now. The government is nurturing a private market for commercial provision of ID services through its G-Digital framework. Innovators such as Mydex are piloting Personal Data Stores for the management of volunteered personal information. Research projects such as EnCoRe and PVnets are developing innovative consent management models.
Of course my vision has a few ‘magic wand’ moments as well. We need to find a way to promote and then protect this new environment; businesses need to establish viable commercial models for the new data architectures; individuals need to understand what this all means; and providers need to find ways to deliver it whilst hiding the technical complexity under the bonnet.
There are of course those who will argue that my understanding of privacy is outmoded – that we need to forget our middle-class, middle-aged views of how our information is shared, and instead think about how we value that information. I agree with that, and believe that the focus of the privacy debate will most likely shift towards accuracy, timeliness and consent as key metadata qualities that consumers and businesses alike will wish to address.
What is important to remember, however, is that the future of the Internet will be driven by the organizations that can most effectively milk its value, and the winners will be those companies that correctly understand and address mainstream needs – after all, would you adopt immersive and ambient technologies that you did not trust? Let’s hope that Mydex, EnCoRe and the other enlightened players in this new age can face down the Web 1.0 marketeers and give us the online future we want – and not the one being forced upon us at the moment.
Online child protection is all over the news this week, with the resignation of Jim Gamble of CEOP (and part of his team) being rued by mainstream media, and welcomed by ISPs. However, a lower profile headline is equally interesting: a teenager jailed for 16 weeks for refusing to disclose his encryption password to police investigating indecent images on his PC.
This is a rare example of the RIPA paradox in action. Under the Regulation of Investigatory Powers Act (2000), police can demand that an individual hand over encryption keys as part of an investigation. Refusal to do so can result in a jail sentence which, in theory, could become indefinite if they stand by that refusal. The Act was much criticised for this when it was originally passed, since privacy campaigners pointed out the stalemate that might arise when an individual feels that they have the right to privacy over their personal data and refuses to disclose a key for that reason alone. On the other hand, it is quite possible that the individual in this particular case is not acting from a position of principle, and does in fact have something more serious hidden on his PC, in which case a 16 week sentence might be considered ‘getting off lightly’ from his point of view.
In general, this particular aspect of RIPA hasn’t worked out as badly as campaigners originally feared, since very few law-abiding individuals would choose jail over the principle of their privacy (which by implication means that an individual who does opt for jail probably has something they wish to keep hidden from the authorities). But it remains an ongoing worry – a case of legislating that old lie, “nothing to hide, nothing to fear.” When that approach is linked with child protection, great care is essential: if refusal to disclose is taken as an admission of guilt, then individuals who find themselves wrongfully accused are obliged to disclose all their personal information, regardless of sensitivity, simply to clear their names.
It’s that time of year again – Computer Weekly’s annual blog awards are now open for nominations. You can nominate your favourite blogs under categories that include:
- CIO/IT director
- IT consultant / analyst
- IT professional (male)
- IT professional (female)
- Company/corporate: large enterprise
- Company/corporate: SMEs
- Project management
- IT security
- Software development
- Best international blog
There’s also a category for IT Twitter user of the year. The awards will be announced in November, so get nominating now!
Yesterday saw an announcement by the Financial Services Authority that the UK arm of Zurich Insurance Plc has agreed a record-breaking fine of £2.4m as a result of losing 46,000 customer records. The records, which comprised personal details, ‘identity details,’ and in some cases bank account and credit card information, details about insured assets and security arrangements, were on an unencrypted back-up tape which was lost in transit during routine transfer to Zurich Insurance Company South Africa Ltd. The SA subsidiary was handling processing on behalf of the UK arm, but there were apparently no proper reporting lines between the two, and the loss was not reported to the UK for over a year after it occurred. There is no suggestion that the lost data has been misused.
In its statement, the FSA said:
“Zurich UK let its customers down badly. It failed to oversee the outsourcing arrangement effectively and did not have full control over the data being processed by Zurich SA. To make matters worse, Zurich UK was oblivious to the data loss incident until a year later.
“Firms across the financial sector would do well to look at the details of this case and learn from the mistakes that Zurich UK made.”
There are a few important implications arising from the FSA’s actions. A key issue is the valuation of data assets: by settling at an early stage of the investigation, Zurich managed to get the fine down from £3.25m to £2.4m. The starting figure values each missing record at approximately £70 (the settled fine works out at around £52 per record). That’s substantially higher than the implied per-record value of many similar fines in the past, but arguably much less than the damage per customer that could have been done if the data were misused. Of course the fine is not actually calculated in this way, and was in fact levied because the organisation “failed to take reasonable care to ensure it had effective systems and controls to manage the risks relating to the security of customer data resulting from the outsourcing arrangement.”
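For those checking the arithmetic behind those per-record figures, a quick sketch:

```python
records = 46_000
original_fine = 3_250_000   # FSA's starting figure, before discount
settled_fine = 2_400_000    # after Zurich's early settlement

# The widely quoted ~GBP 70 per record comes from the pre-discount
# figure; the settled fine works out at roughly GBP 52 per record.
print(round(original_fine / records, 2))  # 70.65
print(round(settled_fine / records, 2))   # 52.17
```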
There is the (hopefully obvious) fact that whilst an organisation can outsource responsibility for proper data management, it cannot outsource accountability: the Data Protection Act makes it clear that the Data Controller remains accountable for proper management of data by a Data Processor acting on its behalf. Yet so many organisations fail to recognise this, particularly when they are passing data within the organisation – in many cases they fail to realise that a data sharing process is even occurring.
The scale of the fine is also clearly there to set an example to other regulated financial organisations to put their security arrangements in order. Nationwide, HSBC and Marks & Spencer have all fallen foul of substantial fines from the FSA, in each case having been found guilty of systemic security failures. Zurich’s shareholders can justifiably feel aggrieved at the scale of the fine compared with those applied elsewhere in the sector or across other sectors.
That brings us on to the issue of who is responsible for Data Protection regulation. The Information Commissioner’s Office does not have a reputation for enforcement in the way that many of its European counterparts have – for example, the Schleswig-Holstein Commissioner in Germany has a fearsome reputation and is unafraid to take on the likes of the federal government, Google or SWIFT. In comparison, the UK Commissioner rarely attempts enforcement actions against large firms or public authorities, and even then it normally settles for an Enforcement Notice rather than a financial penalty. The FSA, on the other hand, is clearly happy to hit companies hard for data protection breaches. Whilst I support that approach – poor data protection practices invariably arise from poor information security regimes coupled with a cultural disregard for personal data – I’m somewhat concerned that heavy penalties will deter firms from voluntarily notifying individuals or authorities about breaches when they occur. In this particular case, Zurich voluntarily notified its customers of the loss, but I’m guessing that other financial firms in the same position might think twice in light of this penalty (government authorities and any unregulated body can of course carry on with relative impunity, not being subject to the FSA’s regime).
So what does all this mean? Whilst I fully support meaningful penalties for organisations that systemically fail to protect personal data, I’m concerned that the creation of scapegoats will simply serve to deter organisations – and financial organisations in particular – from voluntarily reporting incidents when they occur. We need to level the playing field such that fines are proportionate to the offence and the organisation’s ability to pay, regardless of size or sector. We need to consolidate to a single regulator for a single issue, rather than sector-specific regulators determining their own scale of penalties. And we need the Information Commissioner to recognise that after 20 years of promotion and awareness, it’s time to focus his resources on effective enforcement. Only then will all organisations, and public authorities in particular, start to treat personal data with the respect it deserves, and stop trying to duck accountability for protecting it properly.
The open season on Facebook (1 Jan – 31 Dec, no sniping on Sundays or religious holidays) continues with the sort of vigour that the media normally reserves for footballers’ wives and anyone who’s ever been in a talent show. Facebook brought some of this upon themselves with their repeated, poorly-publicised and complicated revisions to their privacy policies. But yesterday’s ‘headline‘ was that a researcher has used a script to skim all the public domain text from the social networking site and compiled it into a single file.
The only data in the file (which, as someone who doesn’t frequent the torrents, I don’t have a copy of) is apparently the public domain information on Facebook – anything which was set to ‘private’ was not accessible. The file is only a few GB in size, which suggests it’s just text (please correct me if I’m wrong), so the richness of the pictures isn’t in there.
If the facts have been reported correctly then from a legal perspective nothing has happened here. Someone has taken a large volume of searchable public domain information and put it into a file where it is searchable. Data Subjects gave their consent to the publication at the time they originally published it. Of course there’s likely to be material in there that is subject to intellectual property rights, but it wasn’t our researcher who put it onto Facebook in the first place. I imagine that a skilled grep-er could find information in there that isn’t easily spotted using Facebook’s own search tools, and that some interesting patterns might emerge, but I’d also imagine that much of the context and interconnectivity of the information was lost in the download process, so it’s swings and roundabouts for the value of the offline data.
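As a trivial illustration of the sort of pattern-matching a skilled grep-er might run over a flat text dump like this – the sample lines below are invented, and real-world matching would need far more robust patterns:

```python
import re

# Invented sample lines standing in for a scraped text dump.
dump = """\
Dave Smith: off to spain next week, house will be empty!!
mary.jones@example.com posted 'my new number is 07700 900123'
"""

# Email-shaped and UK-phone-shaped strings fall out in one pass,
# alongside free-text hints (holidays, empty houses) that the site's
# own search tools would never surface together.
emails = re.findall(r"[\w.]+@[\w.]+", dump)
phones = re.findall(r"\b0\d{4}\s?\d{6}\b", dump)
print(emails, phones)
```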
And yet… and yet… the media still throw up their hands in horror and cry that the death of privacy is upon us. That may be so, but it’s not because someone’s downloaded Facebook. Our problem is that in modern society we seem to think of privacy as being a form of urban anonymity* – less ‘the right to be left alone,’ and more ‘the right to do whatever the hell we please because nobody’s going to know who we are when we do it’.
This is not privacy, but an uncontrolled and irresponsible form of pseudonymity, where we assume that people don’t know our other personae: the ‘me’ on Facebook isn’t the ‘me’ at work or the ‘me’ that visits my parents or the ‘me’ that goes down the pub. These are all legitimate personae that we choose to portray, but we keep them from others because they simply don’t mix. Our fear stems from the threat of those personae ever getting together in public with a few drinks and telling the world who we really are. Once that happens, it’s all over for privacy under the urban anonymity model – there’s no way to separate those personae once their interconnectedness has been made public (particularly if your boss – or worse still, your mother – finds out what you did down the pub).
It’s never pretty when your various personae get together
So perhaps it’s time we look to a pre-industrial past for fresh concepts that will help us to understand privacy in the age of social networking, and to respond to non-incidents such as this in a more meaningful way.
Our current model is one of urban anonymity, but is that a reasonable model for the Internet? Surely we actually engage in communities? The Internet may provide an enormous anonymous backdrop, but most of what we do within it is community-driven, with those communities bounded by interests, professions, territories, applications, providers etc.
In a small community, there is no anonymity: everything you do in a public place is public knowledge. Your childhood sweethearts, exam successes and failures, your drunken teenage nights, health problems, run-ins with the police, financial problems – anything that happens in public becomes public domain, and you are completely identifiable by your peers when it happens. Yet village communities rarely seem to schism, ostracise or generally form lynch mobs (except when it’s as part of the local tradition, of course). Because everyone’s life is a matter of public record, and sooner or later everyone says or does something in public they’d sooner forget, then that quality of forgetting becomes embedded in the community: even if we can’t forgive, we choose to forget because we hope others will do the same for us. Otherwise we’d never venture out of our houses for fear of twitching net curtains.
So is this a valid model for considering how we use the Internet? I think so. We don’t go searching for embarrassing things online done by complete strangers (well I don’t anyway) unless they are truly staggeringly stupid/naked/funny, but we are interested when even a minor incident involves someone we know, or who exists within one of our online communities. Those are people that seek forgetfulness, and from whom we may require that forgetfulness to be reciprocated. The Internet itself won’t forget anything, so there’s no point in trying to erase the evidence.
As adults we fret that the re-appearance of a photo of teenage high spirits (i.e. that one wearing nothing but a pair of pants on your head and a winning smile) may come back to haunt us, but younger generations will have learned to see the Internet for what it is: a global network of interconnected communities, rather than a homogeneous mass of users. Maybe they’ll be better at forgetting than we – or computers – will ever be. And maybe they’ll come up with a more grown-up model for online privacy than we currently use.
* – Thank you Robin for so ably explaining this idea to me last year…
The BBC reports that the financial failure of gay teenager magazine XY, and its associated database, has given rise to a painful privacy conundrum: what happens to the database of registered site users?
When a company fails, it is normal practice for the administrator to seek the greatest possible asset value on behalf of the creditors. In some cases this means running the company on their behalf, but most of the time the administrator will sell off assets, or offer them up to creditors in lieu of debts. 10 years ago, the administrator would have been selling off the IT assets based upon their hardware value, but as businesses become increasingly aware of the (often intangible) value of data assets, these are being put up for sale as well.
That all seems reasonable, but we run into a fundamental conflict where the data assets contain personal information. A sale of the asset raises two principal problems from a data protection perspective: the change of Data Controller, and the potential change of purpose of the personal information. The database has very little commercial value to the buyer unless the vendor can obtain consent from data subjects to both of these changes; without that consent, the buyer would be in breach of the Data Protection Act 1998. In practice the new consent is so complex to obtain that this situation rarely arises in Europe, where the European Data Protection Directive provides parity of protection across member states.
But the US has no equivalent Federal legislation. Contrary to popular belief, US citizens have no explicit constitutional right to privacy (although various constitutional amendments grant partial protections), and instead achieve privacy through a powerful Federal Trade Commission, individual State legislation, and the ever-present threat of class action lawsuits against any company that infringes its own privacy policies.
And hence we have the situation arising with XY.com. The database, containing personal information about many tens of thousands of young gay men – many of whom will not yet have decided upon their own sexuality, or told family and friends about it – is now up for grabs. The creditors are keen to obtain the maximum value for the database, and this might include selling it for commercial purposes at odds with the original intentions. In the US, this may be legal, but the situation becomes more complicated when we take into account that, because of the global nature of the Internet, it is inevitable that EU citizens will be in that database. Does the Data Protection Directive apply? Can they demand protection of their personal data? As Privacy International’s Simon Davies points out,
“The selling off of private information, gathered under the supposition of privacy, is bad enough … Even worse if you’re forced into it. And positively untenable when the information is connected to kids who are dealing with a dawning sexual reality that in some instances is even more fraught than what straight kids go through. … I would argue that this is a case where the Information Commissioner should write directly to the US and ensure action is taken.”
That point about intervention by the Information Commissioner’s Office is an important one, and I agree that the Commissioner should get involved. But will the US listen? Probably not. More likely, the lawyers will weigh up the threat of a meaningful lawsuit being brought by young gay men in the EU, who may well have to disclose their details in order to take action (many will not wish to do so), and decide that the risk is acceptable. The situation is about as far from ideal as it could be, and underlines the pressing need for reform of the legal arrangements for transfer of personal data about EU citizens to the US in light of the general failure of the Safe Harbour agreement and companies’ poor implementations of Binding Corporate Rules.
The Information Commissioner launches his annual report today, and I hope that, as his office publicly reviews the past year and speaks of the challenges of the year ahead, the protection of those individuals threatened with a loss of privacy by the potential sale of XY.com’s database is one of the topics on his agenda.
* – This was almost certainly because of a lack of understanding of ID issues rather than a lack of compassion for the transgender community, and I’m not for a moment suggesting any prejudice on her part.