One of the greatest challenges faced by Privacy and Data Protection professionals is demonstrating that their organisations have complied with the requirements of the various laws governing the handling of personal data. The freshly revised BS10012 can help organisations to meet their privacy management obligations.
The EU Data Protection Directive (1995) has created a legislative landscape whereby each EU Member State has implemented local data protection laws that reflect their interpretation of the Directive and their local cultural and commercial sensitivities (Germany, for example, has famously rigorous data protection laws; Spain’s data protection act mandates the complexity of passwords). Member States have then applied their own regulatory approach, so that countries such as the UK and Ireland are perceived as traditionally having a relaxed, hands-off approach to enforcement, whereas France and Germany are quick to apply tough penalties for data protection infringements.
Then we have the added complexity of international data protection laws, and how organisations in EU Member States interact with other countries, in particular the US, which has a sectoral approach to privacy. Personal data cannot be transferred out of the EU to other countries unless suitable legal safeguards are in place, which can be achieved in a number of ways: a decision of ‘adequacy’ from the European Commission, confirming that the destination country has suitable data protection laws and enforcement; ‘model clauses’ to which all parties subscribe to bring processing under the remit of EU laws and EU courts; ‘binding corporate rules’, which provide similar controls but permit them to be tailored to fit the specific relationship; or ‘explicit consent’ from the data subject to the transfer and processing (something which is much harder to achieve and manage than might first be thought). In the case of US transfers, organisations can also subscribe to the EU-US Privacy Shield, a legal framework designed to achieve similar outcomes.
But amidst this complexity there is an underlying challenge that none of these legal mechanisms helps to address: how should organisations deliver the desired outcomes mandated in these laws?
Our problem is the contextual, changing and culturally sensitive nature of privacy. What works in one organisation does not necessarily work in the next; controls that might be appropriate in one country could hinder normal business operations in another; personal data processing that is considered intrusive on one continent might be of no consequence to individuals on another. In this context, laws that stipulate detailed control objectives for organisations would be inappropriate, since the controls would in all likelihood be wrong in almost any situation (perhaps the most extreme example of this was the ill-fated Identity Cards Act, which mandated the architecture for the system). The new General Data Protection Regulation (GDPR) does include some control objectives, such as the requirement for a data protection officer or the use of data protection impact assessments, and it remains to be seen how successfully organisations can respond to these demands.
That’s why the British Standards Institution’s freshly rewritten BS10012 Data protection – Specification for a personal information management system is a welcome development. The original publication was arguably too high-level to be of much use as an implementation tool, but the revised version, which is now open for consultation, provides a much more consistent, measurable way to implement the requirements of the GDPR by providing control objectives for data protection management, rather than relying on outcomes alone. It’s by no means a panacea for privacy management, but the approach specifies the organisational needs, leadership, planning, support, operational requirements, evaluation and improvements needed to implement, maintain and improve a personal information management system that is fit for purpose. The draft is open for comments until 7 November 2016, and I would urge you to take the time to read and comment.
Declaration of interest: I volunteer on the British Standards Institution’s IDT/001/0-/04 Data Protection committee.
The Department for Business, Innovation & Skills has released Debbie Wosskow’s independent review on the potential of the sharing economy, “Unlocking the sharing economy: an independent review”.
…or “How the Reputation Management Industry Came of Age”
Much fuss has been made in the press about the European Court of Justice’s decision that search engines (and Google in particular) must enable a ‘right to be forgotten’ – that is, that certain search results must be disregarded if the data subject can substantiate that they are not relevant to the search. Some of the best coverage of this comes from Chris Pounder, who reflects on the misinformation and press coverage and points out that Google routinely informs users when results have been changed at the request of a third party.
Google has implemented the ruling, and its process requires the user to prove that they are the data subject (and in all likelihood to check that the data subject is an EU citizen) and to put forward their reasons for the redaction – and a redaction is what it is: when Google removes a result, a notice of the removal appears at the foot of the search results, but the result itself is not provided.
The fact that Google notifies users when a search has been redacted is an important privacy protection, and one for which Google should be applauded: without that transparency, we might never be aware when a change has taken place, which in turn opens up a path for censorship and manipulation. Censorship is only truly effective if it is covert; if users are made aware that something has been modified, then they at least stand a chance of tracking it down.
But the idea that personal data might be struck from a search database as a result of this ruling is a fallacy: the rate of data collection, aggregation, sharing and analysis in any search engine is such that any ‘forgotten’ (i.e. deleted) reference would most likely be repopulated in a matter of hours, thereby rendering the original request to be forgotten redundant. So in order to comply with this requirement, Google and others will have to maintain a register of ‘redacted terms’ and possibly ‘redacted URLs’ – those search results which have been deemed as forgettable.
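To make the mechanics concrete, here is a minimal sketch of how such a register might work. Everything here is an assumption for illustration – the class name, structure and behaviour are hypothetical, not how any real search engine implements delisting – but it shows the essential point: the underlying index keeps the data, and the redactions are re-applied at query time.

```python
# Hypothetical sketch of a 'redacted terms' register. The index itself is
# untouched; redactions are filtered out each time a query is served, and
# the count of withheld results supports the transparency notice at the
# foot of the page.

class RedactionRegister:
    """Maps a query term to the URLs deemed 'forgettable' for that term."""

    def __init__(self):
        self._redactions = {}  # term (lower-cased) -> set of redacted URLs

    def add(self, term, url):
        """Record that `url` should not be shown for searches on `term`."""
        self._redactions.setdefault(term.lower(), set()).add(url)

    def filter(self, term, results):
        """Drop redacted URLs and report how many were withheld."""
        redacted = self._redactions.get(term.lower(), set())
        visible = [r for r in results if r not in redacted]
        return visible, len(results) - len(visible)


register = RedactionRegister()
register.add("John Smith", "https://example.com/old-story")

results, withheld = register.filter(
    "john smith",
    ["https://example.com/old-story", "https://example.com/recent-news"],
)
# `withheld` lets the engine display a notice that results were removed,
# rather than silently censoring them.
```

Note that the redaction is keyed to the term, not the URL alone: the same page can remain findable under other searches, which is exactly the ‘redacted in certain contexts’ behaviour the ruling produces.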
That gives rise to the inevitable question about who determines what is a reasonable assertion for taking down a search result? Google has an advisory committee that oversees the process, and which has had to preside over 12,000 requests and counting in a matter of days. That’s too many requests for any sensible scrutiny of each one, so it’s reasonable to assume they’ll either set the bar very high or very low for such takedowns to be accepted.
And how do they judge the validity of a takedown request? For example, let’s imagine that a celebrity broadcaster with a history of charitable works is convicted for a string of sexual assaults. Should the individuals whom he supported be able to take down search references to his name bringing up associations with their names? I imagine that the broadcaster would want his charitable works to remain on record, and he might even argue for his own takedown request so that if someone searches on his name, plus the beneficiary of his charitable work, then results showing his conviction should not show up.
That’s not a process that is going to operate on an Internet-scale very easily.
Some commentators have suggested that this is the end of free speech on the Internet, and that politicians and corporates will use the ruling as a way to stifle or manipulate freedom of speech. That’s certainly a potential risk, particularly if this ruling were to stand (it will be challenged), if it were applied to all search facilities (e.g. within newspaper websites), and if search engines cease to notify users of modifications to search results. But the Internet has a habit of finding its way round such obstacles, and I’m confident it will this time as well.
The most significant outcome, at least in the short term, is likely to be the benefit for reputation management companies, who will be able to sell ‘right to be forgotten’ services to individuals, where the data subject notifies the company, which in turn notifies all the major search providers and checks for compliance with that notification. Search providers will probably welcome such a service if it saves them having to operate their own advisory committees.
So, the ‘right to be forgotten’? Not a very accurate description. I’d like to propose the ‘right to have facts redacted (but not forgotten) in certain contexts, until we figure out a better way to live with our mistakes’ as a more meaningful and useful term.*
* And one which demonstrates why I’ve never pursued a career in product branding
The Government Digital Service (GDS) has announced the next round of procurement for the Identity Assurance Programme (IDAP), which will expand the use of a federation of private-sector Identity Providers (IDPs) to enable access to public services. There are few details at this time, beyond the announcement of a supplier event on 28th April.
Four years in, great progress has been made in cracking a very difficult project, but will this procurement be enough to get IDAP through the next year, and what does the future hold for identity assurance? Given that we’re all gearing up for tomorrow’s big oven-ready lasagne race at Aintree, let’s look at the risks associated with bidding for IDAP services.
How does Identity Assurance differ from other government ID approaches?
I’ve talked at length about identity assurance, and how IDAP differs significantly from ‘traditional’ government ID approaches, but if you’re not familiar with the programme then here’s a quick summary (and you can find out more at the GDS blog).
In the majority of population-scale identity schemes (including the abandoned National Identity Scheme), the government operates a central population database, which is used to authenticate individuals when they transact with public services. Under IDAP, government provides a federation hub, but IDPs come from the private sector and are responsible for registering and verifying users for the service. Users may hold as few or as many identities as they wish, from as many providers as they wish, and the system is pseudonymous (i.e. no ‘root’ ID). Relying parties specify the level of assurance they need in a given transaction, and the IDP is paid accordingly, so for a low-risk transaction (e.g. a query about library services) there is a low level of assurance, whilst for a major transaction (e.g. applying for a passport) there is a high level of assurance from the IDP.
There are no identity numbers, no identity cards, and no compulsion on users to register, or to maintain the accuracy of their data. A ‘trust scheme’ operator oversees the service and ensures that everyone plays by the rules.
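The level-of-assurance mechanism described above can be sketched very simply. This is an illustration only: the transaction names, level numbers and provider capabilities below are assumptions I have invented for the example, not the real IDAP specification.

```python
# Illustrative sketch of level-of-assurance (LoA) matching. A relying party
# declares the LoA it needs for a transaction, and the federation hub can
# only route the user to an IDP able to assert identity at that level.
# All names and numbers here are hypothetical.

LOA_REQUIRED = {
    "library_enquiry": 1,       # low-risk: self-asserted identity suffices
    "tax_self_assessment": 2,   # verified identity required
    "passport_application": 3,  # verified identity plus strong evidence
}

IDP_CAPABILITIES = {
    "idp_a": 2,  # the highest LoA each (hypothetical) IDP can assert
    "idp_b": 3,
}

def eligible_idps(transaction):
    """Return the IDPs able to meet the transaction's assurance level."""
    needed = LOA_REQUIRED[transaction]
    return sorted(idp for idp, loa in IDP_CAPABILITIES.items() if loa >= needed)

print(eligible_idps("passport_application"))  # only the LoA-3 provider qualifies
print(eligible_idps("library_enquiry"))       # any provider will do
```

The commercial point follows directly from this structure: because the IDP is paid according to the assurance level it delivers, higher-assurance registrations cost the IDP more to perform but attract higher fees.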
What is the current status of the programme?
The first round of IDAP procurement took place in 2012, and resulted in eight IDPs being recruited to the framework, of whom three declined to go through on the first call-off contract. That leaves us with DigIdentity, Experian, Mydex, Post Office, and Verizon Business. They have been working on the first services, which will connect to a hub provided by GDS. The first private beta services are now running, and will shortly be made public, with selected users being able to enquire about their driver records using IDAP. In anticipation of expanding the breadth and depth of the service, and increasing robustness, GDS is now returning to the market to seek additional IDPs.
GDS is hosting a procurement event on 28th April, at which the procurement will be explained, and candidate IDPs can have their questions answered. There is one burning question I’d like to have answered at that event, and in anticipation of the end of the month, I’ll outline it here.
The challenge for GDS
This next round of work is not going to be without its challenges: IDAP has to deliver some ambitious objectives, including:
– providing services for multiple central government departments with conflicting needs, architectures, and timescales;
– enabling cross-channel service delivery that enables users to engage with IDAP online, over the telephone, and face-to-face;
– shifting delivery away from the ‘traditional’ public-sector providers who are equipped for major project delivery, and instead working with a range of small and large companies, some of whom are not accustomed to working with the UK government;
– rolling out a robust service delivery that does not risk denying services for users if systems face teething problems;
– creating collaborative federation between potentially competing IDPs;
– establishing a trust framework and oversight mechanism that ensures legal protection for all parties;
– building consumer confidence in a new concept which does not yet have a recognised brand, interface or use case;
– growing an ecosystem of IDAP services which is as attractive for private sector providers and relying parties as it is for public authorities.
Each of these is a major change for central government; collectively they are a huge obstacle, and whilst GDS has a track record of delivering ‘impossible’ projects under challenging circumstances, there is no denying that this next phase of work for IDAP is likely to be the toughest yet.
Commercial challenges for potential IDPs
But the challenges aren’t exclusive to GDS – in fact, the current and future IDPs have perhaps the toughest environment of all, since the risks are rising but the possible rewards are a long way off, and we don’t yet have a commercially viable IDAP ecosystem. IDPs are currently paid on a “per unique user, per IDP, per annum” basis: that is, for each person who uses an IDP to access IDAP services, the IDP is paid a single fee each year, even if that person also uses other IDPs. That means that the IDP must win over users and persuade them to use IDAP if it is going to recoup its investment in IDAP services.
Anecdotal evidence suggests that the minimum cost of standing up an IDP service which could pass muster with the trust scheme, would be in the region of £1.5m – £2m (probably much more for a large company). Add to that the costs of operating, marketing, auditing, etc, and we’re probably looking at another minimum £500,000 per annum. This isn’t a cheap proposition for the IDP, and the up-front costs drive all the risk to the IDP, with no assured transaction volumes from government.
The transaction payments to IDPs are not publicly available, but if we guess at, say, £20 per user per annum, with an operating cost of £10 to verify and credential each user, that means an IDP would need to run a population of 250,000 users in the first year just to have a chance of breaking even. That’s going to be a problem for stretched Sales Directors who are evaluating bid risks and trying to determine where to focus their sales resources. Why bid for the high-risk job with the deferred payback, when they could go for safer projects with up-front payment? (That is, if any such projects still exist in the public sector, but that’s another matter.)
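The break-even arithmetic above can be laid out explicitly. All of the figures are the guesses from this article (£20 fee and £10 verification cost per user per annum, roughly £2m set-up plus £500k annual operating cost), not published rates.

```python
# Back-of-envelope break-even check for a candidate IDP, using the
# article's guessed figures. None of these numbers is an official rate.

fee_per_user = 20            # guessed annual payment per unique user (£)
cost_per_user = 10           # guessed cost to verify and credential a user (£)
fixed_costs = 2_000_000 + 500_000  # set-up plus first-year operating costs (£)

margin_per_user = fee_per_user - cost_per_user   # £10 contribution per user
break_even_users = fixed_costs // margin_per_user

print(f"Users needed to break even in year one: {break_even_users:,}")
# → 250,000, matching the figure quoted in the text
```

The sensitivity is worth noting: if the per-user fee were £15 rather than £20, the margin halves and the break-even population doubles to 500,000 – which is why the unknown payment rates dominate the bid risk.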
And the political challenge…
In just over a year from now, Britain will go to the polls. In his Editor’s Blog, Bryan Glick considers how GDS is likely to become a focal point for political fighting both before and after the next election. If we end up with a Conservative-led government, then the GDS vision is safe; but if we have a Labour-led government, then there will be those wishing to exact revenge on Conservative policies, including senior political figures who still support the idea of National ID Cards, and in that situation IDAP looks like a pretty easy target for them to cancel and switch back to a more traditional ID approach. Our IDPs would find their contracts cancelled without having made so much as a penny, and potentially having sunk several million pounds into their delivery.
IDAP is therefore a high-risk commercial proposition, not just because of the nature of the service and its commercial model, but because of broader political pressures, and it would be a negligent Sales Director who didn’t take that into account when deciding where to focus bid resource. GDS could of course do many things to mitigate this risk, including offering up-front payments to IDPs; ensuring that there are appropriate termination clauses in the contracts; delaying the delivery phase until after the election; or changing the commercial model altogether.
So my question to GDS is: what can GDS do to assure candidate IDPs that the risks associated with bidding and delivery are successfully mitigated by the potential prize and the likelihood of winning it? Until that question is answered, I think I’d rather put my money on a 5-horse accumulator than an IDP bid team.
[Declaration of interests: I am not associated with any of the incumbent IDPs or bidders, although I was part of the Post Office’s bid team. I have an unpaid role in the GDS Privacy and Consumer Advisory Group. And I’d like to see IDAP succeed, because a return to ID Cards doesn’t bear thinking about]
This week is Gartner’s annual Identity and Access Management shindig in London. I was fortunate enough to attend for the first time in 2011, when there was a real sense of mixed feelings amongst the delegates: the big vendors were split into those who were upset at the cancellation of the National Identity Scheme, and those delighted at the opportunity to compete for whatever might replace it; end user organisations were generally ambivalent, but for some there seemed to be a relief that they could move on from the black hole created by ten years of the NIS.
Three years later, I’ll be speaking in this afternoon’s session on the government’s Identity Assurance programme, and specifically how it might disrupt the way that we buy and sell identity services in the UK.
The Identity Assurance Programme (IDAP) depends upon reuse of existing credentials through federation, rather than commissioning substantial new systems, and providers are having to seek innovative business models to justify their investment. This has created a somewhat surprising list of Identity Providers (IDPs) in the first tranche of suppliers: some welcome SMEs, and a new role for the Post Office, but no big name UK online brands, retailers or financial services providers.
IDAP’s success will rest upon whether potential providers and consumers of IDAP services can be persuaded that IDAP’s interests align with their own, and that any investment they make in technology, marketing and business transformation will give them a future return. The Government Digital Service will have their work cut out delivering the commercial models that these companies need to justify their investments – maybe we’ll see some good ideas at today’s conference?
One of our long-standing problems with Internet privacy is the tracking of user activities, more often than not without any meaningful opt-out mechanism: if you don’t want to be profiled by, say, Facebook, then don’t go on Facebook. That’s all very well to say, but no use to someone whose social life depends on the social network (it’s one of the areas which the new EU Data Protection Regulation might be able to address, if it ever sees the light of day). There is, however, a sense of balance in Facebook mining user data, since the site offers a free service which its users find invaluable. Users receive value in return for the value in their data. Not a transparent relationship, almost certainly not equitable, but at least it’s commonly understood.
More disturbing is the potential for behavioural monitoring and online tracking by communications service providers. When Phorm’s adventures in deep packet inspection came to light, users were quite justifiably outraged: secret monitoring of their online use of a paid service by a third-party organisation without their knowledge or consent was clearly a big step over the line of acceptable intrusion. When users pay for their services, they expect a degree of respect for their privacy.
But there’s no doubting that a key aspect of consumer empowerment is the potential for users to trade some of their privacy for a reward. If behavioural data is that valuable to advertisers, then why not pass that value all the way through the chain to the data subject, rather than holding it with a service provider?
It’s interesting to see AT&T taking this a step further in Austin, Texas, by offering discounts to internet customers who choose to submit to online profiling of their behaviours. Customer plans are discounted by 30% for customers agreeing to opt into “AT&T Internet Preferences,” which is the company’s user profiling tool, used to target behavioural advertising. I’d be interested to see the small print – does it allow users to use VPNs to obscure their online activities from AT&T? I suspect the relevant protocols would be blocked.
Whilst it’s not a service I’d personally subscribe to, it’s good to see a provider offering to extend the profiling value chain all the way back to the user. As Constantijn van Oranje-Nassau said at this week’s IAPP Data Protection Congress, “you can be at the table or on the menu,” and even if rewarding consumers for surveillance isn’t quite a seat at the table, at least we’re getting to haggle with the maître d’ about whether there might be a seat available.
This year’s RSA Conference Europe is themed around how ‘Big Data Transforms Security’, with big data both requiring support from and feeding into the corporate security function. The tone was set by one quotation from RSA’s CEO Art Coviello in his welcoming keynote, where he proclaimed that “Anonymity is the enemy of privacy.” In other conference sessions, the implications of processing personal information have come up time and again as flashpoints between the security and privacy communities – but are these disciplines really poles apart?
In his keynote, Coviello went on to explain that in his opinion anonymity is used by digital adversaries to misuse data without fear of being caught or prosecuted. That’s fighting talk for privacy advocates, who would of course argue that anonymity is a critical privacy tool, which must be interpreted in subtle and granular ways: zero-knowledge proofs, anonymous attributes and pseudonymous interactions are applications of anonymity which preserve privacy without impeding business objectives or putting data at risk. But within the corporate user environment, which is RSA’s customer heartland, the argument holds sway, and few employees would have an expectation of privacy that extends to anonymity in their working environment.
Not all of the keynote was quite so contentious, and Coviello used the analogy of privacy and security functions as opposite magnetic poles, which can attract each other when aligned, and can form a powerful bond. It’s a lofty ambition, but for many organisations the security and privacy functions still exist in a state of polar repulsion, with security and privacy teams located in different divisions, serving different masters for different outcomes. Privacy functions in particular, hidden away from the sharp end of business delivery in the likes of compliance or legal teams, too often retain a risk-averse culture and a tendency to say ‘no’ when confronted with a challenging business objective.
Unfortunately, for organisations which suffer this bipolar management of personal information, the nexus between security and privacy is too often in incident management, as the Privacy Officer and Security Officer fight over who should have secured the missing personal data asset, and what to do about its loss. The result is that everyone loses, including the individuals whose data has been leaked or misused, and the security and privacy functions remain in conflict, confined to reacting to incidents rather than taking proactive control of processing risks.
If organisations are to exploit big data, then privacy and security functions need to align to create a shared understanding of risk throughout every part of the project lifecycle. Business cases and change requests should be checked not only for security compliance, but also to ensure that they meet corporate risk appetites in the handling of personal information, as well as legal and sectoral responsibilities for data protection. A truly aligned security and privacy operation should feature co-location of delivery teams, both reporting to a single responsible officer who can identify and resolve problems before they boil over, but equally can ensure that risk decisions take into account both security and privacy needs.
The RSA Conference will of course remain the preserve of the information security community, but with this level of focus on privacy needs, it’s likely to become a compelling event for privacy professionals too – and that can only be a good thing for personal data risk management.
[Declaration of interest: I am a member of the RSA Conference Europe programme committee]
The call for papers for 2014’s IAPP Europe Data Protection Intensive comes to a close this Wednesday. If you’re a privacy professional then this will be the most important event in London next year, and will be well worth attending.
You can find more details about the event here: https://www.privacyassociation.org/events_and_programs/iapp_europe_data_protection_intensive_2014/