Identity, Privacy and Trust

September 14, 2016  1:05 PM

Data Protection – Objectives or Outcomes?

tobystevens tobystevens Profile: tobystevens
GDPR, privacy

One of the greatest challenges faced by Privacy and Data Protection professionals is demonstrating that their organisations have complied with the requirements of the various laws governing the handling of personal data. The freshly revised BS10012 can help organisations to meet their privacy management obligations.

The EU Data Protection Directive (1995) has created a legislative landscape whereby each EU Member State has implemented local data protection laws that reflect their interpretation of the Directive and their local cultural and commercial sensitivities (Germany, for example, has famously rigorous data protection laws; Spain’s data protection act mandates the complexity of passwords). Member States have then applied their own regulatory approach, so that countries such as the UK and Ireland are perceived as traditionally having a relaxed, hands-off approach to enforcement, whereas France and Germany are quick to apply tough penalties for data protection infringements.

Then we have the added complexity of international data protection laws, and how organisations in EU Member States interact with other countries, in particular the US, which has a sectoral approach to privacy. Personal data cannot be transferred out of the EU to other countries unless suitable legal safeguards are in place, which can be achieved in a number of ways: a decision of ‘adequacy’ from the European Commission confirming that the destination country has suitable data protection laws and enforcement; ‘model clauses’ to which all parties subscribe to bring processing under the remit of EU laws and EU courts; ‘binding corporate rules’, which provide similar controls but can be tailored to fit the specific relationship; or ‘explicit consent’ from the data subject to the transfer and processing (something which is much harder to achieve and manage than might first be thought). In the case of US transfers, organisations can also use the EU-US Privacy Shield, a legal framework to which organisations can subscribe to achieve similar outcomes.

But amidst this complexity there is an underlying challenge that none of these legal mechanisms helps to address: how should organisations deliver the desired outcomes mandated in these laws?

Our problem is the contextual, changing and culturally sensitive nature of privacy. What works in one organisation does not necessarily work in the next; controls that might be appropriate in one country could hinder normal business operations in another; personal data processing that is considered intrusive on one continent might be of no consequence to individuals on another. In this context, laws that stipulate detailed control objectives for organisations would be inappropriate, since the controls would in all likelihood be wrong in almost any situation (perhaps the most extreme example of this was the ill-fated Identity Cards Act, which mandated the architecture for the system). The new General Data Protection Regulation (GDPR) does include some control objectives, such as the requirement for a data protection officer or the use of data protection impact assessments, and it remains to be seen how successfully organisations can respond to these demands.

That’s why the British Standards Institution’s freshly rewritten BS10012 Data protection – Specification for a personal information management system is a welcome development. The original publication was arguably too high-level to be of much use as an implementation tool, but the revised version, which is now open for consultation, provides a much more consistent, measurable way to implement the requirements of the GDPR by providing control objectives for data protection management, rather than relying on outcomes alone. It’s by no means a panacea for privacy management, but the approach specifies the organisational needs, leadership, planning, support, operational requirements, evaluation and improvement needed to implement, maintain and improve a personal information management system that is fit for purpose. The draft is open for comments until 7 November 2016, and I would urge you to take the time to read and comment.

Declaration of interest: I volunteer on the British Standards Institution’s IDT/001/0-/04 Data Protection committee.

November 27, 2014  7:53 AM

Identity assurance and the sharing economy

tobystevens tobystevens Profile: tobystevens
Identity assurance

The Department for Business, Innovation & Skills has released Debbie Wosskow’s independent review on the potential of the sharing economy, “Unlocking the sharing economy: an independent review”.

I haven’t had an opportunity to read the document in full yet, but there are recommendations in there for GOV.UK Verify, specifically that the service should be opened up to private sector businesses in 2015. The recommendation is entirely in keeping with GDS’ stated aspirations for Verify, but I would imagine it would be difficult to fulfil within the stated time, not because of lack of will or funding, but simply because of the time needed to extend the necessary trust frameworks and hub functionality into attribute provision. That’s a big step for identity assurance, and GDS’ strategy of iterative delivery will want to build up to it over time.
It’s important to understand that attribute exchange doesn’t mean wholesale sharing of personal data between the parties: rather, an individual can authorise a provider with whom they have a relationship to release a defined set of personal data to a relying party, with an associated level of assurance so that the relying party understands how trustworthy that data is. In most instances that would be done as a one-off transaction, rather than through any ‘gateway’ or similar ongoing sharing capability – indeed, attribute exchange offers the potential to do away with many of the gateways currently used to permit free sharing of personal data between government departments. From a privacy perspective, that has to be a good thing.
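As a rough illustration of that one-off model (the names and fields here are my own invention, not the Verify schema or any real specification), an attribute release might be modelled as a short-lived, minimal assertion rather than an open data channel:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical model of a one-off attribute assertion; field names are
# illustrative only, not drawn from any real GOV.UK Verify schema.
@dataclass(frozen=True)
class AttributeAssertion:
    provider: str              # the provider the user authorised
    relying_party: str         # who may consume the assertion
    attributes: dict           # the defined set of personal data released
    level_of_assurance: int    # e.g. 1 (low) .. 4 (high)
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    validity: timedelta = timedelta(minutes=5)  # one-off: expires quickly

    def is_valid(self, now=None):
        """A relying party checks freshness before trusting the data."""
        now = now or datetime.now(timezone.utc)
        return now < self.issued_at + self.validity

assertion = AttributeAssertion(
    provider="ExampleIDP",
    relying_party="library-service",
    attributes={"over_18": True},   # minimal disclosure, not wholesale sharing
    level_of_assurance=1,
)
print(assertion.is_valid())  # True while fresh; no ongoing 'gateway' exists
```

The point of the short validity window is that nothing persists between the parties: once the assertion expires, the relying party has to ask the user again, which is the opposite of a standing data-sharing gateway.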
I would guess that in the first instance, attribute exchange capabilities will be confined to the selected identity providers and service providers. Identity assurance only works if all parties can trust each other, and therefore be trustworthy for service users. Any organisation that wishes to offer or consume attributes within the identity assurance ecosystem will need to have subscribed to the trust scheme; implemented the technologies needed to interface with the hub; had those certified as fit for use; and then built the relationships needed with relying parties so they are able to ask service users for the appropriate attribute data from the appropriate source. 
It is also worth bearing in mind that by the time an organisation has done all that, it is effectively able to be an identity provider in its own right if it wishes to, as it is then able to issue and consume both identity and attribute data. That means that once there is a business case for doing so, the existing identity providers (and those that will emerge from the forthcoming procurement process) will be the private-sector organisations effectively able to issue and consume identity and attribute data, just as recommended in the review.
Identity assurance has the potential to transform how we exchange personal data, but attribute exchange is not going to happen overnight, regardless of how much money is thrown at it. As business cases emerge for individual private sector organisations to join the sharing economy, the path should be open for them to do so.
[These views are my own and do not necessarily reflect those of any organisation associated with the GOV.UK Verify scheme]

November 19, 2014  11:14 AM

Privacy Seals and Privacy Snake Oil

tobystevens tobystevens Profile: tobystevens
One of the constant problems of privacy is knowing who to trust with your data. Laws, policies, technical controls and trustworthy brands go a long way to building consumer confidence in an organisation’s data handling, but it’s only a matter of time before some bright spark suggests “maybe we could have a privacy seal to prove we’re trustworthy?” After all, on the face of it, this seems like a good idea: a trust mark to demonstrate that an organisation handles personal data in accordance with a defined set of good practices.
The problem is, it just doesn’t work.
There are a number of privacy seal schemes out there, but the majority are US-centric, with key players including TRUSTe, BBBonline, EuroPriSe and WebSeal*. Each organisation offers its members a set of standards, a self-assessment method, and a logo they can use in customer-facing materials.
Advocates argue that the strength of a privacy seal scheme is that it provides its members with a common standard for personal data management. In an environment that is law-rich but standards-weak, the scheme provides confidence that the members are working from an ‘approved’ starting point. Individuals are assured that participating organisations will deliver against these standards, and that they can complain to the scheme in the event of a problem. Members hopefully maintain good practices in the management of personal information because they wish to maintain their certification, and in all likelihood their staff will improve their practices through greater awareness of personal data management.
A privacy seal scheme also provides a basic confidence that an organisation has a degree of commitment to good privacy practices, otherwise why would it bother to engage in the first place? The process of joining a scheme will most likely raise awareness, and result in improved practices.
Unfortunately, there are some significant potential downsides to privacy seals as well. Firstly, the scheme can only be as good as its underlying standards, and there are a range of standards used by the schemes. Consumers may assume that all schemes are equal, thereby obtaining a false sense of assurance that the weaker schemes are in fact respecting their personal data.
Secondly, the schemes use different approaches to certification. EuroPriSe and WebSeal are both independently assessed by experts to ensure that members comply with standards, whereas the entry point for many other schemes is self-certification. That means we have a broad spectrum of possible privacy outcomes for consumers dealing with seal schemes, since organisations can gain entry to some schemes relatively easily.
Thirdly, and perhaps the most difficult of all, is the ability of schemes to monitor and police their members. If you are a scheme operator, dependent upon your members for your income, then the last thing you want to do is to suspend a high-profile member because they’ve failed to submit an annual recertification; or to strike off a member for proven poor privacy practices. You’ll have to do so very publicly for the scheme to maintain its credibility, otherwise the other members, and the public, may accuse you of opaque practices. You’ll need to inspect those members, in response to consumer complaints, to be sure they’re doing what they claim, and those inspections aren’t going to be cheap. And you’ll have to ensure that your members correctly represent the nature and trustworthiness of your scheme, otherwise they might abuse it for their own purposes.
Unfortunately, this last point appears to have been at the heart of a failure for TRUSTe, which is predominantly US-based, and has many thousands of members who use the TRUSTe seal to assure their customers that their data handling practices are up to scratch. TRUSTe has had to enter into an agreement with the US Federal Trade Commission, which has levied a US$200,000 fine, for falling short of a pledge “to hold companies accountable for protecting consumer privacy.” TRUSTe is alleged to have failed to conduct annual recertifications of its privacy seals in at least 1,000 incidents over a five-year period, and to have failed to ensure that its members correctly described TRUSTe as a for-profit entity. The FTC takes this stuff seriously, and has enforcement powers beyond the UK ICO’s wildest dreams, so in all likelihood the agreement offered by the FTC was preferable to facing full regulatory proceedings. TRUSTe has responded to assure members that the problem was remedied long before the fine was levied.
TRUSTe’s woes are not necessarily indicative of problems unique to TRUSTe, but of the fundamental challenge for a privacy seal: how do you stay on top of the practices of all the members, all of the time? Full audits are too expensive for all but a handful of potential members, self-certification is open to abuse, and unless the seal provider can stay on top of that abuse, the credibility of the scheme (and all similar schemes) becomes doubtful.
The UK ICO consulted on the topic a few months back, with a view to whether it should support commercial privacy seals in future, and I argued some of the reasons why that’s not a good idea. I would imagine that they’re having a long, hard think about whether they want to support privacy seals now.
If you want to find out more about trust marks and privacy seals, do check out Gilad Rosner’s definitive paper on the subject.
* (Apparently a key requirement for being a privacy seal provider is a shameful abuse of proper capitalisation)

June 10, 2014  6:15 AM

The Right To Have Facts Redacted (But Not Forgotten) In Certain Contexts

tobystevens tobystevens Profile: tobystevens
ecj, Google, privacy, rights

…or “How the Reputation Management Industry Came of Age”

Much fuss has been made in the press about the European Court of Justice’s decision that search engines (and Google in particular) must enable a ‘right to be forgotten’ – that is, that certain search results must be disregarded if the data subject can substantiate that they are not relevant to the search. Some of the best coverage of this comes from Chris Pounder, who reflects on the misinformation and press coverage and points out that Google routinely informs users when results have been changed at the request of a third party.

Google has implemented the ruling, and its process requires the user to prove that they are the data subject (and in all likelihood to check that the data subject is an EU citizen) and to put forward their reasons for the redaction – and a redaction is what it is: when Google removes results, the existence of a search result is noted at the foot of the search results, but the result is not provided.

The fact that Google notifies users when a search has been redacted is an important privacy protection, and one for which Google should be applauded: without that transparency, we might never be aware when a change has taken place, which in turn opens up a path for censorship and manipulation. Censorship is only truly effective if it is covert; if users are made aware that something has been modified, then they at least stand a chance of tracking it down.

But the idea that personal data might be struck from a search database as a result of this ruling is a fallacy: the rate of data collection, aggregation, sharing and analysis in any search engine is such that any ‘forgotten’ (i.e. deleted) reference would most likely be repopulated in a matter of hours, thereby rendering the original request to be forgotten redundant. So in order to comply with this requirement, Google and others will have to maintain a register of ‘redacted terms’ and possible ‘redacted URLs’ – those search results which have been deemed as forgettable. 
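A register of that kind might work roughly like the following sketch. This is my own simplification, not Google’s implementation: results matching a redaction entry are suppressed (not deleted at source), and a transparency notice is appended so users know something was removed:

```python
# Illustrative sketch of a 'redacted terms' register; this is my own
# simplification, not how any real search engine implements the ruling.
REDACTIONS = {
    # (query term, url) pairs deemed 'forgettable' for EU users
    ("j. bloggs", "http://example.org/old-story"),
}

def search_with_redactions(query, raw_results):
    """Filter raw results against the register and note any removals."""
    kept, removed = [], 0
    for url in raw_results:
        if (query.lower(), url) in REDACTIONS:
            removed += 1          # suppress the result, but do not delete it
        else:
            kept.append(url)
    if removed:
        # the transparency notice at the foot of the results
        kept.append(f"[{removed} result(s) removed under EU data protection law]")
    return kept

results = search_with_redactions(
    "J. Bloggs",
    ["http://example.org/old-story", "http://example.org/new-story"],
)
print(results)
```

Note that the register itself becomes sensitive data: it is, in effect, a list of everything people have asked to have forgotten, which is one reason the ‘forgotten’ label is so misleading.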

That gives rise to the inevitable question about who determines what is a reasonable assertion for taking down a search result? Google has an advisory committee that oversees the process, and which has had to preside over 12,000 requests and counting in a matter of days. That’s too many requests for any sensible scrutiny of each one, so it’s reasonable to assume they’ll either set the bar very high or very low for such takedowns to be accepted.

And how do they judge the validity of a takedown request? For example, let’s imagine that a celebrity broadcaster with a history of charitable works is convicted for a string of sexual assaults. Should the individuals whom he supported be able to take down search references to his name bringing up associations with their names? I imagine that the broadcaster would want his charitable works to remain on record, and he might even argue for his own takedown request so that if someone searches on his name, plus the beneficiary of his charitable work, then results showing his conviction should not show up. 

That’s not a process that is going to operate on an Internet-scale very easily.

Some commentators have suggested that this is the end of free speech on the Internet, and that politicians and corporates will use the ruling as a way to stifle or manipulate freedom of speech. That’s certainly a potential risk, particularly if this ruling were to stand (it will be challenged), if it were applied to all search facilities (e.g. within newspaper websites), and if search engines cease to notify users of modifications to search results. But the Internet has a habit of finding its way round such obstacles, and I’m confident it will this time as well.

The most significant outcome, at least in the short term, is likely to be the benefit for reputation management companies, who will be able to sell ‘right to be forgotten’ services to individuals, where the data subject notifies the company, which in turn notifies all the major search providers and checks for compliance with that notification. Search providers will probably welcome such a service if it saves them having to operate their own advisory committees.

 So, the ‘right to be forgotten?’ Not a very accurate description. I’d like to propose the ‘right to have facts redacted (but not forgotten) in certain contexts, until we figure out a better way to live with our mistakes’ as a more meaningful and useful term.*


* And one which demonstrates why I’ve never pursued a career in product branding

April 4, 2014  10:08 AM

Taking a punt on Identity Assurance

tobystevens tobystevens Profile: tobystevens
GDS, idap, identity, Identity assurance, tScheme

The Government Digital Service (GDS) has announced the next round of procurement for the Identity Assurance Programme (IDAP), which will expand the use of a federation of private-sector Identity Providers (IDPs) to enable access to public services. There are few details at this time, beyond the announcement of a supplier event on 28th April.


Four years in, great progress has been made in cracking a very difficult project, but will this procurement be enough to get IDAP through the next year, and what does the future hold for identity assurance? Given that we’re all gearing up for tomorrow’s big oven-ready lasagne race at Aintree, let’s look at the risks associated with bidding for IDAP services.


How does Identity Assurance differ from other government ID approaches?


I’ve talked at length about identity assurance, and how IDAP differs significantly from ‘traditional’ government ID approaches, but if you’re not familiar with the programme then here’s a quick summary (and you can find out more at the GDS blog). 


In the majority of population-scale identity schemes (including the abandoned National Identity Scheme), the government operates a central population database, which is used to authenticate individuals when they transact with public services. Under IDAP, government provides a federation hub, but IDPs come from the private sector and are responsible for registering and verifying users for the service. Users may hold as few or as many identities as they wish, from as many providers as they wish, and the system is pseudonymous (i.e. no ‘root’ ID). Relying parties specify the level of assurance they need in a given transaction, and the IDP is paid accordingly, so for a low-risk transaction (e.g. a query about library services) there is a low level of assurance; whilst for a major transaction (e.g. applying for a passport) there is a high level of assurance from the IDP.
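To make the level-of-assurance idea concrete, here is a hypothetical sketch of how a relying party’s policy might drive what is asked of the IDP. The service names, LoA descriptions and mappings are invented for illustration, not taken from the IDAP scheme:

```python
# Hypothetical level-of-assurance (LoA) ladder; descriptions are
# illustrative only, not the IDAP scheme's actual definitions.
LOA_REQUIREMENTS = {
    1: "self-asserted identity",
    2: "identity verified against documentary evidence",
    3: "verified identity plus strong credential binding",
}

# Each relying party declares the assurance it needs per transaction type.
RELYING_PARTY_POLICY = {
    "library-enquiry": 1,       # low-risk: low assurance, low fee to the IDP
    "passport-application": 3,  # high-risk: high assurance, higher fee
}

def required_assurance(service):
    """Return the LoA a relying party demands, and what it implies."""
    loa = RELYING_PARTY_POLICY[service]
    return loa, LOA_REQUIREMENTS[loa]

loa, requirement = required_assurance("passport-application")
print(loa, "-", requirement)
```

The key design point is that the relying party, not the government hub, decides the assurance level per transaction, which is what lets low-risk services stay cheap and friction-free.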


There are no identity numbers, no identity cards, and no compulsion on users to register or maintain the accuracy of their data. A ‘trust scheme’ operator oversees the service and ensures that everyone plays by the rules.


What is the current status of the programme?


The first round of IDAP procurement took place in 2012, and resulted in eight IDPs being recruited to the framework, of whom three declined to go through on the first call-off contract. That leaves us with DigIdentity, Experian, Mydex, Post Office, and Verizon Business. They have been working on the first services, which will connect to a hub provided by GDS. The first private beta services are now running, and will shortly be made public, with selected users being able to view their driving records using IDAP. In anticipation of expanding the breadth and depth of the service, and increasing robustness, GDS is now returning to the market to seek additional IDPs.


Procurement event


GDS is hosting a procurement event on 28th April, at which the procurement will be explained, and candidate IDPs can have their questions answered. There is one burning question I’d like to have answered at that event, and in anticipation of the end of the month, I’ll outline it here.


The challenge for GDS


This next round of work is not going to be without its challenges: IDAP has to deliver some ambitious objectives, including:

– providing services for multiple central government departments with conflicting needs, architectures, and timescales;

– enabling cross-channel service delivery that enables users to engage with IDAP online, over the telephone, and face-to-face;

– shifting delivery away from the ‘traditional’ public-sector providers who are equipped for major project delivery, and instead working with a range of small and large companies, some of whom are not accustomed to working with the UK government;

– rolling out a robust service delivery that does not risk denying services for users if systems face teething problems;

– creating collaborative federation between potentially competing IDPs;

– establishing a trust framework and oversight mechanism that ensures legal protection for all parties;

– building consumer confidence in a new concept which does not yet have a recognised brand, interface or use case;

– growing an ecosystem of IDAP services which is as attractive for private sector providers and relying parties as it is for public authorities.


Each of these is a major change for central government; collectively they are a huge obstacle, and whilst GDS has a track record of delivering ‘impossible’ projects under challenging circumstances, there is no denying that this next phase of work for IDAP is likely to be the toughest yet.


Commercial challenges for potential IDPs


But the challenges aren’t exclusive to GDS – in fact, the current and future IDPs have perhaps the toughest environment of all, since the risks are rising but the possible rewards are a long way off, and we don’t yet have a commercially viable IDAP ecosystem. IDPs are currently paid on a “per unique user, per IDP, per annum” basis: that is, for each person who uses an IDP to access IDAP services, the IDP is paid a one-time fee each year, even if that person also uses other IDPs. That means that the IDP must win over users and persuade them to use IDAP if it is going to recoup its investment in IDAP services.


Anecdotal evidence suggests that the minimum cost of standing up an IDP service which could pass muster with the trust scheme, would be in the region of £1.5m – £2m (probably much more for a large company). Add to that the costs of operating, marketing, auditing, etc, and we’re probably looking at another minimum £500,000 per annum. This isn’t a cheap proposition for the IDP, and the up-front costs drive all the risk to the IDP, with no assured transaction volumes from government.


The transaction payments to IDPs are not publicly available, but if we guess at, say, £20 per user per annum, with an operating cost of £10 to verify and credential each user, that means an IDP would need to run a population of 250,000 users in the first year just to have a chance of breaking even. That’s going to be a problem for stretched Sales Directors who are evaluating bid risks and trying to determine where to focus their sales resources. Why bid the high-risk job with the deferred payback, when they could go for safer projects with up-front payment (that is, if any such projects still exist in the public sector, but that’s another matter)?
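The arithmetic behind that break-even guess can be laid out explicitly. All of the figures below are the speculative ones from the discussion above, not published IDAP rates:

```python
# Speculative break-even model using the guessed figures from the text;
# none of these numbers are published IDAP rates.
setup_cost = 2_000_000   # £1.5m-£2m to stand up an IDP service (upper bound)
annual_running = 500_000 # operating, marketing, auditing, etc. per annum
fee_per_user = 20        # guessed payment per unique user per annum
cost_per_user = 10       # guessed cost to verify and credential each user

margin_per_user = fee_per_user - cost_per_user      # £10 per user
first_year_cost = setup_cost + annual_running       # £2.5m in year one
users_to_break_even = first_year_cost / margin_per_user

print(f"{users_to_break_even:,.0f} users")  # 250,000 users in year one
```

And because users are paid for once per IDP per annum even if they also register elsewhere, each IDP needs its own quarter-million users, not a share of a common pool, which is what makes the proposition so daunting.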


And the political challenge…


In just over a year from now, Britain will go to the polls. In his Editor’s Blog, Bryan Glick considers how GDS is likely to become a focal point for political fighting both before and after the next election. If we end up with a Conservative-led government, then the GDS vision is safe; but if we have a Labour-led government, then there will be those wishing to exact revenge on Conservative policies, including senior political figures who still support the idea of National ID Cards, and in that situation IDAP looks like a pretty easy target for them to cancel and switch back to a more traditional ID approach. Our IDPs would find their contracts cancelled without having made so much as a penny, and potentially having sunk several million pounds into their delivery.


IDAP is therefore a high-risk commercial proposition, not just because of the nature of the service and its commercial model, but because of broader political pressures, and it would be a negligent Sales Director who didn’t take that into account when deciding where to focus bid resource. GDS could of course do many things to mitigate this risk, including offering up-front payments to IDPs; ensuring that there are appropriate termination clauses in the contracts; delaying the delivery phase until after the election; or changing the commercial model altogether.


So my question to GDS is: what can GDS do to assure candidate IDPs that the risks associated with bidding and delivery are successfully mitigated by the potential prize and the likelihood of winning it? Until that question is answered, I think I’d rather put my money on a 5-horse accumulator than an IDP bid team.


[Declaration of interests: I am not associated with any of the incumbent IDPs or bidders, although I was part of the Post Office’s bid team. I have an unpaid role in the GDS Privacy and Consumer Advisory Group. And I’d like to see IDAP succeed, because a return to ID Cards doesn’t bear thinking about]

March 17, 2014  8:16 AM

Reflections on Identity and Access Management

tobystevens tobystevens Profile: tobystevens
Business, Gartner, Identity assurance

This week is Gartner’s annual Identity and Access Management shindig in London. I was fortunate enough to attend for the first time in 2011, when there was a real sense of mixed feelings amongst the delegates: the big vendors were split into those who were upset at the cancellation of the National Identity Scheme, and those delighted at the opportunity to compete for whatever might replace it; end user organisations were generally ambivalent, but for some there seemed to be a relief that they could move on from the black hole created by ten years of the NIS.

Three years later, I’ll be speaking in this afternoon’s session on the government’s Identity Assurance programme, and specifically how it might disrupt the way that we buy and sell identity services in the UK.

The Identity Assurance Programme (IDAP) depends upon reuse of existing credentials through federation, rather than commissioning substantial new systems, and providers are having to seek innovative business models to justify their investment. This has created a somewhat surprising list of Identity Providers (IDPs) in the first tranche of suppliers: some welcome SMEs, and a new role for the Post Office, but no big name UK online brands, retailers or financial services providers.

IDAP’s success will rest upon whether potential providers and consumers of IDAP services can be persuaded that IDAP’s interests align with their own, and that any investment they make in technology, marketing and business transformation will give them a future return. The Government Digital Service will have their work cut out delivering the commercial models that these companies need to justify their investments – maybe we’ll see some good ideas at today’s conference?

December 13, 2013  12:46 PM

Online Tracking: Keeping Austin Weirder

tobystevens tobystevens Profile: tobystevens
AT&T, austin, behaviour, Profiling, Surveillance

One of our long-standing problems with Internet privacy is the tracking of user activities, more often than not without any meaningful opt-out mechanism: if you don’t want to be profiled by, say, Facebook then don’t go on Facebook. That’s all very well to say, but no use to someone whose social life depends on the social network (it’s one of the areas which the proposed EU General Data Protection Regulation might be able to address, if it ever sees the light of day). There is, however, a sense of balance in Facebook mining user data, since the site offers a free service which its users find invaluable. Users receive value in return for the value in their data. Not a transparent relationship, almost certainly not equitable, but at least it’s commonly understood.

More disturbing is the potential for behavioural monitoring and online tracking by communications service providers. When Phorm’s adventures in deep packet inspection came to light, users were quite justifiably outraged: secret monitoring of their online use of a paid service by a third-party organisation without their knowledge or consent was clearly a big step over the line of acceptable intrusion. When users pay for their services, they expect a degree of respect for their privacy.

But there’s no doubting that a key aspect of consumer empowerment is the potential for users to trade some of their privacy for a reward. If behavioural data is that valuable to advertisers, then why not pass that value all the way through the chain to the data subject, rather than holding it with a service provider? 

It’s interesting to see AT&T taking this a step further in Austin, Texas, by offering discounts to internet customers who choose to submit to online profiling of their behaviours. Plans are discounted by 30% for customers who agree to opt into “AT&T Internet Preferences,” the company’s user profiling tool, used to target behavioural advertising. I’d be interested to see the small print – does it allow users to use VPNs to obscure their online activities from AT&T? I suspect the relevant protocols would be blocked.

Whilst it’s not a service I’d personally subscribe to, it’s good to see a provider offering to extend the profiling value chain all the way back to the user. As Constantijn van Oranje-Nassau said at this week’s IAPP Data Protection Congress, “you can be at the table or on the menu,” and even if rewarding consumers for surveillance isn’t quite a seat at the table, at least we’re getting to haggle with the Maitre D’ about whether there might be a seat available.


October 30, 2013  4:15 PM

RSA Conference Europe 2013 – When Security Met Privacy

tobystevens tobystevens Profile: tobystevens
Big Data, privacy, RSA Conference, Security

This year’s RSA Conference Europe is themed around how ‘Big Data Transforms Security’, with big data both supporting and feeding into the corporate security function. The tone was set by one quotation from RSA’s CEO Art Coviello in his welcoming keynote, where he proclaimed that “Anonymity is the enemy of privacy.” In other conference sessions, the implications of processing personal information have come up time and again as flashpoints between the security and privacy communities – but are these disciplines really poles apart?

In his keynote, Coviello went on to explain that in his opinion anonymity is used by digital adversaries to misuse data without fear of being caught or prosecuted. That’s fighting talk for privacy advocates, who would of course argue that anonymity is a critical privacy tool, which must be interpreted in subtle and granular ways: zero-knowledge proofs, anonymous attributes and pseudonymous interactions are applications of anonymity which preserve privacy without impeding business objectives or putting data at risk. But within the corporate user environment, which is RSA’s customer heartland, the argument holds sway and few employees would have an expectation of privacy that extends to anonymity in their working environment.
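To make the pseudonymisation point concrete, here is a minimal sketch of one common approach: deriving a stable pseudonym from an identifier with a keyed hash (HMAC). This is an illustrative example rather than any specific product’s implementation; the function name and key handling are my own assumptions. The idea is that records remain linkable for analysis, but the original identifier cannot be recovered without the secret key.

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an identifier using HMAC-SHA256.

    The same identifier and key always yield the same pseudonym, so
    records can be linked across datasets, but the identifier cannot
    be recovered without the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key for illustration; in practice this would be a
# securely generated and managed secret.
key = b"example-secret-key"

p1 = pseudonymise("alice@example.com", key)
p2 = pseudonymise("alice@example.com", key)
p3 = pseudonymise("bob@example.com", key)

assert p1 == p2   # stable: the same input always links
assert p1 != p3   # distinct inputs stay distinct
```

Unlike plain hashing, the keyed construction means an attacker cannot simply hash candidate identifiers to reverse the pseudonyms; rotating or destroying the key severs the link entirely.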

Not all of the keynote was quite so contentious, and Coviello used the analogy of privacy and security functions as opposite magnetic poles, which can attract each other when aligned, and can form a powerful bond. It’s a lofty ambition, but for many organisations the security and privacy functions still exist in a state of polar repulsion, with security and privacy teams located in different divisions, serving different masters for different outcomes. Privacy functions in particular, hidden away from the sharp end of business delivery in the likes of compliance or legal teams, too often retain a risk-averse culture and a tendency to say ‘no’ when confronted with a challenging business objective.

Unfortunately, for organisations which suffer this bipolar management of personal information, the nexus between security and privacy is too often in incident management, as the Privacy Officer and Security Officer fight over who should have secured the missing personal data asset, and what to do about its loss. The result is that everyone loses, including the individuals whose data has been leaked or misused, and the security and privacy functions remain in conflict, confined to reacting to incidents rather than taking proactive control of processing risks.

If organisations are to exploit big data, then privacy and security functions need to align to create a shared understanding of risk throughout every part of the project lifecycle. Business cases and change requests should be checked not only for security compliance, but also to ensure that they meet corporate risk appetites in the handling of personal information, as well as legal and sectoral responsibilities for data protection. A truly aligned security and privacy operation should feature co-location of delivery teams, both reporting to a single responsible officer who can identify and resolve problems before they boil over, but equally can ensure that risk decisions take into account both security and privacy needs.

The RSA Conference will of course remain the preserve of the information security community, but with this level of focus on privacy needs, it’s likely to become a compelling event for privacy professionals too – and that can only be a good thing for personal data risk management.

[Declaration of interest: I am a member of the RSA Conference Europe programme committee]

October 19, 2013  12:34 PM


tobystevens tobystevens Profile: tobystevens
conference, iapp, London

The call for papers for 2014’s IAPP Europe Data Protection Intensive comes to a close this Wednesday. If you’re a privacy professional then this will be the most important event in London next year, and will be well worth attending.

You can find more details about the event here: 

September 30, 2013  7:04 PM

The future of eID in Europe

tobystevens tobystevens Profile: tobystevens
In recent months the fuss about surveillance revelations has distracted attention from some good work in the European Commission to try to align and push forward a harmonised electronic identity and trust services approach. The problem of cross-border identity and trust services is an old one, and because of the competing influences of different legal regimes, divergent commercial interests, and the mix of standards out there, one which is still far from resolved. I last looked at this in detail in 2008, in a report for the Institute for Prospective Technological Studies.
The UK is particularly far from aligned with the broader European Union in this area because we lack a national population register, citizen identity cards, widespread use of notaries, or a common online trust infrastructure (PKI or similar). All the building blocks are available, but first we need to resolve the political and commercial issues around our national identity services (not to be confused with ID cards) before we start to worry about international interoperability. The Cabinet Office-sponsored Identity Assurance Programme (IDAP) is our best hope of achieving that outcome, but it’s still far from ready for the big time. International needs are being considered within IDAP’s scope of work, but first we need to make it work locally.
With that in mind, I was fortunate to contribute to a conference in Brussels last week on eID and Trust Services. The day was much more practical than many similar events, and the highlight was a speech by Prof Jane Winn of the University of Washington, in which she referred to Gall’s Law:
“A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”
This is so very true for eID: poor online ID services can take a good working system and destroy it completely for the sake of adding complexity. The most glaring example was the National ID Scheme, which was neither simple nor evolutionary, instead preferring a ‘big bang’ delivery with little opportunity to prove the system first. IDAP is running small-scale proofs of concept (the ‘Alpha’ projects, some of which have only a handful of users) to explore basic concepts before it moves to larger implementations.
The European Commission is now running a survey to support its study activities, and I’d recommend that if you have an interest in this space then you should contribute before it closes at the end of November.
