Bored? Killing time before the excitement of Monday’s return to work? No, me neither, but either way Lifehacker is carrying a good set of tips to lock down privacy in your personal life, which is particularly timely in light of this week’s International Switch to Firefox Day (more to follow on that).
Welcome back. Or if you’ve not been here before, Welcome. I’ve rather neglected this blog for the past two years because of other commitments, but hopefully I’m now in a position to restart writing about privacy and identity-related issues and what they mean to government, industry and individuals.
It’s certainly been an eventful few years for privacy and identity. The election saw the new government follow through on manifesto commitments to terminate the National Identity Register and Contactpoint programmes, and in consequence there has been a slight shift in civil society interests towards monitoring the activities of private companies, in particular search engines and social networking sites.
Meanwhile, the government has been developing plans for Identity Assurance, a new approach that will allow individuals to access online services by reusing existing trust relationships they hold with commercial providers, rather than a government-issued credential. It’s a complex but clever idea that will shift government thinking away from the ‘deep truth’ and ‘gold standard of identity’ philosophies of the past, and instead use a risk-based approach that should, hopefully, leave individuals in control of their online relationships whilst protecting their privacy. If we can learn from the mistakes of the past then we might just end up with a good foundation upon which to build privacy-positive ID services.
In these past few years, my role has shifted, although whether that has been from poacher to gamekeeper, or the other way around, I’m still not sure: I’ve been working with the Post Office for the past 18 months, in a role that is closely linked to the Identity Assurance programme, and for that reason there will be aspects of the subject that I am not in a position to discuss because of confidentiality agreements and public procurement rules.
With that in mind, roll on the blogging!
The CONSENT project – a collaborative project co-funded by the European Commission under the FP7 programme – is seeking opinions on the use of personal information, privacy and providing consent online. You can participate in their survey here.
If you’re an innovator with public service delivery ambitions, then you may wish to take a look at DotGovLabs – DirectGov’s Innovation Hub which brings together 1,600 SMEs, entrepreneurs and innovators looking at how digital can help solve social challenges. Part of the government’s Skunkworks programme, the Innovation Hub aims to help government to engage with experts in digital delivery.
The Innovation Hub was, until recently, only open to invited participants, but now that a critical mass of users has been reached, it’s been opened up to anyone who wishes to register.
Declaration: I have no connection with DotGovLabs other than being a registered user.
(Please excuse the lack of posts in recent months. I’ve been heavily involved with aspects of the new cross-government identity assurance initiative, which has taken up all of my time. I’m hoping to be in a position to talk about that programme very soon).
On Tuesday the Information Commissioner Christopher Graham announced the outcome of his office’s investigation into alleged security failures by ACS:Law, and the imposition of a £1,000 fine on the company’s owner, solicitor Andrew Crossley. This case demonstrates why the Information Commissioner’s Office is failing to apply fines in a meaningful manner, and we need a fresh approach to data protection penalties.
The ACS:Law case
In 2009 and 2010, ACS:Law sent approximately 10,000 letters to individuals accusing them of breach of copyright through peer-to-peer file sharing technologies, and threatening them with legal action unless they settled the claim out of court (typically with a sum of around £500). Whilst the number of victims who gave in to these threats is disputed, Crossley himself allegedly claimed to have recovered over £1m from suspected copyright infringers.
Lists of suspects were provided by major ISPs such as BT and Sky Broadband, and from the very beginning there were anecdotal tales of incorrect or non-existent evidence of any copyright breach; it appeared that in many cases there was simply no way to substantiate the claims which gave rise to the threats. The media and public were outraged, and it was at that point that users of the 4chan imageboard waded in with a denial of service attack on ACS:Law’s website. However, that attack had unexpected results. It transpired that ACS:Law had stored its claim files in unencrypted form, and as the company restored its website from backup, those files were accidentally copied over. The files became publicly visible – a list of around 6,000 defendants was revealed, including personal details, payment information and details of their alleged copyright infringement.
In September 2010 the ICO investigated the alleged breach, and it became clear that not only was the claim file accidentally published, but in some cases the information provided by ISPs had been transferred in unencrypted form on memory sticks, and was stored in an online service that was not intended for business use. Apparently ACS:Law did not seek any professional advice on how to protect that information.
So, in consequence, yesterday the ICO issued a fine of £1,000 to Mr Crossley personally (since ACS:Law has now ceased trading), stating that were ACS:Law still extant, the fine might have been closer to £200,000. If Mr Crossley pays up by 6th June, the fine will be discounted to £800.
Why the ACS:Law fine undermines the ICO’s credibility
There is an important legal principle to protect company directors from the full extent of company liabilities where there has been no misconduct – otherwise no-one in their right mind would become a company director. There is an even more important principle that individual laws should not be used to punish individuals where their moral or legal misconduct cannot be prosecuted under more appropriate legislation – in other words, the Data Protection Act (1998) shouldn’t be used to punish ACS:Law for other alleged failings. But in this case, the Information Commissioner really does seem to have failed to apply a proportionate fine.
ACS:Law had a single employee in the form of Mr Crossley. He is a solicitor, so he cannot claim ignorance as a defence for failing to comply with the Data Protection Act. He has been able to escape his punishment by winding up the company, even though what allegedly occurred in ACS:Law cannot possibly be the fault of anyone else but himself. ACS:Law and its director should not be able to escape the full penalty for breach of the Act.
By applying a fine that is proportionate to the director’s ability to pay, the ICO has made it clear that a company’s directors can escape full censure simply through their accounting declarations. There really is nothing left to fear for companies that wilfully abuse the Data Protection Act, since that abuse has become a simple risk decision, and there is no meaningful obligation for them to comply. The ICO’s ability to enforce the Act has been critically undermined by this case.
Applying an appeals process
A far more appropriate way to enforce data protection penalties would be for the Information Commissioner’s Office to apply its fines regardless of the recipient’s ability to pay. The only proportionality in the basic fine should be against the size of the business concerned, not whether it has fallen on hard times since its original breach of the Act. The fine can then be suspended or reduced subject to a public appeals process – as opposed to discussions behind closed doors – where the recipient argues their case for reduction.
Introducing the Data Protection Offender’s Register
We also need to introduce a new concept of ‘being struck off’ the register of Data Controllers.
Just as a prosecuted company director can be prevented from holding that office again for a set period; or a professional might be struck off by their professional body and hence lose their license to practice; or a driver found guilty of repeated or serious motoring offences may be banned for a period; so individuals found guilty of knowingly mishandling personal data should be legally prevented from doing so again for a set period. This could include:
- banning the individual from registering as a Data Controller;
- banning the individual from setting or managing company policy for the handling of personal information;
- banning the individual from handling personal information in their professional capacity without supervision from another individual (much as a learner driver may not drive without a qualified driver in the passenger seat);
- forcing the individual to declare their ban to any future employer within the period of censure;
- applying a further fine or criminal conviction in the event of breach of these rules.
Under this new regime of applying fines regardless of the individual’s ability to pay, followed by an appeals process, and then requiring convicted individuals to sign a register of Data Protection offenders, there would be a meaningful way to enforce the Data Protection Act. Until then, the Information Commissioner’s efforts are likely to have very little deterrent effect, and incidents such as ACS:Law will keep happening.
If the government is serious about its policy objectives of slashing administrative costs, bolstering the UK’s cyber defences, moving away from proprietary software systems, putting data into the Cloud, and treating personal data with the respect it deserves, then it is time to reassess the role of information assurance and how it is delivered. There is a pressing need to reform the information assurance function so that we have proper security governance, and so that information assurance supports, not hinders, the government’s policy objectives.
Public Sector Data Leaks
With the announcement of a £650m budget for cybersecurity, coupled with the axing of defence infrastructure that until recently would have been considered critical to the protection of Britain’s national interests, Prime Minister David Cameron has delivered the unequivocal message that cybersecurity is a cornerstone of the UK’s broader defence interests. UK defence companies will be switching their research budgets away from military hardware and into homeland security products, and information security companies around the world will doubtless be examining the UK security market, keen to get their share of the new government spend.
All this has to be a good thing for the central and local government authorities who have seen public confidence in their ability to protect information eroded by a seemingly endless string of high-profile data loss incidents. Ever since Chancellor Alistair Darling informed Parliament that HM Revenue & Customs had misplaced the details of child tax credit claimants, we have been bombarded with reports of files left on trains, memory sticks dropped in the street, emails accidentally sent to the wrong mailing lists, hard disc units lost, laptops stolen from cars; and despite senior managers time and again promising the Information Commissioner that ‘lessons have been learned’ the incidents keep on happening. Public authorities appear to be incapable of protecting information. What can possibly have gone so badly wrong with information assurance that our authorities are apparently unable to keep anything secret, at a time when the Prime Minister tells us that our cyber security has never been more important to the nation?
The Department of ‘No’
The UK government’s information assurance function is distributed across government through a number of agencies. Perhaps the best known of these is CESG (formerly known as the ‘Communications-Electronics Security Group’ of Government Communications Headquarters), the national technical authority for information assurance. Based in Cheltenham, and reporting to the Cabinet Office, CESG is tasked with delivering a range of products and services including threat monitoring, product assessment, advisor training and system testing.
The information assurance function is not exclusive to CESG. The Cabinet Office has a Security Policy Division (COSPD) which produces part of the Security Policy Framework (SPF) that replaced the Manual of Protective Security (the government’s primary standards document for information assurance), and CESG produces the rest of the SPF. The National Cybersecurity Strategy also sits within Cabinet Office, but focuses more on protecting the broader Critical National Infrastructure (CNI) from major disasters, terrorist threats, foreign intelligence services and serious/organised crime, than general systems security. The MoD uses equivalent standards and administration internally, which refer back to the products and services provided by the government’s other security centres (all of which have a common root in standards that evolved into ISO/IEC27001:2005), but which operate completely separately. Other parts of the security governance function are fragmented across many committees and boards.
Significantly, this substantial infrastructure is focussed mainly upon advisory services rather than actually implementing and managing systems security: that burden falls upon the Senior Information Risk Owner (SIRO) in individual public authorities. This individual, who should ideally be from an information risk background, is the focus for information assurance delivery at a Board level within their authority. In smaller bodies, the role of SIRO is often shared with other duties such as Chief Information Officer.
The Cabinet Office has recently established the Office of Cyber Security and Information Assurance (OCSIA), which has yet to have an opportunity to reform the information assurance function, but publicly appears to be more focussed upon the cyber defence agenda than the day-to-day mechanics of running information assurance.
With this advisory capability, one would imagine that the government’s information assurance function would be robust and strong, drawing upon a wealth of shared expertise that is delivered in such a way that security enables and supports service delivery. Unfortunately, all too often the opposite is true.
Cost-Effective Information Assurance? The Department Says ‘No’
Government lacks a focal point for information security: there is no ‘Government Chief Information Security Officer’ or ‘Office for Government Information Assurance’ – in other words, no one individual or organisation accepts accountability for the proper governance of data in the public sector.
The fragmented approach to information assurance has developed over many decades, and the cultural unwillingness of government bodies to accept responsibility for an issue as ‘toxic’ as information assurance has left the subject in the long grass as far as most CIOs are concerned. Even the proliferation of Quangos under the last Labour government did not lead to the creation of a body that might deal with this critical issue, despite some of the highest-profile data loss incidents ever to impact the public sector occurring during their term of office.
Instead the various bodies tasked with information assurance focus upon their own jurisdictions and rarely cooperate successfully: the MoD does not discuss its security standards, although they are little different from those in use across the rest of government; CESG and COSPD will only release information to suitably cleared individuals, and rarely reference each other’s work. Each department and agency has to pay to support its own security infrastructure rather than drawing upon the economies of scale that might be achieved by a central security team working for the common good of government. The information assurance environment is far from cost-effective.
Information Risk Management? The Department Says ‘No’
This lack of cooperation doesn’t just mean that key activities are duplicated: it also means that without support from their managers, those tasked with protecting systems are afraid to take risks, for fear of being blamed if an incident occurs.
The problem is that information assurance is not about absolute control, and any professional security manager will acknowledge that there is no such thing as 100% risk avoidance. Instead, it is about assessing the information risks faced by the organisation, developing mitigating controls and actions, and ensuring that they are managed properly so that the risk levels are reduced to a point where they are proportionate and acceptable.
This means that incidents will always happen. This may be because security controls are judged to be disproportionately expensive (for example, spending many millions of pounds on security to protect assets worth only some thousands of pounds); because individuals failed to comply with the instructions given to them (for example, downloading unprotected files on to a memory stick to take home, then losing that memory stick); because the system is attacked by a capable and dedicated enemy (for example, an authorised user taking copies of MPs’ expense claims); or because of a ‘zero day’ exploit (for example, a hacker breaking into a system using a weakness that was previously unknown to the security officer).
Whatever the cause, security incidents will always occur, and the public sector culture is to look for someone to blame – remember how the HMRC incident was almost immediately blamed upon a ‘junior clerical officer’ before it was revealed that systemic failures were at the root of the problem? Security officers are rightly fearful of being blamed for incidents, and in the absence of someone who will act as an advocate for them when things go wrong, they are forced to fall back on the only safe path available to them, which is to say ‘no’ when the business wants to do anything which might carry an associated security risk. The likelihood of the current information assurance community being willing to support the government’s cloud computing ambitions seems slim indeed.
As a result, most public servants view information assurance as an obstacle, not an asset. Because of poor leadership, excessive bureaucracy, and a culture of unnecessary secrecy, public authorities are unable to obtain cost-effective information security controls. The current infrastructure will neither permit nor support the new commitment to respecting personal data, making government data available, or protecting data that needs to be kept secret.
Secure Systems? The Department Says ‘No’
Ironically, the culture of ‘No’ has not resulted in better security within the public sector. Project managers, afraid of having their plans thrown into disarray by uncooperative security professionals, simply avoid seeking security advice. Enterprising users who need to get their jobs done seek out risky ways to bypass security controls because the security departments won’t allow them to get on with what they have to do. For example, it is common to find use of unauthorised online file sharing services to exchange information because the security department has shut down USB memory sticks and CD drives without providing an alternative. That’s how accidents happen.
The problem has even deeper consequences outside of Westminster and Whitehall. Most important standards, guidelines and publications are protectively marked such that they are only available to individuals with appropriate levels of security clearance working on appropriately secured PCs. But local government bodies, for example, rarely conduct background checks on their staff beyond a basic criminal records check, so individuals tasked with securing local authority systems don’t know how to secure them in line with government requirements because they are not cleared to see those requirements – and aren’t allowed to hold copies because their PCs aren’t sufficiently secure. Without the intervention of costly consultants who have the correct clearances and computers, this paradox can’t be broken.
Those consultants are a very special breed indeed. Only the few hundred members of the CESG Listed Advisor Scheme (CLAS) are officially qualified to provide security advice across government. They hold the necessary clearances, and have access to CESG’s source materials. What they do is not particularly ‘special’ compared with their private-sector colleagues. But because the pool of available talent is so small, and the barriers to entry are high (CESG only accepts a limited number of candidates once a year, and they need to pay a substantial fee for clearance, acceptance and training), public authorities have to draw from a relatively small, and therefore uncompetitive, pool of consultants for their information assurance advice.
CESG has for some years been attempting to move parts of the CLAS environment into the private sector, but this has yet to deliver any significant change in the way that systems are secured. The outcome of this ‘closed shop’ is that local authorities and arm’s length bodies very often fail to comply with government security standards simply because they don’t know that those standards even exist, and if they do, they can’t gain access to either the standards or cost-effective individuals who are able to assist them. We therefore have a public sector environment in which the prevailing culture and practices conspire against effective information assurance.
Privacy by Design? The Department Says ‘No’
Clearly a public sector that struggles with information assurance will also struggle to respect privacy: if personal data cannot be kept secret, then it cannot be kept private either. But public authorities’ inability to effectively manage personal data runs much deeper than that, since CESG’s formal policies until recently simply didn’t get the idea of privacy. Formal risk assessment processes tried to assign protective markings according to the volumes of personal records rather than the sensitivity of the data: so, 999 personal records might be considered Not Protectively Marked, whilst 1,000 would be marked at the higher level of Restricted. Authorities could circumvent the more onerous controls by simply breaking databases down into smaller files of less than 1,000 individual records.
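The perverse incentive in a purely volume-based marking rule is easy to see in a few lines of entirely hypothetical code (the function name and threshold mirror the 1,000-record example above, not any official CESG algorithm):

```python
# Sketch of the volume-based marking rule described above, and why it was
# so easy to game: the marking depends on record count, not sensitivity.

def protective_marking(record_count: int) -> str:
    """Assign a marking by record volume alone, ignoring what the records contain."""
    return "RESTRICTED" if record_count >= 1000 else "NOT PROTECTIVELY MARKED"

# A single database of 5,000 sensitive records attracts the higher marking...
assert protective_marking(5000) == "RESTRICTED"

# ...but split the same data into six files, each below the threshold, and
# every file escapes the marking, even though the same people are exposed.
chunks = [999, 999, 999, 999, 999, 5]
assert sum(chunks) == 5000
assert all(protective_marking(n) == "NOT PROTECTIVELY MARKED" for n in chunks)
```

The point of the sketch is that any threshold keyed to volume rather than sensitivity rewards exactly the wrong behaviour: breaking a database into smaller files reduces the paperwork without reducing the risk to the individuals concerned.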
What’s more, those risk assessment models are designed around an assumption that authorised users are always trustworthy – how else could designs such as the ill-fated Contactpoint, or the NHS Summary Care Record, be allowed to exist where hundreds of thousands of users can access millions of individuals’ sensitive private records? The risk assessment processes treat individuals as low-value assets whose privacy is significantly less valuable than, say, a Minister’s public reputation.
Private companies, and in particular those in the financial sector where the FSA has demonstrated an appetite to impose punitive fines for misuse of personal data, woke up long ago to the need to show greater respect for personal data. The public sector, where senior public servants are rarely held accountable, and the sternest sanction generally applied is a letter from the Information Commissioner’s Office, has not kept up with the change. In their defence, CESG have made some positive revisions to their personal data handling rules in recent years, but much more needs to be done if the government is ever to meet individuals’ expectations of privacy.
Open Source Software? The Department Says ‘No’
The new government’s commitment to open source systems represents perhaps the greatest challenge that the information assurance community has faced in many years. The use of software that has been collectively developed, with publicly available source code, flies in the face of long-established security policies and practices, which have traditionally demanded that source code comes from an approved developer, is scrutinised for vulnerabilities, and is kept out of the public domain.
In general, software and hardware vendors are expected to have their products pre-tested for use in government systems (something which is not required in the private sector), and to pay up front for that testing. CESG has a number of services such as the CESG Claims Tested Mark (CCTM) and CESG Assisted Products Service (CAPS), that are used to test the security of products that are being sold to public sector organisations. When private companies sell to government, they can justify the expense of the testing process, since that will grant them access to a lucrative new market. But the same does not hold true for open source software: in the same way that drugs companies won’t pay for clinical trials on products that they can’t patent, vendors won’t pay for the testing of public domain software when they cannot expect to charge for it at the end of the process. Furthermore, the test processes are notoriously long-winded and complicated, so even vendors of proprietary systems are reluctant to invest in them. Whilst not all products have to be subject to this test approach, failure to demonstrate test approval can count against them during procurement, and as a result public authorities are driven towards a small number of approved, tested, and often outdated technologies.
Once products have been selected, and designs are in place, the complicated process of accreditation begins. Security Officers – or more commonly CLAS consultants – conduct a tightly prescribed risk assessment that is used to determine whether the system requires formal accreditation (a certificate to prove that a system is fit to handle a given level of data, and to interconnect with similarly secure systems), and potentially to prepare a Risk Management Accreditation Document Set (RMADS) that is used to define security controls. Accreditation of open source systems, where there are no vendors to make assertions about security levels, is very difficult indeed using current processes.
But accreditation isn’t the end of the problem: like all software, open source software requires patching and upgrading to keep up with technology developments and newly-discovered security vulnerabilities. Without a vendor to pay for security testing the patches and updates under the current regime, open source software will remain largely inaccessible for government.
In the private sector, where there is no obligation to verify the security claims of systems vendors, and organisations can select their own risk assessment approaches, these problems simply don’t exist. Independent testing schemes can be used to provide customers with greater assurance of security capabilities, but in general market forces drive vendors towards delivering secure systems, since a major failure will count against them in procurement processes. Government’s open source goals will remain hampered by information assurance until a new way of dealing with the security of open source software can be developed.
The Department of ‘Yes’: Treating Information Assurance as a Business Enabler
The relative success of private-sector security practices, and the fact that large corporations do not struggle to manage information security in the way that government does, shows that it should be perfectly possible to move to an environment in which information assurance helps, rather than hinders, delivery of public services. In particular, we need to ensure that:
• information is available to all that legitimately require it, is appropriately protected, delivered, and of assured integrity and accuracy, so that information can support the needs of government, industry and individuals;
• public confidence in the ability of public authorities to handle personal information is restored;
• public authorities can adopt open source systems and cloud technologies without security being a disproportionate burden;
• public authorities break away from the negative mentality of information assurance that blocks innovation, and instead move towards a new culture that is able to support, rather than hinder, the delivery of new technology policies.
The significant changes in government IT policy, the shake-up of its delivery driven by the spending review, the government’s commitment to cybersecurity as a cornerstone of the UK’s defence strategy, and the establishment of the Office of Cybersecurity and Information Assurance (OCSIA) within the Cabinet Office collectively drive the need for reform. OCSIA may be the best hope for achieving reform, but will only succeed if there is a collective will in that office to do things differently. Ministers and senior civil servants are too quick to defer to Cheltenham on the assumption they are ‘the experts,’ when evidence suggests they are behind the times and operating in a mainframe mindset in an Internet age. They have become unaccountable arbiters of what does and does not happen, and what can and cannot be used. There is a clear need for leadership, to remove duplication of responsibilities, to improve availability of security standards and technologies, and to change the way that security is perceived across government. There is also a need for greater participation by local government and the private sector, whilst recognising that some aspects are better suited to remaining under government control.
A few simple actions would suffice to create the ‘Department of Yes.’
1. Appoint a pan-government Chief Information Security Officer as a new focal point for information assurance
Just as a large company would be expected to have a Chief Information Security Officer (CISO), the OCSIA should appoint a Government CISO responsible for the proper implementation of information assurance across government. This role must not be one that is in any way combined with the cyber defence agenda (which invariably becomes politicised and distracted from the day-to-day running of information assurance), but rather a ‘hands on’ leadership position that provides a figurehead for information assurance issues. Bringing in a CISO from industry, rather than public service, would ensure a break with past practices and a fresh approach to the task in hand.
2. Create a government CISO Council
The Government CISO should chair a new Government CISO Council within the OCSIA. This group, comprising CISOs and/or SIROs from all major parts of government, should act as the focal point for all information assurance issues, and hold responsibility for development and maintenance of security standards, accreditation, product certification and professional development across government. The CISO Council should be engaged in policy development across central and local government to ensure compliance with national and international legal obligations. Where public sector security incidents occur, the CISO Council should be involved in independent investigation and reporting.
3. Consolidate existing duplicate information assurance services
The Government CISO should work with OCSIA across government to amalgamate existing policy and solutions branches, including CESG, COSPD and the relevant parts of MoD, into OCSIA. This by implication will require consolidation of duplicated services and roles. The newly-amalgamated security body should take responsibility for all aspects of establishing standards and procedures for the defence of public sector ICT infrastructure, and should develop and publish security standards for use in government and the private sector. The body should also operate ‘How To’ teams of experts who look for cost-effective solutions to security problems, and constantly improve the advice and controls available to government, drawing upon the best the private sector has to offer.
4. Ease the administrative security regime for lower-value data
The UK government operates a protective marking policy for its data assets to ensure that they are used and secured in accordance with the value of those assets. Clearly some of that information – particularly when assigned a Top Secret or Secret marking – requires very robust security controls. But the vast majority of data, particularly outside of Whitehall, sits at Restricted or even lower, and the nature of the data is not dissimilar to that which might be held by a private company such as a bank. Yet that information is subject to ‘special’ information assurance controls that are often significantly more onerous and administratively complicated than might be found in the private sector, despite those controls having their roots in the same standards.
If OCSIA were to relax the administrative processes for securing Restricted data, such that authorities could use any commercial services or products so long as they comply with the basic security policy requirements defined in the Security Policy Framework and supporting materials, then the market would be opened up for any commercial product or service vendor to compete in the public sector. Rules would need to remain in place to ensure that data is correctly marked, and not ‘upgraded’ to higher protective marking levels than appropriate. Implemented correctly, this change would not result in chaos or insecurity, but would instead allow greater competition to bring down the cost of delivery, and free up local authorities to get on with securing their information without the burden of complying with security frameworks that are intended to deal with data at much higher levels of security. CLAS consultants could shift their focus to systems operating at the higher protective marking levels.
5. Sort out the existing mess of unaccredited Whitehall systems
The imperative for information assurance reform does not apply solely to new systems: there is little value in securing new ICT infrastructure if it has to interface with older systems with unproven levels of security. The government’s own report into data losses (the Hannigan report) identified approximately 2,300 government systems that have not been subject to any form of assurance certification (known as accreditation), but made no demands that those legacy systems should be secured. Significantly, recent changes to the Security Policy Framework introduced a new marking level of Protect which was, in part, intended to ease the burden of accreditation, but anecdotal evidence suggests that it has had the opposite effect, and is instead seen as a new, unfunded administrative burden. This needs to be addressed as a matter of urgency: those systems must either be secured or scrapped.
Equally importantly, senior civil servants still have the power to override the need for accreditation if they so choose: in other words, to disregard security requirements if these are too expensive or likely to take too long. This exemption must also be dropped: if system security is too expensive, then the system itself is too expensive, and other more affordable ways must be found to deliver the same outcomes.
6. Voluntarily accredit open source software where appropriate
If the current approach to accreditation remains in place, then government must take responsibility for accrediting, testing and maintaining the security of open source software. There is no reason why OCSIA, or even a private company, could not provide and maintain its own secure builds of the likes of Linux, OpenOffice or OpenSQL for use in government. Builds and patches would be checked and tested by a central team without the need for a vendor to sponsor the work, thus making the software available across government, and saving costs on software licensing and duplicated testing.
7. Develop the information assurance profession
If the government is to obtain access to the best possible security expertise, then the profession needs major reform. OCSIA should take responsibility for development of the information assurance profession, working in close partnership with relevant information security professional bodies. This will include:
• defining a career structure for information assurance in the public sector;
• developing information assurance professional development and training syllabuses for delivery by commercial organisations;
• providing examination and certification of government security professionals, with an emphasis on facilitating simple and affordable cross-qualification from the private sector, so as to expand the pool of professionals available to government;
• governing the certification and management of inspectors and accreditors;
• maintaining a pool of expert instructors and project managers to coach and where necessary manage particularly large, innovative or sensitive public-sector projects.
CESG has already taken steps down this route by moving aspects of the professional qualifications across to the Institute of Information Security Professionals (IISP). If it were to go a little further and insist that all government information assurance professionals must become IISP members who maintain Continuing Professional Development (CPD) training, and could be struck off for malpractice, unprofessional conduct or incompetence, then there would be a case to argue for abandoning the overhead of CLAS altogether.
More for Less from the Department of ‘Yes’
In the world of the Department of ‘Yes,’ information assurance will be a service enabler. Public authorities will have the confidence to adopt innovative new technology schemes, knowing that they will be supported in doing so by their information assurance teams. They will look to their information assurance groups for support at the earliest stages in projects, rather than trying to hide from them. They will understand that on the rare occasions that their security advisers say ‘No,’ there is a good reason for them to do so.
If we want an information assurance function that really supports public authorities, and that can deliver more for less, then these changes are cheap and easily done. We simply have to ask OCSIA to reform the information assurance function, give that office the power to do so, and support it when it encounters inevitable resistance from within the security establishment. All it takes is the will to say ‘Yes.’
Those of you who follow NO2ID, arguably the most successful civil society pressure group of the past generation, may be aware that National Coordinator Phil Booth has just stepped down from the role after six years leading the organisation.
Phil’s quite a remarkable individual, both physically and intellectually a very big guy, and has achieved many remarkable things in his time with NO2ID. He grew the group’s membership into one of the largest and best-connected lobby groups in the country; established a personal network of peers, politicians, civil servants, technology experts, industry leaders and academics; and successfully beat down one of the last government’s cornerstone manifesto commitments with just a tiny budget and his own undrainable energy reserves.
What’s most remarkable about Phil has been his ability to engage across the entire spectrum throughout that time. He recognised the need to work with everyone from the ministers pushing the programme, through the suppliers pushing the technology, to the hard core of ID Card opponents who pledged civil disobedience rather than compliance. He remained courteous and focussed even at times when the government was engaged in some very underhand tactics to destabilise both his, and NO2ID’s, position.
Of course Phil would be mortified to be solely credited with NO2ID’s success, and the power and passion of that body has to be applauded, but there’s little doubt that he has been instrumental in getting us to where we are today. I very much hope that once he’s taken a break we’ll see him back in the ID space, perhaps this time designing the new citizen-centric, privacy friendly authentication schemes that will emerge from Whitehall over the next few years?
Quite a lot actually, particularly in the world of social media. The popularity of Facebook, Twitter etc is very much driven by their flexibility in extending our real-world lives into the virtual in whatever manner we wish, including allowing us to completely reinvent – or fabricate – ourselves online.
The BBC reports on the rather odd case of Facebook allegedly taking down a user’s account because she was ‘impersonating’ Kate Middleton. She wasn’t doing that, she just happens to be called Kate Middleton, and I’m sure there are plenty of other Kates out there who share that surname. It’s unusual because in most cases, social media sites leave it to users to sort out name ownership amongst themselves, except where there is a clear criminal intent to defraud or mislead.
Our problem is that the glue that binds online personae to their friends/followers/acolytes is their name: it is the primary identifier for the account, and often the tool against which friends may search for each other. For example, I have three social networking accounts: a Facebook profile which I use mainly for social purposes, a Twitter account that is largely focussed on my professional network, and a second Twitter account in which I take on the persona of an entirely fictional character. Annoyingly, the fictional character has more followers than I do, but that’s probably because he’s much more interesting than I am, and has some very interesting fictional friends.
We have invented a social media world that reflects the simplest of our identifying conventions from the real world. Just like the real world, we can be pseudonymous. After all, a name is not a fixed attribute, and an individual can have multiple names and change those whenever they wish. That may be fine for social media applications, but it’s not good enough for a broader ID system, except possibly as a selector that allows an individual to point to the attributes that they wish to associate with a particular transaction or relationship.
Whilst our chosen identifiers are not unique, and whilst we continue to use contextual, changing identifiers such as names as public identifiers, this problem will continue. Names also provide a simple way for third parties to track us across multiple accounts, or to incorrectly assume that individuals who share a name are one and the same, and that is a key privacy weakness. We need the option to use meaningless but unique identifiers that prevent that tracking but ensure that we can uniquely identify ourselves when we wish to do so. More on that in another article.
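To make the idea of ‘meaningless but unique identifiers’ concrete, here is a minimal sketch of one well-known approach: deriving a different, stable pseudonym for each service from a single secret held by (or on behalf of) the individual, so that two services cannot correlate their users by comparing identifiers. The key value and service names below are purely illustrative, and a real scheme would need careful key management; this is a sketch of the principle, not a production design.

```python
import hashlib
import hmac

def directed_identifier(master_secret: bytes, relying_party: str) -> str:
    """Derive a per-service pseudonym from a user's master secret.

    The result is meaningless to anyone without the secret, differs for
    every relying party (so accounts cannot be linked by identifier),
    yet stays stable across repeat visits to the same service.
    """
    return hmac.new(master_secret, relying_party.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Illustrative values only: in practice the secret would be a
# high-entropy key, not a hard-coded string.
secret = b"example-master-secret"
id_bank = directed_identifier(secret, "bank.example.com")
id_shop = directed_identifier(secret, "shop.example.com")

assert id_bank != id_shop  # no cross-service correlation by identifier
assert id_bank == directed_identifier(secret, "bank.example.com")  # stable
```

The same pattern underpins the ‘pairwise’ identifiers found in several federated identity schemes: the individual can prove they are the same returning user to one service, without handing different services a shared handle to match against each other.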
In the meantime, I’m pleased to see that the top handful of hits against my name in Google report on my many acting successes, my distillery and US real estate business. Maybe I am as interesting as my fictional persona after all?
The European Commission’s Institute for Prospective Technological Studies (IPTS) has published a report on ‘The State of the Electronic Identity Market: Technologies, Infrastructure, Services and Policies.’ I co-authored the report together with teams from IPTS and Consult Hyperion, with the objective of exploring where individuals’ identity data are converted into credentials for access to services.
The document concludes that the market for electronic ID is immature. It claims that the potentially great added value of eID technologies in enabling the Digital Economy has not yet been fulfilled, and fresh efforts are needed to build identification and authentication systems that people can live with, trust and use. The study finds that usability, minimum disclosure and portability, essential features of future systems, are at the margin of the market and cross-country, cross-sector eID systems for business and public service are only in their infancy.
This was a particularly tough document to write, since the scope of ID is potentially so large, yet there are so many confused and conflicting concepts, terminologies and delivery approaches. Qualitative data about the value of ID services is almost non-existent, and tends to focus principally upon enterprise identity management technologies. At the time we wrote the document, the UK was gripped by the inertia and non-delivery of the failing National Identity Service, and the impact of that is reflected in the document.
The report is available for free and can be downloaded here.
In 1890, Samuel Warren and Louis Brandeis famously described privacy as “the right to be let alone.” For over a century since then, society has developed legal, technical and social frameworks that protected a concept of alone-ness, of isolation, of keeping others away from the individual and information about that individual. Our concept of privacy has become one of ‘urban anonymity:’ we believe we have some degree of anonymity when we are in public, because if nobody knows who we are, then our actions cannot be traced back to us and so cannot have consequences.
But Richard has described how the emergence of the Internet has stood that idea on its head in the past ten years. The explosion of data, of access to that data, and of tools to search, filter, analyse, interrogate, present and disseminate that data, placed in the hands of government, companies and individuals, has stripped away that veneer of anonymity and created a dystopia in which our privacy is fading, not because of our failure to control privacy, but because privacy itself has changed, and the old controls are no longer able to contain or to manage the ways in which we share information with others. Nor has this erosion been gradual: great swathes of our privacy have been cut away by tragic catalyst events such as the killings of Jamie Bulger, Holly Wells and Jessica Chapman, and Baby P, and the attacks on the World Trade Centre and London’s transport system.
Privacy is no longer about keeping our personal information secret, but is instead about controlling how it is used. And unless we can enforce that control, the only possible outcome for our society is total transparency: a world in which nobody has any secrets at all, and individuals have no meaningful control over how those secrets are used. Nothing is ignored, nothing is forgotten, nothing is forgiven. That is the surveillance society which, four years ago, Richard warned the government we would sleepwalk into if we continued down this path.
There is still hope: during his tenure as Information Commissioner, Richard recognised the critical need not to prevent access to information – something which is now impossible, as Wikileaks have shown the world’s governments – but to render individuals, organisations and governments accountable for how that information is used. This evening he has described how the legal approach to accountability can work. But I would argue that if we continue to rely solely upon regulation to enforce that accountability, then we will never win, since there will always be those corporations – and in particular global ones – who choose to operate above the law, and Richard’s successor has discovered just how difficult it can be to fight the corporate spin machine.
True accountability must depend upon mathematics, not who has the best lawyers. As consumers, we must demand that privacy controls are coded into every aspect of our online world, so that we regain control of our information. It is consumers, not corporates and governments, who should dictate what is collected, processed, stored, disseminated, derived and deleted. And this can only happen when we have delivered the technical, as well as the regulatory, demands of Privacy by Design.
And that accountability will, ironically, depend upon us delivering a truly effective population-scale identification and authentication system – not the control-freakery daydream that is thankfully now being struck from the statute books, but a proportionate, federated, privacy-enabling infrastructure that will provide the cryptographic roots of true information accountability. Individuals will be able to control how their information is used and by whom, and to easily identify and prove when misuse has occurred. In fact, in a utopia where the cryptographers rule, I’m sorry to say for Richard’s sake that there might even be no further need for lawyers, or even an information commissioner.
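One building block of ‘proving when misuse has occurred’ is the tamper-evident audit log, in which each record of how information was used is hashed together with the hash of the previous record; alter or delete any entry after the fact, and the chain no longer verifies. A minimal sketch follows – the record fields (`actor`, `action`, `subject`) are illustrative, not drawn from any real scheme, and a real system would also sign and distribute the chain so that the log keeper cannot quietly rebuild it.

```python
import hashlib
import json

def append_entry(log: list, record: dict) -> None:
    """Append a usage record, chaining its hash to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "hash": entry_hash})

def verify(log: list) -> bool:
    """Recompute the chain; any tampered entry breaks verification."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Illustrative usage records, not from any real audit scheme.
audit = []
append_entry(audit, {"actor": "dept-a", "action": "read", "subject": "record-42"})
append_entry(audit, {"actor": "dept-b", "action": "share", "subject": "record-42"})
assert verify(audit)           # the untouched log checks out

audit[0]["record"]["action"] = "delete"
assert not verify(audit)       # after-the-fact alteration is detectable
```

The point is not the twenty lines of Python, but the shift they represent: misuse becomes a matter of verifiable mathematics rather than of competing testimony.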
But for now we have to live in reality, and that reality needs the rules and regulators that Richard has described. What I hope we can discuss now are the implications of his ideas for us as individuals, organisations and professionals, and how we can move forward from our imperfect present to a pretty good – if not actually perfect – future for privacy.