The latest select committee report by MPs into the progress of digital government in the UK has resurrected a question that has reared its head on several occasions in the past – do we need a unique single identifier for every citizen, to be associated with our online presence?
This debate was most recently quashed in 2010 with the advent of the coalition government that quickly scrapped the outgoing Labour administration’s ID cards scheme and its associated central database. It was widely accepted that the concept of a physical card to prove who we are was a step too far in terms of individual liberty and personal rights.
That political decision led directly to the creation of Gov.uk Verify, the troubled digital identity scheme that has gone out of its way to avoid having a single identifier, instead working on a federated model.
While most experts agree that federated identity is the ideal solution, it’s hard to deliver on a national scale – as Verify has proved.
A single identifier has many benefits, say supporters – it makes identity verification easier, and it could allow citizens to quickly link together all the data government holds on them, to check it is correct, and even enable some form of personal control over that data.
Critics, however, point out that a unique identifier could just as easily be used by government to connect personal data together in negative ways – for example, look at how the Home Office used health records to identify immigrants as part of its controversial hostile environment policy.
This is to some degree a peculiarly British debate. Most European countries have a single identifier for citizens – often in the form of a physical card – and the success of digital identity schemes in the Nordic countries is at least in part down to the existence of a unique identifier. The Science and Technology Committee report cited Estonia's digital identity scheme as a success – one that is also based on a unique identifier.
It’s true to say that Scandinavians tend to trust their governments more than we do in the UK – which is not an insignificant difference in this debate. And Estonia, as a former Soviet country, has very different cultural attitudes to the issue.
But it’s also relevant to point out that we already have a unique identifier in the UK – two, in fact – in our national insurance (NI) number and NHS number.
The NI number, however, is not considered secure enough to use – it’s too easy for people to have multiple NI numbers, and there are more NI numbers in existence than there are people in the UK, partly thanks to historic IT system problems.
The National Health Service in England is using the NHS number as part of the digital ID system it is developing for patients. It’s fair to say the NHS is a lot more trusted by citizens than the wider public sector.
Of course, the reason why MPs on the Science and Technology Committee suggested opening up this topic for discussion is because of the failure of Verify, and the way it has impeded the development of a wider commercial market for federated digital identity systems in the UK. If Verify worked, we wouldn’t be having this debate all over again.
While it’s generally agreed that a single unique identifier is not the right way forward in the long term for digital identity, we nonetheless find ourselves in a position where it is right to have this debate again. Let’s hope it’s for the final time.
Three months ago, the Government Digital Service (GDS) appointed the first ever director of digital identity, Lisa Barrett – tasked with taking on the troubled Gov.uk Verify programme.
Barrett arrived just as the National Audit Office found that “it is difficult to conclude that successive decisions to continue with Verify have been sufficiently justified”.
Barely two months later, MPs on the Public Accounts Committee branded Verify as “failing its users”, not delivering value for money, and added that its leaders have not accepted “proper accountability” for the programme and its difficulties.
Well, now it seems Barrett is going to be accountable. Good luck with that.
On 7 June, she made her first public appearance, at the Think Digital Identity in Government conference in London, to offer an update on what’s next. Clearly, in a short space of time, she has made an impact.
Other speakers described her as “a breath of fresh air”, and welcomed the way she has been reaching out to the many and varied stakeholders in the UK digital identity sector. In her talk, Barrett said that GDS needs to “tell a better story” around Verify, which is certainly true.
She laid out the priorities for Verify and digital identity policy, emphasising the importance of standards and collaboration between public and private sectors to accelerate the use of Verify and other compatible digital ID products. She identified the need for a “clearer set of rules” around digital identity to encourage more private sector investment. She said the user experience for Verify needs to be improved.
Heads nodded around the gathered digital identity experts. There was nothing to disagree with in her talk. The problem – which is nothing to do with Barrett – is that her predecessors could have given (and possibly did give) exactly the same presentation at any time in the past five years.
There remain more questions than answers. But Barrett hinted that could change soon – in response to a query from Computer Weekly, she said there were “things that can’t yet be announced”. Which, given the earlier goal of telling a better story, did seem to frustrate her. It’s a shame she wasn’t able to use the occasion to make some of those announcements to a room full of people desperate to hear them.
As an example of the greater openness she wants to establish, Barrett also became the first person from GDS to publicly acknowledge that the government’s Infrastructure and Projects Authority (IPA) recommended in July 2018 that Verify be scrapped. When Computer Weekly revealed this fact, in September last year, GDS declined to comment and has not done so since.
Barrett explained that the reason for the IPA decision was down to doubts whether the existing identity providers (IDPs) involved in the programme would continue to support Verify. Subsequently, two of the seven IDPs decided not to, while five signed up to new contracts. That seems an important piece of information for wider stakeholders to know, and it’s a good example of how badly GDS has communicated about Verify in the past.
The heart of the challenge now facing Verify was clear when the event heard from Martin Edwards, managing director of identity services at the Post Office – one of the two largest Verify IDPs. He listed the four things he needed to see from government:
- More PR and communication – “Be less embarrassed about Verify,” he said – to promote the brand and its purpose. Edwards called for more visible ministerial backing, adding that too many people have never heard of Verify or don’t know what it does, which is a hindrance for a project that once intended to reach 25 million users by 2020.
- Align regulations for identity behind the Verify standards – for example, Edwards pointed out that much government regulation still specifies that people need to give a written signature, which is clearly incompatible with digital solutions.
- Better access to data sources – one of the biggest reasons Verify has performed so badly is the limited datasets available to establish a citizen’s digital footprint, especially for those who don’t have passports, driving licences, credit cards or mortgages. The Post Office is involved in a trial at the London Borough of Tower Hamlets, which looks at ways to use local authority data to assure a digital identity.
- More co-ordination and much faster user take-up across Whitehall – Edwards highlighted the fact that even among the 19 online public services that do offer Verify, some do so only as one option among several. The attraction for the IDPs involved with Verify was always to reach the millions of users of online tax services such as self-assessment, and Universal Credit. So far, only 4% of HM Revenue & Customs’ (HMRC) online users opt for Verify over the well-established Government Gateway, while the Department for Work and Pensions continues to push back the rollout of its controversial welfare reforms.
These are all reasonable concerns, and none of them are new. But the issue here is that one of the most important organisations involved in promoting Verify and digital identity in general is still asking these questions barely nine months before GDS hands Verify over to the private sector.
There are rumours that GDS may announce that Whitehall departments will soon be able to bypass GDS and deal directly with the IDPs – potentially a forerunner to allowing departments to choose other IDPs beyond those directly involved with Verify. Such a move would benefit the IDPs, but it remains to be seen whether that’s enough of an incentive for the likes of HMRC to put its full weight behind Verify.
Notably, Barrett at times played down the importance of Verify in favour of government’s role in establishing a wider digital identity ecosystem. At one point, she referred to Verify as just “one technology implementation of the standards”. That’s a technology implementation that will cost taxpayers at least £175m, mind you.
GDS wants to broaden the conversation beyond Verify and highlight the goal of stimulating a digital identity market in the UK. That’s always been a worthy objective – but one that cannot be divorced from the widespread criticisms of Verify, not to mention that £175m of spending.
Very few people would complain if GDS were to become more open with its communications around Verify, and if Barrett can deliver on that she will have achieved an early win. Meanwhile, lots of key players are waiting to see what those secret announcements are going to be.
A couple of points mentioned during the event may be of interest to those following the progress of digital identity:
- The Post Office is looking at how it could use its branch network to help create Verify identities face-to-face, instead of purely online, for people with too limited a digital footprint for current digital-only methods.
- There are six digital identity pilots going through the Financial Conduct Authority’s sandbox programme, designed to test new finance-related products for regulatory compliance. If approved, that potentially promises a big boost to the idea of using banking products, and open banking technologies, to enrol millions of people into digital identity schemes.
Gov.uk Verify is the Theresa May of digital government.
The embattled Prime Minister faces calls to quit from all around – her own Conservative MPs, grassroots Tory activists, the right-wing press and more. Yet she refuses to go until she delivers Brexit, oblivious to the criticism, convinced of the validity of her actions, blinkered by determination to prove to everyone she is right.
The embattled digital identity programme has been similarly slammed from every angle.
The Infrastructure and Projects Authority – the government’s own major projects watchdog – recommended Verify should be scrapped.
The National Audit Office (NAO) – the official financial watchdog of Whitehall spending – said “it is difficult to conclude that successive decisions to continue with Verify have been sufficiently justified” and “the performance of Verify has consistently been below the standards set out in each of its business cases”.
And now MPs on the Public Accounts Committee – the parliamentary body that provides oversight of government spending – have added their view that Verify “is failing its users”. It goes on: “Key government departments do not want to use the system and members of the public are facing problems signing up.” And the MPs didn’t stop there, adding that the project has been “characterised by poor decision-making by the Cabinet Office and GDS” that was “compounded by their failure to take proper accountability”.
But, like Theresa May, GDS is unperturbed. In a statement to the press following the PAC report, the Cabinet Office and GDS said: “Verify has saved taxpayers more than £300m and is a world-leading example of how to enable people to use services securely online. The PAC report reflects that this has been a challenging project – but challenges like these are to be expected when the government is working at the forefront of new technology. Verify is now at a point where it can be taken forward by the private sector, so people will be able to safely and securely access both private and public online services.”
Like a statement by May on her next set of deadlines for Brexit, even this cannot be taken at face value.
“Verify has saved taxpayers more than £300m”?
According to the NAO, “On the evidence made available to us, we have not been able to replicate or validate the benefits estimated by GDS.”
“Challenges like these are to be expected when the government is working at the forefront of new technology”?
According to the PAC, “Verify clearly demonstrates many of the failings we see all too often on large government projects: expectations were over-optimistic from the start, key targets have been badly missed and results simply not delivered.”
“Verify is now at a point where it can be taken forward by the private sector”?
According to the PAC, “The Cabinet Office and GDS have no meaningful plan for what will happen to Verify post-2020,” when the system is handed over to the private sector.
As with May, only two outcomes are possible from here. Perhaps GDS will be proved right, and Verify will become a triumphant example of public sector innovation stimulating a valuable new market for digital identity. Perhaps the Prime Minister will get her Brexit Withdrawal Agreement through Parliament too.
Or perhaps every expert body that has assessed the situation will be proved right instead. Perhaps the phrase, “When you’re in a hole, stop digging,” might be proved accurate once more.
The sacking of defence secretary Gavin Williamson is another indication of how technology is influencing politics, and vice versa. While his alleged crime was to leak details from a national security meeting, the fact that the discussions were about Huawei technology is significant.
The fears over Huawei’s links to the Chinese government’s intelligence and security services are political, not technological. The networking supplier has been part of the UK’s telecoms infrastructure since 2005, when BT awarded a contract to supply equipment for its 21CN project – a major initiative to overhaul BT’s core transmission network and move from old Public Switched Telephone Network (PSTN) technology to internet protocol on a digital infrastructure.
The decision to choose Huawei had political implications even then – BT dumped the ailing British supplier Marconi in favour of the Chinese firm, which effectively led to the demise of one of the oldest and most famous names in UK technology.
Acknowledging concerns over Huawei’s position, the UK government subsequently made unprecedented stipulations to maintain oversight of Huawei’s products. The company had to set up the Huawei Cyber Security Evaluation Centre (HCSEC) in 2010, which is now overseen by a board chaired by Ciaran Martin, the GCHQ lead for cyber security and chief executive of the National Cyber Security Centre (NCSC).
Huawei has to open up its products, specifications, source code and even allow assessment of the capabilities of its technical staff. No other tech supplier undergoes such intense scrutiny – a reflection of the significant role its products play in the UK’s critical national infrastructure.
Any problems uncovered by the HCSEC oversight board are publicly declared – as they were earlier this year, via a report to the UK National Security Advisor regarding concerns about technical deficiencies in Huawei’s software engineering processes which “exposed new risks in UK telecoms networks”.
The NCSC remains publicly comfortable with Huawei’s current position in UK telecoms. The concerns from the UK’s security partners in the Five Eyes network, especially the US, are political – a distrust of the Chinese government and disbelief that Huawei would not accede to demands from the state to use its equipment for spying or to disrupt UK infrastructure.
Of course, China is not the only country that makes such stipulations of its technology companies. The US has very similar laws, and the Snowden revelations showed that big US tech firms have been forced to place backdoors in their products. Imagine the furore if China banned, say, Cisco because of alleged closeness to the US security services.
Huawei is not the first and will not be the last example of the growing intersection between politics and technology. But on this sort of trajectory, there is only one inevitable outcome – that the tech sector splits between products and suppliers that the US approves of, and those which China and perhaps Russia approve of. Such a split would make things easier for politicians, but would polarise the tech industry in ways that surely nobody wants to see.
The digital transformation of government and public service delivery is well under way and is by now – hopefully – unstoppable. It’s taken longer to get here than it should have, and it will take longer to fully embed the new public sector practices and technology than it needs to, but it should get there.
This leads us inevitably to the area of public life that remains almost entirely untouched by the 21st century – when will we see the digital transformation of politics?
There’s an appetite for it – six million people signing an online petition, regardless of what it’s for, is a sign that large parts of the citizenry want new and better ways to engage with their elected representatives. The dismissal of that petition by the government shows the disdain in which such digital engagement is held by politicians.
I’ve written before how Brexit can be seen as a symptom of the societal changes being brought about by the digital revolution. We’re entering a period of backlash against the way technology is breaking down hierarchies and replacing them with a networked society – a shift that represents a huge threat to individuals and organisations whose power is derived from their position at the peak of those old 20th century hierarchies.
Have we now reached the pinch point, where politics itself is reacting to this change and will inevitably do whatever it can to resist?
In the UK, we hear the phrase “will of the people” increasingly bandied around to suit ideological needs. It’s a phrase born of hierarchies – by politicians who reluctantly allow citizens to engage with them once every five years in a general election, knowing in all but the minority of swing seats that the outcome is pretty much guaranteed in the incumbent’s favour.
How threatened would those politicians and the parties supported by our 19th century first-past-the-post voting system feel, by the idea that the will of the people can be expressed in real time, thanks to technology? That the will of the people can be measured and monitored at a granular level on the issues that mean the most to voters? How different, for example, would climate change policy be if there were a reliable way to ascertain the will of the people, other than marches, rallies, protests and petitions?
This is a challenge for the European Union too. Seen as remote, bureaucratic and unaccountable, would the EU be able to change perceptions across the continent if it pioneered new digital concepts for outreach and engagement that allow people to overcome the sense that Europe is undemocratic and over-reaching?
It will happen – society will change, and will force politics to change. But it’s going to take a generation at least, for the will of the people to force it through – unless there’s an enterprising, digitally savvy group of politicians willing and able to start a process that could see the old, tired orthodoxies that underpin so much of the Brexit debate swept away.
Hello and good luck, to Lisa Barrett – the newly appointed director of digital identity at the Government Digital Service (GDS). She’s taken on what many people outside GDS see as something of a poisoned chalice.
You might think it strange that this job has been created for the first time barely 12 months before GDS hands its digital identity system, Gov.uk Verify, to the private sector and government ceases further funding.
But Barrett’s introductory blog is full of positivity, as you would expect. There will be no shortage of people wishing her success in her goals. For there’s one thing GDS has never quite grasped – the growing band of Verify critics generally have one thing in common: they want Verify (or something like it) to succeed, and always have. Their criticisms often come from frustration at seeing what they regard as such a critical project struggle as a result of poor decision-making in GDS.
The timing of Barrett’s appointment is all the more difficult after recent events in Whitehall, with a highly critical National Audit Office (NAO) report followed last week by a parliamentary select committee meeting at which, if you didn’t know better, you might have taken away the conclusion that apart from a little over-optimism by that other lot who started the project, Gov.uk Verify is all going swimmingly well. Tickety boo. Nothing to see here, move along now.
That seemed to be the message that the leaders of the troubled digital identity programme wished to impart to MPs on the Public Accounts Committee (PAC).
The committee was investigating progress on Verify after the NAO questioned the value for money of the £154m spent so far on the project and doubted the scale of benefits claimed by GDS.
Making the case for how misunderstood Verify has been were Cabinet Office permanent secretary and civil service CEO John Manzoni, alongside GDS director general Kevin Cunnington.
At times it was difficult to tell whether committee members were seeing hubris or denial in action. Let’s examine some of the highlights:
Gareth Snell, a member of the PAC, asked: “Mr Cunnington and Mr Manzoni, could you give me any of the quantitative KPIs from any of the five business cases that Verify has actually met? Just one example of any quantitative KPI from any of the five business cases.”
Cunnington: “On the current business case [from 2018], we are on track in terms of the volume of… verifications that we have.”
Note that the original business case was made in 2015. The 2018 business case was intended to rescue the project after the Cabinet Office project management watchdog recommended it be shut down.
Snell: “Okay. Any others?”
Cunnington: “You are testing my memory now. I cannot think of any off hand.”
Snell drilled down into the costs and claimed benefits of the programme and concluded: “So you are saying that spending less, because you failed to meet targets, is now a success, in the Cabinet Office’s definition of success.”
Manzoni acknowledged there have been problems, but in doing so effectively threw Cunnington’s predecessors as GDS chief under the bus.
Manzoni: “I do not think there is any question that Verify did not meet its original business case benefits. Nobody is sitting here saying, ‘Right, it met its projections.’ But I think the issue is more one of hopelessly optimistic projections in the original business case than failure.”
Snell: “I will come to the previous business cases, because you are right that they were significantly way off, and several reviews and a number of business cases still maintained optimistic trajectories – let’s say optimistic for the sake of diplomacy. To what do you ascribe the low take-up across government for this system?”
Cunnington then blamed the slow progress of digital transformation at other Whitehall departments for the low take-up of Verify, to which Snell replied:
“Without wishing to paraphrase you, Mr Cunnington – I will do it anyway – it is the government’s fault that they have not transformed their service efficiently to catch up with your groundbreaking technology?”
Diverting away from the failed KPIs, Cunnington instead quoted the four strategic aims of Verify.
“One was to create a standard in the marketplace. That is the most important thing. I think we can say that we have absolutely completed that,” he said.
Snell replied: “It is quite an expensive standard though, is it not – £154m for a standard?”
It’s worth noting that identity standards are now increasingly being driven by the financial services sector in support of open banking legislation. GDS’s preferred GPG45 standard is seen by some as too prescriptive.
Cunnington continued on the three other objectives:
A commoditised cost model – “We have got the price down to a point where it can be bought by the government and by the private sector,” he said. However, the NAO found that GDS was subsidising the price paid by departments to such an extent that it feared it would become unaffordable once Verify is handed to the private sector in March 2020.
A safe and secure service across a multiplicity of private sector vendors – “Which again we have achieved”. Most private sector identity providers complain that the effective monopoly handed to five companies for Verify has hindered UK market progress.
Mass adoption of Verify. “You are right to say that is where we have not had the success we were hoping for,” Cunnington acknowledged.
To which Snell replied: “So there was the application of the first three, which are qualitative, and then the one quantitative that was meant to derive actual benefit to the government and the taxpayer, which has not actually been met.”
Let’s consider these objectives. I will make an admission – I’m a supporter of Liverpool FC. In the style used above, I would say Liverpool’s four strategic objectives in recent years have been to improve the team, qualify for the Champions League, expand the stadium, and win the Premier League.
The club has achieved the first three, but not the fourth. Everybody who knows Liverpool, or English football, will know that only the fourth objective matters. User adoption is Verify’s Premier League.
Later in the inquiry, as Manzoni again pointed the finger at Cunnington’s predecessors, Snell pointed out that Manzoni was in charge when the 2015 business case was signed off.
Manzoni replied: “I am not sure I was concentrating on the business case in 2015; I cannot remember, actually.”
Former Verify boss Nic Harrison was also questioned by the committee, and he revealed that he had reviewed Verify on taking the job in October 2016 and made 11 recommendations – right after the latest business case had been signed off. Here’s the conversation:
Snell: “Okay, so what did you change from the 2016 business case for your 11 recommendations?”
Harrison: “The 2016 business case had been written and had been through the Cabinet Office investment committee and the Treasury – it was a business case. As I came in, I recognised that it was unlikely to deliver in its current form.”
Snell: “Literally a month after it had been signed off and agreed by the Treasury, you said to the Cabinet Office, ‘This is not going to work’?”
Harrison: “I certainly said it was not on a glide path to success.”
To be fair to Harrison, Verify has been reviewed more than 20 times in its five-year existence.
The committee examined that 2016 business case a little further:
Snell: “Bearing in mind that Verify’s original programme business case came in 2015, who agreed to sign off the contracts for additional suppliers in 2016 when there was no immediate evidence that there would be an increase in supply?”
Manzoni: “I can’t remember.”
Snell: “You can’t remember. You are forgetting a lot of things today, Mr Manzoni.”
There was also potential forgetfulness when it came to discussion of Verify failing to get anywhere near its target of a 90% verification success rate:
Harrison: “The 90% figure appears in the 2015 business case, which is highly speculative and shows best probabilities. It was derived from what the likely success rate could be, based on demographics and looking backwards from a period when the product had been a success.”
Snell: “It is also the figure that was in the government business case that was used to allow sign-off of this programme to go forward. Regardless of whether that is a figure that is right or not, it is the figure on which this business case was predicated, and it failed to meet it.”
Cunnington: “No question. It was adjusted down to the range of 20 to 30 in the 2016 business case, when we recognised that wasn’t going to be true. As Nic says, the 90% was based on a future state where a lot of people are already verified – not the challenge of signing up.”
Really? The 2015 business case said, explicitly, as an objective: “90% of people can verify their identity online with a 90% success rate by April 2016”. I have a draft copy of the business case; it’s there in black and white, and there has never been any suggestion that the final document claimed otherwise. Nothing in it suggests the 90% figure described a “future state” where people are “already verified”. To be fair to Cunnington, he was not in GDS at the time.
UPDATE 26 March 2019: An eagle-eyed reader also pointed out this GDS blog post from 2016, which clearly states the target of 90% and laid out the activities that were underway to achieve, and even exceed, this verification success rate.
When Verify was given its final lifeline in October 2018, its saviour was the Department for Work and Pensions, which uses Verify as part of its Universal Credit (UC) system – despite achieving only a 38% verification success rate.
Cunnington revealed that Verify is not actually mandatory for UC: “The primary policy objective is to get claimants paid, and claimants go all the way through the process before saying, ‘Wouldn’t it be nice to verify yourself?’”
Not only that, but Cunnington admitted that DWP is “also looking at [its] own alternatives” to Verify. Even Verify’s last great Whitehall hope is considering its options.
Manzoni repeatedly claimed that Verify had delivered significant financial benefits, quoting a figure of £366m. As the committee pointed out, the NAO said it had “not been able to replicate or validate the benefits estimated by GDS.”
GDS remains convinced that Verify will prove itself a success as the vanguard of an emerging digital identity market in the UK, which will be stimulated by handing Verify to the private sector, using the standards established by GDS.
I had an interesting conversation last week with Don Thibeau, chairman of OIX, the identity standards body that has championed Verify and received significant funding from the Cabinet Office.
Thibeau, historically a strong supporter of Verify, said OIX is about to make a significant move – shifting its headquarters from Silicon Valley to London, because it believes the capital is set to be the centre of a nascent global identity market.
Sadly for GDS this is not a vote of confidence in the Verify programme.
Thibeau is excited by the UK’s regulatory support for open banking, which is driving retail banks to work together on standards for ID interoperability that are likely to become adopted globally. While there is overlap with the work done on Verify standards – and GDS has tried repeatedly to get banks enthused about Verify – open banking is instead emerging as the most likely driver for a digital ID market.
Let’s reiterate – people want Verify to succeed, they always have. Airbrushing its troubled history is not going to help. There will be a lot of people wishing Lisa Barrett every success in her new job.
UPDATE 26 March 2019: The originally published version of this article included a quote taken from the official Public Accounts Committee transcript, where Kevin Cunnington said: “Our best performing LOA2 services are in the range of 7% to 18%.” GDS subsequently informed us that the transcript was incorrect and the correct figures are “70% to 80%”, and that Cunnington’s words were transcribed incorrectly. GDS said it will ask the committee to also update its transcript accordingly. The incorrect quote and associated references have been removed from this article.
The Post Office likes to describe itself as “the nation’s most trusted brand”. Anyone following the latest developments in the increasingly heated High Court case about its Horizon IT system would be justified in questioning that assertion.
In a move that stunned the court – including the Post Office’s own legal team – the organisation has accused the judge, Sir Peter Fraser, of bias, and suggested he step down from the case. Fraser has been a barrister since 1989, Queen’s Counsel since 2009, and has practised in the Technology and Construction Court field since 1990. He is an experienced judge, who has shown during the case that he has extensive understanding of the often complex technological issues under examination.
Less than a week before, Fraser issued a damning ruling against the Post Office from the first of four trials in the case, in which over 500 subpostmasters claim they were wrongly held responsible for accounting errors allegedly caused by Horizon. In the ruling, Justice Fraser effectively told the most senior Post Office witness that he believed she had lied.
The shock application to recuse the judge came very soon after evidence was presented in court that could potentially undermine many of the Post Office’s previous denials of problems with Horizon. A senior employee of Fujitsu, the IT partner that runs Horizon, admitted that a mistake by one of its staff may have caused a discrepancy in one branch’s accounts, for which the subpostmaster in question was held responsible. Journalist Nick Wallis, who has been live tweeting the case every day from the High Court, called the evidence a “smoking gun”.
The Post Office has consistently claimed that such incidents cannot happen and have never occurred. Subpostmasters have lost their jobs, livelihoods and sometimes their liberty after the Post Office refused to acknowledge potential errors in Horizon.
The accusation of bias came from a solicitor working for the Post Office, who claimed that the earlier ruling showed the judge could not be impartial. It’s a stunning accusation. It also means the case has been suspended for two weeks, and even if the application is rejected, it is likely to cause further costly delays to the trial – at the expense of the claimants and of all of us taxpayers who are funding the Post Office’s legal fees. A rejection may also give the Post Office grounds for appeal should future rulings go against it.
Throughout the 10 years since Computer Weekly first revealed the plight of subpostmasters and the allegations around Horizon, the Post Office has refused to admit one simple, inescapable fact that everyone in IT knows to be true: software is not perfect. Even in the most efficient software application – Horizon has run the Post Office branch network for 20 years – there will be unforeseen bugs, or tiny glitches that are often near-impossible to replicate. Blinkers on, the Post Office has taken technology hubris to new levels.
The claimants’ barrister suggested the Post Office move was “calculated to derail” the trial. The Post Office stands to lose tens of millions of pounds if it loses the case, not to mention the reputational damage. The unanswered question now is how far the organisation is willing to go to prevent that happening.
It’s not commonly known outside Whitehall that by the time the National Audit Office (NAO) – Parliament’s independent spending watchdog – publishes a report, it will have been through several iterations. The subjects of the report – government departments and agencies, civil servants and ministers – are offered a chance to comment and object to the contents, which means that conclusions are often watered down.
So as a journalist who has to read through such reports – typically many tens of pages long – there’s sport to be had in looking for phrases that subtly show the real opinion of the auditors that compiled the document.
“It is difficult to conclude that successive decisions to continue with Verify have been sufficiently justified.”
Let’s dissect this masterpiece of Whitehall bureaucrat-speak.
“Difficult to conclude” – in other words, we cannot conclude.
“Successive decisions to continue” – they wouldn’t listen when they were told to stop (the NAO revealed there have been more than 20 internal and external reviews of Verify).
“Sufficiently justified” – not justified.
The highly critical report on Verify offered one other piece of classic NAO phraseology: “On the evidence made available to us, we have not been able to replicate or validate the benefits estimated” by the Government Digital Service (GDS), developer and now perhaps the only remaining champion for Verify.
In other words, you can’t rely on (believe?) GDS’s claim that Verify will generate £217m of benefits – a figure already 75% lower than promised in the original 2015 business case. You can believe that it has spent £154m already, with another £21.5m to come – plus an uncountable amount spent by the few departments that integrated Verify into their digital services. You’ll notice that the total spend comes in just nicely below £217m.
The rest of the NAO report simply catalogued what we already know – that Verify has missed every performance target for verification success rate, service adoption, and user take-up. And missed them by a country mile.
Verify has become tribal in Whitehall digital government circles. Outside GDS, there is widespread disappointment that the project has ended up here – the concept and aims were widely welcomed and plenty of people wanted it to succeed.
Over time, according to critics, GDS withdrew ever further into a bunker, determined to prove outsiders wrong. Still now, criticism is reluctantly received – even when the government’s major projects watchdog recommended the programme be scrapped last year.
GDS is right to say this was a challenging project at the leading edge of its field, with ambitious targets – an attempt to set standards that would create a thriving commercial market for digital identities in the UK. That was always going to be tough – it was brave to take it on. Others can comment on whether it was correct.
Talk to commercial providers of digital identity today, and they will say that Verify has been mostly a hindrance, restricting the market to a small number of providers given a contractual monopoly by GDS over access to public services. Two of the seven chosen providers pulled out when the contracts were recast last year, seeing that their business cases no longer stacked up – leaving 380,000 Verify users who will, at some point, have to go through the whole registration process again to choose another provider.
One Whitehall insider posed an important – if so far rhetorical – question: how can it be right that the private sector was allowed to become the exclusive gatekeeper for deciding whether or not citizens can access online public services?
GDS is determined – convinced – that Verify can and will become the foundation for a standards-based approach to federated digital identity once it is handed over to the private sector in March 2020. The organisation insists that if that objective is achieved, Verify will be a success – even if it may not address whether, at £154m and counting, that success was value for taxpayers’ money.
It’s worth pointing out that, in functionality terms, Verify does exactly what it said it would. This is not one of those government IT systems that failed because the software didn’t work. But GDS was over-optimistic about what that software could deliver. Assurance levels were set too high. Not enough public data is available to prove in a 100% digital process that enough people are who they say they are. And critics say that the developers seemed to forget that the user’s real need was simply to access an online service, not to have to go through an extensive registration process that more than half of them failed.
Verify was originally justified as a means to reduce fraud, eliminate costly legacy technology, remove paper processes, and cut call centre costs – all of which remain missed opportunities.
At the core of Verify is the document checking service – an API-addressable system that allows verification of government documents such as passports and driving licences. That’s the real prize for the private sector – that’s what they now want access to. And it’s also likely to be the most obvious success of the Verify programme, once government decides to open it up.
Sources close to GDS point to the behaviour of certain large Whitehall departments that set out to frustrate progress and dictate unachievable goals, setting Verify up to fail. There may well be some truth in that. But it has to be said – and has been, frequently – that GDS did itself few favours along the way.
Here is our quote of the week: “We spend £8m a year on paper. We spend £2m a year on envelopes. We can save lives, save staff time and cut costs by using an extraordinary piece of technology that has the ability to allow two people to communicate instantaneously, and to allow groups of people to communicate instantaneously. If you take the result of someone’s test you can immediately communicate those results with the analysis, to the patient and the patient can reply. It’s called email. I don’t know whether any of you have heard of it.”
Don’t adjust your eyesight, the date on this blog post is correct, and no, you haven’t fallen through a tear in the fabric of space and time and found yourself in 1999 by mistake – although if you were reading Computer Weekly 20 years ago you probably saw very similar quotes from enlightened business leaders at the time.
The quote above was spoken on 13 February 2019. By what terrible luddite organisation, you may well ask? Has someone just introduced Jacob Rees-Mogg, minister for the 18th century, to a computer?
Well, not far off – a few rows of the House of Commons at most.
Those words were spoken, dripping with sarcasm, by secretary of state for health and social care Matt Hancock, announcing a new policy to use email by default for NHS England to communicate with patients. Therein lies the scale of the challenge in digitising the NHS.
How often have our highly tech-literate readers – and indeed, most of the highly tech-literate UK population – tutted and raised their eyebrows at the inability of their GP or hospital doctor to simply send them an email? Let’s not get ahead of ourselves and start suggesting instant messaging.
And yet still there was scepticism from some NHS dinosaurs about the wisdom of a policy that promises patients the ability to choose if they prefer to continue receiving paper letters.
Of course, email has to be secure – especially for such personal information as medical tests – but modern email can be made secure, and anyway shouldn’t patients have the choice?
Meanwhile, the NHS is working on an app that’s in trials, which it hopes will become the digital front door to the health service, using APIs to plug into third-party applications and data, as well as GP patient records, appointment bookings, prescriptions and other services. If it works, it’s the ideal way to help deliver digital health – but its biggest challenge will be changing NHS culture, rather than making the technology function correctly. Don’t tell the dinosaurs though – better still, send them an email.
It’s understandable that people express cynicism when a politician – particularly one with a clear ideological bent – proclaims that “digital” or “technology” can solve the core Brexit issue of the Irish border. With very few honourable exceptions, politicians have the same understanding of technology and how it works as the average voter has about Erskine May, the authoritative publication on parliamentary procedure.
But when an IT company makes a similar claim, should we sit up and take notice? It’s notable that we’re now seeing leaks of technology supplier proposals just as the Brexiteers in Parliament are pushing their “alternative arrangements” ideas to overcome objections to the UK/EU Withdrawal Agreement.
The Sun got its hands on a Fujitsu document, while Buzzfeed published details of a report from wireless sensor maker UtterBerry. It’s clear that the Brexiteers leaking these ideas believe this gives substance to the possibility of a technological solution to the Irish border.
From what’s been released into the public domain, these proposals read like classic, old-school, Big IT plans that offer simple-sounding solutions to difficult problems couched in ways that make it seem like the supplier has all the answers, knowing that the recipient won’t know the right questions to ask.
It’s true that technologies exist to automate border functions between two countries – internet of things, wireless sensors, GPS tracking, facial recognition, automatic number plate recognition, cloud, real-time data analytics, to name a few. And of course one of the proposals throws in blockchain because, well, you have to these days.
And it may even be the case that such a solution could be demonstrated on a small scale – a couple of delivery lorries, perhaps, being tracked from start to finish across the border.
But what every conversation about a technology solution fails to grasp is the complexity such a project would have to deal with. Ask anybody who has worked on large-scale IT projects and you will hear the same thing – the technology itself is rarely the problem. Big government IT initiatives mostly fail because they underestimate the complexity of making large-scale technology work.
Sure, you can point to Amazon and say: look, here’s a hugely scaled technology that works. But when Amazon started, it sold books via a simple website. It’s taken two decades, billions of dollars and many millions of working hours to get to where it is now.
Has anybody at any of these tech companies that believe they have a solution done any in-depth user research to properly understand the problem? You suspect not.
This is a great Twitter thread from a director of a specialist vehicle routing software company, that touches on the complex questions that would need to be answered before even considering a technology solution such as those proposed.
All this also overlooks the unique political complexity inherent in the Irish border.
IT companies do themselves, the IT sector, and UK politics no favours by promoting apparently simple solutions to such an enormously complex problem. Of course, that’s what most of them have done in government IT for decades, so you can’t expect them to change when there’s a multibillion-pound contract in the offing.
But whenever technology is thrown up as the answer to this most difficult of Brexit issues, the question that needs to be asked is not whether the tech exists, but whether anyone can explain how to manage the complexity involved in making it work. If they claim they can, be very cynical.