Here is our quote of the week: “We spend £8m a year on paper. We spend £2m a year on envelopes. We can save lives, save staff time and cut costs by using an extraordinary piece of technology that has the ability to allow two people to communicate instantaneously, and to allow groups of people to communicate instantaneously. If you take the result of someone’s test you can immediately communicate those results with the analysis, to the patient and the patient can reply. It’s called email. I don’t know whether any of you have heard of it.”
Don’t adjust your eyesight, the date on this blog post is correct, and no, you haven’t fallen through a tear in the fabric of space and time and found yourself in 1989 by mistake – although if you were reading Computer Weekly 20 years ago you probably saw very similar quotes from enlightened business leaders at the time.
The quote above was spoken on 13 February 2019. By what terrible luddite organisation, you may well ask? Has someone just introduced Jacob Rees-Mogg, minister for the 18th century, to a computer?
Well, not far off – a few rows of the House of Commons at most.
Those words were spoken, dripping with sarcasm, by secretary of state for health and social care Matt Hancock, announcing a new policy to use email by default for NHS England to communicate with patients. Therein lies the scale of the challenge in digitising the NHS.
How often have our highly tech-literate readers – and indeed, most of the highly tech-literate UK population – tutted and raised their eyebrows at the inability of their GP or hospital doctor to simply send them an email? Let’s not get ahead of ourselves and start suggesting instant messaging.
And yet still there was scepticism from some NHS dinosaurs about the wisdom of a policy that, in any case, promises patients the choice to carry on receiving paper letters if they prefer.
Of course, email has to be secure – especially for such personal information as medical tests – but modern email can be made secure, and anyway shouldn’t patients have the choice?
Meanwhile, the NHS is working on an app, currently in trials, which it hopes will become the digital front door to the health service, using APIs to plug into third-party applications and data, as well as GP patient records, appointment bookings, prescriptions and other services. If it works, it’s the ideal way to help deliver digital health – but its biggest challenge will be changing NHS culture, even more so than making the technology function correctly. Don’t tell the dinosaurs, though – better still, send them an email.
It’s understandable that people express cynicism when a politician – particularly one with a clear ideological bent – proclaims that “digital” or “technology” can solve the core Brexit issue of the Irish border. With very few honourable exceptions, politicians have the same understanding of technology and how it works as the average voter has about Erskine May, the authoritative publication on parliamentary procedure.
But when an IT company makes a similar claim, should we sit up and take notice? It’s notable that we’re now seeing leaks of technology supplier proposals just as the Brexiteers in Parliament are pushing their “alternative arrangements” ideas to overcome objections to the UK/EU Withdrawal Agreement.
The Sun got its hands on a Fujitsu document, while Buzzfeed published details of a report from wireless sensor maker UtterBerry. It’s clear that the Brexiteers leaking these ideas believe this gives substance to the possibility of a technological solution to the Irish border.
From what’s been released into the public domain, these proposals read like classic, old-school, Big IT plans that offer simple-sounding solutions to difficult problems couched in ways that make it seem like the supplier has all the answers, knowing that the recipient won’t know the right questions to ask.
It’s true that the technologies exist to automate border functions between two countries – internet of things, wireless sensors, GPS tracking, facial recognition, automatic number plate recognition, cloud, real-time data analytics, to name a few. And of course, one of the proposals throws in blockchain, because, well, you have to these days.
And it may even be the case that such a solution could be demonstrated on a small scale – a couple of delivery lorries, perhaps, being tracked from start to finish across the border.
But what every conversation about a technology solution fails to grasp is the complexity such a project would have to deal with. Ask anybody who has worked on large-scale IT projects and you will hear the same thing – the technology is rarely the problem. Big government IT initiatives mostly fail because they underestimate the complexity of making large-scale technology work.
Sure, you can point to Amazon and say: look, here’s a hugely scaled technology that works. But when Amazon started, it sold books via a simple website. It’s taken two decades, billions of dollars and many millions of working hours to get to where it is now.
Has anybody at any of these tech companies that believe they have a solution done any in-depth user research to properly understand the problem? You suspect not.
This is a great Twitter thread from a director of a specialist vehicle routing software company, that touches on the complex questions that would need to be answered before even considering a technology solution such as those proposed.
All this also overlooks the unique political complexity inherent in the Irish border.
IT companies do themselves, the IT sector, and UK politics no favours by promoting apparently simple solutions to such an enormously complex problem. Of course, that’s what most of them have done in government IT for decades, so you can’t expect them to change when there’s a multibillion-pound contract in the offing.
But whenever technology is thrown up as the answer to this most difficult of Brexit issues, the question that needs to be asked is not whether the tech exists, but whether anyone can explain how to manage the complexity involved in making it work. If they claim they can, be very cynical.
“Nothing changes, on New Year’s Day,” sang U2 many years ago. This piece of self-evident wisdom doesn’t stop the world of tech punditry from excitedly making hyperbolic forecasts at the start of every year on the assumption that opening up their new wall calendar will magically change the fortunes of the sector.
The technology development cycle will roll on during 2019 much as it did during 2018 and every year before. The stuff people got excited about last year will be pretty much the same stuff they get excited about this year.
Likewise, the challenges faced by IT leaders when they left for their Christmas holidays will not miraculously have shifted to something else by the time they return from their seasonal excesses. Their budget might be a little higher this year, as CEOs increasingly accept they need to invest in digital and innovation to stay competitive. Just ask the failing high street retailers who were sure they could hold out against the convenience and popularity of internet shopping.
So putting aside the tech predictions, let’s think ahead. What might be the things we’re talking about in December, when we look back on the year? Here are five areas we think will be key influences:
Brexit
Obviously. One reason why it’s difficult to predict anything for 2019 is that the UK doesn’t even know where it will be come April. Anything from sunlit uplands, to no-deal disaster, to business as usual remains a possibility. Even the UK tech sector is still arguing about what it wants. It’s certain, though, that the subsequent nine months of the year will be determined by what happens in Westminster in the first three.
The backlash against big tech
It was inevitable there would be a backlash against big tech at some point. As the capabilities of technology overtake the cultural capacity of society to absorb the pace of change, society will rebel. It’s happened before – notably the dot-com boom and bust either side of the new millennium. The anger is directed at the likes of Facebook, Google, Amazon and others for now. Governments have to come to terms with the new landscape of privacy and data protection, and stop trying to pigeonhole tech firms into existing regulatory regimes. Tech needs regulation, but it needs new approaches designed for the digital age.
Cyber security
Pretty much every major data breach makes front-page headlines these days, and leads radio and TV news bulletins. Think Marriott, Dixons Carphone, British Airways, Facebook – the list goes on. Of all the old issues that are still new issues, cyber security is the biggest. But have any of these high-profile breaches actually changed anything? For all the inconvenience caused to people affected by such incidents, nothing has happened yet to cause a real public backlash to the degree that it forces companies to do better. Perhaps the first GDPR fines will change that. But the public mostly sees security as they do insurance – a necessary evil. That won’t last – before long there’s going to be a security incident that causes widespread economic damage to people’s lives. The worry is that security weaknesses won’t be fully addressed until the real dangers are demonstrated on a large scale.
Skills
Already this month, the British Chambers of Commerce has warned that 81% of manufacturers and 70% of service firms are struggling to recruit the skilled staff they need. Last month, the Trades Union Congress called on the government to create a million new high-tech and manufacturing jobs by 2030. Skills shortages are real, as are the fears they will become even worse after Brexit with the loss of freedom of movement and stricter immigration policies. It’s going to be a tough year for anyone needing more IT staff to support growth. This is perhaps the single biggest issue facing the UK’s digital economy – if we don’t have the talent, we won’t have global leadership. Again, it’s hard to see how this will be solved without a radically different approach from the government.
Commoditisation
There’s an easy way to work out which technologies will be the ones to watch over the next 12 months – over any 12 months, frankly. So many people say the pace of change in tech is amazing these days. Actually, it’s no different to how it’s always been. The pace of invention hasn’t changed much at all. What’s happened is that commoditisation of technology is affecting more and more areas of our everyday life and work, so it feels like things are changing faster.

The key is to watch which technologies are about to become commoditised – not which ones are emerging on the scene, many of which may never reach commodity status. Once a tech becomes a commodity, that’s when it takes off and changes things. The internet, smartphones, cloud, big data – all are examples of tech becoming cheap enough, powerful enough and scalable enough to become increasingly ubiquitous.

What’s next? Basic forms of artificial intelligence (AI) are the most likely – process automation and simple machine learning, for example. More advanced AI as a commodity is still a way off. Blockchain? No – nowhere near commoditisation yet. Internet of things? Nearly there. Don’t expect any surprises here for 2019.
So it turns out the O2 mobile network failure that took out data access for some 30 million people this week was caused by an expired software certificate – no great conspiracy, no programming error, no undiscovered bug, no malicious interference, but one of the most basic systems administration mistakes you can imagine. Someone, somewhere, forgot to renew a certificate.
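What makes this class of failure so galling is that guarding against it is mundane monitoring, not clever engineering. As a minimal sketch (not O2’s actual setup, obviously), here’s how a script might compute the days remaining from a certificate’s notAfter field, in the format Python’s ssl module reports it:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days remaining before a certificate's notAfter timestamp.

    not_after uses the format the ssl module reports for certificates,
    e.g. 'Jun  1 12:00:00 2030 GMT'.
    """
    expiry = ssl.cert_time_to_seconds(not_after)  # parse to epoch seconds (UTC)
    current = time.time() if now is None else now
    return (expiry - current) / 86400  # seconds -> days

# A monitoring job would alert well before the deadline, for example:
# if days_until_expiry(cert["notAfter"]) < 30: page_someone()
```

Trivial, which is rather the point: the hard part is making sure a check like this exists for every certificate in the estate.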
As a wise voice once said, there’s no patch for stupidity. And herein lies the great unspoken conundrum at the heart of the digital revolution. Computers go wrong. Why? Because they’re designed, manufactured, programmed, configured, secured and operated by the most fallible, unpredictable and unreliable resource in the technology world – people.
Of course, it’s those same people who every day ensure that the IT systems supporting every company and government in the world work mostly as intended, who keep the internet running and protect the vast majority of our personal data. That’s because people are pretty good at computers these days. But we’ll never be perfect.
The job of running IT systems is becoming increasingly abstracted from the technology – virtualisation, cloud, containers, serverless, orchestration, all these trends aim to remove that human fallibility from everyday tasks. Not forgetting that it still takes another human somewhere to make those technologies work in the first place.
Much as artificial intelligence (AI) and automation are replacing or augmenting corporate jobs, so the IT department will see further dramatic change as more of its responsibilities are taken over by software robots. Of course, those software robots were created and programmed by humans too. And they aren’t exactly perfect – as the Amazon workers in a New Jersey warehouse found out this week, when a robot accidentally punctured a can of bear repellent, sending 24 staff to hospital.
There is, correctly, much debate about ethics in AI and technology, not least the need to prevent human bias from becoming too infused in the algorithms they rely on. People outside IT are taking more of an interest in the workings of IT than ever before. It’s fair to assume those non-IT types are pretty fallible too.
When O2 went down, there was much humour taken from the sight of people trying to consult paper maps to find their way around, and attempted insights from those who discovered a whole new world beyond the smartphone they’d been glued to until then. The outage was a small reminder of how reliant most of us have become on technology.
For all the great advances of recent decades, it’s going to be a long time before we no longer see headlines screaming “outage”. Whether through malice or simple error, human fallibility is a part of our digital future too.
Sometimes reporting the latest tech news at Computer Weekly throws up an entertaining juxtaposition. Take these two headlines, for example, from last week:
Just as IT industry trade body TechUK is pushing health and social care secretary Matt Hancock to accelerate his grand technology vision for the NHS, the Institute for Public Policy Research publishes a report that suggests digital healthcare is the lowest priority for patients.
Sound familiar? The tech sector telling its prospective customers they should be buying more stuff, while the ultimate users of that stuff show somewhat less enthusiasm? In one form or another, this has been a recurring trait throughout enterprise IT history.
Of course, it goes back even further than that. Vehicle manufacturing pioneer Henry Ford is widely credited with saying, “If I had asked people what they wanted, they would have said faster horses”.
While NHS patients might like the idea of faster doctors and nurses, there’s no escaping the reality that technology will accelerate their capacity and capability far more effectively.
But this particular juxtaposition highlights a deeper trend at the moment. In the decade since the launch of the iPhone, we’ve seen technology becoming ever-more ubiquitous and popular, but in the past year or so we’ve seen the start of what is, perhaps, an inevitable backlash.
Much of the growing negativity towards tech is coming from the dominance of US internet giants like Facebook, Google, Amazon and Apple – and especially continuing revelations about topics such as Facebook’s cavalier attitude towards our personal data, Amazon’s working practices, or Google’s tax policies.
It’s a concern that the phrase “big tech” is becoming commonplace, carrying with it the hint of malfeasance that originated from “big tobacco” and “big pharma”.
The underlying positive aspect is that the backlash is a natural response to our increasing reliance on tech, and the influence it’s having on driving social and cultural change. Such a process is never easy, but if you believe the benefits of technology outweigh the concerns, then it’s incumbent on tech evangelists to continue to make the case.
The next few years are likely to be difficult for the tech sector, and those who come through successfully will be the ones who change their behaviour – who grow up, so to speak, from tech’s adolescence. Tech is no longer the young upstart; it needs to be a responsible member of society.
For everyone who works in IT, it’s your responsibility, too, to focus on the benefits and mitigate the potential downsides of the digital economy.
As we, seemingly, edge closer to something resembling a UK deal for leaving the European Union (and by the time you read this, that statement could quite possibly have been superseded by events), so the government is starting to reach out to the tech sector to ease its ongoing concerns.
This week, Brexit secretary Dominic Raab came along to meet a room-full of tech leaders and journalists in London, to answer their questions and put forward his case for Brexit.
It was on this occasion that Raab exposed himself to widespread mockery – far beyond his tech audience – for admitting that he had not fully understood the importance of the Dover to Calais crossing for UK trade. Cue facepalms all round.
But did he reassure the gathered leaders that Brexit is not the disaster that most of the industry fears it will be? Whether you feel he achieved that objective depends mostly on how much work you are willing to allow the word “if” to undertake on his behalf.
When asked about the prospects for the UK’s artificial intelligence (AI) sector – in which the UK likes to see itself as something of a pioneer and where there is undoubted future opportunity – Raab said all will be well, “if you get it right”.
That’s an answer that seems to sum up most of his pitch. Be reassured, leaders of a critically important UK industry, everything will be fine, if you get it right.
That poor ‘if’ is left to support all the justified concerns about data flows, regulatory compliance, lack of a deal for services, losing the customs union, exiting the digital single market, ending free movement of talent from the EU, attracting foreign investment – add your own weights to the straining bar that ‘if’ is holding.
“Brexit may create opportunities,” Raab continued – note, “may” not “will”, and this from one of the most ideologically committed Brexiteers.
“I want to deliver a global, outgoing Britain,” he said, to an audience of outgoing, globalist tech leaders, no doubt somewhat surprised to learn they were not already outgoing and global.
He repeated the government line that losing EU freedom of movement will not hinder the industry, because we’ll have a global immigration system instead. We do, of course, already have a global immigration system in the UK – one that fails to attract enough overseas talent, and fails solely because of self-imposed restrictions.
“Most of the growth markets in the future will be in the non-EU markets, whether Latin America or Asia, and so, for instance we want to be able to promote e-commerce,” said Raab. It might be a surprise to UK e-commerce firms that apparently they don’t have the ability to target non-EU growth markets or promote their services therein. Pretty sure they do that now.
Computer Weekly has long held the view that Brexit is bad for tech. Should Raab’s positivity prove correct – if “if” steps up and carries all that weight – even then we’re still to be convinced the growth opportunities will be better than they would be within the EU. If we’re wrong, we’ll stand up and say so. But our “if” is a lot smaller than Raab’s.
Gone, or so it seems, are the days when Computer Weekly laments after every Budget statement from the Chancellor of the Exchequer that tech has been overlooked. There is little doubt that government now realises that support for, and investment in, the technology sector is critical to the UK’s future.
Of course, we can and will still observe that there is more to do, but Philip Hammond’s latest Budget put at least a small finger in many of the necessary pies – startups, R&D, skills, digital government, broadband and more – even if he did anger the IT contractor community by extending controversial IR35 reforms to the private sector.
Hammond’s digital services tax – targeting the big web giants and their creative tax accounting – has unsurprisingly been attacked by the tech industry, but putting aside the rights or wrongs of the policy, it’s another reflection of the growing importance and influence of the IT world on government decision-making.
As with so much coming from the UK government, however, the positive Budget announcements exist in a parallel world to the uncertainties and concerns over Brexit.
A few days earlier, digital minister Margot James told Parliament that she still cannot guarantee a data-sharing agreement with the European Union in the event of a no-deal Brexit. As Labour’s Liam Byrne, who was questioning James at a session of the European Select Committee, said: “Without data sharing our exports will grind to a halt”.
That same week, a National Audit Office report on the UK border’s preparedness for leaving the EU found that 11 of the 12 “critical systems” at the border are at risk of “not being delivered on time and to acceptable quality”.
Not quite Hammond’s previous glib and spectacularly uninformed observation when asked about a possible digital solution to the Irish border issue, where he proclaimed: “I don’t claim to be an expert on it but the most obvious technology is blockchain.”
Congratulations to whichever tech lobbyist persuaded whichever civil servant to tip the Chancellor on that fantastical idea.
The government’s new-found love for the tech sector is welcome, but its desire to make technology a panacea for all the ills of Brexit needs quickly to be tempered.
For technology leaders, the dilemma continues – eager to take advantage of Westminster’s tech-friendly approach, but fearful that a bad Brexit of whatever form might rapidly unravel the advances made in recent years.
We look forward to a future Budget where support for tech is not just welcome, but unequivocal and full of certainty.
A large and toxic cloud has hung over NHS IT since the failure of the £12bn National Programme that saw billions wasted on systems that barely worked. Since then, we’ve seen the collapse of Care.data, the botched attempt to share patient records through a central database, and a plan for a “paperless NHS” that first aimed to deliver by 2018, was put back to 2020, and is unlikely to be achieved before 2023.
The opportunity for technology to reform and improve the UK’s health and social care system is obvious to anyone who’s ever used a smartphone. There are undoubtedly pockets of excellence in the NHS, but the gap between the best and the worst is enormous. IT leaders have never managed to get over the argument that says: do you want to spend more money on doctors and nurses, or on computers? In the austerity-hit NHS, there’s only ever one answer.
Even in the better NHS trusts, there’s a hugely complex legacy to unravel. At Leeds Teaching Hospitals – a great example of a forward-thinking health organisation – there are 460 different IT systems in use. Multiply that across the whole health system and the transformation to digital becomes an ever-bigger challenge.
One day, hopefully sooner rather than later, somebody has to get NHS technology right. Enter Matt Hancock.
The new secretary of state for health and social care comes with technology squarely in his comfort zone. Through ministerial appointments at the Cabinet Office and the Department for Digital, Culture, Media and Sport (DCMS), it’s been the strongest thread in his political career. There was disappointment in the tech sector when Hancock was promoted from DCMS to health because he was seen as a passionate advocate for digital at the highest levels of government, and he understood the issues better than any of his predecessors (although admittedly, that’s not always been an especially high bar).
He’s wasted no time in putting technology overhaul at the heart of his plans to reform the NHS and social care systems. In July, he promised to make £487m available for NHS technology projects and to replace paper-based systems. In September, he announced a £200m fund for digital centres of excellence and plans to pilot the NHS app across England.
His Labour shadow, Jonathan Ashworth, observed: “This isn’t a serious plan for technology and innovation in the NHS – it’s a pipe dream”.
Now Hancock has launched his “technology vision” for the NHS – a digital future based on open standards, interoperability and APIs, but which retains local autonomy of IT decision-making. It’s a perfectly sensible, ambitious plan, which appears to have learned the lessons of the National Programme. But to paraphrase an old saying, if that’s where you want to go, you wouldn’t start from here.
To be fair, Hancock told Computer Weekly that he doesn’t underestimate the challenge – but he’s looking for a real change in attitude and approach to technology at a local level. His predecessor, Jeremy Hunt, became the longest-serving health secretary in history – not far short of six years. Hancock may need to be in place even longer to see through the digital transformation he wants – but this time, the NHS needs finally to get IT right.
Depending on your perspective, Gov.uk Verify is now either secure in its future at the heart of the UK’s emerging digital identity ecosystem, or it has one foot in the grave and is on the way to its inevitable demise.
The Cabinet Office has produced a carefully worded announcement that leaves room for interpretation, while also giving the Government Digital Service (GDS) a way to save face. You can read the full announcement to Parliament here, but the essence is this:
In 18 months’ time, after a “capped expenditure” approved by the Treasury has been spent, government will cease further public investment in Verify – but Verify itself will not cease. Instead, five identity providers (IDPs) will use the technology they developed as part of the Verify programme – and, most importantly, the Verify users they each registered – to offer a private sector digital identity solution, based on “state-backed assurance and standards”.
Whether those providers choose to still call their products “Verify”, or to use some form of Verify branding, will be up to them. As part of the new contracts those companies have signed with the Cabinet Office, they will have permission to reuse their Verify technology without requiring government approval, which they would have had to seek under their previous deals.
What’s important, as far as government is concerned, is that the (so far) 2.9 million citizens signed up to Verify will still have a digital identity they can use that allows them to maintain access to the online public services for which they set up that identity. Those citizens should, in theory, also be able to use that identity in future to access any private sector services that accept the same standards.
GDS will still support whatever in-house Verify technology – and whatever staff – is needed to support users accessing digital public services. But by the time the IDPs take over, this will be done on a “cost neutral” basis to government – in other words, the IDPs will have to pay for it.
It’s not clear – and not yet decided – what services GDS will need to offer those IDPs. It could be none (although that’s unlikely on day one) – or it could slowly decline to none. It remains to be seen how the IDPs will be able to offer a service with only a 47% verification rate as a commercial product – and if they get that rate up to an acceptable figure, why didn’t they do so before?
GDS will continue to advise Whitehall departments on appropriate use of digital identity in their services – but any previous attempts to mandate the use of Verify are over. GDS will keep departments on the straight and narrow, but it’s up to the departments to decide which standards-based identity products they want to use. That includes using suppliers that have had no involvement in Verify.
Digital identity market
The five chosen IDPs will be responsible for helping to build the wider digital identity market in the UK. GDS can then say it has been instrumental in the establishment of a digital identity ecosystem that did not exist before. Others will decide if the £130m and more invested in Verify was justified to reach that end (and that’s not to mention the many millions more invested by HMRC, DWP, NHS and others in building their own digital identity systems because they couldn’t rely on Verify).
The 18 digital services that currently use Verify – along with three others in private beta – will continue to offer Verify to users for as long as they want to, but it’s undecided how new users will select which provider to use after the 18-month transition. Previously, users registering with Verify have been asked to select one of seven (now five) IDPs. It’s yet to be decided what will be offered to users at that point in future – for example, a selection of IDPs, or a preferred IDP per service, or simply a statement that they can use any IDP that conforms to the standards.
Think about how that’s going to come across to any citizens uncomfortable with technology at the best of times. But perhaps there’s a solution yet to be determined.
Two of the existing IDPs, Royal Mail and Citizen Safe, have dropped out. At the start of 2017, those two accounted for approximately 3% of all Verify users, so in the grand scheme of things they’re not much of a loss. But with 2.9 million users, that still equates to 87,000 citizens. Royal Mail and Citizen Safe will continue to support those people for the next 12 months, but after that they will have to re-register with another IDP to continue to access government services – there will be a communications plan to explain and help.
Verify faces its biggest challenge yet in 2019 – perhaps the reason why the transition period has been set at 18 months. By the end of this year, the digital version of Universal Credit (UC) will be rolled out to all Jobcentres, and next year millions of existing benefits claimants will be told they have to apply for UC. As part of that process, they will have to use Verify.
DWP already knows Verify can’t cope on its own, and has had to develop its own system to work alongside. Verify has consistently struggled to successfully register even half of the citizens who attempt to use it. Early tests on UC suggested that only 35% were able to set up a Verify account online. UC could potentially more than double the number of Verify users – the system has never been asked to work at such scale, and especially not for a service under the intense political and public scrutiny of Universal Credit.
The five IDPs have already been involved with the UC programme, but will be working more closely with DWP over the next 18 months.
Potentially the biggest winners here will be the identity companies that have been excluded from Verify in the past, and whose business growth has been stifled as a result. As long as they conform to standards, the public sector market will finally open up to them.
Those standards will be set, not by GDS, but by the Department for Digital, Culture, Media and Sport (DCMS), which took over policy responsibility earlier this year. DCMS has little interest in supporting Verify, and privately sees its standards-based approach as heralding the end of Verify. Like many other departments, DCMS has simply lost confidence in what was meant to be the government’s flagship digital identity product.
The lessons of Verify
Over the last two or three years, as its critics increasingly claimed that Verify was travelling down a dead-end street, GDS has retreated into secrecy and silence over its plans. It took the government’s Infrastructure and Projects Authority to recommend the termination of Verify to reach this point.
Verify was always an ambitious and important programme – digital identity is just hard to do – and GDS deserves credit for taking it on.
But somewhere along the line, it got lost. Remember that as recently as February 2017, the Cabinet Office set a target of 25 million Verify users by 2020. Only last month, minister Oliver Dowden reiterated that goal. It’s unlikely there will even be 25 million citizens with any form of standards-based digital identity in the UK by that time.
GDS will tell us – and it may be correct – that the time, resources and money invested in Verify have been worth it to help establish a UK digital identity ecosystem. The difficulty is, we just don’t know if that’s true.
GDS has learned a lot – about what works and what doesn’t – on digital identity through Verify. If it really wants to be viewed as the prime instigator of a market that will be critical to the success of the UK digital economy, it now needs to be fully open and transparent about its Verify journey. There is surely much that the whole ecosystem can learn too.
The Department for Digital, Culture, Media and Sport (DCMS) has been conducting a review of digital identity since taking over policy responsibility from the Government Digital Service (GDS) in June.
Computer Weekly has learned that at the core of the DCMS proposals to boost the UK’s digital identity ecosystem is a plan to open up government databases via APIs to the private sector – a move that could also administer the last rites to GDS’s troubled Gov.uk Verify system.
Under the proposals, databases containing vital identity information such as passports and driving licences could be accessed through APIs by identity providers. Any company seeking to offer digital IDs for online transactions would, in theory, be able to quickly and cheaply validate data against recognised government information – the closest thing the UK has to a “gold standard” for identity data.
Such a system would not mean third parties accessing data directly, only checking that ID data provided by an individual to that third party is correct.
The concept is a reversal of the principles underlying Verify, where only a small set of government-selected companies are allowed access to these databases through a GDS-developed document checking service that performs a similar function.
Where Verify is a closed shop, the API approach would allow any suitable provider – including other parts of government – to offer assured digital identities, creating a wider, market-based ecosystem.
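The "validate, don't share" pattern described above can be sketched in a few lines. This is purely illustrative – the function name, fields and toy records are assumptions, not any real government API – but it shows the key property: the identity provider sends details the individual has claimed, and gets back only a yes/no answer, never the underlying record.

```python
# Minimal sketch of attribute validation against an authoritative record.
# All names and data here are hypothetical, for illustration only.

def check_passport_claim(claimed: dict, records: dict) -> bool:
    """Return True only if the claimed details match the official record.

    The record itself is never returned to the caller, so the only
    information disclosed is the yes/no answer.
    """
    record = records.get(claimed.get("passport_number"))
    if record is None:
        return False
    return (
        record["surname"].lower() == claimed.get("surname", "").lower()
        and record["date_of_birth"] == claimed.get("date_of_birth")
    )

# Toy stand-in for a government database
_records = {
    "123456789": {"surname": "Smith", "date_of_birth": "1980-01-31"},
}

# Matching claim: validated without exposing the record
print(check_passport_claim(
    {"passport_number": "123456789", "surname": "SMITH",
     "date_of_birth": "1980-01-31"},
    _records,
))  # True

# Mismatched claim: rejected, and the caller learns nothing else
print(check_passport_claim(
    {"passport_number": "123456789", "surname": "Jones",
     "date_of_birth": "1980-01-31"},
    _records,
))  # False
```

In a real deployment this check would sit behind an authenticated API rather than a local function call, but the privacy principle – match, don't disclose – is the same.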
DCMS is understood to believe its plan would be significantly cheaper to run than Verify – potentially costing a fraction of a penny per transaction. Using Verify, by contrast, GDS pays its pool of identity providers on average about £5 for each user they successfully register.
Verify is designed around a “hub” where users are directed to one of seven identity providers (IDPs) when they wish to establish a digital identity to access one of the 18 online government services that currently use Verify.
Under the DCMS plan, theoretically any digital government service could choose to accept approved identities from any third party that has used the database APIs. The department’s review is understood to be based on the principle that government should enable a digital identity market using public data, rather than building its own system.
The future of Verify is already in question after government watchdog the Infrastructure and Projects Authority recommended it be scrapped, which would mean writing off more than £130m spent so far by GDS rather than throwing more money at a programme that many in Whitehall see as a failure.
GDS is fighting to keep Verify going – only this month Cabinet Office minister for implementation Oliver Dowden confirmed the government is still committed to its target of 25 million Verify users by 2020. Whitehall internal politics may yet find a way to rebrand the DCMS plan as “Verify mark two” or something similar, in order to be seen to deliver on a promise that was part of the Conservative Party election manifesto in 2017.
GDS’s existing contracts with the Verify IDPs are understood to be ending soon, and if the DCMS proposal is accepted, it seems unlikely those contracts would need to be renewed except to manage existing users as the service is wound down.
Private sector identity providers have long been frustrated at the way the Verify model has shut them out of government, and will hope that the DCMS plans will kick-start the development of a growing market in an area that’s hugely important for the UK’s digital economy.
Other areas of the public sector could benefit from the API approach too, with HM Revenue & Customs, Department for Work & Pensions, NHS England and the Scottish government all working on their own digital identity systems rather than using Verify.
Long-term GDS watchers will recall that the organisation was set up following a 2010 recommendation by web entrepreneur Martha Lane Fox in a report commissioned by then Cabinet Office minister Francis Maude. One of the main suggestions put forward by Lane Fox was to “mandate the creation of application programming interfaces (APIs) to allow third parties to present content and transactions on behalf of the government. Shift from ‘public services all in one place’ (closed & unfocused) to ‘government services wherever you are’ (open & distributed)”.
There would be a certain irony if the eventual use of APIs through another department brought about the end for GDS’s flagship project.