The digital revolution is challenging the regulatory environment across every westernised, developed economy. Governments in the EU, UK, France, Germany and the US are each trying to take a lead in working out how to deal with the new challenges presented by internet companies such as Facebook and Google. Debates are taking place around data protection, privacy, responsibility for content, copyright and algorithms – and further issues that have barely been considered will arise, not least around artificial intelligence (AI).
It has long been the case that regulation lags well behind technology, and as a result regulators tend to try to shoehorn new digital developments into existing structures. A prime example comes with social media, especially after the fallout from Facebook and Cambridge Analytica over use of customer data.
Internet platforms have always been regulated as if they were telecoms companies – US President Bill Clinton established in 1997 that web companies were classified as “mere conduits”. This means that, like a phone company or broadband provider, such firms were not considered legally responsible for content shared on their platforms when it was created by their customers or users.
More recently, issues such as extremist content and child abuse images have challenged that doctrine, with many observers calling for web platforms to be treated like media companies, which are legally responsible for all content published on their sites.
Neither scenario works, and it’s clear that a new style of regulation is needed for a different type of company. Can regulators ever keep up with the pace of technological change? Probably not – but they can be better prepared for it.
The next great challenge for regulators will be AI – how, for example, should we oversee the algorithms that will increasingly make decisions that affect our lives? How do we ensure those algorithms are fair and unbiased? What about the development teams that create them – should they be sufficiently diverse to make sure everyone in society is considered? And what about machine learning, where algorithms evolve without human intervention as the AI system “learns”?
Nigel Shadbolt, one of the UK’s leading academics in AI and open data, told Computer Weekly that if the UK wants to take a lead in AI, then an area for focus is ethics. Realistically, the UK can’t compete with the multibillions that China is throwing at the sector – but China’s social and political culture is unlikely to take the same approach to regulation and ethics as we would.
It’s an easy thing to say, much harder to do – but the UK has a unique opportunity to lead the world in ethical regulation of the digital revolution. Don’t regulate on specifics – regulate on values and principles that can underpin technology development for years, maybe even decades to come.
The UK government is already setting up a Centre for Data Ethics and Innovation, and Theresa May has called for the UK to be a world leader in ethical AI. We have a genuine opportunity to set the standards that the world will follow. In such uncertain times for the UK tech sector, ethics is one area where we can and must take the lead.
The government’s flagship digital identity system Gov.uk Verify has become a car crash in motion, accelerating towards its demise.
The Cabinet Office’s project watchdog has condemned Verify by recommending it be scrapped because the departments that would have to fund its continuation have lost confidence in its ability to deliver.
But the Government Digital Service (GDS) is hoping it can resist the inevitable and is making one more attempt to gain funding to keep Verify going. Realistically, the best outcome GDS can hope for is a managed decline to allow those few services that adopted Verify time to find an alternative.
GDS is pinning its hopes on Universal Credit, which has Verify as part of its user registration process. Given that the Department for Work and Pensions had so little confidence in Verify as long ago as 2015 that it decided to develop its own digital ID system to fill in the gaps, this seems an unlikely hill on which to fight Verify’s last battle.
The cost will be significant – GDS has spent at least £130m on Verify so far, which doesn’t include how much other departments have spent integrating Verify or – in too many cases – developing an alternative.
Verify promised so much, but GDS leadership failed to listen to the counsel of those who have said for some time it needed to change direction. A recent attempt to “reset” the project was too little and too late.
Ed Tucker, the former head of IT security at HM Revenue & Customs (HMRC), put it well on Twitter: “There has been some simply awesome work done by GDS. Truly awesome. Changing the face of delivery. This though, is classic horrific government IT programme. Flogged to death at great expense to save face.”
The implications of scrapping Verify are significant. Private sector identity companies have invested heavily in developing software to be compatible with the GDS system. The UK has pinned its plans for EU-wide digital identity interoperability on Verify.
But GDS has already lost control of Verify’s role in the wider UK identity ecosystem – that responsibility moved to the Department for Culture, Media and Sport (DCMS) in June. GDS is contributing to a review of digital identity policy, but that’s now being led by DCMS. It seems unlikely that DCMS will be a lone voice in support of Verify when so many other departments have lost confidence.
Verify’s original goal was to replace the ageing Government Gateway system now owned by HMRC. But after seven years, Verify has less functionality than the clunky old Gateway – a system developed in just 90 days that is still in widespread use 17 years after its launch.
What next for GDS?
What does this mean for GDS? There’s no denying it is a huge blow – Verify is one of the core systems for GDS, taking up a significant chunk of the funding allocated to GDS in the 2015 spending review.
This will be the second major failure for GDS – the first coming back in March 2015 when a GDS-led project at the Rural Payments Agency (RPA) collapsed. To GDS’s credit, two major failures in seven years is a huge improvement on what came before. But RPA dented confidence in GDS’s ability to deliver on large-scale systems integration projects, and gave ammunition to the Whitehall mandarins who never liked GDS in the first place.
Scrapping Verify could have even wider repercussions – it was included in the Conservative manifesto for the general election in 2017, and ministers don’t like seeing their high-profile policies shot down from within.
With the Cabinet Office’s own major projects authority recommending the termination of the biggest project in another Cabinet Office organisation, you have to think this implies a loss of confidence in GDS from the highest levels of the Cabinet Office too.
The recommendation from the Infrastructure & Projects Authority seems to have come as a shock to GDS. When Computer Weekly requested a statement about our story and asked specific questions about the future of Verify, GDS and the Cabinet Office declined to comment. That in itself is an indictment of any organisation under scrutiny over the potential waste of £130m of taxpayers’ money.
At this point it’s difficult not to raise the issue of leadership. There was a brief rumour recently that GDS director general Kevin Cunnington was leaving – he’s not – but sources say there were a lot of happy people in GDS when the story first circulated.
In December last year, a GDS staff survey leaked to Computer Weekly revealed major concerns about leadership. Only 28% of GDS employees gave a positive response – down five percentage points on the previous year; 16 percentage points below the average across the whole Cabinet Office; and 26 percentage points down on the top teams across all of the civil service departments that took part in the survey.
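To put those percentage-point gaps in context, the comparator scores they imply can be derived with simple arithmetic. A back-of-the-envelope sketch – only the 28% GDS figure and the three gaps come from the survey; the derived numbers are inferred, not quoted from the leaked report:

```python
# Derive the comparator scores implied by the GDS staff survey figures.
gds_positive = 28                        # % of GDS staff positive about leadership

previous_year = gds_positive + 5         # down 5 percentage points year on year
cabinet_office_avg = gds_positive + 16   # 16 points below the Cabinet Office average
top_teams_avg = gds_positive + 26        # 26 points below the best departmental top teams

print(previous_year, cabinet_office_avg, top_teams_avg)  # 33 44 54
```

In other words, roughly half of staff in the best-performing departments rated their leadership positively, against barely a quarter at GDS.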
Talk privately to senior people in digital government in Whitehall and leadership is the biggest concern they raise over GDS’s future. The Institute for Government has also been critical of Whitehall leadership around digital, data and technology.
Cunnington chose – and was backed – to change GDS’s role to be more of a support, education and consultancy operation, rather than the disruptive transformational approach of his predecessors. He has been largely successful in achieving that change – but at the price of diminishing GDS’s influence across Whitehall, according to multiple sources. GDS also lost control of data policy to DCMS earlier this year.
Note that when we talk generally about “leadership” here, it’s not specifically a reference to Cunnington – many experts are critical of his boss, permanent secretary John Manzoni, and some GDS staff have in the past expressed concerns about others in the GDS senior team.
GDS is taking an important role in supporting the technology implications of Brexit, and while EU exit remains the biggest priority for this government, it’s unlikely anyone will want to get in the way. But the next spending review is coming up. GDS has 860 staff and an annual budget of £128m – it seems incredibly unlikely the Treasury would approve similar resources if £130m is written off from the failure of its highest-profile flagship project.
There is an impending storm heading towards corporate IT, and the outlook doesn’t look sunny for cash-strapped, time-constrained IT departments. Just like in 1999 with Y2K, there is an absolute deadline – support for SAP ECC 6, the core component of SAP’s enterprise software platform, will end in 2025.
Then there is the upgrade. As Computer Weekly has found, moving to the next version of SAP, S/4 Hana, is not a simple upgrade. Yes, the technical stuff to get S/4 Hana running can be achieved in three months or so. But this is only viable in a greenfield installation. In the real world, businesses have accumulated vast amounts of enterprise software over time, intricately linked to enable business data to flow across these disparate systems in as seamless a fashion as possible. The people who originally implemented these highly complicated integrated systems are either well on their way to retirement, if they are in-house, or long gone, if the project was handled by a system integrator. And new IT staff are not exactly banging on the door with SAP skills.
In fact, it is widely recognised that there is an S/4 Hana skills shortage and the younger generation of IT professionals simply do not find SAP sexy enough to invest the time and effort to learn it. In many ways, the SAP systems currently running businesses have become a kind of technical debt. The data that flows through these companies has become essential to keep the business running, yet few organisations have a sound understanding of these workflows, how the data flows between enterprise applications to support a business process.
Experts have forecast that most of the time in implementing S/4 Hana will be taken up by understanding and cataloguing these data flows. Some companies will opt for third-party maintenance to extend the life of their existing SAP system, while others will replace SAP with something else. But as businesses start building a digital core, irrespective of whether they deploy S/4 Hana or implement something entirely new, the preparatory work in understanding data flows will be an essential step to take.
Amid the global debate about trade tariffs, prompted by US president Donald Trump’s unorthodox economic policies, it’s little remarked that the digital economy is largely untouched by the international tariff regime.
In 1998 – the early days of the internet’s spread into our everyday lives – the World Trade Organisation (WTO) agreed that digital products would remain tariff-free. This is why we can download an e-book from Amazon anywhere in the world, even if the product sits in a US datacentre, without any punitive transfer charges being levied.
Twenty years later, the world is very different and much more interconnected. We’re on the cusp of a revolution in manufacturing as 3D printing becomes mainstream. But think of the implications for global trade.
Tariffs exist to protect local businesses. India, for example, will apply import duties to products from overseas to make locally manufactured items cheaper for consumers, thus supporting its own manufacturing base.
If, however, that product is instead a digital file that defines the item, which is downloaded from the internet onto a 3D printer in India, no tariffs would be applied. That’s an existential threat for many indigenous companies in any country that doesn’t have the extensive telecoms and digital infrastructure of more advanced economies.
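To make the economics concrete, here is a minimal sketch of that tariff arithmetic using entirely hypothetical prices and duty rates – none of these figures are real tariffs:

```python
def landed_cost(price: float, tariff_rate: float) -> float:
    """Cost of an imported good after an ad valorem import duty is applied."""
    return price * (1 + tariff_rate)

# Hypothetical figures for illustration only
imported_gadget = landed_cost(100.0, 0.25)  # physical import with a 25% duty
locally_made = 110.0                        # local equivalent, no duty applied
digital_design = landed_cost(100.0, 0.0)    # design file downloaded to a 3D printer

# The duty makes the physical import dearer than the local product,
# but the tariff-free digital file undercuts both.
print(imported_gadget, locally_made, digital_design)  # 125.0 110.0 100.0
```

Under these assumed numbers, the local manufacturer is protected against the physical import but not against the downloaded file – precisely the loophole India and South Africa want the WTO to revisit.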
And that’s why India and South Africa – supported by other developing countries – are calling for the WTO to review the status of digital goods. Their concerns are understandable – but inevitably the US, EU and China are in favour of the existing regime.
“At a time when we are already seeing challenges to the long-standing principle that lower tariffs are good for trade, competition and consumers, it would be deeply concerning if the international consensus on tariff-less digital trade were to come to an end,” says UK IT trade body TechUK.
Think also of the implications for the UK after Brexit. We’re one of the most digitally enabled economies in the world – the biggest online shoppers in Europe. The government has rightly identified the digital economy as an area where it hopes to expand its global influence outside the EU.
Imagine if many of the countries with which we hope to sign “ambitious trade deals” were to impose tariffs on our digital goods and services – the UK would be forced to reciprocate, potentially affecting many of the e-commerce services we use.
What if Donald Trump, in retaliation, imposed digital tariffs to protect Silicon Valley? The rest of the world would have to respond in kind – with tariffs on the likes of Netflix. It sounds like an absurd proposition, but is anything really that unlikely in a post-Brexit, Trumpian trade environment?
India and South Africa’s proposal will most likely be voted down by developed countries, but it’s an indication that the digital economy could become a future battleground in international trade relations. And for the UK, hoping to sell its digital expertise to the rest of the world, that would be a real concern.
Slowly, slowly, the UK government is getting better at big IT projects. Thanks to the Whitehall watchdog, the Infrastructure and Projects Authority (IPA), we’re able to track the improvements in major programmes, and its latest annual report shows the incidence of problem projects is declining.
None of the 29 government IT projects scrutinised by the IPA this year were given the worst “red” ranking, and only seven were rated as “amber/red”. Back in 2015 there were four red projects, and 16 were amber/red.
Congratulations, then, are due.
Thanks to the IPA, many of the recurrent and often basic problems are now spotted earlier and addressed. Put simply, it’s not so easy to hide when things are going wrong. But don’t think for a minute this means we won’t see further government IT failures.
Reading through the detailed notes published by the IPA to accompany its report, you can still see common issues that have yet to be cracked.
The difficulty in recruiting enough people with the necessary digital skills continues to be a challenge. Under-performing suppliers delay work. Over-ambitious, often politically driven targets mean timescales and business cases need to be redrawn. Sometimes things just take more time than people expect. Sometimes things just go wrong.
And because big public sector projects tend to be, well, very big – when they do go wrong, they go spectacularly wrong. The drive for smaller, more agile IT projects is also helping to mitigate big failures.
Project managers in the private sector would recognise most of these symptoms, but they don’t generally face the public scrutiny of their government counterparts.
Reading the IPA report offers a sense that, given a following wind, the right resources, and political support, the number of problem projects will continue to decline. But what it also suggests is that sense of progression is still vulnerable to unexpected shocks to the system.
“Ensuring the UK is ready to exit the European Union has resulted in a significant increase in the number of projects and programmes that need to be delivered across government. While many of these projects are not of the same scale or duration as [major] projects, EU exit related projects are, by their very nature, high priority and need to be delivered at pace and with confidence,” says the IPA report.
“As such, EU exit has required an increase in government resources and, specifically for project delivery skills in government, a need to redeploy professionals and prioritise activity across departmental portfolios.”
Those carefully worded paragraphs are a stark warning that the practicalities of whatever form of Brexit we end up with threaten to overwhelm government resources and redirect other priorities. As stated previously in these pages, Cabinet ministers hoping that technology will solve the more intractable difficulties of leaving the EU are kidding themselves – and us.
Whitehall’s project management community deserve recognition for important steps forward in recent years. They will be aware that Brexit could derail that progress very quickly.
MPs on the Treasury select committee have been doing everyone in IT a favour lately. Thanks to pressure from their investigations, we’ve had near-unprecedented access to the real stories of what caused the Visa and TSB outages that affected millions of people recently.
Visa provided a detailed, 11-page description of the technical problems that caused its card payment network to fail, while publication of an initial IBM report into TSB highlighted the glaring lack of preparedness at the bank when its IT migration went wrong. Sadly TSB’s response so far has been to play down the findings and say the report is out of date, rather than follow Visa’s lead and offer a full response.
It is absolutely right that companies whose IT is relied upon by millions of people should face in-depth scrutiny when that IT goes wrong. Such openness and detailed analysis benefits everyone working in IT – it helps to share lessons about the increasingly complex systems that run business and government. The more information is shared, the better everyone becomes at avoiding future problems.
The cyber security sector already understands this, with information sharing networks in place for organisations hit by attacks or data breaches, helping others to avoid a similar fate – not that everybody necessarily always takes heed.
As we’ve seen with the public scrutiny of Facebook, as technology increasingly becomes a utility in our lives, outdated attitudes towards secrecy and saving face harm not only customers but the companies themselves.
Look at shipping company Maersk, which was among the worst hit by last year’s NotPetya cyber attack – an incident the firm revealed cost it as much as $300m to deal with. As the firm was coping with the consequences of the malware, it put out a regular stream of updates, keeping customers and stakeholders informed about what was happening. Maersk was rightly applauded for its approach, which helped to mitigate criticism for having to shut down many of its IT systems.
Public scrutiny should be part of every business continuity and disaster recovery plan, helping to rebuild confidence when IT fails. IT leaders should take the initiative and prepare their firms for greater openness and work with their peers to share such valuable learning points – anyone who has been through a major outage will understand why they don’t want to have such an experience again.
Nobody likes to admit to failure, but in a digital world where “fail fast” has become a mantra, and where acknowledging failure is often seen as an essential part of being successful, detailed scrutiny when technology goes wrong is very much for the greater good.
Matt Hancock, the secretary of state for digital, culture, media and sport (DCMS), is not everyone’s cup of tea – especially when that cup is Matt Hancock branded. But regardless of whether you’re a fan or not, he’s clearly on a mission to accelerate the UK’s digital economy, and is winning some important battles to do so.
Across Whitehall and in the private sector, the past couple of years have seen growing frustration with the Government Digital Service (GDS) at the slow pace of its plans for data and digital identity – both rightly identified as being fundamental to the digital economy.
That frustration has been particularly felt in Hancock’s DCMS department, where national technology advisor Liam Maxwell and Matthew Gould, director general for digital and media policy, are responsible for “making sure the UK has the world’s best digital economy”.
Rumours of heated debates between Cabinet Office and DCMS officials came to a head in March when GDS lost control of data policy to Hancock’s team, barely a year after GDS put data at the heart of its government transformation strategy.
This week, Hancock announced plans for a new National Data Strategy – not yet in place, but already pushing forward on the data agenda that seemed in limbo over the previous 12 months.
In a press briefing the week before, attended by Computer Weekly, Hancock also dropped the bombshell that DCMS has quietly taken over digital identity policy too. If his team move as quickly again, we can expect to see a new strategy in place in the coming months, one that will be widely welcomed and will hopefully clarify what role GDS’s Gov.uk Verify ID assurance system will play in the UK’s wider digital identity ecosystem.
Hancock’s announcement seemed to take GDS by surprise – its officials initially claimed nothing had changed, bizarrely suggesting GDS was never in charge of digital identity policy. Even allowing for the intricacies of government machinery, that would come as news to most people operating in the identity community.
DCMS’s challenge now should not be underestimated – the steps it takes next on data and digital identity will resonate for years to come, setting the tone for the next stages in developing a world-leading digital economy in the UK post-Brexit.
We now have the most pro-digital government the UK has ever had – in terms of policy, direction and ambition, at least. It’s vital that DCMS turns that intent into practice and lays the foundations that allow the UK to take advantage of the enormous opportunities of the digital revolution.
Under the leadership of Satya Nadella, Microsoft looks like a very different organisation from the one his predecessor, Steve Ballmer, attempted to build. Ballmer infamously jumped up and down on stage at a Microsoft developer conference, proclaiming: “I love developers.” Outwardly, it looked like his strategy built on the strong Microsoft ecosystem pioneered by founders Bill Gates and Paul Allen, to spread Windows everywhere.
It is highly unlikely that Nadella will ever jump up and down on stage for anything. Instead, under his leadership, Microsoft is currently the largest contributor on GitHub and last week announced its $7.6bn acquisition of the open source repository.
Arguably, open source is not an alternative to commercial software. Rather, it is a way to create software collaboratively, and at a global scale. Active contributors in the open source community resolve problems; the source code is visible and can be tweaked and updated by anyone.
Analyst Gartner believes Microsoft’s acquisition of GitHub gives it a way to target 17 to 30-year-old developers, who code primarily using open source tools and build cloud-native software. That is why Microsoft acquired Xamarin in 2016, bringing an open source version of .Net, and why its Visual Studio Code editor is available on Linux.
If it is to succeed in offering a viable platform in the Azure cloud, Microsoft needs to make Azure the best platform for open source. The value-added services it then offers on top of Azure, such as for machine learning, artificial intelligence, the internet of things and graph databases, are the bait to entice developers to write code built using its public cloud.
Yet for every open source project Microsoft puts on GitHub, there are likely to be alternatives that developers can use, code submitted by the likes of Amazon Web Services, Facebook and Google to startups and individual developers’ work. This gives open source developers a choice, a choice that was not so easy in the Windows-only world of the old Microsoft.
Nadella now has a chance to do something truly remarkable. Windows updates are already distributed free after the initial licence purchase. The next step is to make the operating system itself free and rely on enterprise support contracts for revenue. And finally, Nadella should use the acquisition of GitHub to pave the way to making Microsoft’s core operating system fully open source.
More than three-quarters of UK organisations are experiencing challenges in recruiting people with digital skills, according to research from Deloitte. It’s a shocking figure, but is anyone really surprised?
“The skills shortage” has been a trope of tech news headlines for 20 years – the skills we are short of might have changed, but the UK IT sector has carried on regardless and isn’t doing too badly. So is it really such a problem?
Plenty of cynics would say it’s not, but the reality on the ground for IT leaders suggests otherwise. When Computer Weekly asks CIOs what is their biggest challenge, it’s almost always related to a lack of available talent with the skills they need for digital transformation.
Maybe 20 years ago the lack of skills related to SAP or Oracle products, as companies rolled out finance and business management software – there’s no shortage in those areas any more. But generally, only larger organisations were affected.
In our fast-developing digital economy, the new skills needed are in demand from organisations large and small, in the private or public sectors. Every company and government body needs to change to take advantage of internet technologies.
As we have catalogued in Computer Weekly on many occasions, the UK education and training system simply does not produce enough people to meet IT’s demand. Meanwhile, the government seems intent on making it harder to attract top overseas talent to help fill the gaps.
Thousands of people eligible for skilled visas in the UK are being refused entry due to government immigration caps. Between December 2017 and March 2018, around 3,500 of the 6,080 Tier 2 visas refused were for people skilled in science, technology, engineering and maths, with 1,226 of the refusals affecting IT roles. It’s also harder to get student visas, sending enthusiastic youngsters elsewhere. Let’s not forget the Brexit effect too.
There’s a global war for top technology talent, and without a change in approach, the UK is going to be on the losing side. Look at Canada, for example, where the government is working to promote its tech sector and recognises the need to import the best people. Anyone with the requisite tech skills and a job earning at least C$80,000 can get a work visa in just two weeks, and the government is offering generous tax credits to fund up to 50% of salary for jobs in research and development.
Canada’s liberal immigration regime is a counterpoint to its nearest neighbour, where US president Donald Trump’s belligerent approach is preventing some foreign IT workers entering the country. Canada hopes to benefit – Seattle-based Microsoft and Amazon, for example, are setting up development centres in Vancouver, a short hop across the border, to house imported skills there instead.
“Canada’s comparatively open attitude towards immigration and attracting talent is a real strength that the UK should look to as a model,” says Thomas Goldsmith, policy manager for Brexit and trade at UK IT trade body TechUK.
“The UK should be keenly aware that there is a global marketplace for in-demand digital skills, and if these workers find it difficult to secure a UK visa or if they are made to feel unwelcome, then there will be no shortage of countries holding their arms open to them instead.”
Deloitte’s research also shows that UK organisations want to increase their investment in emerging technologies such as artificial intelligence, blockchain, internet of things and virtual reality. Firms can see the opportunity – they just can’t find the talent to make it happen. The UK’s digital economy is under threat until the skills gap is addressed.
Last week, the Government Digital Service (GDS) dedicated a whole day to talking about what it’s up to, at its first Sprint event in two years. About time too, many observers said, since GDS seemed to have gone to ground over the past six months, notable mostly for its silence even at a time when Theresa May diminished its role in Whitehall by stripping away responsibility for data policy.
However, it tells you something about how GDS is perceived at the moment that the biggest news headlines from the day came instead from a 10-minute press briefing given to journalists on the sidelines of the event.
During a day of presentations and workshops showcasing the work of GDS, its Gov.uk Verify digital identity system was barely mentioned. But when Nic Harrison, director of service design and assurance, digital identity at GDS – the man in charge of Verify – talked to Computer Weekly and others, he opened up something of a Pandora’s box.
To Harrison’s credit, his openness was welcome – GDS’s responses to Computer Weekly questions about Verify have for some time bordered on monosyllabic, and have often been information-free to the point of being meaningless.
All the while, the players in the UK’s burgeoning digital identity ecosystem have grown ever-more frustrated with the lack of progress on Verify.
Harrison conceded that Verify adoption is poor – “still not stellar,” as he put it. He denied there was ever any fight between GDS and other Whitehall departments over Verify, citing “very sound operational reasons” to justify the lack of take-up.
That’s not what sources in some of those other departments have consistently said, but maybe they see things differently from outside GDS. Or perhaps “operational reasons” includes Verify’s inability to be used by companies or intermediaries – an essential requirement for HM Revenue & Customs (HMRC) – or to work for Universal Credit claimants with little digital footprint.
Harrison also cited departmental cost controls introduced in the 2015 spending review, and the demands of Brexit as further reasons for the problems afflicting Verify.
“They’re worrying about EU exit, so we are frankly just not going to get hundreds of new services being digitised in the next year to bring on Verify,” he said.
As some observers have noted, GDS was given £450m in the spending review, a significant portion of which was earmarked for Verify – in return for an anticipated £1.1bn savings. But because of the poor performance of Verify, GDS hasn’t been able to spend all the money it was allocated for the project, and has been under pressure to return that unspent budget to the Treasury.
Perhaps it’s unsurprising, therefore, that patience among key players in the digital identity community is wearing thin.
Even OIX, the standards body that is largely funded by the Cabinet Office, and which GDS chose to help develop the market for Verify, is starting to publicly criticise progress. In a report published in April, pointedly titled Digital identity in the UK: The cost of doing nothing, OIX said: “The UK is among an ever-smaller group of developed nations without a national digital identity infrastructure.”
The report added: “While over 60 countries around the world have developed or are close to launching a digital identity scheme, digital identity developments in the UK have been more limited… We have seen the government’s own identity scheme Gov.uk Verify emerge, but it has yet to reach the widespread adoption it was targeting.”
Behind gentle words, harsh criticism often lies.
Experts highlight a sense of déjà vu in GDS’s latest comments on Verify. Here is a comment from one longstanding digital identity expert, to Computer Weekly:
“All the good (and bad) words on standards, industry engagement, as well as dogma about hubs seemed unchanged from [a] presentation to the Department for Trade and Industry in 2011. European interoperability demands were news at the Manchester declaration in 2005 – not delivered by 2010, so who thought that what was due in 2013 would be here by 2018? Local government still seemed excluded from contribution to provision.
“No battles? Perhaps those battle-scarred by the bitter turf war with DWP have all gone – and those in HMRC changed sides? Scotland didn’t engage. Defra and NHS weren’t exactly enthusiastic. And the missing commercial model for Verify? Not only is there no discernible interaction with active groups like Kantara, even the usually obsequious OIX’s latest report has indicated that all is not well in the UK.”
Other identity experts lament how the delays in Verify have hindered the development of a UK identity ecosystem, and warn that the UK is slipping down the list of digital economy nations as a result. Some have called for greater involvement and leadership from banks, especially in light of recent financial industry regulations such as PSD2 and open banking.
Change of strategy
GDS director general Kevin Cunnington was due to speak at an event on digital identity later this week – according to the latest update on the event website, he has mysteriously disappeared, with Nic Harrison stepping up instead.
Rumours grow that GDS is considering a change of strategy for Verify. At Sprint, Harrison confirmed Computer Weekly’s story that GDS is moving towards promoting “Verify compliant” identity providers, rather than pushing Verify itself as a product.
He also partly acknowledged concerns about the role of the existing Verify identity providers, who have an effective monopoly on public sector ID where Verify is in use.
Harrison said he wants to see an ecosystem based on interoperability and standards – with government as the “arbiter”, not the provider. As the old saying goes, the great thing about standards is there are so many of them to choose from.
So it would appear that GDS is slowly and quietly starting to acknowledge the flaws in the current Verify plan – and hopes to extricate itself through a path of least embarrassment. A new strategy is emerging that will place greater responsibility on the private sector to drive the UK’s digital identity infrastructure.
The next question will be whether that’s enough to relieve the frustration caused by Verify so far, and stimulate the functioning identity ecosystem that the UK’s digital economy so desperately needs.