So it turns out the O2 mobile network failure that took out data access for some 30 million people this week was caused by an expired software certificate – no great conspiracy, no programming error, no undiscovered bug, no malicious interference, but one of the most basic systems administration mistakes you can imagine. Someone somewhere forgot to renew a certificate.
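An expired certificate is exactly the kind of failure that routine automated monitoring can catch before it bites. As a purely illustrative sketch – the 30-day threshold and any hostname passed in are assumptions, not details of the O2 incident – a few lines of Python can report how long a server's TLS certificate has left:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_remaining(not_after: str, now: datetime) -> float:
    """Days left on a certificate, given its 'notAfter' field
    in the form 'Jun  1 12:00:00 2025 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).total_seconds() / 86400

def check_host(hostname: str, port: int = 443, warn_days: float = 30) -> str:
    """Connect, read the peer certificate, and flag imminent expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    left = days_remaining(cert["notAfter"], datetime.now(timezone.utc))
    status = "WARNING" if left < warn_days else "OK"
    return f"{status}: {hostname} certificate expires in {left:.0f} days"
```

Run on a schedule – say, a daily cron job calling `check_host` for every public endpoint – this turns "someone forgot" into an alert a month in advance.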
As a wise voice once said, there’s no patch for stupidity. And herein lies the great unspoken conundrum at the heart of the digital revolution. Computers go wrong. Why? Because they’re designed, manufactured, programmed, configured, secured and operated by the most fallible, unpredictable and unreliable resource in the technology world – people.
Of course, it’s those same people who every day ensure that the IT systems supporting every company and government in the world work mostly as intended, who keep the internet running and protect the vast majority of our personal data. That’s because people are pretty good at computers these days. But we’ll never be perfect.
The job of running IT systems is becoming increasingly abstracted from the technology – virtualisation, cloud, containers, serverless, orchestration, all these trends aim to remove that human fallibility from everyday tasks. Not forgetting that it still takes another human somewhere to make those technologies work in the first place.
Much as artificial intelligence (AI) and automation are replacing or augmenting corporate jobs, so the IT department will see further dramatic change as more of its responsibilities are taken over by software robots. Of course, those software robots were created and programmed by humans too. And they aren’t exactly perfect – as the Amazon workers in a New Jersey warehouse found out this week, when a robot accidentally punctured a can of bear repellent, sending 24 staff to hospital.
There is, correctly, much debate about ethics in AI and technology, not least the need to prevent human bias from becoming too infused in the algorithms they rely on. People outside IT are taking more of an interest in the workings of IT than ever before. It’s fair to assume those non-IT types are pretty fallible too.
When O2 went down, there was much humour taken from the sight of people trying to consult paper maps to find their way around, and attempted insights from those who found a whole new world beyond the smartphone they’d been glued to until then. The outage was a small reminder of how reliant most of us have become on technology.
For all the great advances of recent decades, it’s going to be a long time before we no longer see headlines screaming “outage”. Whether through malice or simple error, human fallibility is a part of our digital future too.
Sometimes reporting the latest tech news at Computer Weekly throws up an entertaining juxtaposition. Take these two headlines, for example, from last week:
Just as IT industry trade body TechUK is pushing health and social care secretary Matt Hancock to accelerate his grand technology vision for the NHS, the Institute for Public Policy Research publishes a report that suggests digital healthcare is the lowest priority for patients.
Sound familiar? The tech sector telling its prospective customers they should be buying more stuff, while the ultimate users of that stuff show somewhat less enthusiasm? In one form or another, this has been a recurring trait throughout enterprise IT history.
Of course, it goes back even further than that. Vehicle manufacturing pioneer Henry Ford is widely credited with saying, “If I had asked people what they wanted, they would have said faster horses”.
While NHS patients might like the idea of faster doctors and nurses, there’s no escaping the reality that technology will accelerate their capacity and capability far more effectively.
But this particular juxtaposition highlights a deeper trend at the moment. In the decade since the launch of the iPhone, we’ve seen technology becoming ever-more ubiquitous and popular, but in the past year or so we’ve seen the start of what is, perhaps, an inevitable backlash.
Much of the growing negativity towards tech is coming from the dominance of US internet giants like Facebook, Google, Amazon and Apple – and especially continuing revelations about topics such as Facebook’s cavalier attitude towards our personal data, Amazon’s working practices, or Google’s tax policies.
It’s a concern that the phrase “big tech” is becoming commonplace, carrying with it the hint of malfeasance that originated from “big tobacco” and “big pharma”.
The underlying positive aspect is that the backlash is a natural response to our increasing reliance on tech, and the influence it’s having on driving social and cultural change. Such a process is never easy, but if you believe the benefits of technology outweigh the concerns, then it’s incumbent on tech evangelists to continue to make the case.
The next few years are likely to be difficult for the tech sector, and those who come through successfully will be the ones who change their behaviour – grow up, so to speak, from tech’s adolescence. Tech is not the young upstart anymore; it needs to be a responsible member of society.
For everyone who works in IT, it’s your responsibility, too, to focus on the benefits and to mitigate the potential downsides of the digital economy.
As we, seemingly, edge closer to something resembling a UK deal for leaving the European Union (and by the time you read this, that statement could quite possibly have been superseded by events), so the government is starting to reach out to the tech sector to ease its ongoing concerns.
This week, Brexit secretary Dominic Raab came along to meet a room-full of tech leaders and journalists in London, to answer their questions and put forward his case for Brexit.
It was on this occasion that Raab exposed himself to widespread mockery – far beyond his tech audience – for admitting that he had not fully understood the importance of the Dover to Calais crossing for UK trade. Cue facepalms all round.
But did he reassure the gathered leaders that Brexit is not the disaster that most of the industry fears it will be? Whether you feel he achieved that objective depends mostly on how much work you are willing to allow the word “if” to undertake on his behalf.
When asked about the prospects for the UK’s artificial intelligence (AI) sector – in which the UK likes to see itself as something of a pioneer and where there is undoubted future opportunity – Raab said all will be well, “if you get it right”.
That’s an answer that seems to sum up most of his pitch. Be reassured, leaders of a critically important UK industry, everything will be fine, if you get it right.
That poor ‘if’ is left to support all the justified concerns about data flows, regulatory compliance, lack of a deal for services, losing the customs union, exiting the digital single market, ending free movement of talent from the EU, attracting foreign investment – add your own weights to the straining bar that ‘if’ is holding.
“Brexit may create opportunities,” Raab continued – note, “may” not “will”, and this from one of the most ideologically committed Brexiteers.
“I want to deliver a global, outgoing Britain,” he said, to an audience of outgoing, globalist tech leaders, no doubt somewhat surprised to learn they were not already outgoing and global.
He repeated the government line that losing EU freedom of movement will not hinder the industry, because we’ll have a global immigration system instead. We do, of course, already have a global immigration system in the UK – one that fails to attract enough overseas talent, and does so solely because of self-imposed restrictions.
“Most of the growth markets in the future will be in the non-EU markets, whether Latin America or Asia, and so, for instance we want to be able to promote e-commerce,” said Raab. It might be a surprise to UK e-commerce firms that apparently they don’t have the ability to target non-EU growth markets or promote their services therein. Pretty sure they do that now.
Computer Weekly has long held the view that Brexit is bad for tech. Should Raab’s positivity prove correct – if “if” steps up and carries all that weight – even then we’re still to be convinced the growth opportunities will be better than they would be within the EU. If we’re wrong, we’ll stand up and say so. But our “if” is a lot smaller than Raab’s.
Gone, or so it seems, are the days when Computer Weekly laments after every Budget statement from the Chancellor of the Exchequer that tech has been overlooked. There is little doubt that government now realises that support for, and investment in, the technology sector is critical to the UK’s future.
Of course, we can and will still observe that there is more to do, but Philip Hammond’s latest Budget put at least a small finger in many of the necessary pies – startups, R&D, skills, digital government, broadband and more – even if he did anger the IT contractor community by extending controversial IR35 reforms to the private sector.
Hammond’s digital services tax – targeting the big web giants and their creative tax accounting – has unsurprisingly been attacked by the tech industry, but putting aside the rights or wrongs of the policy, it’s another reflection of the growing importance and influence of the IT world on government decision-making.
As with so much coming from the UK government, however, the positive Budget announcements exist in a parallel world to the uncertainties and concerns over Brexit.
A few days earlier, digital minister Margot James told Parliament that she still cannot guarantee a data-sharing agreement with the European Union in the event of a no-deal Brexit. As Labour’s Liam Byrne, who was questioning James at a session of the European Select Committee, said: “Without data sharing our exports will grind to a halt”.
That same week, a National Audit Office report on the UK border’s preparedness for leaving the EU found that 11 of the 12 “critical systems” at the border are at risk of “not being delivered on time and to acceptable quality”.
Not quite Hammond’s previous glib and spectacularly uninformed observation when asked about a possible digital solution to the Irish border issue, where he proclaimed: “I don’t claim to be an expert on it but the most obvious technology is blockchain.”
Congratulations to whichever tech lobbyist persuaded whichever civil servant to tip the Chancellor on that fantastical idea.
The government’s new-found love for the tech sector is welcome, but its desire to make technology a panacea for all the ills of Brexit needs quickly to be tempered.
For technology leaders, the dilemma continues – eager to take advantage of Westminster’s tech-friendly approach, but fearful that a bad Brexit of whatever form might rapidly unravel the advances made in recent years.
We look forward to a future Budget where support for tech is not just welcome, but unequivocal and full of certainty.
A large and toxic cloud has hung over NHS IT since the failure of the £12bn National Programme that saw billions wasted on systems that barely worked. Since then, we’ve seen the collapse of Care.data, the botched attempt to share patient records through a central database, and a plan for a “paperless NHS” that first aimed to deliver by 2018, was put back to 2020, and is unlikely to be achieved before 2023.
The opportunity for technology to reform and improve the UK’s health and social care system is obvious to anyone who’s ever used a smartphone. There are undoubtedly pockets of excellence in the NHS, but the gap between the best and the worst is enormous. IT leaders have never managed to counter the argument that says: do you want to spend more money on doctors and nurses, or on computers? In the austerity-hit NHS, there’s only ever one answer.
Even in the better NHS trusts, there’s a hugely complex legacy to unravel. At Leeds Teaching Hospitals – a great example of a forward-thinking health organisation – there are 460 different IT systems in use. Multiply that across the whole health system and the transformation to digital becomes an ever-bigger challenge.
One day, hopefully sooner rather than later, somebody has to get NHS technology right. Enter Matt Hancock.
The new secretary of state for health and social care comes with technology squarely in his comfort zone. Through ministerial appointments at the Cabinet Office and the Department for Digital, Culture, Media and Sport (DCMS), it’s been the strongest thread in his political career. There was disappointment in the tech sector when Hancock was promoted from DCMS to health because he was seen as a passionate advocate for digital at the highest levels of government, and he understood the issues better than any of his predecessors (although admittedly, that’s not always been an especially high bar).
He’s wasted no time in putting technology overhaul at the heart of his plans to reform the NHS and social care systems. In July, he promised to make £487m available for NHS technology projects and to replace paper-based systems. In September, he announced a £200m fund for digital centres of excellence and plans to pilot the NHS app across England.
His Labour shadow, Jonathan Ashworth, observed: “This isn’t a serious plan for technology and innovation in the NHS – it’s a pipe dream”.
Now Hancock has launched his “technology vision” for the NHS – a digital future based on open standards, interoperability and APIs, but which retains local autonomy of IT decision-making. It’s a perfectly sensible, ambitious plan, which appears to have learned the lessons of the National Programme. But to paraphrase an old saying, if that’s where you want to go, you wouldn’t start from here.
To be fair, Hancock told Computer Weekly that he doesn’t underestimate the challenge – but he’s looking for a real change in attitude and approach to technology at a local level. His predecessor, Jeremy Hunt, became the longest-serving health secretary in history – not far short of six years. Hancock may need to be in place even longer to see through the digital transformation he wants – but this time, the NHS needs finally to get IT right.
Depending on your perspective, Gov.uk Verify is now either secure in its future at the heart of the UK’s emerging digital identity ecosystem, or it has one foot in the grave and is on the way to its inevitable demise.
The Cabinet Office has produced a carefully worded announcement that leaves room for interpretation, while also giving the Government Digital Service (GDS) a way to save face. You can read the full announcement to Parliament here, but the essence is this:
In 18 months’ time, after a “capped expenditure” approved by the Treasury has been spent, government will cease further public investment in Verify – but Verify itself will not cease. Instead, five identity providers (IDPs) will use the technology they developed as part of the Verify programme – and, most importantly, the Verify users they each registered – to offer a private sector digital identity solution, based on “state-backed assurance and standards”.
Whether those providers choose to still call their products “Verify”, or to use some form of Verify branding, will be up to them. As part of the new contracts those companies have signed with the Cabinet Office, they will have permission to reuse their Verify technology without requiring government approval, which they would have had to seek under their previous deals.
What’s important, as far as government is concerned, is that the (so far) 2.9 million citizens signed up to Verify will still have a digital identity they can use that allows them to maintain access to the online public services for which they set up that identity. Those citizens should, in theory, also be able to use that identity in future to access any private sector services that accept the same standards.
GDS will still support whatever in-house Verify technology – and whatever staff – is needed to support users accessing digital public services. But by the time the IDPs take over, this will be done on a “cost neutral” basis to government – in other words, the IDPs will have to pay for it.
It’s not clear – and not yet decided – what services GDS will need to offer those IDPs. It could be none (although that’s unlikely on day one) – or it could slowly decline to none. It remains to be seen how the IDPs will be able to offer a service with only a 47% verification rate as a commercial product – and if they get that rate up to an acceptable figure, why didn’t they do so before?
GDS will continue to advise Whitehall departments on appropriate use of digital identity in their services – but any previous attempts to mandate the use of Verify are over. GDS will keep departments on the straight and narrow, but it’s up to the departments to decide which standards-based identity products they want to use. That includes using suppliers that have had no involvement in Verify.
Digital identity market
The five chosen IDPs will be responsible for helping to build the wider digital identity market in the UK. GDS can then say it has been instrumental in the establishment of a digital identity ecosystem that did not exist before. Others will decide if the £130m and more invested in Verify was justified to reach that end (and that’s not to mention the many millions more invested by HMRC, DWP, NHS and others in building their own digital identity systems because they couldn’t rely on Verify).
The 18 digital services that currently use Verify – along with three others in private beta – will continue to offer Verify to users for as long as they want to, but it’s undecided how new users will select which provider to use after the 18-month transition. Previously, users registering with Verify have been asked to select one of seven (now five) IDPs. It’s yet to be decided what will be offered to users at that point in future – for example, a selection of IDPs, or a preferred IDP per service, or simply a statement that they can use any IDP that conforms to the standards.
Think about how that’s going to come across to any citizens uncomfortable with technology at the best of times. But perhaps there’s a solution yet to be determined.
Two of the existing IDPs, Royal Mail and Citizen Safe, have dropped out. At the start of 2017, those two accounted for approximately 3% of all Verify users, so in the grand scheme of things they’re not much of a loss. But with 2.9 million users, that still equates to 87,000 citizens. Royal Mail and Citizen Safe will continue to support those people for the next 12 months, but after that they will have to re-register with another IDP to continue to access government services – there will be a communications plan to explain and help.
Verify faces its biggest challenge yet in 2019 – perhaps the reason why the 18-month transition period has that duration. By the end of this year, the digital version of Universal Credit (UC) will be rolled out to all Jobcentres, and next year millions of existing benefits claimants will be told they have to apply for UC. As part of that process, they will have to use Verify.
DWP already knows Verify can’t cope on its own, and has had to develop its own system to work alongside. Verify has consistently struggled to successfully register even half of the citizens who attempt to use it. Early tests on UC suggested that only 35% were able to set up a Verify account online. UC could potentially more than double the number of Verify users – the system has never been asked to work at such scale, and especially not for a service under the intense political and public scrutiny of Universal Credit.
The five IDPs have already been involved with the UC programme, but will be working more closely with DWP over the next 18 months.
Potentially the biggest winners here will be the identity companies that have been excluded from Verify in the past, and whose business growth has been stifled as a result. As long as they conform to standards, the public sector market will finally open up to them.
Those standards will be set, not by GDS, but by the Department for Digital, Culture, Media and Sport (DCMS), which took over policy responsibility earlier this year. DCMS has little interest in supporting Verify, and privately sees its standards-based approach as heralding the end of Verify. Like many other departments, DCMS has simply lost confidence in what was meant to be the government’s flagship digital identity product.
The lessons of Verify
Over the last two or three years, as its critics increasingly claimed that Verify was travelling down a dead-end street, GDS has retreated into secrecy and silence over its plans. It took the government’s Infrastructure and Projects Authority to recommend the termination of Verify to reach this point.
Verify was always an ambitious and important programme – digital identity is just hard to do – and GDS deserves credit for taking it on.
But somewhere along the line, it got lost. Remember that as recently as February 2017, the Cabinet Office set a target of 25 million Verify users by 2020. Only last month, minister Oliver Dowden reiterated that goal. It’s unlikely there will even be 25 million citizens with any form of standards-based digital identity in the UK by that time.
GDS will tell us – and it may be correct – that the time, resources and money invested in Verify have been worth it to help establish a UK digital identity ecosystem. The difficulty is, we just don’t know if that’s true.
GDS has learned a lot – about what works and what doesn’t work – on digital identity through Verify. If it really wants to be viewed as the prime instigator of a market that will be critical to the success of the UK digital economy, it now needs to be fully open and transparent about its Verify journey. There is surely much that the whole ecosystem can learn too.
The Department for Digital, Culture, Media and Sport (DCMS) has been conducting a review of digital identity since taking over policy responsibility from the Government Digital Service (GDS) in June.
Computer Weekly has learned that at the core of the DCMS proposals to boost the UK’s digital identity ecosystem is a plan to open up government databases via APIs to the private sector – a move that could also administer the last rites to GDS’s troubled Gov.uk Verify system.
Under the proposals, databases containing vital identity information such as passports and driving licences could be accessed through APIs by identity providers. Any company seeking to offer digital IDs for online transactions would, in theory, be able to quickly and cheaply validate data against recognised government information – the closest thing the UK has to a “gold standard” for identity data.
Such a system would not mean third parties accessing data directly, only checking that ID data provided by an individual to that third party is correct.
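In other words, the API answers a yes/no question rather than handing over a record. A minimal sketch of that pattern – the record store, document IDs and field names here are invented for illustration, not any real government API:

```python
import hmac

# Hypothetical stand-in for a government record store. A real service
# would sit behind authentication, audit logging and rate limiting.
_RECORDS = {
    "passport:123456789": {"surname": "SMITH", "date_of_birth": "1980-01-31"},
}

def check_claim(document_id: str, field: str, claimed_value: str) -> bool:
    """Answer only 'does this claim match the record?' – the stored
    value itself is never returned to the caller."""
    record = _RECORDS.get(document_id)
    if record is None:
        return False
    actual = record.get(field, "")
    # Constant-time comparison, so response timing leaks nothing
    return hmac.compare_digest(actual.encode(), claimed_value.encode())
```

An identity provider validating a sign-up would receive only `True` or `False` from such a call – never the underlying passport or driving licence data.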
The concept is a reversal of the principles underlying Verify, where only a small set of government-selected companies are allowed access to these databases through a GDS-developed document-checking service, which performs a similar function.
Where Verify is a closed shop, the API approach would allow any suitable provider – including other parts of government – to offer assured digital identities, creating a wider, market-based ecosystem.
DCMS is understood to believe its plan would be significantly cheaper to run than Verify – potentially costing a fraction of a penny per transaction. Using Verify, by contrast, GDS pays its pool of identity providers on average about £5 for each user they successfully register.
Verify is designed around a “hub” where users are directed to one of seven identity providers (IDPs) when they wish to establish a digital identity to access one of the 18 online government services that currently use Verify.
Under the DCMS plan, theoretically any digital government service could choose to accept approved identities from any third party that has used the database APIs. The department’s review is understood to be based on the principle that government should enable a digital identity market using public data, rather than building its own system.
The future of Verify is already in question after government watchdog the Infrastructure and Projects Authority recommended it be scrapped, which would mean writing off more than £130m spent so far by GDS rather than throwing more money at a programme that many in Whitehall see as a failure.
GDS is fighting to keep Verify going – only this month Cabinet Office minister for implementation Oliver Dowden confirmed the government is still committed to its target of 25 million Verify users by 2020. Whitehall internal politics may yet find a way to rebrand the DCMS plan as “Verify mark two” or something similar, in order to be seen to deliver on a promise that was part of the Conservative Party election manifesto in 2017.
GDS’s existing contracts with the Verify IDPs are understood to be ending soon, and if the DCMS proposal is accepted it seems unlikely those IDP contracts would need to be renewed other than to manage existing users as the service they provide is wound down.
Private sector identity providers have long been frustrated at the way the Verify model has shut them out of government, and will hope that the DCMS plans will kick-start the development of a growing market in an area that’s hugely important for the UK’s digital economy.
Other areas of the public sector could benefit from the API approach too, with HM Revenue & Customs, Department for Work & Pensions, NHS England and the Scottish government all working on their own digital identity systems rather than using Verify.
Long-term GDS watchers will recall that the organisation was set up following a 2010 recommendation by web entrepreneur Martha Lane Fox in a report commissioned by then Cabinet Office minister Francis Maude. One of the main suggestions put forward by Lane Fox was to “mandate the creation of application programming interfaces (APIs) to allow third parties to present content and transactions on behalf of the government. Shift from ‘public services all in one place’ (closed & unfocused) to ‘government services wherever you are’ (open & distributed)”.
There would be a certain irony if the eventual use of APIs through another department brought about the end for GDS’s flagship project.
The digital revolution is challenging the regulatory environment across every westernised, developed economy. Governments in the EU, UK, France, Germany and the US are each trying to take a lead in working out how to deal with the new challenges presented by internet companies such as Facebook and Google. There are debates taking place around data protection, privacy, responsibility for content, copyright and algorithms – and other issues will arise that have barely been considered even now, not least around artificial intelligence (AI).
It has long been the case that regulation lags well behind technology, and as a result regulators tend to try to shoehorn new digital developments into existing structures. A prime example comes with social media, especially after the fallout from Facebook and Cambridge Analytica over use of customer data.
Internet platforms have always been regulated as if they were telecoms companies – US President Bill Clinton established in 1997 that web companies were classified as “mere conduits”. This means that, like a phone company or broadband provider, such firms were not considered legally responsible for content shared on their platforms when it was created by their customers/users.
More recently, issues such as extremist content and child abuse images have challenged that doctrine, with many observers calling for web platforms to be treated like media companies, which are legally responsible for all content published on their sites.
Neither scenario works, and it’s clear that a new style of regulation is needed for a different type of company. Can regulators ever keep up with the pace of technological change? Probably not – but they can be better prepared for it.
The next great challenge for regulators will be AI – how, for example, should we oversee the algorithms that will increasingly make decisions that affect our lives? How do we ensure those algorithms are fair and unbiased? What about the development teams that create them – should they be sufficiently diverse to make sure everyone in society is considered? And what about machine learning, where algorithms evolve without human intervention as the AI system “learns”?
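One concrete starting point for auditing fairness is to measure outcomes across groups. As an illustrative sketch – demographic parity is only one of several competing fairness definitions, and what threshold a regulator might set is an open question – this computes the gap in positive-decision rates between groups:

```python
def demographic_parity_gap(decisions, groups):
    """Given 0/1 decisions and a group label for each one, return
    (gap, rates): the spread between the highest and lowest
    positive-decision rate, plus the rate per group."""
    counts = {}  # group -> [positives, total]
    for decision, group in zip(decisions, groups):
        tally = counts.setdefault(group, [0, 0])
        tally[0] += decision
        tally[1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates
```

A gap of zero means every group receives positive decisions at the same rate; a regulator auditing, say, a loan-approval algorithm could require the gap to stay below an agreed limit.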
Nigel Shadbolt, one of the UK’s leading academics in AI and open data, told Computer Weekly that if the UK wants to take a lead in AI, then an area for focus is ethics. Realistically, the UK can’t compete with the multibillions that China is throwing at the sector – but China’s social and political culture is unlikely to take the same approach to regulation and ethics as we would.
It’s an easy thing to say, much harder to do – but the UK has a unique opportunity to lead the world in ethical regulation of the digital revolution. Don’t regulate on specifics – regulate on values and principles that can underpin technology development for years, maybe even decades to come.
The UK government is already setting up a Centre for Data Ethics and Innovation, and Theresa May has called for the UK to be a world leader in ethical AI. We have a genuine opportunity to set the standards that the world will follow. In such uncertain times for the UK tech sector, ethics is one area where we can and must take the lead.
The government’s flagship digital identity system Gov.uk Verify has become a car crash in motion, accelerating towards its demise.
The Cabinet Office’s project watchdog has condemned Verify by recommending it be scrapped because the departments that would have to fund its continuation have lost confidence in its ability to deliver.
But the Government Digital Service (GDS) is hoping it can resist the inevitable and is making one more attempt to gain funding to keep Verify going. Realistically, the best outcome GDS can hope for is a managed decline to allow those few services that adopted Verify time to find an alternative.
GDS is pinning its hopes on Universal Credit, which has Verify as part of its user registration process. Given that the Department for Work and Pensions had so little confidence in Verify as long ago as 2015 that it decided to develop its own digital ID system to fill in the gaps, this seems an unlikely hill on which to fight Verify’s last battle.
The cost will be significant – GDS has spent at least £130m on Verify so far, which doesn’t include how much other departments have spent integrating Verify or – in too many cases – developing an alternative.
Verify promised so much, but GDS leadership failed to listen to the counsel of those who have said for some time it needed to change direction. A recent attempt to “reset” the project was too little and too late.
Ed Tucker, the former head of IT security at HM Revenue & Customs (HMRC), put it well on Twitter: “There has been some simply awesome work done by GDS. Truly awesome. Changing the face of delivery. This though, is classic horrific government IT programme. Flogged to death at great expense to save face.”
The implications of scrapping Verify are significant. Private sector identity companies have invested heavily in developing software to be compatible with the GDS system. The UK has pinned its plans for EU-wide digital identity interoperability on Verify.
But GDS has already lost control of Verify’s role in the wider UK identity ecosystem – that responsibility moved to the Department for Digital, Culture, Media and Sport (DCMS) in June. GDS is contributing to a review of digital identity policy, but that’s now being led by DCMS. It seems unlikely that DCMS will be a lone voice in support of Verify when so many other departments have lost confidence.
Verify’s original goal was to replace the ageing Government Gateway system now owned by HMRC. But after seven years, Verify has less functionality than the clunky old Gateway – a system developed in just 90 days that is still in widespread use 17 years after its launch.
What next for GDS?
What does this mean for GDS? There’s no denying it is a huge blow – Verify is one of the core systems for GDS, taking up a significant chunk of the funding allocated to GDS in the 2015 spending review.
This will be the second major failure for GDS – the first coming back in March 2015 when a GDS-led project at the Rural Payments Agency (RPA) collapsed. To GDS’s credit, two major failures in seven years is a huge improvement on what came before. But RPA dented confidence in GDS’s ability to deliver on large-scale systems integration projects, and gave ammunition to the Whitehall mandarins who never liked GDS in the first place.
Scrapping Verify could have even wider repercussions – it was included in the Conservative manifesto for the general election in 2017, and ministers don’t like seeing their high-profile policies shot down from within.
With the Cabinet Office’s own major projects authority recommending the termination of the biggest project in another Cabinet Office organisation, you have to think this implies a loss of confidence in GDS from the highest levels of the Cabinet Office too.
The recommendation from the Infrastructure & Projects Authority seems to have come as a shock to GDS. When Computer Weekly requested a statement about our story and asked specific questions about the future of Verify, GDS and the Cabinet Office declined to comment. That in itself is an indictment of any organisation under scrutiny over the potential waste of £130m of taxpayers’ money.
At this point it’s difficult not to raise the issue of leadership. There was a brief rumour recently that GDS director general Kevin Cunnington was leaving – he’s not – but sources say there were a lot of happy people in GDS when the story first circulated.
In December last year, a GDS staff survey leaked to Computer Weekly revealed major concerns about leadership. Only 28% of GDS employees gave a positive response – down five percentage points on the previous year; 16 percentage points below the average across the whole Cabinet Office; and 26 percentage points down on the top teams across all of the civil service departments that took part in the survey.
Talk privately to senior people in digital government in Whitehall and leadership is the biggest concern they raise over GDS’s future. The Institute for Government has also been critical of Whitehall leadership around digital, data and technology.
Cunnington chose – and was backed – to change GDS’s role to be more of a support, education and consultancy operation, rather than the disruptive transformational approach of his predecessors. He has been largely successful in achieving that change – but at the price of diminishing GDS’s influence across Whitehall, according to multiple sources. GDS also lost control of data policy to DCMS earlier this year.
Note that when we talk generally about “leadership” here, it’s not specifically a reference to Cunnington – many experts are critical of his boss, permanent secretary John Manzoni, and some GDS staff have in the past expressed concerns about others in the GDS senior team.
GDS is taking an important role in supporting the technology implications of Brexit, and while EU exit remains the biggest priority for this government, it’s unlikely anyone will want to get in the way. But the next spending review is coming up. GDS has 860 staff and an annual budget of £128m – it seems incredibly unlikely the Treasury would approve similar resources if £130m is written off from the failure of its highest-profile flagship project.
There is an impending storm heading towards corporate IT, and the outlook doesn’t look sunny for cash-strapped, time-constrained IT departments. Just like in 1999 with Y2K, there is an absolute deadline – support for SAP ECC 6, the core component of SAP’s enterprise software platform, will end in 2025.
Then there is the upgrade. As Computer Weekly has found, moving to the next version of SAP, S/4 Hana, is not a simple upgrade. Yes, the technical stuff to get S/4 Hana running can be achieved in three months or so. But this is only viable in a greenfield installation. In the real world, businesses have accumulated vast amounts of enterprise software over time, intricately linked to enable business data to flow across these disparate systems in as seamless a fashion as possible. The people who originally implemented these highly complicated integrated systems are either well on their way to retirement, if they are in-house, or long gone, if the project was handled by a system integrator. And new IT staff are not exactly banging on the door with SAP skills.
In fact, it is widely recognised that there is an S/4 Hana skills shortage, and the younger generation of IT professionals simply do not find SAP appealing enough to invest the time and effort to learn it. In many ways, the SAP systems currently running businesses have become a kind of technical debt. The data that flows through these companies has become essential to keep the business running, yet few organisations have a sound understanding of these workflows – how data moves between enterprise applications to support a business process.
Experts have forecast that most of the time in implementing S/4 Hana will be taken up by understanding and cataloguing these data flows. Some companies will opt for third-party maintenance to extend the life of their existing SAP system, while others will replace SAP with something else. But as businesses start building a digital core, irrespective of whether they deploy S/4 Hana or implement something entirely new, the preparatory work in understanding data flows will be an essential step to take.