Gone, or so it seems, are the days when Computer Weekly laments after every Budget statement from the Chancellor of the Exchequer that tech has been overlooked. There is little doubt that government now realises that support for, and investment in, the technology sector is critical to the UK’s future.
Of course, we can and will still observe that there is more to do, but Philip Hammond’s latest Budget put at least a small finger in many of the necessary pies – startups, R&D, skills, digital government, broadband and more – even if he did anger the IT contractor community by extending controversial IR35 reforms to the private sector.
Hammond’s digital services tax – targeting the big web giants and their creative tax accounting – has unsurprisingly been attacked by the tech industry, but putting aside the rights or wrongs of the policy, it’s another reflection of the growing importance and influence of the IT world on government decision-making.
As with so much coming from the UK government, however, the positive Budget announcements exist in a parallel world to the uncertainties and concerns over Brexit.
A few days earlier, digital minister Margot James told Parliament that she still cannot guarantee a data-sharing agreement with the European Union in the event of a no-deal Brexit. As Labour’s Liam Byrne, who was questioning James at a session of the European Select Committee, said: “Without data sharing our exports will grind to a halt”.
That same week, a National Audit Office report on the UK border’s preparedness for leaving the EU found that 11 of the 12 “critical systems” at the border are at risk of “not being delivered on time and to acceptable quality”.
Not quite Hammond’s previous glib and spectacularly uninformed observation when asked about a possible digital solution to the Irish border issue, where he proclaimed: “I don’t claim to be an expert on it but the most obvious technology is blockchain.”
Congratulations to whichever tech lobbyist persuaded whichever civil servant to tip the Chancellor on that fantastical idea.
The government’s new-found love for the tech sector is welcome, but its desire to make technology a panacea for all the ills of Brexit needs quickly to be tempered.
For technology leaders, the dilemma continues – eager to take advantage of Westminster’s tech-friendly approach, but fearful that a bad Brexit of whatever form might rapidly unravel the advances made in recent years.
We look forward to a future Budget where support for tech is not just welcome, but unequivocal and full of certainty.
A large and toxic cloud has hung over NHS IT since the failure of the £12bn National Programme that saw billions wasted on systems that barely worked. Since then, we’ve seen the collapse of Care.data, the botched attempt to share patient records through a central database, and a plan for a “paperless NHS” that first aimed to deliver by 2018, was put back to 2020, and is unlikely to be achieved before 2023.
The opportunity for technology to reform and improve the UK’s health and social care system is obvious to anyone who’s ever used a smartphone. There are undoubtedly pockets of excellence in the NHS, but the gap between the best and the worst is enormous. IT leaders have never managed to overcome the argument that says: do you want to spend more money on doctors and nurses, or on computers? In the austerity-hit NHS, there’s only ever one answer.
Even in the better NHS trusts, there’s a hugely complex legacy to unravel. At Leeds Teaching Hospitals – a great example of a forward-thinking health organisation – there are 460 different IT systems in use. Multiply that across the whole health system and the transformation to digital becomes an ever-bigger challenge.
One day, hopefully sooner rather than later, somebody has to get NHS technology right. Enter Matt Hancock.
The new secretary of state for health and social care comes with technology squarely in his comfort zone. Through ministerial appointments at the Cabinet Office and the Department for Digital, Culture, Media and Sport (DCMS), it’s been the strongest thread in his political career. There was disappointment in the tech sector when Hancock was promoted from DCMS to health because he was seen as a passionate advocate for digital at the highest levels of government, and he understood the issues better than any of his predecessors (although admittedly, that’s not always been an especially high bar).
He’s wasted no time in putting technology overhaul at the heart of his plans to reform the NHS and social care systems. In July, he promised to make £487m available for NHS technology projects and to replace paper-based systems. In September, he announced a £200m fund for digital centres of excellence and plans to pilot the NHS app across England.
His Labour shadow, Jonathan Ashworth, observed: “This isn’t a serious plan for technology and innovation in the NHS – it’s a pipe dream”.
Now Hancock has launched his “technology vision” for the NHS – a digital future based on open standards, interoperability and APIs, but which retains local autonomy of IT decision-making. It’s a perfectly sensible, ambitious plan, which appears to have learned the lessons of the National Programme. But to paraphrase an old saying, if that’s where you want to go, you wouldn’t start from here.
To be fair, Hancock told Computer Weekly that he doesn’t underestimate the challenge – but he’s looking for a real change in attitude and approach to technology at a local level. His predecessor, Jeremy Hunt, became the longest-serving health secretary in history – not far short of six years. Hancock may need to be in place even longer to see through the digital transformation he wants – but this time, the NHS needs finally to get IT right.
Depending on your perspective, Gov.uk Verify is now either secure in its future at the heart of the UK’s emerging digital identity ecosystem, or it has one foot in the grave and is on the way to its inevitable demise.
The Cabinet Office has produced a carefully worded announcement that leaves room for interpretation, while also giving the Government Digital Service (GDS) a way to save face. You can read the full announcement to Parliament here, but the essence is this:
In 18 months’ time, after a “capped expenditure” approved by the Treasury has been spent, government will cease further public investment in Verify – but Verify itself will not cease. Instead, five identity providers (IDPs) will use the technology they developed as part of the Verify programme – and, most importantly, the Verify users they each registered – to offer a private sector digital identity solution, based on “state-backed assurance and standards”.
Whether those providers choose to still call their products “Verify”, or to use some form of Verify branding, will be up to them. As part of the new contracts those companies have signed with the Cabinet Office, they will have permission to reuse their Verify technology without requiring government approval, which they would have had to seek under their previous deals.
What’s important, as far as government is concerned, is that the (so far) 2.9 million citizens signed up to Verify will still have a digital identity they can use that allows them to maintain access to the online public services for which they set up that identity. Those citizens should, in theory, also be able to use that identity in future to access any private sector services that accept the same standards.
GDS will still support whatever in-house Verify technology – and whatever staff – are needed to support users accessing digital public services. But by the time the IDPs take over, this will be done on a “cost neutral” basis to government – in other words, the IDPs will have to pay for it.
It’s not clear – and not yet decided – what services GDS will need to offer those IDPs. It could be none (although that’s unlikely on day one) – or it could slowly decline to none. It remains to be seen how the IDPs will be able to offer a service with only a 47% verification rate as a commercial product – and if they get that rate up to an acceptable figure, why didn’t they do so before?
GDS will continue to advise Whitehall departments on appropriate use of digital identity in their services – but any previous attempts to mandate the use of Verify are over. GDS will keep departments on the straight and narrow, but it’s up to the departments to decide which standards-based identity products they want to use. That includes using suppliers that have had no involvement in Verify.
Digital identity market
The five chosen IDPs will be responsible for helping to build the wider digital identity market in the UK. GDS can then say it has been instrumental in the establishment of a digital identity ecosystem that did not exist before. Others will decide if the £130m and more invested in Verify was justified to reach that end (and that’s not to mention the many millions more invested by HMRC, DWP, NHS and others in building their own digital identity systems because they couldn’t rely on Verify).
The 18 digital services that currently use Verify – along with three others in private beta – will continue to offer Verify to users for as long as they want to, but it’s undecided how new users will select which provider to use after the 18-month transition. Previously, users registering with Verify have been asked to select one of seven (now five) IDPs. It’s yet to be decided what will be offered to users at that point in future – for example, a selection of IDPs, or a preferred IDP per service, or simply a statement that they can use any IDP that conforms to the standards.
Think about how that’s going to come across to any citizens uncomfortable with technology at the best of times. But perhaps there’s a solution yet to be determined.
Two of the existing IDPs, Royal Mail and Citizen Safe, have dropped out. At the start of 2017, those two accounted for approximately 3% of all Verify users, so in the grand scheme of things they’re not much of a loss. But with 2.9 million users, that still equates to 87,000 citizens. Royal Mail and Citizen Safe will continue to support those people for the next 12 months, but after that they will have to re-register with another IDP to continue to access government services – there will be a communications plan to explain the change and help those affected.
Verify faces its biggest challenge yet in 2019 – perhaps the reason why the 18-month transition period has that duration. By the end of this year, the digital version of Universal Credit (UC) will be rolled out to all Jobcentres, and next year millions of existing benefits claimants will be told they have to apply for UC. As part of that process, they will have to use Verify.
DWP already knows Verify can’t cope on its own, and has had to develop its own system to work alongside. Verify has consistently struggled to successfully register even half of the citizens who attempt to use it. Early tests on UC suggested that only 35% were able to set up a Verify account online. UC could potentially more than double the number of Verify users – the system has never been asked to work at such scale, and especially not for a service under the intense political and public scrutiny of Universal Credit.
The five IDPs have already been involved with the UC programme, but will be working more closely with DWP over the next 18 months.
Potentially the biggest winners here will be the identity companies that have been excluded from Verify in the past, and whose business growth has been stifled as a result. As long as they conform to standards, the public sector market will finally open up to them.
Those standards will be set, not by GDS, but by the Department for Digital, Culture, Media and Sport (DCMS), which took over policy responsibility earlier this year. DCMS has little interest in supporting Verify, and privately sees its standards-based approach as heralding the end of Verify. Like many other departments, DCMS has simply lost confidence in what was meant to be the government’s flagship digital identity product.
The lessons of Verify
Over the last two or three years, as its critics increasingly claimed that Verify was travelling down a dead-end street, GDS has retreated into secrecy and silence over its plans. It took the government’s Infrastructure and Projects Authority to recommend the termination of Verify to reach this point.
Verify was always an ambitious and important programme – digital identity is just hard to do – and GDS deserves credit for taking it on.
But somewhere along the line, it got lost. Remember that as recently as February 2017, the Cabinet Office set a target of 25 million Verify users by 2020. Only last month, minister Oliver Dowden reiterated that goal. It’s unlikely there will even be 25 million citizens with any form of standards-based digital identity in the UK by that time.
GDS will tell us – and it may be correct – that the time, resources and money invested in Verify have been worth it to help establish a UK digital identity ecosystem. The difficulty is, we just don’t know if that’s true.
GDS has learned a lot – about what works and what doesn’t work – on digital identity through Verify. If it really wants to be viewed as the prime instigator of a market that will be critical to the success of the UK digital economy, it needs to now be fully open and transparent about its Verify journey. There is surely much that the whole ecosystem can learn too.
The Department for Digital, Culture, Media and Sport (DCMS) has been conducting a review of digital identity since taking over policy responsibility from the Government Digital Service (GDS) in June.
Computer Weekly has learned that at the core of the DCMS proposals to boost the UK’s digital identity ecosystem is a plan to open up government databases via APIs to the private sector – a move that could also administer the last rites to GDS’s troubled Gov.uk Verify system.
Under the proposals, databases containing vital identity information such as passports and driving licences could be accessed through APIs by identity providers. Any company seeking to offer digital IDs for online transactions would, in theory, be able to quickly and cheaply validate data against recognised government information – the closest thing the UK has to a “gold standard” for identity data.
Such a system would not mean third parties accessing data directly, only checking that ID data provided by an individual to that third party is correct.
The concept is a reversal of the principles underlying Verify, where only a small set of government-selected companies are allowed access to these databases through a GDS-developed document checking service that performs a similar function.
Where Verify is a closed shop, the API approach would allow any suitable provider – including other parts of government – to offer assured digital identities, creating a wider, market-based ecosystem.
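To make the match/no-match model concrete, here is a minimal sketch of the kind of check such an API might perform. Everything below – the record store, document numbers, attribute names and function – is invented for illustration and does not represent any real government API:

```python
# Illustrative sketch of a match/no-match attribute check: the verifier
# never returns the stored record, only whether the supplied details
# agree with it. All names and data here are hypothetical.

GOVERNMENT_RECORDS = {
    # keyed by document number; values are the attributes held on file
    "PASSPORT-123456789": {"surname": "SMITH", "date_of_birth": "1980-01-31"},
}

def check_identity_claim(document_number: str, claimed: dict) -> bool:
    """Return True only if every claimed attribute matches the record.

    A missing document and a mismatched attribute produce the same
    False response, so the caller learns nothing about what is
    actually held on file.
    """
    record = GOVERNMENT_RECORDS.get(document_number)
    if record is None:
        return False
    return all(record.get(k) == v for k, v in claimed.items())

# An identity provider validating details a user has supplied:
print(check_identity_claim("PASSPORT-123456789",
                           {"surname": "SMITH",
                            "date_of_birth": "1980-01-31"}))  # True
print(check_identity_claim("PASSPORT-123456789",
                           {"surname": "JONES"}))             # False
```

The key design point is that the response is a bare yes/no: the third party supplies the data and the government side only confirms or denies it, which is what distinguishes this from direct database access.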
DCMS is understood to believe its plan would be significantly cheaper to run than Verify – potentially costing a fraction of a penny per transaction. Using Verify, by contrast, GDS pays its pool of identity providers on average about £5 for each user they successfully register.
Verify is designed around a “hub” where users are directed to one of seven identity providers (IDPs) when they wish to establish a digital identity to access one of the 18 online government services that currently use Verify.
Under the DCMS plan, theoretically any digital government service could choose to accept approved identities from any third party that has used the database APIs. The department’s review is understood to be based on the principle that government should enable a digital identity market using public data, rather than building its own system.
The future of Verify is already in question after government watchdog the Infrastructure and Projects Authority recommended it be scrapped, which would mean writing off more than £130m spent so far by GDS rather than throwing more money at a programme that many in Whitehall see as a failure.
GDS is fighting to keep Verify going – only this month Cabinet Office minister for implementation Oliver Dowden confirmed the government is still committed to its target of 25 million Verify users by 2020. Whitehall internal politics may yet find a way to rebrand the DCMS plan as “Verify mark two” or something similar, in order to be seen to deliver on a promise that was part of the Conservative Party election manifesto in 2017.
GDS’s existing contracts with the Verify IDPs are understood to be ending soon, and if the DCMS proposal is accepted it seems unlikely those IDP contracts would need to be renewed other than to manage existing users as the service they provide is wound down.
Private sector identity providers have long been frustrated at the way the Verify model has shut them out of government, and will hope that the DCMS plans will kick-start the development of a growing market in an area that’s hugely important for the UK’s digital economy.
Other areas of the public sector could benefit from the API approach too, with HM Revenue & Customs, Department for Work & Pensions, NHS England and the Scottish government all working on their own digital identity systems rather than using Verify.
Long-term GDS watchers will recall that the organisation was set up following a 2010 recommendation by web entrepreneur Martha Lane Fox in a report commissioned by then Cabinet Office minister Francis Maude. One of the main suggestions put forward by Lane Fox was to “mandate the creation of application programming interfaces (APIs) to allow third parties to present content and transactions on behalf of the government. Shift from ‘public services all in one place’ (closed & unfocused) to ‘government services wherever you are’ (open & distributed)”.
There would be a certain irony if the eventual use of APIs through another department brought about the end for GDS’s flagship project.
The digital revolution is challenging the regulatory environment across every westernised, developed economy. Governments in the EU, UK, France, Germany and the US are each trying to take a lead in working out how to deal with the new challenges presented by internet companies such as Facebook and Google. There are debates taking place around data protection, privacy, responsibility for content, copyright and algorithms – and other issues that have barely been considered even now will arise, not least around artificial intelligence (AI).
It has long been the case that regulation lags well behind technology, and as a result regulators tend to try to shoehorn new digital developments into existing structures. A prime example comes with social media, especially after the fallout from Facebook and Cambridge Analytica over use of customer data.
Internet platforms have always been regulated as if they were telecoms companies – US President Bill Clinton established in 1997 that web companies were classified as “mere conduits”. This means that, like a phone company or broadband provider, such firms were not considered legally responsible for content shared on their platforms when it was created by their customers/users.
More recently, issues such as extremist content and child abuse images have challenged that doctrine, with many observers calling for web platforms to be treated like media companies, which are legally responsible for all content published on their sites.
Neither scenario works, and it’s clear that a new style of regulation is needed for a different type of company. Can regulators ever keep up with the pace of technological change? Probably not – but they can be better prepared for it.
The next great challenge for regulators will be AI – how, for example, should we oversee the algorithms that will increasingly make decisions that affect our lives? How do we ensure those algorithms are fair and unbiased? What about the development teams that create them – should they be sufficiently diverse to make sure everyone in society is considered? And what about machine learning, where algorithms evolve without human intervention as the AI system “learns”?
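To make one of those regulatory questions concrete, one simple test an auditor might apply is demographic parity – comparing an algorithm’s approval rates across groups. The groups, outcomes and flagging threshold below are invented for illustration; real fairness audits use a much richer set of metrics:

```python
# A minimal, illustrative fairness check: demographic parity compares
# an algorithm's approval rates across groups. All data here is made up.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# 1 = approved, 0 = declined, recorded per (hypothetical) group
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(outcomes)
print(f"parity gap: {gap:.3f}")  # 0.375 for this invented data
```

Even a crude check like this illustrates why regulating on principles rather than specifics is plausible: the metric applies to any decision-making algorithm, regardless of the underlying technology.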
Nigel Shadbolt, one of the UK’s leading academics in AI and open data, told Computer Weekly that if the UK wants to take a lead in AI, then an area for focus is ethics. Realistically, the UK can’t compete with the multibillions that China is throwing at the sector – but China’s social and political culture is unlikely to take the same approach to regulation and ethics as we would.
It’s an easy thing to say, much harder to do – but the UK has a unique opportunity to lead the world in ethical regulation of the digital revolution. Don’t regulate on specifics – regulate on values and principles that can underpin technology development for years, maybe even decades to come.
The UK government is already setting up a Centre for Data Ethics and Innovation, and Theresa May has called for the UK to be a world leader in ethical AI. We have a genuine opportunity to set the standards that the world will follow. In such uncertain times for the UK tech sector, ethics is one area where we can and must take the lead.
The government’s flagship digital identity system Gov.uk Verify has become a car crash in motion, accelerating towards its demise.
The Cabinet Office’s project watchdog has condemned Verify by recommending it be scrapped because the departments that would have to fund its continuation have lost confidence in its ability to deliver.
But the Government Digital Service (GDS) is hoping it can resist the inevitable and is making one more attempt to gain funding to keep Verify going. Realistically, the best outcome GDS can hope for is a managed decline to allow those few services that adopted Verify time to find an alternative.
GDS is pinning its hopes on Universal Credit, which has Verify as part of its user registration process. Given that the Department for Work and Pensions had so little confidence in Verify as long ago as 2015 that it decided to develop its own digital ID system to fill in the gaps, this seems an unlikely hill on which to fight Verify’s last battle.
The cost will be significant – GDS has spent at least £130m on Verify so far, which doesn’t include how much other departments have spent integrating Verify or – in too many cases – developing an alternative.
Verify promised so much, but GDS leadership failed to listen to the counsel of those who have said for some time it needed to change direction. A recent attempt to “reset” the project was too little and too late.
Ed Tucker, the former head of IT security at HM Revenue & Customs (HMRC), put it well on Twitter: “There has been some simply awesome work done by GDS. Truly awesome. Changing the face of delivery. This though, is classic horrific government IT programme. Flogged to death at great expense to save face.”
The implications of scrapping Verify are significant. Private sector identity companies have invested heavily in developing software to be compatible with the GDS system. The UK has pinned its plans for EU-wide digital identity interoperability on Verify.
But GDS has already lost control of Verify’s role in the wider UK identity ecosystem – that responsibility moved to the Department for Culture, Media and Sport (DCMS) in June. GDS is contributing to a review of digital identity policy, but that’s now being led by DCMS. It seems unlikely that DCMS will be a lone voice in support of Verify when so many other departments have lost confidence.
Verify’s original goal was to replace the ageing Government Gateway system now owned by HMRC. But after seven years, Verify has less functionality than the clunky old Gateway – a system developed in just 90 days that is still in widespread use 17 years after its launch.
What next for GDS?
What does this mean for GDS? There’s no denying it is a huge blow – Verify is one of the core systems for GDS, taking up a significant chunk of the funding allocated to GDS in the 2015 spending review.
This will be the second major failure for GDS – the first coming back in March 2015 when a GDS-led project at the Rural Payments Agency (RPA) collapsed. To GDS’s credit, two major failures in seven years is a huge improvement on what came before. But RPA dented confidence in GDS’s ability to deliver on large-scale systems integration projects, and gave ammunition to the Whitehall mandarins who never liked GDS in the first place.
Scrapping Verify could have even wider repercussions – it was included in the Conservative manifesto for the general election in 2017, and ministers don’t like seeing their high-profile policies shot down from within.
With the Cabinet Office’s own major projects authority recommending the termination of the biggest project in another Cabinet Office organisation, you have to think this implies a loss of confidence in GDS from the highest levels of the Cabinet Office too.
The recommendation from the Infrastructure & Projects Authority seems to have come as a shock to GDS. When Computer Weekly requested a statement about our story and asked specific questions about the future of Verify, GDS and the Cabinet Office declined to comment. That in itself is an indictment of any organisation under scrutiny over the potential waste of £130m of taxpayers’ money.
At this point it’s difficult not to raise the issue of leadership. There was a brief rumour recently that GDS director general Kevin Cunnington was leaving – he’s not – but sources say there were a lot of happy people in GDS when the story first circulated.
In December last year, a GDS staff survey leaked to Computer Weekly revealed major concerns about leadership. Only 28% of GDS employees gave a positive response – down five percentage points on the previous year; 16 percentage points below the average across the whole Cabinet Office; and 26 percentage points down on the top teams across all of the civil service departments that took part in the survey.
Talk privately to senior people in digital government in Whitehall and leadership is the biggest concern they raise over GDS’s future. The Institute for Government has also been critical of Whitehall leadership around digital, data and technology.
Cunnington chose – and was backed – to change GDS’s role to be more of a support, education and consultancy operation, rather than the disruptive transformational approach of his predecessors. He has been largely successful in achieving that change – but at the price of diminishing GDS’s influence across Whitehall, according to multiple sources. GDS also lost control of data policy to DCMS earlier this year.
Note that when we talk generally about “leadership” here, it’s not specifically a reference to Cunnington – many experts are critical of his boss, permanent secretary John Manzoni, and some GDS staff have in the past expressed concerns about others in the GDS senior team.
GDS is taking an important role in supporting the technology implications of Brexit, and while EU exit remains the biggest priority for this government it’s unlikely anyone will want to get in the way. But the next spending review is coming up. GDS has 860 staff and an annual budget of £128m – it seems incredibly unlikely the Treasury would approve similar resources if £130m is written off from the failure of its highest-profile flagship project.
There is an impending storm heading towards corporate IT, and the outlook doesn’t look sunny for cash-strapped, time-constrained IT departments. Just like in 1999 with Y2K, there is an absolute deadline – support for SAP ECC 6, the core component of SAP’s enterprise software platform, will end in 2025.
Then there is the upgrade. As Computer Weekly has found, moving to the next version of SAP, S/4 Hana, is not a simple upgrade. Yes, the technical stuff to get S/4 Hana running can be achieved in three months or so. But this is only viable in a greenfield installation. In the real world, businesses have accumulated vast amounts of enterprise software over time, intricately linked to enable business data to flow across these disparate systems in as seamless a fashion as possible. The people who originally implemented these highly complicated integrated systems are either well on their way to retirement, if they are in-house, or long gone, if the project was handled by a system integrator. And new IT staff are not exactly banging on the door with SAP skills.
In fact, it is widely recognised that there is an S/4 Hana skills shortage and the younger generation of IT professionals simply do not find SAP sexy enough to invest the time and effort to learn it. In many ways, the SAP systems currently running businesses have become a kind of technical debt. The data that flows through these companies has become essential to keep the business running, yet few organisations have a sound understanding of these workflows – of how data flows between enterprise applications to support a business process.
Experts have forecast that most of the time in implementing S/4 Hana will be taken up by understanding and cataloguing these data flows. Some companies will opt for third-party maintenance to extend the life of their existing SAP system, while others will replace SAP with something else. But as businesses start building a digital core, irrespective of whether they deploy S/4 Hana or implement something entirely new, the preparatory work in understanding data flows will be an essential step to take.
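A minimal sketch of what that cataloguing work might involve is to record each inter-system flow as an edge in a directed graph, so a migration team can see everything downstream of the ERP core before touching it. The system names and flows below are invented for illustration:

```python
# Sketch of cataloguing inter-system data flows as a directed graph.
# All system names and flows here are hypothetical examples.

from collections import defaultdict

# Each entry: (source system, destination system, data exchanged)
flows = [
    ("SAP ECC", "Warehouse Mgmt", "stock levels"),
    ("SAP ECC", "Payroll", "employee master data"),
    ("CRM", "SAP ECC", "sales orders"),
    ("SAP ECC", "BI Warehouse", "financial postings"),
]

def downstream_of(system, flows):
    """All systems that receive data, directly or indirectly, from `system`."""
    edges = defaultdict(set)
    for src, dst, _ in flows:
        edges[src].add(dst)
    seen, stack = set(), [system]
    while stack:
        for nxt in edges[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(downstream_of("SAP ECC", flows)))
# ['BI Warehouse', 'Payroll', 'Warehouse Mgmt']
```

Once the flows are captured in a structure like this, questions such as “what breaks if we replace the ERP core?” become simple graph queries rather than institutional memory.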
Amid the global debate about trade tariffs, prompted by US president Donald Trump’s unorthodox economic policies, it’s little remarked that the digital economy is largely untouched by the international tariff regime.
In 1998 – the early days of the internet’s spread into our everyday lives – the World Trade Organisation (WTO) agreed that digital products would remain tariff-free. This is why we can download an e-book from Amazon anywhere in the world, even if the product sits in a US datacentre, without any punitive transfer charges being levied.
Twenty years later, the world is very different and much more interconnected. We’re on the cusp of a revolution in manufacturing as 3D printing becomes mainstream. But think of the implications for global trade.
Tariffs exist to protect local businesses. India, for example, will apply import duties to products from overseas to make locally manufactured items cheaper for consumers, thus supporting its own manufacturing base.
If, however, that product is instead a digital file that defines the item, which is downloaded from the internet onto a 3D printer in India, no tariffs would be applied. That’s an existential threat for many indigenous companies in any country that doesn’t have the extensive telecoms and digital infrastructure of more advanced economies.
And that’s why India and South Africa – supported by other developing countries – are calling for the WTO to review the status of digital goods. Their concerns are understandable – but inevitably the US, EU and China are in favour of the existing regime.
“At a time when we are already seeing challenges to the long-standing principle that lower tariffs are good for trade, competition and consumers, it would be deeply concerning if the international consensus on tariff-less digital trade were to come to an end,” says UK IT trade body TechUK.
Think also of the implications for the UK after Brexit. We’re one of the most digitally enabled economies in the world – the biggest online shoppers in Europe. The government has rightly identified the digital economy as an area where it hopes to expand its global influence outside the EU.
Imagine if many of the countries with which we hope to sign “ambitious trade deals” were to impose tariffs on our digital goods and services – the UK would be forced to reciprocate, potentially affecting many of the e-commerce services we use.
What if Donald Trump, in retaliation, imposed digital tariffs to protect Silicon Valley? The rest of the world would have to do the same for Netflix. It sounds like an absurd proposition, but is anything really that unlikely in a post-Brexit, Trumpian trade environment?
India and South Africa’s proposal will most likely be voted down by developed countries, but it’s an indication that the digital economy could become a future battleground in international trade relations. And for the UK, hoping to sell its digital expertise to the rest of the world, that would be a real concern.
Slowly, slowly, the UK government is getting better at big IT projects. Thanks to the Whitehall watchdog, the Infrastructure and Projects Authority (IPA), we’re able to track the improvements in major programmes, and its latest annual report shows the incidence of problem projects is declining.
None of the 29 government IT projects scrutinised by the IPA this year were given the worst “red” ranking, and only seven were rated as “amber/red”. Back in 2015 there were four red projects, and 16 were amber/red.
Congratulations, then, are due.
Thanks to the IPA, many of the recurrent and often basic problems are now spotted earlier and addressed. Put simply, it’s not so easy to hide when things are going wrong. But don’t think for a minute this means we won’t see further government IT failures.
Reading through the detailed notes published by the IPA to accompany its report, you can still see common issues that have yet to be cracked.
The difficulty in recruiting enough people with the necessary digital skills continues to be a challenge. Under-performing suppliers delay work. Over-ambitious, often politically driven targets mean timescales and business cases need to be redrawn. Sometimes things just take more time than people expect. Sometimes things just go wrong.
And because big public sector projects tend to be, well, very big – when they do go wrong, they go spectacularly wrong. The drive for smaller, more agile IT projects is also helping to mitigate the risk of big failures.
Project managers in the private sector would recognise most of these symptoms, but they don’t generally face the public scrutiny of their government counterparts.
Reading the IPA report offers a sense that, given a following wind, the right resources and political support, the number of problem projects will continue to decline. But it also suggests that this progress remains vulnerable to unexpected shocks to the system.
“Ensuring the UK is ready to exit the European Union has resulted in a significant increase in the number of projects and programmes that need to be delivered across government. While many of these projects are not of the same scale or duration as [major] projects, EU exit related projects are, by their very nature, high priority and need to be delivered at pace and with confidence,” says the IPA report.
“As such, EU exit has required an increase in government resources and, specifically for project delivery skills in government, a need to redeploy professionals and prioritise activity across departmental portfolios.”
Those carefully worded paragraphs are a stark warning that the practicalities of whatever form of Brexit we end up with threaten to overwhelm government resources and redirect other priorities. As stated previously in these pages, Cabinet ministers hoping that technology will solve the more intractable difficulties of leaving the EU are kidding themselves – and us.
Whitehall’s project management community deserve recognition for important steps forward in recent years. They will be aware that Brexit could derail that progress very quickly.
MPs on the Treasury select committee have been doing everyone in IT a favour lately. Thanks to pressure from their investigations, we’ve had near-unprecedented access to the real stories of what caused the Visa and TSB outages that affected millions of people recently.
Visa provided a detailed, 11-page description of the technical problems that caused its card payment network to fail, while publication of an initial IBM report into TSB highlighted the glaring lack of preparedness at the bank when its IT migration went wrong. Sadly, TSB’s response so far has been to play down the findings and say the report is out of date, rather than follow Visa’s lead and offer a full response.
It is absolutely right that companies whose IT is relied upon by millions of people should face in-depth scrutiny when that IT goes wrong. Such openness and detailed analysis benefits everyone working in IT – it helps to share lessons about the increasingly complex systems that run business and government. The more information is shared, the better everyone becomes at avoiding future problems.
The cyber security sector already understands this, with information sharing networks in place for organisations hit by attacks or data breaches, helping others to avoid a similar fate – not that everybody necessarily always takes heed.
As we’ve seen with the public scrutiny of Facebook, as technology increasingly becomes a utility in our lives, outdated attitudes towards secrecy and saving face harm not only customers but the companies themselves.
Look at shipping company Maersk, among the worst hit by last year’s NotPetya cyber attack, which the firm revealed cost as much as $300m to deal with. As Maersk coped with the consequences of the malware, it put out a regular stream of updates, keeping customers and stakeholders informed about what was happening. The company was rightly applauded for its approach, which helped to mitigate criticism for having to shut down many of its IT systems.
Public scrutiny should be part of every business continuity and disaster recovery plan, helping to rebuild confidence when IT fails. IT leaders should take the initiative and prepare their firms for greater openness and work with their peers to share such valuable learning points – anyone who has been through a major outage will understand why they don’t want to have such an experience again.
Nobody likes to admit to failure, but in a digital world where “fail fast” has become a mantra, and where acknowledging failure is often seen as an essential part of being successful, detailed scrutiny when technology goes wrong is very much for the greater good.