If you happen to be an ardent Brexiteer, a true believer in the UK government’s desire for “innovative solutions” to avoid a hard border with the Republic of Ireland, we have good news for you. The technology exists to do the job and save the day.
For a start, there’s what your friends in IT call the “internet of things”, which means sensors on containers transporting goods across the Irish land border; sensors on the delivery vehicles; sensors tracking temperature, humidity, and other essential quality indicators; sensors on the goods themselves; sensors on animals – maybe even sensors on people (which of course they already have in their smartphones).
A well-designed customs and border software system would sit behind all this data, offering simple transactions for companies and individuals that need to cross the border, providing the basis of an infrastructure to deal with the majority of the challenges that a hard border presents. You might even be able to develop an app. Maybe you can use blockchain, and some “artificial intelligence”. You would probably still need some manual elements in the process, but largely, the problem is solvable. (Let’s not worry for now about the dozens of existing government IT systems you would need to interface to or integrate with – you know, details).
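As a toy illustration of the kind of automated pre-clearance such a system might offer – every field name and the clearance rule below are hypothetical, a sketch of the idea rather than any real customs process:

```python
# Toy sketch of a sensor-driven border record and an automated
# pre-clearance decision (all names and rules here are hypothetical).

from dataclasses import dataclass

@dataclass
class Consignment:
    container_id: str
    trusted_trader: bool   # shipper pre-registered with customs
    temperature_ok: bool   # reported by an in-container sensor
    declared_goods: str

def pre_clear(c: Consignment) -> str:
    """Clear routine traffic automatically; anything unusual goes to a human."""
    if c.trusted_trader and c.temperature_ok:
        return "cleared"
    return "manual check"

print(pre_clear(Consignment("XYZ123", True, True, "dairy")))   # cleared
print(pre_clear(Consignment("XYZ124", False, True, "dairy")))  # manual check
```

The point of the sketch is how small the happy path is – and how everything outside it falls back to exactly the manual intervention a frictionless border is supposed to avoid.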
Admittedly, this is a very high-level description, but the oft-touted (mostly) frictionless border is a technological possibility – of that there is no doubt.
If you are a Remainer, or at least a not-so-hard Brexiteer searching for common sense among the casual frippery promoted by the likes of foreign secretary Boris Johnson, here’s your counter argument to the proposal above.
It won’t work. But you know that already.
Why not? Well, it will probably be able to work one day, but even with the best will in the world – and the best technologists, and a very large amount of money – it’s not going to happen anytime soon.
Technology can do some amazing things. But if government has learned anything from its various IT woes over the years, it is that doing something new, untried and untested, at large scale – and especially in a two-to-three-year timeframe (assuming the UK and EU agree a transition deal to the end of 2020) – is not going to happen.
Frankly, if they agree a transition deal to 2025, it won’t happen either. 2030? Maybe – probably not though.
Remember the NHS National Programme for IT, which started out as a three-year project? How about Universal Credit, due to be fully implemented by 2016? What about e-Borders, kicked off in 2003 and still not implemented? Let’s not worry for now about the billions of pounds wasted on just those three.
The government will, at some point, have to face the fact that technology is not going to save them. If anything, IT is more likely to sink them.
We already know that HM Revenue & Customs alone needs to update 24 IT systems to be ready for Brexit – and doesn’t yet know what it needs to be ready for. We also know that the average duration of a modern, small-scale, well-defined digital project in Whitehall is two years – excluding large-scale transformation programmes, of which Brexit will require many. Oh, and government is already short of about 4,000 digital workers before even considering Brexit.
Should we also point out that the sort of tech implementation described above has never been done, at such scale, anywhere in the world? Presumably a post-Brexit Britain will develop that ingenuity and capability as soon as we leave the EU. Praise be.
Brexiteers – thanks for your faith in what technology can do for you. Just don’t think for a minute it will be able to do what you want it to do anytime soon.
Computer Weekly writes a lot about cyber security – it’s easily the most popular topic among our readers, and always rates as one of the top priorities in our annual survey of IT professionals. None of that will come as a surprise.
But some weeks certain stories come together to bring an insight into the challenges facing organisations in this area.
A global survey published this week showed that two-thirds of the 1,300 senior executives interviewed ranked cyber security among their organisations’ top five risk management priorities – approximately double the response to a similar question in 2016.
However, only 19% of respondents said they are highly confident in their organisation’s ability to mitigate and respond to a cyber event, while only 30% said they have developed a plan to respond to attacks.
Consider those findings against a separate survey, which showed that the cost of cyber crime is now up to 0.8% of the global economy, or $600bn a year – up from 0.7% in 2014, an average rise of 11.3% a year. Note also that Europe suffers the highest economic impact of cyber crime, estimated at 0.84% of regional GDP, compared with 0.78% in North America.
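A quick sanity check shows those percentages hang together – this is only a sketch of the implied arithmetic (the survey’s own base-year dollar figure isn’t quoted above, so the 2014 value is inferred, and the three-year compounding window is an assumption):

```python
# Rough consistency check on the survey figures quoted above.
# Assumptions: ~3 years of compounding from 2014, and that $600bn
# corresponds to 0.8% of the global economy.

cost_now = 600e9     # $600bn a year, per the survey
growth = 0.113       # average rise of 11.3% a year
years = 3            # roughly 2014 -> 2017

implied_2014_bn = cost_now / (1 + growth) ** years / 1e9
implied_economy_tn = cost_now / 0.008 / 1e12  # 0.8% of global economy

print(round(implied_2014_bn))     # ~435 ($bn implied for 2014)
print(round(implied_economy_tn))  # 75 (implied global economy, $tn)
```

An implied 2014 cost of roughly $435bn sits plausibly at about 0.7% of a slightly smaller global economy, so the quoted figures are at least internally consistent.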
Business executives clearly recognise the importance of cyber security – and are feeling it financially – but have neither the confidence nor the plans to tackle the problem.
Look also to the public sector, where a survey of 500 people in the UK found that 49% were “wary” of sharing their information on public sector websites.
Meanwhile, an investigation by privacy campaigners Big Brother Watch showed that local authorities are being hit by an average of 19.5 million cyber attacks a year. That equates to 37 attacks or attempted breaches every minute on councils that are accumulating growing amounts of sensitive and personal information about citizens.
The report revealed an “overwhelming failure” by councils to report losses and breaches of data, as well as shortcomings in staff training. Over the last five years, 114 councils suffered at least one breach and 25 had a loss of data, but more than half of those incidents went unreported.
Can you blame citizens for being worried about how their personal data is being handled?
Add one further survey to the mix – 70% of organisations said they need cyber security skills, but only 43% said they already had such skills in place.
It’s reassuring to see the growing awareness of IT security risks – among executives and the public. But the gap between understanding and capability is not shrinking in response – in many cases, it’s growing.
Organisations need to turn awareness into action, otherwise the cost of cyber attacks will become a recurring and accepted cost of simply doing business – and it’s the privacy of our personal data that will suffer as a result.
You wouldn’t exactly classify the leaders of the UK tech sector as a bunch of liberal lefties. For all the stereotypes of bearded Shoreditch sandal-wearers that surround much of the nascent startup scene, if you polled the average UK tech conference for delegates’ political views, you would find a sea of blue.
The tech sector likes to make money, and mostly, it’s pretty good at it.
So it tells you a lot when you see the increasingly difficult-to-disguise fury of the sector bubbling to the surface whenever the government talks about Brexit.
“We do not make the UK more attractive to the rest of the world by putting barriers in the way of trade with our biggest market,” said trade association TechUK after a speech by Boris Johnson this week where the foreign secretary attempted – mostly unsuccessfully – to convince frustrated Remainers of the opportunities of leaving the EU.
If you’ve been used to the sort of meek, carefully phrased statements that come from trade bodies over the years, you’ll recognise that line as the industry equivalent of thundering.
When secretary of state for digital, culture, media and sport Matt Hancock – previously digital minister – habitually praises the UK tech sector in speeches, you can’t help but notice he never mentions Brexit. He knows what his audience thinks and he’d clearly rather not go there.
Senior industry figures told a House of Lords committee earlier this month that the prospect of a no-deal Brexit would be unthinkable. There is real fear for the future among the companies that are likely to be the future for the UK economy.
They will still make money, of course. It’s not as if Brexit means the industry will dry up. But UK tech leaders will be operating with one arm tied behind their backs without regulatory alignment to the EU, without easy access to the skills of EU IT experts, without research partnerships across the continent, and without frictionless flows of data, IT services and products.
This is a sector that thinks constantly about the future, that designs our future, that understands the enormous potential for positive change it can bring about. And it’s genuinely fearful for the damage Brexit could cause, and the dithering and uncertainty brought about by the UK government’s hapless approach.
The Lords committee learned that government communication with the tech sector has declined since the start of the year – perhaps our political leaders don’t like what they’re hearing so don’t bother to listen any more. And that’s a huge concern for the UK’s digital economy.
The Department for Work and Pensions (DWP) has been consistently criticised for its lack of transparency since the controversial Universal Credit programme was launched nearly eight years ago.
The department has fought court battles against Freedom of Information campaigners to prevent the release of key documents. It has obfuscated repeatedly to journalists about problems with the IT. And until the House of Commons in December ordered DWP to release a series of project reviews to the Work and Pensions Committee, it had kept these critical progress reports from MPs too.
On the occasions that DWP is forced to reveal its hand on Universal Credit (UC), you can understand why it prefers such secrecy.
The release this week of the committee’s summary of UC project reviews since 2012 shows the scale of the challenge and the potential for calamity as the new digital version of UC is rolled out nationwide over the course of this year.
That’s even without considering concerns highlighted over assurance processes that have seen UC press on even when failing to meet agreed success criteria.
The summary confirmed Computer Weekly’s story that Gov.uk Verify, the identity assurance system developed by the Government Digital Service (GDS), lies at the heart of the risks facing the scaling up of UC to Jobcentres across the UK.
Much of the savings anticipated for the welfare reform programme were based on applications being submitted and processed automatically – with minimal manual intervention. The starting point for this automation is Verify – the way that claimants are expected to prove they are who they say they are online.
As we revealed, barely a third of claimants have successfully used Verify, and in the early implementations of UC this has meant significantly greater involvement from Jobcentre staff than planned – costing £963 per claim, compared to a target of £250.
DWP has even had to develop its own identity system to compensate for the weaknesses in Verify – making it the third ID system in use in Whitehall, alongside HM Revenue & Customs’ Government Gateway product – not to mention another being developed by the NHS. Oh, and then there’s yet another underway for the Scottish government.
The committee report highlights fears that the rapid scaling up of UC users due to take place this year will push Verify beyond its already limited capabilities, with potentially significant implications for DWP staff as well as benefit claimants.
Greater transparency throughout a programme with such widespread social consequences might have caused embarrassment for DWP, but more public scrutiny could have helped point this important and much-needed reform in a better direction.
DWP has listened and learned from much of the past criticism of UC, for which it deserves credit. But it is time to deliver greater openness as well – not only for the success of Universal Credit, but for wider government digital transformation initiatives too.
There’s a growing body of opinion that the government’s flagship digital identity system, Gov.uk Verify, has now become a major hindrance to the development of the UK’s digital identity infrastructure.
The concerns about Verify have increased to the point where the Government Digital Service (GDS) could be about to lose control of government policy for digital identity.
Computer Weekly understands there’s a battle going on between the Cabinet Office, where GDS sits, and the Department for Digital, Culture, Media and Sport (DCMS), responsible for policy around the digital economy – within which the issue of identity is central.
Senior Cabinet Office civil servants are reluctant to let it go – but industry disenchantment with Verify is growing and DCMS thinks it needs to address the situation.
That same debate between the two departments is also set to see policy for data shifted to DCMS – the Cabinet Office has yet to recruit a chief data officer despite the post being announced a year ago.
But it’s Verify – and its intended role as the core of a UK-wide digital identity infrastructure – where concerns are greatest. Digital identity is central to the UK’s online future – the use of standard electronic IDs for transacting online with government, banks, retailers and other e-commerce providers is expected to deliver a significant economic boost.
As Computer Weekly reported in December, there are already moves afoot to make “Gov.uk Verify” more of a brand and a set of standards that GDS will encourage the private sector to adopt, rather than a product.
This follows ongoing and extensive performance problems with Verify that even now sees barely half of all attempts to create a verified identity through the system being successful. Furthermore, GDS’s own research has shown that success rates for Universal Credit benefit claimants through the Department for Work and Pensions’ new digital service are even worse – only 30% of Universal Credit users have successfully created a Verify account.
But there’s another issue for companies wanting to be a part of a UK identity market. They feel they are being shut out by the commercial structure around Verify, with the existing independent identity providers (IDPs) recruited by GDS having a virtual monopoly.
Sources suggest that even senior figures in GDS acknowledge the current approach leans too far towards the existing IDPs.
The IDPs are protected by a contractual framework that means it will be about 18 months before new providers will have a chance to be approved for identity verification for government services.
The existing IDPs, which include Barclays, Experian and the Post Office, have exclusive access to the market for public sector users. They are protected by a period without competition that, in theory, was meant to allow them to be sure they would recoup their initial investments in developing Verify.
In reality, the roll-out of Verify has been so slow and with such poor performance that their expected financial returns are unlikely to have materialised.
Land Registry recently adopted Verify for its new digital mortgage service – an apparent boost for Verify. But the Department for Business, Energy and Industrial Strategy had to inform Parliament that being responsible for digital identity assurance meant Land Registry needed to take on at least £300,000 in additional contingent liability in case a Verify user proves not to be who they claim to be – in which case taxpayers would have to pay compensation.
Players in digital identity have been waiting for GDS to issue a commercial framework that explains how Verify can be used in the private sector – covering issues such as liability, and addressing questions such as who takes the blame if a Verify account created for government services proves to be fraudulent when used to transact with a bank, for example.
Sources suggest that the commercial framework is being blocked by the Treasury until a plausible model has been created – this may be a contributory factor in the Land Registry liability commitment.
DCMS wants to accelerate the creation of a digital identity market in the private sector, and sees taking control of policy away from GDS as the way to make that happen. Some people see Whitehall politics in play by the omission of any reference to Verify in the DCMS green paper on internet safety strategy published in October last year.
The increasingly common theme among a growing number of identity experts is that Verify needs a major overhaul. They think it is blocking the creation of a much-needed UK digital identity ecosystem – and GDS is seen by many as the logjam in the middle.
If DCMS does gain control of digital identity policy, it is unlikely to be as stubbornly committed to the future of Verify as GDS and the Cabinet Office continue to be, even in the face of such growing concerns.
Every year, Computer Weekly conducts a large-scale survey of our readers to find out their IT spending priorities for the year ahead – it’s always an interesting take on what’s happening in IT departments around the country because it’s coming from the leaders making those technology purchasing decisions.
The latest results are in, and the headline news is clear – IT organisations in the UK are accelerating the move to cloud. When asked about IT managers’ priorities in datacentre, storage and software plans for 2018, cloud came out top every time.
Our research also looks to Europe, and the picture is the same – in France, Germany and across the continent, cloud is the number one spending priority.
As public cloud grows in popularity, it’s also putting pressure on IT teams to justify their on-premises systems. Twice as many organisations listed server virtualisation as a priority compared with 2017 – and investment in basic infrastructure such as power, cooling and racks also came in as the top datacentre priority.
This suggests IT managers are having to overhaul their internal operations to prove they can deliver the same levels of efficiency, flexibility and cost-effectiveness as they’re seeing from their public cloud usage.
In business applications spending, the vast majority of IT managers buying or upgrading core software are implementing cloud-based versions. For example, 53% of readers surveyed are spending on customer relationship management (CRM) systems this year, while 45% of respondents are looking at software-as-a-service CRM – only small numbers are staying on-premises.
It’s no surprise, perhaps, that mobile device management is another top technology investment – along with mobile and cloud security. It’s further evidence that the future of corporate IT is mobile users connecting to cloud-based systems.
It’s also no surprise that compliance with the EU’s General Data Protection Regulation (GDPR) is driving a lot of IT spending this year, with the deadline fast approaching on 25 May. Data loss prevention has become the top security technology priority as a result – with end-user security training the main security initiative, as awareness grows that users are often a weak link in data protection.
Looking ahead, emerging technologies such as internet of things, artificial intelligence and blockchain were the fastest-growing technologies compared to 2017 – they’re still fairly low priority overall, but interest and investment is rising quickly as more IT teams explore what benefits such systems can bring.
2018 is set to be a big year for UK IT managers – only 10% of organisations are reducing their IT budget; 56% are increasing spending, with a quarter seeing at least a 10% hike. Our survey shows the UK technology scene is in rude health.
It’s nice that we have a prime minister who chooses to use one of her most high-profile speeches of the year to talk about technology.
It’s equally good that her chancellor similarly discussed the opportunities of IT at the World Economic Forum in Davos, warning of “a generation of 21st century Luddites” and asking, “Do we look inward or outward, look to the future or cleave to the past?”
We also saw Matt Hancock, previously digital minister and now secretary of state for digital, culture, media and sport, talking at a Davos debate on “reimagining policy-making for the fourth industrial revolution”. Marvellous.
These are all things that the tech community has wanted to see for years – at Computer Weekly we’ve called repeatedly for our most senior politicians to understand, address and plan for just these developments.
During her speech, Theresa May clearly suffered from a bad case of what we must call Hancockism – the ability to ignore an elephant so large that it’s not so much in the room, as it is the room.
In his time as digital minister, Hancock was renowned for his prodigious speech-giving, talking up the tech sector without ever mentioning the one thing everybody listening wanted him to talk about – what Brexit means (other than “Brexit”).
May’s outbreak of Hancockism was particularly acute, given that world leaders and business chiefs from around the world were hoping she would take the opportunity of such an influential forum to explain more about her vision for a post-Brexit Britain, or to offer some detail on what the words “deep” and “special” will mean in practice when it comes to our future relationship with the EU.
There are few people in the UK tech sector who would express disappointment at the welcome priority that May and her government are giving to technology. It feels almost churlish to have to add a Brexit caveat, but it’s such a huge caveat.
As Computer Weekly has reported, tech leaders are holding their breath waiting for guidance on what the future might look like, given the intimacy of the business and tech relationships we have across Europe.
It’s great that May sees technology as key to building the new, global Britain we are promised. Undoubtedly a connected, digital Britain offers a way to open up new opportunities. But it’s not going to magically happen.
The prime minister touted investments in artificial intelligence, including £45m for 200 new PhD places, yet in the same week Google announced its new AI research centre in France even though its DeepMind AI subsidiary is in London. Earlier this month China announced a $2.1bn investment in an AI tech park in Beijing. Let’s keep our investments in context.
If the UK wants to lead the world in AI, we need to maintain the “deep and special” research and development partnerships that we have across the EU – we simply won’t have the resources to compete otherwise.
So it’s a big step forward to see technology taking its place at the political and business top table. Thank you, prime minister. But if tech is the future, we need to know what that future looks like.
Here’s what Dennis Snower, president of the Global Economic Symposium, thought of Theresa May’s Davos speech:
Theresa May’s speech in Davos: Platitudes about new technologies, no ideas about how the gains from tech can be spread, & ridiculous boasts about Britain as AI capital of the world. Embarrassing. pic.twitter.com/ORnq7VrfJI
— Dennis Snower (@DJSnower) January 25, 2018
With Brexit on Britain’s doorstep, all that May can offer is the pious hope that Britain will do well in #AI. She could just as well have been optimistic about British cricket.
— Dennis Snower (@DJSnower) January 25, 2018
The demise of construction and services giant Carillion has brought fresh scrutiny of outsourcing in government. Frankly, you could find some event almost every year that brings fresh scrutiny of outsourcing in government – from the IT disasters documented by Computer Weekly over the years, to failures in prisons and even security for the 2012 Olympics.
Because of those historic IT failures, IT outsourcing has been examined, prodded and analysed many times, by MPs, the National Audit Office (NAO), and independent experts. This week, there have been plenty of tech commentators looking for “lessons from Carillion”, but the reality is that Carillion only demonstrates the issues that we’ve seen in government IT for a long time, and the solutions that have been proposed yet consistently ignored.
After 2010, when then Cabinet Office minister Francis Maude took aim at the oligopoly of big IT outsourcers, there was a lot of progress made in trying to apply some of those solutions. We’re nearly eight years into a process of disaggregating IT contracts to make them more manageable, and to reduce the sort of exposure to a single point of failure that’s affecting Carillion’s projects.
However, even after eight years, progress has still been patchy. Many of those mega-deals are still lumbering along, and because of the distraction of Brexit, many are being extended because it’s easier than trying to break them up.
We’ve seen a rise in the use of SME tech companies, through initiatives like G-Cloud, but according to the NAO, by 2016 94% of government IT spending still went to big suppliers – down just 1% since 2012-13.
Not much has really changed. We may not have so many huge deals with a single prime contractor, but the sub-contractors involved are still delivering the same services, and they are mostly the same old large system integrators. They still have as much of the pie, it’s just sliced a little differently.
In some parts of government, they are getting IT outsourcing right – using it to supplement and complement the skills and resources available in-house, rather than to replace them. But those examples are mostly sporadic.
Where IT outsourcing has the greatest parallel to Carillion is around transparency and accountability. There have been numerous calls for open-book accounting on contracts, for those contracts to be publicly available for independent scrutiny, and for suppliers of public services to be covered by Freedom of Information laws for their outsourcing deals. Despite government commitments to open contracting, none of this has happened.
When an IT contract goes wrong, it’s rare that the supplier gets punished. The same old faces always turn up the next time a deal is awarded. Blame gets apportioned, but nobody is held accountable.
Eight years after Maude started the process, the Cabinet Office is still telling us it will take four years to exit the remaining IT outsourcing deals, and warning of the complexity of doing so – advice that should have been given in 2010.
Let’s not forget that there are times when IT outsourcing works for government – but if there is one thing to be learned from Carillion, it’s that the knee-jerk response to outsource should come to an end. It’s one of the tools in the digital kitbag, but it should never again be the default option.
It has not been the happiest start to 2018 for the IT industry.
Security researchers from Google’s Project Zero published a detailed paper identifying a flaw in the design of every modern microprocessor that could be exploited to gain privileged access to a computer’s memory.
Not since Stuxnet in 2010 has the IT world been so disrupted. At that time, researchers showed how everything from lifts and building systems to electricity grids, banking networks and nuclear power stations could be directly compromised.
It has been known for two decades that electronic devices such as microprocessors have tell-tale signatures that can be exploited. Security researcher Paul Kocher published a paper in 1996 describing such a risk, known as a side-channel attack. He showed that the time it takes for a cryptographic operation to run can be used to reverse engineer secret keys, such as private RSA keys.
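The principle behind a timing side channel can be shown with a deliberately simplified sketch – this is not Kocher’s actual attack, just an illustration of how an early-exit comparison leaks information, because its running time grows with the length of the correctly guessed prefix:

```python
# Illustrative sketch of a timing side channel (not a real attack):
# a naive comparison that exits at the first mismatch does more work
# the more leading characters of the guess are correct, so an attacker
# measuring time can recover a secret one character at a time.

def naive_compare(secret: str, guess: str):
    """Return (match, steps) where steps counts character comparisons."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

SECRET = "hunt2"
_, fast = naive_compare(SECRET, "zzzzz")  # wrong first char: 1 comparison
_, slow = naive_compare(SECRET, "huntz")  # 4 correct chars: 5 comparisons
print(fast, slow)  # 1 5
```

Real side-channel attacks measure wall-clock time, cache behaviour or power draw rather than a step counter, but the leak is the same in kind: the work done depends on the secret.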
The security team that discovered Meltdown said they were able to leak secure information at a rate of 503Kbps with an error rate of just 0.02% – in other words, their proof-of-concept exploit of the flaw could get at information almost 100% of the time. Because it is a hardware exploit, it works on Windows, Linux and containerised environments such as Docker.
Luckily, Meltdown can be patched – but fully addressing Spectre will require a new generation of secure processors.
The patches issued across the industry are just that. They are patches; they do not fix the fundamental problem that the microprocessor is broken. The ingenious techniques applied by microprocessor designers to extract maximum performance from every processor invented since 1995 can be used to leak secure information. Everyone will need to upgrade, but this will take years. In the meantime, the patches and hot fixes may have some detrimental effect on the performance of all our IT systems.
Complete this sentence to win a prize for your technology insight: “2018 will be the year of…”
Rather like the missing word round in Have I Got News For You, the answers will range from the obvious to the hilarious – AI! Blockchain! Internet of things! Voice recognition! Driverless cars! Virtual reality! Go on, add another few dozen well-used industry buzzwords. Do a Google search, you’ll find someone has written every article already.
We can pretty much guarantee that 2018 will not be the year of anything in particular, other than the usual onward march of technology adoption and the further infiltration of the digital revolution into every aspect of our life and work – and the backlash against both.
Not wishing to be left out, here are five things that are likely to be much discussed this year – not an exhaustive list, but we reckon these will come across the desktop of most IT leaders in 2018.
Let’s get the easy one out of the way. You have until 25 May 2018 to become compliant with the EU’s General Data Protection Regulation (GDPR), which will be implemented in the UK through the new Data Protection Bill. At Computer Weekly, we’ve already written each week’s story from now to May about how many organisations have yet to comply. Data protection isn’t going away – even if we’re still trying to understand where to draw the boundaries. The real fun starts when the first household-name company gets sued for a GDPR breach.
Whether it’s about social media, online age verification, artificial intelligence, data privacy or the working practices and lack of diversity of the tech sector, ethics is going to underpin much of the existential debate around the future of our digital society. Take it seriously – choose to be an ethical digital organisation. One day, people will look back and despair at how long it took for tech to make ethics a priority.
There’s a growing acknowledgement that the core of the UK’s productivity gap with our international counterparts comes down to a lack of IT investment by corporations since the 2008 crash. CBI research gave us a wonderful soundbite last year – that UK take-up of enterprise resource planning (ERP) and customer relationship management (CRM) is lower than it was in Denmark in 2009. If we want the UK to compete internationally post-Brexit; if we want to get wages growing again; if we want to create more high-value jobs – then the government needs to find a way to persuade companies to increase how much they spend on new technology.
Identity is perhaps the biggest challenge of the digital economy. How can we prove we are who we say we are to organisations we may never meet or physically transact with outside the virtual world? Getting digital identity right is the key to unlocking so many online opportunities, from public service delivery to open banking. The government has tried to crack this with Gov.uk Verify, but has gone down a dead-end and needs to find a way out. Better public/private co-operation is likely to be the answer – and it needs to happen this year.
OK, we had to use a proper tech buzzword somewhere. Serverless computing is the natural evolution of virtualisation, cloud and agile development – the next step to a true pay-as-you-go utility model for computer power. This year, expect to see early adopters getting seriously into serverless, most likely by testing out the capabilities of Amazon Web Services. Serverless is the future of the datacentre – or for most organisations, the future without a datacentre.
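For readers who haven’t met it yet, the shape of serverless code is strikingly small – the sketch below uses the real `handler(event, context)` convention from AWS Lambda’s Python runtime, though the event fields and function name here are hypothetical, and in production the platform invokes the handler for you:

```python
# Minimal AWS Lambda-style function in Python. There is no server to
# provision or patch: you upload the function and pay only while it runs.
# The event shape ({"name": ...}) is a made-up example.

import json

def handler(event, context):
    """Entry point the serverless platform calls with each request."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing -- in the cloud this call is the platform's job:
print(handler({"name": "datacentre"}, None))
```

That inversion – the platform owning the process lifecycle, your code owning only the request – is what makes serverless the next step beyond virtualisation and cloud VMs.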
Tell us your thoughts for 2018 in the comments below…