Computer Weekly Editor's Blog


March 25, 2019  12:52 PM

MPs hear more revelations about Gov.uk Verify as troubled project gets a new leader

Bryan Glick

Hello and good luck to Lisa Barrett – the newly appointed director of digital identity at the Government Digital Service (GDS). She’s taken on what many people outside GDS see as something of a poisoned chalice.

You might think it strange that this job has been created for the first time barely 12 months before GDS hands its digital identity system, Gov.uk Verify, to the private sector and government ceases further funding.

But Barrett’s introductory blog is full of positivity, as you would expect. There will be no shortage of people wishing her success in her goals. For there’s one thing GDS has never quite grasped – the growing band of Verify critics generally have one thing in common: they want Verify (or something like it) to succeed, and always have. Their criticism often comes from frustration at seeing such a critical project struggle as a result of poor decision-making in GDS.

The timing of Barrett’s appointment is all the more difficult after recent events in Whitehall, with a highly critical National Audit Office (NAO) report followed last week by a parliamentary select committee meeting at which, if you didn’t know better, you might have taken away the conclusion that apart from a little over-optimism by that other lot who started the project, Gov.uk Verify is all going swimmingly well. Tickety boo. Nothing to see here, move along now.

That seemed to be the message that the leaders of the troubled digital identity programme wished to impart to MPs on the Public Accounts Committee (PAC).

The committee was investigating progress on Verify after the NAO questioned the value for money of the £154m spent so far on the project and doubted the scale of benefits claimed by GDS.

Making the case for how misunderstood Verify has been were Cabinet Office permanent secretary and civil service CEO John Manzoni, alongside GDS director general Kevin Cunnington.

At times it was difficult to tell whether committee members were seeing hubris or denial in action. Let’s examine some of the highlights:

PAC member Gareth Snell asked: “Mr Cunnington and Mr Manzoni, could you give me any of the quantitative KPIs from any of the five business cases that Verify has actually met? Just one example of any quantitative KPI from any of the five business cases.”

Cunnington: “On the current business case [from 2018], we are on track in terms of the volume of… verifications that we have.”

Note that the original business case was made in 2015. The 2018 business case was intended to rescue the project after the Cabinet Office project management watchdog recommended it be shut down.

Snell: “Okay. Any others?”

Cunnington: “You are testing my memory now. I cannot think of any off hand.”

Snell drilled down into the costs and claimed benefits of the programme and concluded: “So you are saying that spending less, because you failed to meet targets, is now a success, in the Cabinet Office’s definition of success.”

Manzoni acknowledged there have been problems, but did so in a way that basically threw Cunnington’s predecessors as GDS chief under a bus.

Manzoni: “I do not think there is any question that Verify did not meet its original business case benefits. Nobody is sitting here saying, ‘Right, it met its projections.’ But I think the issue is more one of hopelessly optimistic projections in the original business case than failure.”

Snell: “I will come to the previous business cases, because you are right that they were significantly way off, and several reviews and a number of business cases still maintained optimistic trajectories – let’s say optimistic for the sake of diplomacy. To what do you ascribe the low take-up across government for this system?”

Cunnington then blamed the slow progress of digital transformation at other Whitehall departments for the low take-up of Verify, to which Snell replied:

“Without wishing to paraphrase you, Mr Cunnington – I will do it anyway — it is the government’s fault that they have not transformed their service efficiently to catch up with your groundbreaking technology?”

Diverting from the failed KPIs, Cunnington instead quoted the four strategic aims of Verify.

“One was to create a standard in the marketplace. That is the most important thing. I think we can say that we have absolutely completed that,” he said.

Snell replied: “It is quite an expensive standard though, is it not – £154m for a standard?”

It’s worth noting that identity standards are now increasingly being driven by the financial services sector in support of open banking legislation. GDS’s preferred GPG45 standard is seen by some as too prescriptive.

Cunnington continued on the three other objectives:

A commoditised cost model – “We have got the price down to a point where it can be bought by the government and by the private sector,” he said. However, the NAO found that GDS was subsidising the price paid by departments to such an extent that it feared it would become unaffordable once Verify is handed to the private sector in March 2020.

A safe and secure service across a multiplicity of private sector vendors – “Which again we have achieved”. Most private sector identity providers complain that the effective monopoly handed to five companies for Verify has hindered UK market progress.

Mass adoption of Verify. “You are right to say that is where we have not had the success we were hoping for,” Cunnington acknowledged.

To which Snell replied: “So there was the application of the first three, which are qualitative, and then the one quantitative that was meant to derive actual benefit to the government and the taxpayer, which has not actually been met.”

Let’s consider these objectives. I will make an admission – I’m a supporter of Liverpool FC. In the style used above, I would say Liverpool’s four strategic objectives in recent years have been to improve the team, qualify for the Champions League, expand the stadium, and win the Premier League.

The club has achieved the first three, but not the fourth. Everybody who knows Liverpool, or English football, will know that only the fourth objective matters. User adoption is Verify’s Premier League.

Later in the inquiry, as Manzoni again pointed the finger at Cunnington’s predecessors, Snell pointed out that Manzoni was in charge when the 2015 business case was signed off.

Manzoni replied: “I am not sure I was concentrating on the business case in 2015; I cannot remember, actually.”

Former Verify boss Nic Harrison was also questioned by the committee, and revealed that he had reviewed Verify on taking the job in October 2016 and made 11 recommendations – right after the latest business case had been signed off. Here’s the conversation:

Snell: “Okay, so what did you change from the 2016 business case for your 11 recommendations?”

Harrison: “The 2016 business case had been written and had been through the Cabinet Office investment committee and the Treasury – it was a business case. As I came in, I recognised that it was unlikely to deliver in its current form.”

Snell: “Literally a month after it had been signed off and agreed by the Treasury, you said to the Cabinet Office, ‘This is not going to work’?”

Harrison: “I certainly said it was not on a glide path to success.”

To be fair to Harrison, Verify has been reviewed more than 20 times in its five-year existence.

The committee examined that 2016 business case a little further:

Snell: “Bearing in mind that Verify’s original programme business case came in 2015, who agreed to sign off the contracts for additional suppliers in 2016 when there was no immediate evidence that there would be an increase in supply?”

Manzoni: “I can’t remember.”

Snell: “You can’t remember. You are forgetting a lot of things today, Mr Manzoni.”

There was also potential forgetfulness when it came to discussion of Verify failing to get anywhere near its target of a 90% verification success rate:

Harrison: “The 90% figure appears in the 2015 business case, which is highly speculative and shows best probabilities. It was derived from what the likely success rate could be, based on demographics and looking backwards from a period when the product had been a success.”

Snell: “It is also the figure that was in the government business case that was used to allow sign-off of this programme to go forward. Regardless of whether that is a figure that is right or not, it is the figure on which this business case was predicated, and it failed to meet it.”

Cunnington: “No question. It was adjusted down to the range of 20 to 30 in the 2016 business case, when we recognised that wasn’t going to be true. As Nic says, the 90% was based on a future state where a lot of people are already verified – not the challenge of signing up.”

Really? The 2015 business case said, explicitly, as an objective: “90% of people can verify their identity online with a 90% success rate by April 2016”. I have a draft copy of the business case; it’s there in black and white, and there has never been any suggestion that the final document claimed any different. Nor has it ever been suggested that the 90% figure was a “future state” where people are “already verified”. To be fair to Cunnington, he was not in GDS at the time.

A further revelation around verification rates came later. It’s well known that the rate has rarely been above 50% – meaning more than half of users who attempt to register with Verify are unable to do so. Cunnington broke that figure down publicly for the first time, according to the higher (LOA2) and lower (LOA1) levels of assurance (LOA) offered:

“In our best performing services today, our best performing LOA1 service at 95% is the health service pension. Our best performing LOA2 services are in the range of 7% to 18%.”

LOA1 has only been available in the past couple of years – it’s little more than self-assertion that you are who you say you are, with minimal checks. Verify was designed for higher levels – where the success rate, we now learn, is less than one in five, and sometimes less than one in 10.

When Verify was given its final lifeline in October 2018, its saviour was the Department for Work and Pensions (DWP), which uses Verify as part of its Universal Credit (UC) system – despite achieving a verification rate of only 38%.

Cunnington revealed that Verify is not actually mandatory for UC: “The primary policy objective is to get claimants paid, and claimants go all the way through the process before saying, ‘Wouldn’t it be nice to verify yourself?’”

Not only that, but Cunnington admitted that DWP is “also looking at [its] own alternatives” to Verify. Even Verify’s last great Whitehall hope is considering its options.

Manzoni repeatedly claimed that Verify had delivered significant financial benefits, quoting a figure of £366m. As the committee pointed out, the NAO said it had “not been able to replicate or validate the benefits estimated by GDS.”

GDS remains convinced that Verify will prove itself a success as the vanguard of an emerging digital identity market in the UK, which will be stimulated by handing Verify to the private sector, using the standards established by GDS.

I had an interesting conversation last week with Don Thibeau, chairman of OIX, the identity standards body that has championed Verify and received significant funding from the Cabinet Office.

Thibeau, historically a strong supporter of Verify, said OIX is about to make a significant move – shifting its headquarters from Silicon Valley to London, because it believes the capital is set to be the centre of a nascent global identity market.

Sadly for GDS, this is not a vote of confidence in the Verify programme.

Thibeau is excited by the UK’s regulatory support for open banking, which is driving retail banks to work together on standards for ID interoperability that are likely to become adopted globally. While there is overlap with the work done on Verify standards – and GDS has tried repeatedly to get banks enthused about Verify – open banking is instead emerging as the most likely driver for a digital ID market.

Let’s reiterate – people want Verify to succeed; they always have. Airbrushing its troubled history is not going to help. There will be a lot of people wishing Lisa Barrett every success in her new job.

March 22, 2019  11:39 AM

Hubris over Horizon? Post Office shocks High Court with accusation of bias

Bryan Glick

The Post Office likes to describe itself as “the nation’s most trusted brand”. Anyone following the latest developments in the increasingly heated High Court case about its Horizon IT system would be justified in questioning that assertion.

In a move that stunned the court – including the Post Office’s own legal team – the organisation has accused the judge, Sir Peter Fraser, of bias, and suggested he step down from the case. Fraser has been a barrister since 1989, Queen’s Counsel since 2009, and has practised in the Technology and Construction Court field since 1990. He is an experienced judge, who has shown during the case that he has extensive understanding of the often complex technological issues under examination.

Less than a week before, Fraser issued a damning ruling against the Post Office from the first of four trials in the case, in which over 500 subpostmasters claim they were wrongly held responsible for accounting errors allegedly caused by Horizon. In the ruling, Justice Fraser effectively told the most senior Post Office witness that he believed she had lied.

The shock application to recuse the judge came very soon after evidence was presented in court that could potentially undermine many of the Post Office’s previous denials of problems with Horizon. A senior employee of Fujitsu, the IT partner that runs Horizon, admitted that a mistake by one of its staff may have caused a discrepancy in one branch’s accounts, for which the subpostmaster in question was held responsible. Journalist Nick Wallis, who has been live tweeting the case every day from the High Court, called the evidence a “smoking gun”.

The Post Office has consistently claimed that such incidents cannot and have never occurred. Subpostmasters have lost their jobs, livelihoods and sometimes their liberty, after the Post Office refused to acknowledge potential errors in Horizon.

The accusation of bias came from a solicitor working for the Post Office, who claimed that the earlier ruling showed the judge could not be impartial. It’s a stunning accusation. It also means the case has been suspended for two weeks, and even if the claim is denied, it is likely to mean further costly delays to the trial, costing the claimants and all of us taxpayers who are funding the Post Office’s legal fees. If denied, it may also give the Post Office grounds for appeal should future rulings go against it.

Throughout the 10 years since Computer Weekly first revealed the plight of subpostmasters and the allegations around Horizon, the Post Office has refused to admit one simple, inescapable fact that everyone in IT knows to be true: software is not perfect. Even in the most efficient software application – Horizon has run the Post Office branch network for 20 years – there will be unforeseen bugs, or tiny glitches that are often near-impossible to replicate. Blinkers on, the Post Office has taken technology hubris to new levels.

The claimants’ barrister suggested the Post Office move was “calculated to derail” the trial. The Post Office stands to lose tens of millions of pounds if it loses the case, not to mention the reputational damage. The unanswered question now is how far the organisation is willing to go to prevent that happening.


March 6, 2019  10:24 AM

NAO hammers another nail into Gov.uk Verify

Bryan Glick

It’s not commonly known outside Whitehall that by the time the National Audit Office (NAO) – Parliament’s independent spending watchdog – publishes a report, it will have been through several iterations. The subjects of the report – government departments and agencies, civil servants and ministers – are offered a chance to comment and object to the contents, which means that conclusions are often watered down.

So as a journalist who has to read through such reports – typically many tens of pages long – there’s sport to be had in looking for phrases that subtly show the real opinion of the auditors that compiled the document.

In the NAO’s latest report, into the troubled progress of Gov.uk Verify, the Cabinet Office’s flagship digital identity system, one line stood out as the reveal for what the NAO really thought:

“It is difficult to conclude that successive decisions to continue with Verify have been sufficiently justified.”

Let’s dissect this masterpiece of Whitehall bureaucrat-speak.

“Difficult to conclude” – in other words, we cannot conclude.

“Successive decisions to continue” – they wouldn’t listen when they were told to stop (the NAO revealed there have been more than 20 internal and external reviews of Verify).

“Sufficiently justified” – not justified.

The highly critical report on Verify offered one other piece of classic NAO phraseology: “On the evidence made available to us, we have not been able to replicate or validate the benefits estimated” by the Government Digital Service (GDS), developer and now perhaps the only remaining champion for Verify.

In other words, you can’t rely on (believe?) GDS’s claim that Verify will generate £217m of benefits – a figure already 75% lower than promised in the original 2015 business case. You can believe that it has spent £154m already, with another £21.5m to come – plus an uncountable amount spent by the few departments that integrated Verify into their digital services. You’ll notice that the total spend comes in just nicely below £217m.

The rest of the NAO report simply catalogued what we already know – that Verify has missed every performance target for verification success rate, service adoption, and user take-up. And missed them by a country mile.

Verify has become tribal in Whitehall digital government circles. Outside GDS, there is widespread disappointment that the project has ended up here – the concept and aims were widely welcomed and plenty of people wanted it to succeed.

Over time, according to critics, GDS withdrew ever further into a bunker, determined to prove outsiders wrong. Criticism is still reluctantly received – even when the government’s major projects watchdog recommended the programme be scrapped last year.

GDS is right to say this was a challenging project at the leading edge of its field, with ambitious targets – an attempt to set standards that would create a thriving commercial market for digital identities in the UK. That was always going to be tough – it was brave to take it on. Others can comment on whether it was correct.

Talk to commercial providers of digital identity today, and they will say that Verify has been mostly a hindrance, restricting the market to a small number of providers given a contractual monopoly by GDS over access to public services. Two of the seven chosen providers pulled out when the contracts were recast last year, seeing that their business cases no longer stacked up, leaving 380,000 Verify users having to go through the whole registration process again at some point to choose another provider.

One Whitehall insider posed an important – if so far rhetorical – question: how can it be right that the private sector was allowed to become the exclusive gatekeeper for deciding whether or not citizens can access online public services?

GDS is determined – convinced – that Verify can and will become the foundation for a standards-based approach to federated digital identity once it is handed over to the private sector in March 2020. The organisation insists that if that objective is achieved, Verify will be a success – even if it may not address whether, at £154m and counting, that success was value for taxpayers’ money.

It’s worth pointing out that, in functionality terms, Verify does exactly what it said it would. This is not one of those government IT systems that failed because the software didn’t work. But GDS was over-optimistic about what that software could deliver. Assurance levels were set too high. Not enough public data is available to prove in a 100% digital process that enough people are who they say they are. And critics say that the developers seemed to forget that the user’s real need was simply to access an online service, not to have to go through an extensive registration process that more than half of them failed.

Verify was originally justified as a means to reduce fraud, eliminate costly legacy technology, remove paper processes, and cut call centre costs – those benefits remain a missed opportunity.

At the core of Verify is the document checking service – an API-addressable system that allows verification of government documents such as passports and driving licences. That’s the real prize for the private sector – that’s what they now want access to. And it’s also likely to be the most obvious success of the Verify programme, once government decides to open it up.
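To make the idea concrete – and purely as an illustration, since the endpoint, field names and response format below are hypothetical rather than GDS’s actual document checking service interface – here is a minimal sketch of what an API-addressable document check might look like from a relying party’s side, in Python:

```python
import json
import urllib.request

# Hypothetical endpoint for illustration only - not the real GDS document
# checking service URL or schema.
DCS_ENDPOINT = "https://dcs.example.gov.uk/check-passport"

def check_passport(passport_number: str, surname: str, date_of_birth: str) -> bool:
    """Ask the (hypothetical) document checking service whether these passport
    details match a valid record, returning True if they do."""
    payload = json.dumps({
        "passportNumber": passport_number,
        "surname": surname,
        "dateOfBirth": date_of_birth,  # ISO 8601 date, e.g. "1980-01-31"
    }).encode("utf-8")

    request = urllib.request.Request(
        DCS_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return bool(result.get("valid", False))
```

The point is the shape of the service rather than the detail: a single, well-defined check against authoritative government records that any accredited identity provider could call – which is why it is seen as the real prize.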

Sources close to GDS point to the behaviour of certain large Whitehall departments that set out to frustrate progress and dictate unachievable goals, setting Verify up to fail. There may well be some truth in that. But it has to be said – and has been, frequently – that GDS did itself few favours along the way.


February 15, 2019  11:47 AM

NHS takes a small digital step into the 21st century – with bigger ones to come

Bryan Glick

Here is our quote of the week: “We spend £8m a year on paper. We spend £2m a year on envelopes. We can save lives, save staff time and cut costs by using an extraordinary piece of technology that has the ability to allow two people to communicate instantaneously, and to allow groups of people to communicate instantaneously. If you take the result of someone’s test you can immediately communicate those results with the analysis, to the patient and the patient can reply. It’s called email. I don’t know whether any of you have heard of it.”

Don’t adjust your eyesight: the date on this blog post is correct, and no, you haven’t fallen through a tear in the fabric of space and time and found yourself in 1989 by mistake – although if you were reading Computer Weekly 30 years ago, you probably saw very similar quotes from enlightened business leaders at the time.

The quote above was spoken on 13 February 2019. By what terrible luddite organisation, you may well ask? Has someone just introduced Jacob Rees-Mogg, minister for the 18th century, to a computer?

Well, not far off – a few rows of the House of Commons at most.

Those words were spoken, dripping with sarcasm, by secretary of state for health and social care Matt Hancock, announcing a new policy for NHS England to use email by default to communicate with patients. Therein lies the scale of the challenge in digitising the NHS.

How often have our highly tech-literate readers – and indeed, most of the highly tech-literate UK population – tutted and raised their eyebrows at the inability of their GP or hospital doctor to simply send them an email? Let’s not get ahead of ourselves and start suggesting instant messaging.

And yet still there was scepticism from some NHS dinosaurs about the wisdom of a policy that promises patients the ability to choose if they prefer to continue receiving paper letters.

Of course, email has to be secure – especially for such personal information as medical tests – but modern email can be made secure, and anyway shouldn’t patients have the choice?

Meanwhile, the NHS is working on an app that’s in trials, which it hopes will become the digital front door to the health service, using APIs to plug into third-party applications and data, as well as GP patient records, appointment bookings, prescriptions and other services. If it works, it’s the ideal way to help deliver digital health – but its biggest challenge will be changing NHS culture, even more than making the technology function correctly. Don’t tell the dinosaurs though – better still, send them an email.


February 7, 2019  2:57 PM

Of course there’s technology for the Irish border – that doesn’t mean it will work

Bryan Glick

It’s understandable that people express cynicism when a politician – particularly one with a clear ideological bent – proclaims that “digital” or “technology” can solve the core Brexit issue of the Irish border. With very few honourable exceptions, politicians have the same understanding of technology and how it works as the average voter has about Erskine May, the authoritative publication on parliamentary procedure.

But when an IT company makes a similar claim, should we sit up and take notice? It’s notable that we’re now seeing leaks of technology supplier proposals just as the Brexiteers in Parliament are pushing their “alternative arrangements” ideas to overcome objections to the UK/EU Withdrawal Agreement.

The Sun got its hands on a Fujitsu document, while Buzzfeed published details of a report from wireless sensor maker UtterBerry. It’s clear that the Brexiteers leaking these ideas believe this gives substance to the possibility of a technological solution to the Irish border.

Don’t believe a word of it.

From what’s been released into the public domain, these proposals read like classic, old-school, Big IT plans that offer simple-sounding solutions to difficult problems couched in ways that make it seem like the supplier has all the answers, knowing that the recipient won’t know the right questions to ask.

It’s true that technologies exist to automate border functions between two countries – internet of things, wireless sensors, GPS tracking, facial recognition, automatic number plate recognition, cloud, real-time data analytics, to name a few – and of course one of the proposals throws in blockchain, because, well, you have to these days.

And it may even be the case that such a solution could be demonstrated on a small scale – a couple of delivery lorries, perhaps, being tracked from start to finish across the border.

But what every conversation about a technology solution fails to grasp is the complexity such a project would have to deal with. Ask anybody that has worked on large-scale IT projects and you will hear the same thing – the technology is rarely the problem. Big government IT initiatives mostly fail because they underestimate the complexity of making large-scale technology work.

Sure, you can point to Amazon and say: look, here’s a hugely scaled technology that works. But when Amazon started, it sold books via a simple website. It’s taken two decades, billions of dollars and many millions of working hours to get to where it is now.

Has anybody at any of these tech companies that believe they have a solution done any in-depth user research to properly understand the problem? You suspect not.

This is a great Twitter thread from a director of a specialist vehicle routing software company that touches on the complex questions that would need to be answered before even considering a technology solution such as those proposed.

All this also overlooks the unique political complexity inherent in the Irish border.

IT companies do themselves, the IT sector, and UK politics no favours by promoting apparently simple solutions to such an enormously complex problem. Of course, that’s what most of them have done in government IT for decades, so you can’t expect them to change when there’s a multibillion-pound contract in the offing.

But whenever technology is thrown up as the answer to this most difficult of Brexit issues, the question that needs to be asked is not whether the tech exists, but whether anyone can explain how to manage the complexity involved in making it work. If they claim they can, be very cynical.


January 4, 2019  2:34 PM

Hello 2019, same as 2018 (mostly)

Bryan Glick

“Nothing changes, on New Year’s Day,” sang U2 many years ago. This piece of self-evident wisdom doesn’t stop the world of tech punditry from excitedly making hyperbolic forecasts at the start of every year on the assumption that opening up their new wall calendar will magically change the fortunes of the sector.

The technology development cycle will roll on during 2019 much as it did during 2018 and every year before. The stuff people got excited about last year will be pretty much the same stuff they get excited about this year.

Likewise, the challenges faced by IT leaders when they left for their Christmas holidays will not miraculously have shifted to something else by the time they return from their seasonal excesses. Maybe their budget will be a little higher this year, as CEOs increasingly accept they need to invest in digital and innovation to stay competitive. Just ask the failing high street retailers who were sure they could keep up against the convenience and popularity of internet shopping.

So putting aside the tech predictions, let’s think ahead. What might be the things we’re talking about in December, when we look back on the year? Here are five areas we think will be key influences:

Brexit

Obviously. One reason why it’s difficult to predict anything for 2019 is that the UK doesn’t even know where it will be come April. Anything from sunlit uplands to no-deal disaster or business as usual remains a possibility. Even the UK tech sector is still arguing about what it wants. It’s certain, though, that the subsequent nine months of the year will be determined by what happens in Westminster in the first three.

Backlash

It was inevitable there would be a backlash against Big Tech at some point. As the capabilities of technology overtake the cultural capability of society to absorb the pace of change, society will rebel. It’s happened before – notably the dotcom boom and bust either side of the new millennium. The anger is directed at the likes of Facebook, Google, Amazon and others for now. Governments have to come to terms with the new landscape of privacy and data protection, and stop trying to pigeonhole tech firms into existing regulatory regimes. Tech needs regulation, but it needs new approaches designed for the digital age.

Security

Pretty much every major data breach makes front-page headlines these days, and leads radio and TV news bulletins. Think Marriott, Dixons Carphone, British Airways, Facebook – the list goes on. Of all the old issues that are still new issues, cyber security is the biggest. But have any of these high-profile breaches actually changed anything? For all the inconvenience caused to people affected by such incidents, nothing has happened yet to cause a real public backlash to the degree that it forces companies to do better. Maybe the first GDPR fines might change that. But the public mostly sees security like they do insurance – a necessary evil. That won’t last – before long there’s going to be a security incident that causes widespread economic damage to people’s lives. The worry is that security weaknesses won’t be fully addressed until the real dangers are demonstrated on a large scale.

Skills

Already this month, the British Chambers of Commerce has warned that 81% of manufacturers and 70% of service firms are struggling to recruit the skilled staff they need. Last month, the Trades Union Congress called on the government to create a million new high tech and manufacturing jobs by 2030. Skills shortages are real, as are the fears they will become even worse after Brexit with the loss of freedom of movement and stricter immigration policies. It’s going to be a tough year for anyone needing more IT staff to support growth. This is perhaps the single biggest issue facing the UK’s digital economy – if we don’t have the talent, we won’t have global leadership. Again, it’s hard to see how this will be solved without a radically different approach from the government.

Commoditisation

There’s an easy way to work out which technologies will be the ones to watch over the next 12 months – over any 12 months, frankly. So many people say the pace of change in tech is amazing these days. Actually, it’s no different to how it’s always been. The pace of invention hasn’t changed much at all. What’s happened is that commoditisation of technology is affecting more and more areas of our everyday life and work, so it feels like things are changing faster. The key is to watch what technologies are about to become commoditised – not which ones are emerging on the scene, many of which may never reach commodity status. Once a tech becomes a commodity, that’s when it takes off and changes things. The internet, smartphones, cloud, big data – all are examples of tech becoming cheap enough, powerful enough and scalable enough to become increasingly ubiquitous. What’s next? Basic forms of artificial intelligence (AI) are the most likely – process automation, simple machine learning, for example. More advanced AI as a commodity is still a way off. Blockchain? No – nowhere near commoditisation yet. Internet of things? Nearly there. Don’t expect any surprises here for 2019.


December 7, 2018  12:57 PM

O2 outage proves the weakest link in the digital revolution is fallibly human

Bryan Glick

So it turns out the O2 mobile network failure that took out data access for some 30 million people this week was caused by an expired software certificate – no great conspiracy, no programming error, no undiscovered bug, no malicious interference, but one of the most basic systems administration mistakes you can imagine. Someone somewhere forgot to renew a certificate.

As a wise voice once said, there’s no patch for stupidity. And herein lies the great unspoken conundrum at the heart of the digital revolution. Computers go wrong. Why? Because they’re designed, manufactured, programmed, configured, secured and operated by the most fallible, unpredictable and unreliable resource in the technology world – people.
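To show just how basic this class of mistake is to catch, here is a minimal sketch – my own illustration rather than anything to do with how O2’s network software actually works – of checking how many days remain on a server’s TLS certificate, using only the Python standard library:

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_certificate_expiry(hostname: str, port: int = 443) -> int:
    """Connect to the host, fetch its TLS certificate and return the number
    of whole days until the certificate's notAfter date."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like "Jun  1 12:00:00 2025 GMT"
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    not_after = not_after.replace(tzinfo=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    # Placeholder hostname - point it at whatever service you care about.
    print(days_until_certificate_expiry("example.com"))
```

A dozen lines and a scheduled job would flag an expiring certificate weeks in advance – which is rather the point.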

Of course, it’s those same people who every day ensure that the IT systems supporting every company and government in the world work mostly as intended, who keep the internet running and protect the vast majority of our personal data. That’s because people are pretty good at computers these days. But we’ll never be perfect.

The job of running IT systems is becoming increasingly abstracted from the technology: virtualisation, cloud, containers, serverless and orchestration all aim to remove that human fallibility from everyday tasks. Not forgetting that it still takes another human somewhere to make those technologies work in the first place.

Much as artificial intelligence (AI) and automation are replacing or augmenting corporate jobs, so the IT department will see further dramatic change as more of its responsibilities are taken over by software robots. Of course, those software robots were created and programmed by humans too. And they aren’t exactly perfect – as the Amazon workers in a New Jersey warehouse found out this week, when a robot accidentally punctured a can of bear repellent, sending 24 staff to hospital.

There is, rightly, much debate about ethics in AI and technology, not least the need to prevent human bias from becoming too infused in the algorithms these systems rely on. People outside IT are taking more of an interest in the workings of IT than ever before. It’s fair to assume those non-IT types are pretty fallible too.

When O2 went down, there was much humour taken from the sight of people trying to consult paper maps to find their way around, and attempted insights from those who found a whole new world beyond the smartphone they’d been glued to until then. The outage was a small reminder of how reliant most of us have become on technology.

For all the great advances of recent decades, it’s going to be a long time before we no longer see headlines screaming “outage”. Whether through malice or simple error, human fallibility is a part of our digital future too.


November 22, 2018  4:34 PM

IT community needs to counter concerns behind ‘big tech’ backlash

Bryan Glick

Sometimes reporting the latest tech news at Computer Weekly throws up an entertaining juxtaposition. Take these two headlines, for example, from last week:

“TechUK calls on Matt Hancock to fast-track NHS digitisation” and “Citizens don’t want the NHS to prioritise digital health, report finds”.

Just as IT industry trade body TechUK is pushing health and social care secretary Matt Hancock to accelerate his grand technology vision for the NHS, the Institute for Public Policy Research publishes a report that suggests digital healthcare is the lowest priority for patients.

Sound familiar? The tech sector telling its prospective customers they should be buying more stuff, while the ultimate users of that stuff show somewhat less enthusiasm? In one form or another, this has been a recurring trait throughout enterprise IT history.

Of course, it goes back even further than that. Vehicle manufacturing pioneer Henry Ford is widely attributed as saying, “If I had asked people what they wanted, they would have said faster horses”.

While NHS patients might like the idea of faster doctors and nurses, there’s no escaping the reality that technology will accelerate their capacity and capability far more effectively.

But this particular juxtaposition highlights a deeper trend at the moment. In the decade since the launch of the iPhone, we’ve seen technology becoming ever-more ubiquitous and popular, but in the past year or so we’ve seen the start of what is, perhaps, an inevitable backlash.

Much of the growing negativity towards tech is coming from the dominance of US internet giants like Facebook, Google, Amazon and Apple – and especially continuing revelations about topics such as Facebook’s cavalier attitude towards our personal data, Amazon’s working practices, or Google’s tax policies.

It’s a concern that the phrase “big tech” is becoming commonplace, carrying with it the hint of malfeasance that originated from “big tobacco” and “big pharma”.

The underlying positive aspect is that the backlash is a natural response to our increasing reliance on tech, and the influence it’s having on driving social and cultural change. Such a process is never easy, but if you believe the benefits of technology outweigh the concerns, then it’s incumbent on tech evangelists to continue to make the case.

The next few years are likely to be difficult for the tech sector, and those who come through successfully will be the ones who change their behaviour – grow up, so to speak, from tech’s adolescence. Tech is not the young upstart anymore; it needs to be a responsible member of society.

For everyone who works in IT, it’s your responsibility too: to focus on the benefits and mitigate the potential downsides of the digital economy.


November 8, 2018  3:36 PM

Brexit and tech – the big ‘if’

Bryan Glick

As we, seemingly, edge closer to something resembling a UK deal for leaving the European Union (and by the time you read this, that statement could quite possibly have been superseded by events), so the government is starting to reach out to the tech sector to ease its ongoing concerns.

This week, Brexit secretary Dominic Raab came along to meet a room-full of tech leaders and journalists in London, to answer their questions and put forward his case for Brexit.

It was on this occasion that Raab exposed himself to widespread mockery – far beyond his tech audience – for admitting that he had not fully understood the importance of the Dover to Calais crossing for UK trade. Cue facepalms all round.

But did he reassure the gathered leaders that Brexit is not the disaster that most of the industry fears it will be? Whether you feel he achieved that objective depends mostly on how much work you are willing to allow the word “if” to undertake on his behalf.

When asked about the prospects for the UK’s artificial intelligence (AI) sector – in which the UK likes to see itself as something of a pioneer and where there is undoubted future opportunity – Raab said all will be well, “if you get it right”.

That’s an answer that seems to sum up most of his pitch. Be reassured, leaders of a critically important UK industry, everything will be fine, if you get it right.

That poor ‘if’ is left to support all the justified concerns about data flows, regulatory compliance, lack of a deal for services, losing the customs union, exiting the digital single market, ending free movement of talent from the EU, attracting foreign investment – add your own weights to the straining bar that ‘if’ is holding.

“Brexit may create opportunities,” Raab continued – note, “may” not “will”, and this from one of the most ideologically committed Brexiteers.

“I want to deliver a global, outgoing Britain,” he said, to an audience of outgoing, globalist tech leaders, no doubt somewhat surprised to learn they were not already outgoing and global.

He repeated the government line that losing EU freedom of movement will not hinder the industry, because we’ll have a global immigration system instead. We do, of course, already have a global immigration system in the UK, which fails to attract enough overseas talent, and does so solely because of self-imposed restrictions.

“Most of the growth markets in the future will be in the non-EU markets, whether Latin America or Asia, and so, for instance we want to be able to promote e-commerce,” said Raab. It might be a surprise to UK e-commerce firms that apparently they don’t have the ability to target non-EU growth markets or promote their services therein. Pretty sure they do that now.

Computer Weekly has long held the view that Brexit is bad for tech. Should Raab’s positivity prove correct – if “if” steps up and carries all that weight – even then we’re still to be convinced the growth opportunities will be better than they would be within the EU. If we’re wrong, we’ll stand up and say so. But our “if” is a lot smaller than Raab’s.


November 2, 2018  1:02 PM

Chancellor’s Budget boost for tech is tempered by Brexit realities

Bryan Glick

Gone, or so it seems, are the days when Computer Weekly lamented after every Budget statement from the Chancellor of the Exchequer that tech had been overlooked. There is little doubt that government now realises that support for, and investment in, the technology sector is critical to the UK’s future.

Of course, we can and will still observe that there is more to do, but Philip Hammond’s latest Budget put at least a small finger in many of the necessary pies – startups, R&D, skills, digital government, broadband and more – even if he did anger the IT contractor community by extending controversial IR35 reforms to the private sector.

Hammond’s digital services tax – targeting the big web giants and their creative tax accounting – has unsurprisingly been attacked by the tech industry, but putting aside the rights or wrongs of the policy, it’s another reflection of the growing importance and influence of the IT world on government decision-making.

As with so much coming from the UK government, however, the positive Budget announcements exist in a parallel world to the uncertainties and concerns over Brexit.

A few days earlier, digital minister Margot James told Parliament that she still could not guarantee a data-sharing agreement with the European Union in the event of a no-deal Brexit. As Labour’s Liam Byrne, who was questioning James at a session of the European Select Committee, said: “Without data sharing our exports will grind to a halt”.

That same week, a National Audit Office report on the UK border’s preparedness for leaving the EU found that 11 of the 12 “critical systems” at the border are at risk of “not being delivered on time and to acceptable quality”.

It is all a far cry from Hammond’s previous glib and spectacularly uninformed observation when asked about a possible digital solution to the Irish border issue, when he proclaimed: “I don’t claim to be an expert on it but the most obvious technology is blockchain.”

Congratulations to whichever tech lobbyist persuaded whichever civil servant to tip the Chancellor on that fantastical idea.

The government’s new-found love for the tech sector is welcome, but its desire to make technology a panacea for all the ills of Brexit needs quickly to be tempered.

For technology leaders, the dilemma continues – eager to take advantage of Westminster’s tech-friendly approach, but fearful that a bad Brexit of whatever form might rapidly unravel the advances made in recent years.

We look forward to a future Budget where support for tech is not just welcome, but unequivocal and full of certainty.

