I’ve had some interesting conversations recently with Professor Fred Piper regarding risk probability. The discussion started because I was concerned about assessments of risk probability, as one might routinely use to populate a risk heat map or risk register.
What’s the problem? For me, it’s the fact that, as the probability of an incident occurring approaches 1.0 or 100%, we have no scope to differentiate between an event that occurs only once, and another that’s likely to occur a thousand times (over a specific period).
I can get around this (as I do) by replacing the word probability with the term likelihood and using a simple ordinal scale to measure the relative likelihood of a risk.
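As a rough illustration of what I mean, such an ordinal scale might look like the sketch below. The labels and frequency bands are my own illustrative assumptions, not a standard; the point is that the scale keeps extending past "certain" to distinguish a one-off event from one expected many times a year.

```python
# Illustrative ordinal likelihood scale (labels and bands are assumptions,
# not a standard). Unlike a probability, the scale extends beyond "certain"
# to separate a one-off event from one expected many times per year.
LIKELIHOOD_SCALE = [
    (1, "Rare",           "less than once in 10 years"),
    (2, "Unlikely",       "once in 3-10 years"),
    (3, "Possible",       "once in 1-3 years"),
    (4, "Likely",         "1-10 times a year"),
    (5, "Almost certain", "10-100 times a year"),
    (6, "Recurrent",      "more than 100 times a year"),
]

def likelihood_label(level: int) -> str:
    """Return the descriptive label for an ordinal likelihood level."""
    for lvl, label, _freq in LIKELIHOOD_SCALE:
        if lvl == level:
            return label
    raise ValueError(f"unknown likelihood level: {level}")
```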
Fred is an expert on betting odds, so I suggested we need something equivalent to odds-on for those incidents that are near certain to occur many times, which might enable unlimited extension of the scale. Bookmakers understand odds-on. (Though you could argue that what they don’t seem to understand is the opposite, as recently illustrated by Leicester City football club winning the UK Premier League at odds of 5,000 to one.)
But betting odds are a different kind of assessment from probability, reflecting the views of a party (e.g. a person, organization or community) on how likely an event is to happen. Odds-on just means someone thinks the event is highly likely.
But what this illustrates is the need for more precise language to describe properties of risks, as well as realistic scales to position risks. Risk assessment is far from being a precise science, but it should at least be grounded on a more rigorous basis.
I’ve been pressing for greater speed in security management for many years. “Replace the Deming Loop with the Boyd (OODA) loop” has been my mantra. Yet when I first encountered DEVOPS, I immediately thought it would fail because it broke the segregation of duties principle. Perhaps it would be fine for a small start-up or a vendor, but not for a large enterprise subject to all manner of regulatory demands and frequent audits that inspect segregation of duties arrangements.
I’ve since changed my view, for the following reasons.
DEVOPS is a compelling movement, which enables continuous software delivery through automation and closer coordination of development and production teams. It introduces a powerful cultural change. And faster delivery means quicker bug fixing and therefore faster elimination of security vulnerabilities. This is a big security benefit, but what about those regulatory controls and standards that demand separation of duties and environments for development and production work?
The answer is that we need to bring these traditional ideas up to date. The starting point is to recognize that there is more than one driver behind these requirements. Segregation of duties is an anti-fraud check which applies to financial processes. No one person should be allowed unsupervised, end-to-end control over financial transactions. In contrast, separation of development and production environments is a broader, operational control to preserve the integrity of the production environment from the side effects of untested software.
These requirements are in fact expressed as two separate ISO 27001 controls. Unfortunately, they’re often conflated, with many people interpreting segregation of duties as a need for separate development and production teams. But that’s not strictly necessary. We do need to separate the processing environments, but we don’t have to segregate the development and operations staff.
In fact, segregation of duties is just one solution to the anti-fraud requirement. It’s often referred to as the “4 eyes principle”, which is a broader and better way of expressing the requirement. That can mean simply having a second person authorize any changes (such as a new release), which then opens the door to DEVOPS teamwork, though we are still constrained by the need for an extra check.
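The 4 eyes principle is easy to sketch in code. The names and structure below are my own illustrative assumptions, not any particular pipeline’s API; the essential rule is simply that someone other than the author must approve a change before it can be deployed.

```python
# A minimal sketch of the "4 eyes" principle applied to a release pipeline:
# anyone may author a change, but a *different* person must approve it
# before deployment. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Change:
    change_id: str
    author: str
    approvers: set = field(default_factory=set)

    def approve(self, reviewer: str) -> None:
        if reviewer == self.author:
            raise PermissionError("author cannot approve their own change")
        self.approvers.add(reviewer)

    def may_deploy(self) -> bool:
        # Deployable once at least one person other than the author approves.
        return len(self.approvers - {self.author}) >= 1

change = Change("REL-042", author="alice")
try:
    change.approve("alice")   # self-approval is rejected
except PermissionError:
    pass
change.approve("bob")         # a second pair of eyes
```

The check costs one extra click rather than a separate operations team, which is the point of the argument above.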
To eliminate potential delays from a secondary check, we need to update our concept of trust and control. The old-fashioned concept of trust was perhaps best summarized by the old Russian quote (equally ascribed to Stalin and Lenin) that “Trust is good but control is better”. Now that might have worked in an over-manned, slow-changing, industrial age environment. But it’s impossible in a fast-moving, empowered, information age world. A better adage is the Ronald Reagan quote (also based on a Russian proverb) of “Trust but verify”, which enables speed and empowerment.
The choice now is how best to implement such an ongoing checking mechanism, and whether, for example, an anomaly detection system might be sufficient to remove or reduce the need for human intervention. That justifies a bit more thinking. But I can envisage that on a small scale (which this is) something along the lines of a self-organizing map (a neural network) might serve as a fast, convenient method of periodic human/machine checking.
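To make the self-organizing map idea concrete, here is a toy sketch of one used to flag anomalous activity. It is pure Python with no libraries; the grid size, learning rate, neighbourhood function, threshold and sample data are all illustrative assumptions, not a production design.

```python
# A toy self-organizing map (SOM) used as a machine-assisted "trust but
# verify" check: it learns patterns of normal activity and flags events
# that sit far from all of them. Parameters are illustrative assumptions.
import math
import random

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def best_unit(units, x):
    """Index of the best matching unit (closest weight vector) for x."""
    return min(range(len(units)), key=lambda i: dist(units[i], x))

def train_som(data, grid=4, dim=2, epochs=50, lr=0.5, seed=1):
    """Train a 1-D grid of `grid` units on `dim`-dimensional data."""
    rng = random.Random(seed)
    units = [[rng.random() for _ in range(dim)] for _ in range(grid)]
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)             # decaying learning rate
        for x in data:
            b = best_unit(units, x)
            for i, u in enumerate(units):
                pull = rate * math.exp(-abs(i - b))  # neighbourhood influence
                for d in range(dim):
                    u[d] += pull * (x[d] - u[d])
    return units

def is_anomaly(units, x, threshold=0.3):
    """Flag x when it is far from every learned pattern of normal activity."""
    return dist(units[best_unit(units, x)], x) > threshold

# 'Normal' activity clusters around two points; the map learns them both,
# so familiar events pass quietly while odd combinations get flagged.
normal = [(0.2, 0.2)] * 20 + [(0.8, 0.8)] * 5
units = train_som(normal)
```

A human then only inspects the flagged events, which is what keeps the secondary check fast.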
There are of course further things we need to achieve secure DEVOPS. Most importantly we need sound design and enforcement of access control policies, profiles and permissions. Interestingly, this is an extremely simple subject which is surprisingly poorly implemented. But that’s for another blog posting.
Heavy demands for research and consultancy have restricted my blog postings this year. It’s a reflection of the unrelenting growth in anything connected with cyber security. My New Year’s resolution however will be to return to regular blogging.
A year ago I forecast that the Internet of Things would be the primary focus of this year’s research, but that few applications would emerge. That certainly happened, though I think the IoT hype was pipped by the hype for Bitcoin block chain, which even merited a major feature in The Economist.
Despite all the hype and investment around block chain applications I remain pessimistic about its use for serious finance applications. In my view, anything that doesn’t scale well, can be taken over, and presents a major threat to tax collection is unlikely to succeed in the long term.
It was a no-brainer to predict that the treacle of regulatory compliance would become ever deeper, and that Governance, Risk and Compliance (GRC) solutions would remain immature (because of the large scope and complexity of the underlying data). That situation will get even worse as enterprises prepare for the new EU General Data Protection Regulation (GDPR). I know some companies are concerned about the mountain of paper required to demonstrate evidence of GDPR compliance. But that’s mainly because of a lack of visibility and management of information flows. And it’s certainly not a bad thing to correct that situation.
Prediction has been the new dimension for security this year with increased promotion of artificial intelligence solutions and threat intelligence services. This is a double-edged sword for the CISO, who will face an inevitable increase in false-positive reporting, which cannot be ignored because of the possibility of a nugget hidden within. My advice is to maximise the use of simple, rules-based mining before turning on the AI technology, and to generally ramp up the resources devoted to security event and trend analysis.
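By rules-based mining I mean cheap, explainable filters that cut the event stream down before any AI layer sees it. The sketch below shows the idea; the rule logic, thresholds and event fields are illustrative assumptions.

```python
# A sketch of simple rules-based triage applied before any AI layer:
# cheap, explainable filters that shrink the event stream first.
# Rule logic, thresholds and event fields are illustrative assumptions.
FAILED_LOGIN_THRESHOLD = 5

def triage(events):
    """Yield only events worth deeper (e.g. AI-based) analysis."""
    failures = {}
    for e in events:
        if e["type"] == "login_failure":
            failures[e["user"]] = failures.get(e["user"], 0) + 1
            if failures[e["user"]] == FAILED_LOGIN_THRESHOLD:
                yield {"alert": "repeated_login_failure", "user": e["user"]}
        elif e["type"] == "outbound_transfer" and e["bytes"] > 10**9:
            yield {"alert": "large_outbound_transfer", "user": e["user"]}

events = (
    [{"type": "login_failure", "user": "eve"} for _ in range(6)]
    + [{"type": "outbound_transfer", "user": "mallory", "bytes": 2 * 10**9}]
)
alerts = list(triage(events))
```

Every alert such rules raise can be explained in one sentence, which is exactly what a machine-learned classifier struggles to offer.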
A longer term trend I drew attention to last year is the progressive commoditisation of many cyber security services, which are relatively easy to execute with scripts and open source tools. As technology becomes more powerful and easier to use, the security skill set will change, and enterprises will need to differentiate between areas that demand deep expertise and experience and those that can be easily carried out by an enthusiastic trainee.
A further trend to watch is the progressive growth of Cloud-based services, which will demand a different security architecture from traditional enterprise perimeter solutions.
The main trend in 2016 however will be a step change in the control and visibility of IT assets and information flows, as enterprises begin to exploit more powerful tools for discovery, analysis and management of information transfers. The introduction of the EU GDPR will certainly boost the sales of asset management and managed file transfer services.
I admit to being a long-standing critic of past UK government research initiatives. Having sponsored and managed several partly-funded research projects I’ve been disappointed with the decreasing incentives to convert blue-sky research into actual products. (The funding reduces to zero as you progress ideas towards commercial ventures.)
Clearly I’m not alone in this view as increased funding now seems to be aimed at encouraging start-up initiatives. I fully support this change and I’ve been pleased and impressed to be associated with the new London Digital Catapult Centre. This is a venture that reflects the latest thinking on how government funding can encourage innovation. It’s not an incubator, it’s not a research centre, but it has great facilitation potential.
Strip away the gimmicks of the automated yellow minion and the machine that blows bubbles in response to tweets and you’ll discover an interesting mix of researchers, entrepreneurs, investors and subject matter experts coming together to discuss emerging trends and business opportunities.
As I’ve often said, innovation in security will not come from industry (who are focused almost exclusively on compliance), or academia (who respond increasingly to industry demands), or vendors (who simply wish to promote new features). Real invention demands a serendipitous blend of users, vendors and investors, ideally enhanced by left-field subject matter experts and the odd futurologist.
And that’s what you’ll find at a Digital Catapult workshop. New thinking needs a blend of contrasting experiences and perspectives. The Digital Catapult centres are equipped to deliver this. In a short two-day “pit stop” on identity and trust I discovered a surprising number of innovative product concepts, and was delighted to encounter kindred spirits open to my own inventions and ideas.
To be honest I’ve lost faith in traditional universities, vendors and research centres. Few new products are truly innovative and many lack the left-field and subject matter expertise needed to conceive killer products. If anything new and successful emerges in the security space in the next decade I’m sure it will have been identified and discussed at a Digital Catapult centre.
I missed the opening of this year’s Infosecurity Europe as I was speaking in Zurich. I did however catch the end, though there was little to fire my attention. The theme was dated, the slogans on stands (e.g. “security re-imagined”) were unrealistic, and the talks were far from original. The exhibition however was much bigger and even more crowded. As usual, the conference was essentially a huge networking event, as well as a chance to seek out what might be new in cyber security.
Just about everyone in security attends at least one day of Infosecurity. I bumped into dozens of old acquaintances and met lots of new people, ranging from IT researchers to behavioral psychologists. This conference seems to attract a more diverse set of people than other big security conferences.
Little innovation was on show though there is much happening behind the scenes. For me, the underpinning trend is the continuing growth in the use of artificial intelligence (AI) in security products. Such technology is becoming mainstream. It has its advantages and shortcomings.
Things have certainly changed. Fifteen years ago when I was promoting the use of AI it was a dirty word in many academic circles. The Professor running Microsoft’s research labs in Cambridge told me he binned anything he received on the subject. Yet today Cambridge is the home of the most hyped security product in this space: Darktrace, a learning system inspired by the human immune system.
Clearly someone has been paying attention to my long-promoted advice that security technologies need to steal ideas from nature, especially the human immune system. Back in 1999 I sponsored a three-year project to develop a fraud detection system based on the human immune system. The technology worked to an extent, but was a long way from being ready for business deployment.
There are huge challenges in developing AI systems. We don’t fully understand the human immune system, and we can’t keep up with the accelerating changes going on across a modern, global enterprise. I always imagined that perfecting such technology would be a long haul. Professor Stephanie Forrest at the University of New Mexico for example has been trying to develop intrusion detection systems based on this approach for two decades.
Perhaps we just needed Mike Lynch’s magical Bayesian logic. Certainly something has accelerated the maturity of the technology which now appears to be ready for prime time.
But be warned. False positives might be acceptable in a research, intelligence or relatively small environment. In a large enterprise however they can be time consuming to process and deadly if you ignore them. We’ve all heard about the CISO who lost his job after not acting on an intrusion alert.
As I’ve pointed out for the past fifteen years, the future of security will be probabilistic rather than deterministic. But it’s a slow change. Don’t expect instant results.
It was interesting to see Tim Cook, CEO of Apple, voicing his opinions that government and companies should not have access to private consumer information. It’s rich coming from a vendor with access to so much of our personal information.
I don’t mind security services having access for national security purposes. It’s necessary in an increasingly dangerous world and they safeguard it well. Employees are vetted, keep their mouths shut (Snowden excepted), and there is no evidence of data breaches or misuse after decades of interception.
If only we could say that about vendors.
I almost forgot to mention that last week’s New Statesman carried a major feature on Cyber security in Britain, including articles from Francis Maude, Peter Sommer and myself. (Mine’s the doom and gloom “Ghosts in the Machine” piece.)
Last week GCHQ was censured over its sharing of internet surveillance data with the United States. There’s no real surprise here. But what is interesting is to read it in the context of the New Statesman’s feature last week about growing political interest in the “Anglosphere” – a global alliance of English speaking countries.
I am reminded of Bill Haydon’s observation from Tinker Tailor Soldier Spy: “I still believe the secret services are the only real expression of a nation’s character”.
I keep reading defeatist talk. The latest is from a chap called James Lewis, a cybersecurity expert at the Washington DC based Center for Strategic and International Studies, who has been claiming that businesses should “stop worrying about preventing intruders getting into their computer networks, and concentrate instead on minimising the damage they cause when they do”.
It would be a very black day for cyber security if businesses stopped worrying about intrusions. Let’s face it: the reason we have so many is because we don’t try hard enough to stop them. The attackers are fast, smart and agile, and our defences are sloppy, dumb and slow to react. The DC man is right to point this out, but the answer is to beef them up, not let the security managers off the hook.
Valuable intellectual property can be safeguarded by not storing it on networks. We don’t do enough of this. Intruders can be stopped or quickly detected by state-of-the-art defences, though these are rarely deployed effectively even in large enterprises. Admittedly, some intelligence services have the capability to by-pass any defence, but such attacks are selectively mounted and should not be a reason for a wholesale abandonment of confidence in preventative measures.
The “dwell time” of a sophisticated APT intrusion is the serious new metric, though there is no mention of this in the international standard on this subject ISO 27004, which is perhaps where it all goes wrong. The modern CISO is bogged down in hundreds of pages of paper nonsense which stops them applying common sense and judgement. The target should be to reduce the dwell time from several years to less than a day.
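The metric itself is trivial to compute, which makes its absence from the standard all the more striking. A hedged sketch (the field names and sample incidents are illustrative assumptions):

```python
# A minimal sketch of the dwell-time metric: the elapsed time between
# initial compromise and detection. Field names and the sample incidents
# are illustrative assumptions.
from datetime import datetime

def dwell_time_days(compromised_at: datetime, detected_at: datetime) -> float:
    """Days an intruder went undetected."""
    return (detected_at - compromised_at).total_seconds() / 86400

incidents = [
    (datetime(2013, 3, 1), datetime(2015, 2, 10)),  # years undetected
    (datetime(2015, 5, 4), datetime(2015, 5, 5)),   # the one-day target
]
dwell = [dwell_time_days(start, found) for start, found in incidents]
```

Tracking and trending this one number would tell a board more than most of the paperwork a CISO currently produces.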
Zero days should be the target. But then that would be bordering on prevention…
The last two years have been an eye-opener for business, governments and citizens. They should now be aware of the vulnerability of information systems to penetration by spies, hackers and criminals. But do they care? Not that much it seems, as they clearly continue to trust service providers with their data.
Perhaps we might experience one or two wake-up calls this year. Certainly we can expect that everything to do with intellectual assets and cyber security will be bigger, faster and more volatile, as that is the underlying nature of the Information Age. At the same time we can expect that little or nothing will get fixed or be any more secure, as that costs money and reduces business opportunity.
So what in particular will be waiting in the wings for cyber security professionals in 2015? Here are my personal forecasts.
The Internet of Things will be the primary focus of this year’s research, investment and hype. But there will be no killer applications or compelling business cases. It will remain largely a solution looking for a problem, held back by a lack of imagination, standards and security. The idea of publishing sensor data to citizens is a daft aspiration from a security point of view. But researchers and product developers do not listen to security experts.
There will be no escape for security managers from the growing treacle of regulatory compliance. Amazingly, implementing an information security management system to ISO standards requires as many as fifty individual pieces of documentation. But the paper overhead will continue to increase with more competing standards and questionnaires surfacing each year. (I’ve had to develop a sophisticated 4D relational database to keep up.) Technology can help but current GRC solutions are immature, and some add to the swamp of data to be processed. This will be the year for CISOs to invest in more efficient enterprise solutions.
Prediction is the new, 4th dimension for security. The theme of this year’s Infosecurity Europe is “Smart data to detect, contain and respond”. But the theme is outdated: smart vendors such as Qualys have already added “predict” to the thirty-year old “prevent, detect, respond” paradigm. A decade of regulatory compliance treacle has relegated prediction to the back burner. It needs to bounce back. Let’s all aim to reverse this trend by pushing the focus firmly towards the future. It could be the single most important paradigm shift of the year 2015.
Small data is the answer: We’ve seen increasing hype and emphasis about “big data” over the last few years. The hype is slightly misplaced. The data does not have to be big, but it needs to be intelligently selected and creatively combined. As Deming correctly pointed out (though he is a bad poster boy for the Information Age), running a business on visible figures alone is one of the seven deadly diseases of management. Today we have numerous sources of data, within and without the enterprise. Fusing this data will improve visibility of risks and incidents. Searching out, capturing and combining small data is the real key to predictive analytics.
The commoditisation of cyber security: It’s sad to say but many companies have been foolishly paying outrageously high fees for security experts that are little more than standards readers or script-kiddies armed with open-source software tools. There is a place for the expert and there is a place for the army of trainees. Don’t mix them up. Smart companies will outsource the latter to low cost off-shore service providers.