Quocirca Insights


June 29, 2016  1:10 PM

IoT design, security and PKI

Bob Tarzey Profile: Bob Tarzey

In a 2015 blog post – Securing the Internet of Things – time for another look at PKI? – Quocirca outlined why Public Key Infrastructure (PKI) is likely to see a new lease of life from the increasing deployment of applications that fit the general heading Internet of Things (IoT). As the first blog pointed out, IoT applications will only be a success if underlying security is ensured.

The assertion made in the first blog – that the use of digital certificates and PKI to manage them is effective for securing the IoT – is supported by a 2015 Quocirca research report (which was sponsored by Neustar). PKI achieves two objectives: the authentication of things and the security and integrity of the data they send and receive.

On the surface this use case for PKI may not appear that different from ones that have been around for years, for example, securing the communication of a user's web browser (or smartphone app) with a banking service, or confirming that a software update is from a given supplier. The big difference with the IoT is that it involves relentless, high-volume machine-to-machine (M2M) communications, so PKI will only be effective if it is fast enough and cheap enough.

Application design

At first glance the volume problem may look insurmountable; however, it can be addressed through application design. If every city, office, factory, home, car etc. is to be equipped with tens, hundreds or thousands of devices, how can they all even have an IP address, let alone a digital certificate? True, the slow move to IPv6 does provide a virtually unlimited number of addresses compared to IPv4, but how do you manage them all? Good design means that the volume of things need not be a problem at all. Why? You probably have an example in your pocket!

Smartphones are actually agglomerations of sensors and other devices: cameras, GPS receivers, Bluetooth and wireless chips, motion sensors and so on. None of these individual components has an independent IP address; they communicate, when necessary, with a service provider via the phone's SIM card or Wi-Fi chip. It is at this point that a digital certificate can guarantee a data feed is valid and secure. The phone is acting as a hub that communicates internally and securely with the various components (spokes). This hub and spoke approach can be repeated at any scale and systems may be layered like onion skins, with one hub controlling others. The IoT volume problem is reduced by orders of magnitude and the use of PKI reserved for hub-to-hub and hub-to-central-controller communications.

Of course, a hub's communication with its spokes also needs to be secure. An obvious way is to use hard-wired networking, which is trickier to interfere with than wireless. However, wireless is a cheap and pragmatic way of implementing many IoT applications; here, low-cost approaches to security may be sufficient for hub-to-spoke communications, for example using device signatures based on hardware configuration. In fact, identity and security features are likely to be built more and more into hardware chips, and Microsoft Windows 10 has specific features to improve support for IoT security on devices where it is installed. Hub and spoke also helps get around the encryption processing overhead that PKI introduces; this should not be a problem for powerful hubs, but spokes may be small or old devices without much compute power.

Hub and spoke also deals with issues around the speed of communications and the volume of data that needs to be transmitted. A car may have a sensor on each of its four tyres, all constantly reporting to the hub every second: the air pressure is OK, the air pressure is OK… There is no need for the hub to do anything about this until there is an exception: the air pressure is NOT OK. Only then does it need to raise an alert and get guidance from a controller. At this point security is essential, or it would be possible for false guidance to be issued to the car, which is exactly the sort of risk that many flag for the IoT. So, hubs need to be smarter than spokes and that includes being smart about security.
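
To make the pattern concrete, here is a minimal sketch (in Python, not vendor code) of the exception-based reporting described above: spokes report constantly, the hub stays silent until a reading is out of range, and only then escalates to the controller. The function names and thresholds are hypothetical.

    PRESSURE_MIN_BAR = 2.0
    PRESSURE_MAX_BAR = 2.5

    def send_alert_to_controller(tyre_id, pressure):
        # In a real deployment this call would travel over a PKI-secured channel
        # (for example, mutual TLS between the hub and the central controller).
        print(f"ALERT: tyre {tyre_id} pressure {pressure:.2f} bar is out of range")

    def hub_process_reading(tyre_id, pressure):
        """Spokes call this every second; the hub only acts on exceptions."""
        if not (PRESSURE_MIN_BAR <= pressure <= PRESSURE_MAX_BAR):
            send_alert_to_controller(tyre_id, pressure)

    # Three normal readings and one exception.
    for tyre, bar in [("front-left", 2.2), ("front-right", 2.3),
                      ("rear-left", 2.1), ("rear-right", 1.4)]:
        hub_process_reading(tyre, bar)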

Why PKI?

The arguments in favour of PKI have been laid out many times. In summary, PKI (or asymmetric encryption) is a way of encrypting communications without both parties in the conversation having to know the key that unlocks the encryption, as is the case with the alternative, symmetric encryption, where private keys must be shared. In practice, PKI is often used to share the keys that will then be used for symmetric encryption (which could also be used in some cases for the secure communication of an IoT hub with its spokes).
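
The hybrid pattern described above – an asymmetric key pair used only to share a symmetric session key, which then protects the bulk of the traffic – can be sketched in a few lines of Python using the open source cryptography library. This is an illustration of the principle, not a production key-exchange protocol.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # The service provider holds the private key; the hub only needs the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Hub side: generate a symmetric session key and wrap it with the public key.
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Bulk data is then encrypted symmetrically, which is far cheaper per message.
    token = Fernet(session_key).encrypt(b'{"sensor": "tyre-1", "pressure": 2.2}')

    # Provider side: unwrap the session key and decrypt the payload.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(token) == b'{"sensor": "tyre-1", "pressure": 2.2}'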

The distribution of keys depends on the type of application. Hubs in cars and mobile phones need public keys to communicate with service providers that hold a private key. More complex situations may arise. For example, a wireless router may act as a hub for a home and need a public key to communicate with a given broadband service provider. However, it may also handle direct communications, over the broadband connection, with a smart TV manufacturer, which will require another set of separately managed keys.

Certificates themselves can be distributed by virtually anyone, shipped with the routers, smartphones, cars, TVs etc. However, they are only usable once validated, and that is only done by a trusted certification authority (CA), of which there are many. Wikipedia lists in its Certificate Authority entry the four leading CAs as Comodo, Symantec, GoDaddy and GlobalSign.

The providers of PKI

Once a certificate has been distributed and certified, it has a life of its own unless it is under the control of a PKI system. It will expire if a date is set, but there will be no means of renewing, superseding or revoking it without PKI for life-cycle management. Effective PKI systems need to be able to manage certificates from any source as, with so many CAs, there will rarely be a single provider of certificates for any given IoT application. There is also a need to deal with widely varied certificate life-cycles; for example, digital payments may be based on single-use certificates whilst a roadside sensor may require one that is valid for many years.
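
A small illustration of the life-cycle problem: the Python sketch below (using the open source cryptography library; the directory layout and 30-day renewal window are assumptions) walks a folder of PEM certificates, whoever issued them, and flags any that are approaching expiry and so need renewing or superseding.

    from datetime import datetime, timedelta
    from pathlib import Path
    from cryptography import x509

    RENEWAL_WINDOW = timedelta(days=30)  # illustrative policy, not a standard

    def certificates_needing_renewal(cert_dir):
        """Return (subject, issuer, expiry) for certificates nearing expiry."""
        due = []
        for pem_file in Path(cert_dir).glob("*.pem"):
            cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
            if cert.not_valid_after - datetime.utcnow() < RENEWAL_WINDOW:
                due.append((cert.subject.rfc4514_string(),
                            cert.issuer.rfc4514_string(),
                            cert.not_valid_after))
        return due

    for subject, issuer, expiry in certificates_needing_renewal("./device-certs"):
        print(f"{subject} (issued by {issuer}) expires {expiry:%Y-%m-%d}")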

PKI vendors such as Entrust Datacard and GlobalSign are actively repurposing and scaling out their PKI offerings for securing the IoT. Verizon, which ended up with the assets of Baltimore, a onetime star of the dotcom boom, now markets its PKI as Verizon Managed Certificate Services (MCS) and in January 2015 announced a new platform geared for securing IoT deployments.

Symantec has a Managed PKI platform too, as well as its PGP Key Management platform for symmetric keys; it sees a future where these need to be brought together to provide a broad trust capability. Another encryption key management vendor, Venafi, says it can do much of what is offered by PKI vendors to keep the use of certificates secure. Some PKI vendors are less proactive. Quocirca also spoke to RSA, which has a legacy foot in the PKI world as a onetime CA (since spun off as Verisign and now part of Symantec). RSA has put its PKI platform into maintenance mode.

If you think the IoT is going to be relevant to your business, then Quocirca's 2015 research suggests you will not be alone. PKI is going to be one of the most important ways to secure IoT applications. With good design and a PKI platform provider that is up to the task, you can proceed with confidence.

June 28, 2016  1:32 PM

Why EU data protection will still apply to post-Brexit UK

Bob Tarzey Profile: Bob Tarzey

The General Data Protection Regulation (GDPR) is expected to come into force for EU member states in early 2018. It could be some time later that year that the UK finally severs its links with the EU. So, for UK citizens, will the GDPR be a short-lived regulation that can largely be ignored? The answer is no and the reasons are fairly obvious: they are commercial, legal and moral.

Commercially, of course, UK and European businesses will continue to trade, whatever happens to the balance of that trade in the longer term. So, any UK-based organisation that trades in the EU will have to comply with the GDPR for at least the data stored about its EU-based customers; there is little point in having two regimes, so many businesses will comply with the GDPR anyway.

The big benefit of the GDPR at a high level, regardless of any shortfalls in the detail, is a common regime for multi-national businesses to deal with. A UK government that designed a data protection regime wholly different from the GDPR would just see the UK descend the list as a target destination for foreign direct investment (FDI). This will be especially true for cloud service providers selecting a location to set up in Europe. In data protection, as in many other regulatory areas, it makes sense for the UK to have a common status with its neighbours.

These commercial necessities lead on to the legal ones. The UK Data Protection Act is already closely aligned with the existing EU Data Protection Directive. It seems unlikely that any future UK government would reduce the protections provided to the privacy of UK citizens. Whatever its faults, the EU has never been an evil empire set on undermining the rights of the individual; it has always sought to improve their protection.

In fact, the most likely scenario is that all existing laws passed down by the EU over the last 40-odd years will be embedded wholesale into the corpus of UK law, as scrapping them all overnight would leave UK businesses and citizens without much of the protection they have come to take for granted. This includes extant data protection laws.

This leads on to the moral reasons. Whether a UK citizen voted 'remain' or 'leave' in the June 2016 referendum, and whatever their groans about EU law, few are going to turn round and say 'no, I don't want to be informed when my data is compromised' or 'I don't want the right to be forgotten'. The GDPR is ultimately about protecting EU citizens and, as with human rights in general, when it comes to the crunch, the majority will recognise we are better off with these aspects of EU legislation than without them.

And there is good news there too: become a victim of a privacy violation and ultimately you will still have the European Court of Human Rights (ECHR) to appeal to. Many do not realise that the UK is one of 47 countries signed up to the ECHR, separately from its EU membership. Asking UK citizens to ditch a final court of appeal, should their own nation let them down, may be a harder sell than ditching the EU itself.


June 20, 2016  8:03 AM

Come spy with me: drones and info-sec

Bob Tarzey Profile: Bob Tarzey

UASs (unmanned aircraft systems), or drones as they are known in common rather than legal parlance, can easily cross physical barriers. As drone use increases, both for commercial applications and for recreational purposes, new challenges are emerging with regard to privacy and information security.

Millions of drones are estimated to have already been sold worldwide; tens of millions are expected to be out there by 2020. As with any easily available new technology, criminals are early innovators, for example getting drugs across borders and mobile phones into prisons; here existing laws are being broken. However, drone operators who wish to remain within the law need to be aware of evolving rules.

The basis for existing UK law lies with the Civil Aviation Authority (CAA) and its Air Navigation Order (ANO), and with the European Aviation Safety Agency (EASA). These bodies have been around for years to regulate commercial aviation as well as dealing with traditional model aircraft. Today they are having to adapt to the rising use, and potential, of drones.

Section 166 of the ANO (V4.1, republished in 2015) deals with small UAS. The rules are most lenient for aircraft below 7kg in weight (heavy enough to cause injury, but not big enough to carry a significant bomb); any heavier and things start to get more restrictive. There is an operating limit of 400 feet above ground level (aviators stick with imperial for altitude) and a UAS must be piloted by a human, albeit remotely, within visual line of sight (VLOS), which in practice is about 500m. So, for all the blather, the concept of delivering goods by drones is not legally practicable, regardless of technological issues, until the rules change to allow beyond-VLOS (BVLOS) operation.
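
As a rough illustration only (this is a simplification of the rules summarised above, not legal advice), the basic limits can be expressed as a simple check in Python; the thresholds below are the figures quoted in this post.

    SMALL_UAS_MAX_KG = 7.0   # weight class with the most lenient rules
    ALTITUDE_LIMIT_FT = 400  # operating ceiling above ground level
    VLOS_LIMIT_M = 500       # approximate practical visual line of sight

    def within_basic_limits(weight_kg, altitude_ft, distance_m):
        """True if a flight stays inside the basic small-UAS limits quoted above."""
        return (weight_kg < SMALL_UAS_MAX_KG
                and altitude_ft <= ALTITUDE_LIMIT_FT
                and distance_m <= VLOS_LIMIT_M)

    print(within_basic_limits(1.2, 350, 300))    # True: small drone, low, in sight
    print(within_basic_limits(1.2, 350, 2000))   # False: beyond visual line of sight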

There are two areas where information security issues overlap with the use of drones. First, the drones may be used for industrial espionage or to breach privacy. Second, drone operation may be interfered with, either to change the instructions sent or to intercept the data stored and/or transmitted.

Many current applications are in well-defined airspaces, for example farmers flying over their own fields, which are of little interest to anyone else, and the inspection of infrastructure that, to all but those responsible, often already sits within no-fly zones designated by the CAA. Other no-fly zones include the regions around airports and military installations. There can also be temporary no-fly zones, for example during the visit of a dignitary to a given area. It is incumbent on the operator to know about and obey restrictions, but in practice it has been hard to find out the current status.

This is where a newly launched service from a UK start-up called Altitude Angel helps, a kind of air traffic control system for drones. The basic service is free and anyone can go to www.altitudeangel.com and check on restrictions. The aim is to help operators be safer, legal pilots. It also allows users to register for alerts about manned aviation activity in an area of interest to them and has plans to add in information about UAS activity. Altitude Angel provides real time updates to operators and property owners; the service is dynamic and able to react to short term and long term changes. More advanced services are chargeable.

It is all well and good for governments and the military, which can get no-fly zones set up. However, today there is nothing to stop someone flying a drone near commercially sensitive sites, nor are there any privacy restrictions per se around gardens etc. Ideas have been mooted about changing the default position, making all residential areas no-fly zones; that would protect privacy but make it harder to use drones for building surveys by builders or estate agents. There could be a future scenario where new restrictions can be applied for to protect certain locations or, in more controlled circumstances, temporarily lifted. Such dynamic changes would only work in practice if the information is readily available to drone operators via services such as Altitude Angel.

Of course, criminals will just ignore the rules and currently there is little control over this. Small UAS do not have to be registered and cannot always be uniquely identified. This is starting to change: the USA and Ireland are putting registration processes in place. Furthermore, drones are quite capable of capturing and storing telemetry data, for example GPS coordinates. This could even be required via a black-box style process which, alongside registration, would support non-repudiation; you could not deny, when challenged, that your drone had been at a given location.
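
One way such a black-box record could support non-repudiation – sketched here in Python with the open source cryptography library, purely as an illustration rather than any proposed standard – is for a registered drone to sign each telemetry entry with a key tied to its registration, so the log can later be verified as coming from that aircraft.

    import json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key pair provisioned when the drone is registered (an assumption for this sketch).
    drone_key = Ed25519PrivateKey.generate()
    registry_public_key = drone_key.public_key()

    def signed_telemetry(lat, lon, alt_m):
        """Produce a GPS record plus a signature made with the drone's key."""
        record = {"ts": time.time(), "lat": lat, "lon": lon, "alt_m": alt_m}
        payload = json.dumps(record, sort_keys=True).encode()
        return {"record": record, "sig": drone_key.sign(payload).hex()}

    entry = signed_telemetry(51.5074, -0.1278, 90.0)

    # A regulator holding the registered public key can verify where the drone was;
    # verify() raises an exception if the record or signature has been tampered with.
    registry_public_key.verify(bytes.fromhex(entry["sig"]),
                               json.dumps(entry["record"], sort_keys=True).encode())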

As commercial use increases, criminals could try to interfere with the systems that control drones, diverting aircraft and stealing goods or data. The data sent back to operators by surveillance drones (for which the ANO already has additional rules) could be intercepted. Ground-to-air communication is in many cases still via unencrypted short-distance radio. That is changing as more drones carry 4G mobile receivers/transmitters and many are controllable from smartphone applications. Altitude Angel is working on secure protocols for the 4G exchange of data with drones.

For those who have thought about the problem of the growing number of drones, the obvious concerns are about one dropping on your head or crashing into a commercial airliner. The first of these would be bad luck, perhaps no riskier than the branch of a tree falling on you; the latter should not be possible if existing controls are observed. However, with the number of drones set to grow twenty-fold in the next few years, better systems and rules are going to have to be put in place to protect operators, businesses and consumers.


June 13, 2016  10:10 PM

SD-WAN: Take a good look at the outliers

Bernt Ostergaard Profile: Bernt Ostergaard

Is management cutting your IT budget in 2017?

From recent conversations with UK service providers, CIOs and telco analysts, the general perception is that enterprise IT budgets will face a 30% cut in operating expenses in 2017, counterbalanced by a 20% increase in innovation spending. Such budget realignments require radical rethinking and can certainly not be achieved unless ops and innovation go hand in hand.

SDN and SD-WAN into the breach

One such win-win strategy is to reduce the reliance on expensive dedicated quality-of-service (QoS) network connections like MPLS and enhance the use of cheaper access technologies. If siloed wide-area network (WAN) access modes like MPLS, dedicated leased lines, DSL and LTE mobile connections can be combined, it will increase available bandwidth; but it must also ensure QoS while lowering overall cost.


Software-defined networking (SDN) is the core WAN evolutionary concept: it gradually removes proprietary hardware constraints and recasts hardware network capabilities as software options. SDN allows network administrators to manage WAN services by decoupling the control plane, which decides where traffic is sent, from the underlying data plane, which forwards data packets to the selected destination. With the software-defined WAN (SD-WAN) at the edges, users can already begin to reap the advantages of the emerging SDN capabilities, and with application program interfaces (APIs) they can tie functions together across different software stacks and hardware boxes.

This breaks down older telecom business models and threatens the legacy vendor-telco relationships in which business-critical networks rely on proprietary vendor technology operating across dedicated telco WAN connections. SD-WAN opens the market for smaller, innovative players who can provide the network intelligence in customer premises equipment (CPE) at the WAN edge.

The reduced dependency on dedicated networks is accompanied by ever-increasing use of 4G LTE mobile data capacity, where better coverage and dropping prices are in stark contrast to the increasing cost of landline services. This makes 4G LTE, and soon 5G, mobile access a very important carrier of corporate data traffic.

There are obvious advantages to combining fixed and mobile WAN access streams. WAN customers should not be tied down by WAN access siloes, nor forced to use a tangle of incompatible access options: leased lines with QoS, and best-effort fixed, mobile and internet voice and data connections. The IT department has worked for years to virtualise its data centres – now it is time to virtualise its WAN access modes.

What are the telcos doing?

Many telcos are now launching SD-WAN services and positioning them as an on-ramp to future SDN services. However, the telco SD-WAN approach has been to salvage as much of the MPLS investment as possible by redeploying MPLS end-point gear to provide multi-channel connections to their cloud services:

“BT is offering its enterprise customers SD-WAN as a managed service, using Cisco routers that are already in place as MPLS network termination boxes and Cisco’s IWAN technology. Customers benefit from better network performance, and insight into the performance of their applications, without having to spend more on bandwidth. The service is managed through BT’s My Account portal.”

However, this and similar telco offerings from Verizon and Singapore Telecom miss out on some of the intelligent end-point capabilities, notably the inclusion of 3G and 4G LTE connections from any mobile operator and the ability to maintain uninterrupted connectivity across all access channels. Maintaining uninterrupted sessions across multiple access modes optimises availability, cost and reliability. Remote SD-WAN devices have to manage WAN links cleverly, based on availability and performance profiles that also need to be measured locally at the other end of the connection.
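
A minimal sketch of the kind of local decision such an edge device has to make – not any vendor's algorithm, and with illustrative weights – is to score each available link on its measured latency, loss and cost, and steer traffic to the best path that is currently up:

    from dataclasses import dataclass

    @dataclass
    class WanLink:
        name: str
        up: bool
        latency_ms: float
        loss_pct: float
        cost_per_gb: float

    def score(link):
        # Lower is better; the weights are illustrative assumptions only.
        return link.latency_ms + 50 * link.loss_pct + 20 * link.cost_per_gb

    def pick_link(links):
        candidates = [link for link in links if link.up]
        return min(candidates, key=score) if candidates else None

    links = [WanLink("MPLS", True, 20, 0.0, 8.0),
             WanLink("Broadband DSL", True, 35, 0.2, 0.5),
             WanLink("4G LTE", True, 45, 0.5, 2.0)]
    print(pick_link(links).name)  # steers to the cheapest adequate path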

Look to the disruptive SD-WAN players

Introducing distributed software smarts into WAN technologies is disrupting the incumbent vendor landscape in a couple of ways. New software models are emerging from vendors such as BMC, which are taking advantage of the cloud to control commodity WAN equipment. Then there are early market entrants such as Peplink, with strong experience in access-channel bonding and wireless networking, challenging the traditional WAN hardware sector with low-cost but still powerful hardware tightly coupled with flexible software management.

So, to meet the coming budget cuts in IT operations while increasing their innovation activities, IT departments will need to step up to the SD-WAN challenge. That means finding the dedicated SD-WAN vendors that can demonstrate relevant vertical industry solutions: ones that bond access channels, integrate a wide range of wireless capabilities and maintain multi-channel connectivity seamlessly. There are interesting case studies out there already.


June 13, 2016  10:36 AM

Working with giants – 25 years of IT security

Bob Tarzey Profile: Bob Tarzey

The IT security industry as we know it could be said to be enjoying its 25th anniversary. Of course, there has been a need for IT security for longer than this, but the release of HTML and the birth of the web in 1991, which saw widespread internet use take off, was a game changer. Device-based security measures from existing anti-virus vendors like Norton (acquired by Symantec in 1990) and McAfee (acquired by Intel in 2011) had to be adapted from monitoring the occasional arrival of new content via portable media to treating the internet as a major new threat source. Checkpoint was founded in 1993 and released Firewall 1; network security barriers were being put in place.

In those 25 years, the IT security industry has created some giants; multi-billion dollar concerns such as Symantec, Trend Micro, Checkpoint and Intel Security (the former McAfee). These security giants keep adjusting their portfolios, mainly through the acquisition of, and sometimes through divestiture of, companies and assets.

There are many aspects to security but, broadly speaking, they either address network threats, monitoring stuff in motion, or protect against host threats, monitoring what is happening on a device or platform, which can be anything from a smartphone to a cloud storage service. That the giants want a foot in both camps was made clear by this week's announcement that Symantec plans to acquire the network security vendor Blue Coat.

As Quocirca wrote in Feb 2016, Blue Coat was already on a rapid expansion curve under the ownership of Bain Capital. Bringing Blue Coat into the fold will add a wide range of network security capabilities to Symantec's portfolio. Furthermore, Blue Coat was in the process of extending many of its network security capabilities from being appliance-based to cloud-enabled services, an area where Symantec has been flagging. Symantec's move mirrors Trend Micro's 2015 acquisition of Tipping Point from HP, which was also an extension into network security.

Can such security giants be a force for good in IT security or do they just close down choice? Over 25 years, the rate of change in IT security has been rapid. This often means organisations end up with a wide range of point security products from many vendors; eventually this can become costly and unmanageable. For some, working with the giants makes sense.

At the InfoSec Europe tradeshow last week, Quocirca met the CISO of a UK regulatory body who took this view. Accumulated point security products had become an expensive and hard-to-manage problem rather than an integrated security solution. It was felt that many core requirements, including anti-virus, port control, vulnerability management, web gateways, email security etc., could now be single-sourced from one of the broad-portfolio IT security giants.

A shortlist of three vendors was drawn up and, after a two-week test deployment of each vendor's solution as available at the time, Trend Micro was selected over McAfee and Kaspersky Labs. All three vendors had their merits, not least in reduced licence and maintenance costs. However, Trend Micro scored well on having a single integrated management console and "spectacular" security for virtualised environments. Trend Micro Deep Security operates at the hypervisor level, securing multiple virtual machines including desktop VDIs. The efficiency of the way Deep Security operates meant the regulator improved the efficiency of its use of virtual platforms by about 25%.

The savings in licence fees, ease of management and platform capacity more than covered the cost of the investment for the organisation, which is faced with government-imposed budget cuts of 15%. Furthermore, public cloud is seen as a likely route to lower-cost deployments in the future, and Trend Micro's Hybrid Cloud Security, which provides a common set of tools for both internal data centre and external cloud platforms, ensures the investment made now can be utilised flexibly in the future.

Small and innovative vendors will continue to emerge and drive the IT security industry forward as new threats emerge. There have been many such pulses of innovation over the years: email filtering, SSL VPNs, data loss prevention, next generation firewalls and so on. One of the most recent has been the rise of cloud access security brokers (CASB) to address the rise of shadow IT; this has been led by new vendors such as Skyhigh Networks, Netskope and Elastica. Oh sorry, Elastica is no more, as it was acquired by Blue Coat in November 2015 and is now set to become part of Symantec. The giants will prevail!


June 7, 2016  10:13 AM

NetApp/SolidFire – a new powerhouse, or straws grasping at each other?

Clive Longbottom Profile: Clive Longbottom

In December 2015, NetApp made its bid for SolidFire at $870m. Six months on, with the integration of the two companies and their products still ongoing, what does the future look like for the new company?

In June 2016, SolidFire held its last analyst day as an independent company in Boulder, Colorado. With only the 'i's to dot and the 't's to cross, the SolidFire executives were in a position to talk more about the future in many areas – and NetApp also sent across a couple of its people, including CEO George Kurian. Kurian himself has only been in position for a year, having joined NetApp from Cisco in 2011. The previous CEO, Tom Georgens, left under a cloud – NetApp revenues were in decline, and shareholders were beginning to make their feelings felt (from a high of close to $150, NetApp's shares traded at around $33 when Georgens stepped down). Although NetApp had set the cat amongst the pigeons as it pressurised the big incumbent, EMC, forcing EMC to lower its prices and become less hubristic in its approach to the market, maintaining innovation and market pressure was proving a bit of an issue.

Also, NetApp was not performing well in certain spaces – it was, along with EMC, slow to see how rapidly flash was going to take over the storage market. Although it did start to support flash, its first moves were for hybrid flash/spinning disk systems, and its first forays into all-flash arrays were – well – pretty poor. FlashRay was postponed, and when it finally made its way to market in late 2014, its prices were too high and its performance was not up to scratch. Only recently did it come to the market with a better all-flash offering based on its flagship fabric attached storage (FAS) products. However, it did, again, try to be disruptive here – the starting price for its all-flash FAS8000 systems came in at $25,000. This was meant to put the new kids on the block back in their place – but many of them had already started to make a name for themselves.

Companies such as Pure Storage, Nimble, Violin, Kaminario and SolidFire were making a lot of noise – not all of it based on reality, but they were gaining the focus of attention, somewhat like NetApp did in its earlier days of taking on EMC.

SolidFire was started up in 2010 by a young David Wright, fresh from having been an engineer at GameSpy, which was acquired by IGN. Here, he became chief engineer, overseeing IGN's integration into Fox. Upon leaving, he set up Jungledisk, which was acquired by Rackspace.

NetApp's biggest problem, though, was that its ONTAP software and its FAS approach were unsuited to one major sector – the burgeoning cloud provider market. It needed a system that could scale out easily in such environments – and it was pretty apparent that changing FAS to do this was not going to be easy.

Finally, NetApp decided that it needed a more mature, cloud-capable all-flash system, and decided to acquire SolidFire. This also fitted in quite well with NetApp's approach – SolidFire believes that its value lies in its software (you can buy SolidFire as a software-only system), which is pretty much how NetApp sees itself with its ONTAP software.

Does the new company therefore bring a new force to the market, or is it a case of a once-great storage company clutching at straws?

At the event, SolidFire executives were eager to show how the SolidFire products (SolidFire will remain a brand under the NetApp business) were still moving forward. It has released the ninth version of its Element OS (Fluorine) with support for VVOLs, a new GUI, support for up to 40 storage nodes via fibre channel, and an increase in the IOPS limit from 300,000 per fibre channel pair to 500,000 per node, or 1,000,000 per fibre channel pair.

NetApp was also keen to talk about its 15TB SSDs for its all-flash FAS – these are, in fact, 15.3TB, rounded down for simplicity's sake. To round down by 300GB – a storage volume that just a year or so ago was the high end of available SSDs – is pretty impressive.

Another major discussion point was SolidFire's move to a new licensing model – FlashForward. This pulls the hardware and software aspects of the licences apart, creating some interesting usage models. For example, depreciation can be carried out at different rates: hardware depreciating over, say, three years, while software depreciates over five. New ideas can be tried out – an example provided by one of the service providers at the event was entry into a new market.

The cost of the storage hardware itself is reasonably small. Therefore, the service provider can purchase the hardware and have it delivered directly to a datacentre in the new market. It can then use the new software licence model, which is based on paying for the amount of provisioned storage, to try out the new market. If everything works out, it just continues using the hardware and software as it is. If it doesn’t work out, it can stop using the hardware and roll back the software licence, saving money.
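
To illustrate the arithmetic (the figures below are hypothetical assumptions, not SolidFire pricing), separating the two licences means the fixed hardware cost is written off over three years while the software cost simply tracks how much storage is provisioned:

    HW_COST = 30_000            # up-front hardware spend (hypothetical)
    HW_DEPRECIATION_YEARS = 3
    SW_COST_PER_TB_YEAR = 400   # provisioned-capacity licence (hypothetical)

    def annual_cost(provisioned_tb, year):
        """Yearly cost under a split hardware/software model."""
        hw = HW_COST / HW_DEPRECIATION_YEARS if year <= HW_DEPRECIATION_YEARS else 0.0
        sw = provisioned_tb * SW_COST_PER_TB_YEAR
        return hw + sw

    # Trying a new market with only 10TB provisioned in year one keeps exposure low;
    # if the market fails, the software licence can be rolled back.
    print(annual_cost(10, year=1))    # 14000.0
    print(annual_cost(100, year=4))   # 40000.0 – hardware written off, pay only for what is provisioned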

Unfortunately, SolidFire's messaging behind FlashForward left much to be desired, and the volume of questions from the analysts present showed how much work is still required to get this right.

Although SolidFire showed that it is maintaining its own momentum in the market, this does not make life that much easier for the new NetApp. It now has Element OS and ONTAP as storage software systems that it needs to pull together, as well as managing a combined sales force that will still be tempted to sell what it knows best to customers, rather than whatever from the combined portfolio best suits the customer.

NetApp is still struggling in the market – its latest financials show that, even allowing for the costs of the SolidFire acquisition, its underlying figures were still not strong. Kurian has stated that he expects the main turnaround to happen in 2018 – a long time for Wall Street to wait.

Meanwhile, the new Dell Technologies will be fighting in the market with hyper-converged (complete systems of server, network and storage for running total IT workloads), converged (intelligent storage systems with server components for running storage workloads) and storage-only systems, and Pure Storage may cross the chasm to become a strong player. Other incumbents, such as IBM, HDS and Fujitsu, have not been standing still and will remain strong competitors to the new NetApp.

Some of the new kids on the block, such as Violin Memory, may well leave the playing field; Kaminario, Nimble and others may have to market themselves more aggressively to get to the critical mass – and the financial performance – required to remain viable in the market.

Overall, NetApp is still in a fragile position – SolidFire certainly adds strength to its portfolio, but Kurian has a hard job ahead of him in ensuring that this portfolio is played well in the field.


May 17, 2016  1:12 PM

Updates, updates – hares and tortoises in the software vulnerability race

Bob Tarzey Profile: Bob Tarzey

To penetrate a target organisation's IT systems, hackers often make use of vulnerabilities in application and/or infrastructure software. Quocirca research published in 2015 (sponsored by Trend Micro) shows that scanning for software vulnerabilities is a high priority for European organisations in the on-going battle against cybercrime.

Scanning is just one way of identifying vulnerabilities and is of particular importance for software developed in-house. For off-the-shelf software, news of newly discovered vulnerabilities often comes via the suppliers of commercial packages or, in the case of open source software, from some part of the community. This also applies to components embedded in in-house developed software, such as the high profile Heartbleed vulnerability that was identified in OpenSSL in 2014.

Software flaws come to the attention of vendors in three main ways. First, an organisation using the software may discover a problem and report it, perhaps having had the misfortune to be an early victim of an exploited vulnerability (when this turns out to be the very first use of an exploit it is termed a zero-day attack). Second, a flaw may be reported by a bug bounty hunter or, third, a vendor may find a flaw itself. Regardless of who discovers a vulnerability, users need to be made aware, and once the news is out there, a race is on.

Software vendors need to provide a patch as soon as possible and will aim to keep publicity to a minimum in the interim whilst the fix is prepared. Meanwhile, any sniff of a vulnerability and hackers will work at hare-speed to see if it can be exploited, either for their own ends or to sell on as an exploit kit on the dark web. All too often the tortoises in this race are end user organisations that are too slow to become aware of flaws and apply patches, thus extending the window of opportunity for hackers.

In principle this should not be the case. Most reputable software vendors have well-oiled routines for getting software updates to their customers, for example Microsoft’s Patch Tuesday. However, the reality is not that simple.

For a start, applying updates is disruptive. In an age where 24-hour, 7-day application availability is required, taking applications down for maintenance can be unacceptable to businesses. Also, as more organisations move to dynamic DevOps-style application development and deployment, software is fast changing and keeping tabs on all applications and components can be tricky. Software patching methods have had to adapt accordingly.
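
Keeping tabs on components is at least partly automatable. As a minimal sketch (the minimum-version policy shown is a hypothetical example, and the third-party packaging library is assumed to be installed), the Python below lists what is installed and flags anything that falls below a locally maintained list of patched versions:

    from importlib.metadata import distributions
    from packaging.version import Version

    # Hypothetical policy of minimum patched versions, e.g. built from vendor advisories.
    MIN_PATCHED = {"requests": "2.20.0", "cryptography": "41.0.0"}

    def flag_outdated():
        installed = {d.metadata["Name"].lower(): d.version for d in distributions()}
        for package, minimum in MIN_PATCHED.items():
            if package in installed and Version(installed[package]) < Version(minimum):
                print(f"{package} {installed[package]} is below patched version {minimum}")

    flag_outdated()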

Then there is the problem of legacy software. Older applications are increasingly being targeted by hackers because their patching regimes are lax. This applies to software from vendors that have disappeared through long-forgotten acquisitions or have gone out of business; all too often their software still sits at the core of business processes. It also applies to old versions of software from vendors that have made it clear that said software is no longer supported and will not be updated. For example, many of Microsoft's older server and desktop operating systems remain in use despite repeated prompts to move to more recent versions, the upgrade proving to be too expensive or complicated.

There are many ways to mitigate all these problems. However, wherever possible the primary one should be to keep software up to date; as one chief information security officer (CISO) put it to Quocirca recently, 'vulnerability management is the cornerstone of our IT security'. That responsibility can be outsourced, either through the use of managed security service providers (MSSPs) or through the use of cloud services that are responsible for keeping their own software up to date.

There will be advice from the CISOs of some leading organisations on the frontline of the fight against cybercrime at Infosec Europe this year. These include Network Rail, The National Trust and Live Nation's Ticketmaster; all are highly dependent on their online infrastructure and see keeping their software up to date as critical. Quocirca will be chairing the panel at 16:30 on June 7th; more detail can be found at the following link: Updates, Updates, Updates! Getting the Basics Right for Resilient Security.


May 12, 2016  5:00 PM

IT Untethered – How Wireless is Changing the World

Bob Tarzey Profile: Bob Tarzey

Not much more than 20 years ago, nearly all local area networks (LANs) involved cables. There had been a few pioneering efforts to eliminate the wires but, for most, it was still a wired world. With the advent of client-server computing and more and more employees requiring access to IT, this was becoming a problem. Furthermore, smaller computers meant more mobility; devices were starting to move with their users.

Cables could be hard to lay down in older buildings, and modern buildings became messy to reconfigure as needs changed and users wanted more flexibility. Structured cabling systems and patch panels helped, but going wireless could make things even easier. The race was on to get rid of the wires altogether.

Move forward to today and what we now call Wi-Fi is everywhere. Often used in conjunction with wide-area wireless provided by mobile operators over 3G and 4G networks and low power/wide area (LPWA) technologies, wireless has moved beyond the initial use case of flexible LANs to provide the cornerstone of two huge movements in IT: ubiquitous mobile computing, often via pocket-sized devices, and the Internet of Things (IoT). Neither would be possible without wireless and hence wireless is changing the world.

Development of the 802.11x (Wi-Fi) standard has delivered potential throughput capacity thousands of times faster than the earliest wireless LANs. Forthcoming 5G cellular networks will offer a range of improvements over their 4G and 3G predecessors including a huge capacity upgrade. For many organisations the volume of wireless network traffic now exceeds wired.

User sessions can be seamlessly handed off from one Wi-Fi access point to another and from Wi-Fi to cellular. It is estimated that there are 65M Wi-Fi hot spots in the world today and there will be 400M by 2020. High speed cellular data access is ubiquitous, being available in nearly every major city. The mobile user has never been better served and the stage is set for the IoT-explosion that is predicted to lead to many more connected things than there are people on Earth.

Yes, wireless is changing the world, but it is not all good. There are concerns about data privacy, rogue devices joining networks, the expanded attack surface created by the IoT and so on. These security issues are addressable with technologies such as network access control (NAC) and enterprise mobility management.

On May 18th 2016 Quocirca will be giving a presentation on "How Wireless is Changing the World" at St. Paul's Cathedral with CSA Waverley and Aruba. To learn more about how your organisation can benefit from mobility and the IoT whilst keeping wireless risk to a minimum, you can attend this free event by registering at the following link: http://www.csawaverley.com/aruba-event-st-pauls-cathedral-2/


May 6, 2016  3:37 PM

The Final Countdown: The last EMC World before Dell Technologies

Clive Longbottom Profile: Clive Longbottom

The recent EMC World event held in Las Vegas could have been a flop. With the Dell takeover of the EMC Federation (EMC Corp and all its divisions of EMC II, VMware, RSA, Pivotal and Virtustream) in full swing, it would have been easy for the EMC management to claim that they were in a 'quiet period' and so refuse to disclose much.

It wasn’t quite like that.

Firstly, Joe Tucci (pictured left with Clive Longbottom and Tony Lock) was on stage saying his farewells (very pointedly, not his goodbyes), and talking about the synergies that were possible between the various parts of EMC and Dell. This was followed by Michael Dell exploding around the stage, looking far more animated and pumped up than I have seen him for many a year. It was obvious that this ‘quiet period’ is anything but – Dell and EMC are working hard to make sure that the new Dell Technologies (the chosen name for the new combined company) will hit the road not just running, but at light speed.

Such energy is laudable – but without evolution in what EMC is doing, ultimately futile. In a world where technology is changing so rapidly, EMC and its divisions have faced the possibility of being the next dinosaur, meeting the extinction event caused by the impact of the new all-flash array storage providers and the web-based, software-as-a-service information and security management players.

In another article, I took a look at some of the possible outcomes from the merger.

I have now had to rethink. Not long after the article went live, VCE was spun into the EMC II portfolio, with Cisco becoming less of an investor and more of a partner. At this year's EMC World, the main presentations were awash with converged and hyperconverged systems – it is obvious that VCE is unlikely to be sold off, but will take on the mantle of converged/hyperconverged within the new Dell Technologies Dell EMC division (the naming convention for the new 'family' – not, very pointedly, a 'federation' – is a little longwinded). Indeed, pretty much everywhere, this drive toward hardware convergence was evident – even within the new DSSD offering (a massive, super-fast all-flash box based around direct-attached, server-side storage technologies, but offered as a rackable system, named the D5). Whereas DSSD was previously primarily a server-side storage system, the D5 will be able to be installed in converged and hyperconverged modes.

This enhanced scale-out as well as scale-up capability is a strong differentiator for EMC, and so it will be for Dell Technologies. Some others in the hyperconverged market have little scale-out capability – if you run out of storage, buy another complete system. All that extra compute and network power is wasted, but at least you have more storage. With EMC VCE systems, each resource can be expanded independently, while still maintaining a hyperconverged architecture.

Noticeable by his absence was Pat Gelsinger, the CEO of the VMware division. VMware has been a focus of the acquisition deal for many reasons – one of which is that it is a publicly quoted company in which EMC owns 80% of the shares. The idea was for Michael Dell to raise a new share class to part-fund the acquisition of EMC: the US SEC frowned on this and wanted full tax to be paid on the shares if this went ahead. This would have left a rather large hole in the financing of the deal. So, how much of the existing public shares should Michael Dell sell off to raise money? Selling off 29% still leaves him with a majority holding – but not much incoming funds. 50% would still give a 30% holding, which is still a board seat and a safety net against a hostile takeover of VMware (Carl Icahn is still a threat, and he is probably not happy since Michael Dell beat him and took Dell private). Selling 59% maximises the incoming funds while still maintaining a seat on the board, but does make it easier for a hostile takeover to happen. How Michael Dell plays this, and the role of Gelsinger going forward, will be interesting.

There were lots of other announcements at the event – it was pretty obvious that while there are a lot of discussions and machinations going on at the top of Dell and the various parts of EMC, the message to the EMC staff is ‘full steam ahead’ with continued new products across the board.

Again, all well and good – but can the new Dell Technologies make it? I was all for Michael Dell taking Dell private; I have been cooler on the Dell/EMC deal. Why? The existing full 'solution' players have not fared well. HP has had to split in two; IBM has divested itself of large parts of its business and is reinventing itself around a cloud model. Regionally, French-based Bull has been acquired by Atos; Japan-headquartered NEC is seeing revenues continue to fall slowly; and the also Japanese-headquartered Fujitsu has seen its revenues plateau.

Into this landscape of underperformance will emerge the new Dell Technologies – a global one-stop-shop for IT platforms. It is pretty dependent on managing the long tail of on-premise data centre installations, on making the most of the continued moves to colocation, and on becoming a platform of choice for the various 'as a service' players in the market.

The first will be an ever-shrinking market – Dell Technologies cannot count on this going forward. The second is a fair target: IBM will be less of a player here, leaving it pretty much a head-to-head between HP and Dell Technologies for a platform sell. In the medium term, this is where the majority of Dell Technologies' money will come from.

On the third item, there remains a lot of work to be done on cloud product and messaging. The old Dell had tried and failed at being a public cloud operator itself, and had decided instead to become a cloud aggregator, something Quocirca supported. EMC bought Virtustream, and created an infrastructure as a service (IaaS) public cloud using Virtustream's xStream cloud management software, as well as EMC's Pivotal Cloud Foundry. From the presentations at EMC World, it is pretty evident that the Virtustream cloud is still a strategic platform. However, EMC currently has too many different cloud services and messages – and this will only get worse when the Dell Technologies deal goes through. Creating clearer messaging around a less complex portfolio, and playing it effectively through the Dell Technologies "Enterprise Hybrid" and "Native Hybrid" cloud propositions, will be key in battling AWS, Microsoft, Google, IBM and all the other cloud platforms out there.

Overall, then, it was apparent that the feeling within EMC is that the Dell deal is exciting, and that everyone is up for it. The feeling from the top is that the effort will be put in to make it all work, and that power bases will be dealt with as swiftly as possible to cut down on any internal wars breaking out. End users at the event also seemed positive – the early worries about what it meant to both companies seem to be disappearing.

However, the building of a massive, platform-centric company when others are moving away from the model could be the biggest gamble of Michael Dell’s life. I really hope that it all works out well.


April 25, 2016  11:17 AM

Before & during targeted attacks – the 2016 Eskenzi IT Security Analyst & CISO Forum

Bob Tarzey Profile: Bob Tarzey

A recent Quocirca report, The trouble at your door, sponsored by Trend Micro, looked at the scale of targeted attacks faced by UK and European businesses and the before, during and after measures in place to mitigate such attacks. Trend Micro has plenty of its own products on offer, not least its recently upgraded Hybrid Cloud Security offering. However, last week, Quocirca got a chance to review ideas from some smaller vendors at the annual Eskenzi PR IT Security Analyst and CISO Forum. The 10 vendors that sponsored the forum were focussed mainly on before and during measures.

Targeted attacks often rely on IT infrastructure vulnerabilities. The best way to protect against these is to find and fix them before the attackers do. White Hat Security discussed the latest developments in its static (before deployment) and dynamic (post deployment) software scanning services and how its focus has extended from web-enabled applications to a significant emerging attack vector – mobile apps. This is backed by White Hat’s global threat research capability, including a substantial security operations centre (SOC) in Belfast, UK.

Cigital is an IT services company also focussed on software code scanning, mainly using IBM's AppScan. It helps its customers improve the way they develop and deploy software in the first place; as Cigital puts it, it can "do it for you, do it with you or teach you to do it yourself". The company is based in Virginia but has an established UK presence and customer base.

Tripwire provides a broader vulnerability scanning capability looking for known problems across an organisation’s IT infrastructure. In 2015 Tripwire was acquired by Belden, the US-based manufacturer of networking, connectivity and cabling products. Belden sees much opportunity in the Internet of Things (IoT) and Tripwire extends vulnerability scanning to the multitude of devices involved.

The continual need to interact with third parties online introduces new risk for most organisations; how can the security of the IT systems and practices of third parties be better evaluated? RiskRecon offers a service for assessing the online presence of third parties, for example looking at how up to date web site software and DNS infrastructure are; poor online practice may point to deeper internal problems. RiskRecon is considering extending its US-only operations to Europe.

UK-based MIRACL provides a commercial distribution of the new open source Milagro encryption project, of which it is one of the major backers. Milagro is an alternative to public key encryption that relies on identity-based keys, broken down using a distributed trust authority, which only the identity owner can reassemble. MIRACL believes IoT will be a key use case, as confidence in the identity of devices is one of the barriers that needs to be overcome.

Illumio provides a set of APIs for embedding security into workloads, thus ensuring security levels are maintained wherever the workload is deployed, for example when moved from in-house to public cloud infrastructure. This moves security away from the fractured IT perimeter into the application itself; for example, enabling deployments on the same virtualised infrastructure to be ring-fenced from each other – in effect creating virtual internal firewalls.
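
Conceptually (this sketch is not Illumio's API, just an illustration of label-based ring-fencing), the policy travels with the workload: traffic between two workloads is allowed only if their labels appear in an explicit allow-list, regardless of where either happens to be running.

    # Hypothetical label-based policy: only these workload-to-workload flows are allowed.
    ALLOWED_FLOWS = {("web", "app"), ("app", "db")}

    def flow_permitted(src_label, dst_label):
        """Check a flow against the policy, wherever the workloads are deployed."""
        return (src_label, dst_label) in ALLOWED_FLOWS

    print(flow_permitted("web", "app"))  # True: explicitly allowed
    print(flow_permitted("web", "db"))   # False: ring-fenced, even on shared infrastructure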

FireEye was perhaps the best known brand at the forum and one of four vendors more focussed on during measures. Its success in recent years has been in mitigating threats at the network level using sandboxes that test files before they are opened in the user environment. FireEye's success has enabled it to expand to offer threat protection on a broad front including user end-points, email and file stores.

Lastline also mitigates network threats by providing a series of probes that detect bad files and embedded links. Its main development centre is in Cambridge, UK. A key route to market for Lastline is a series of OEM agreements with other security vendors including WatchGuard, Hexis, SonicWall and Barracuda.

UK-based Mimecast was itself a sponsor at the forum. Its on-demand email management services have always had a strong security focus. It has been expanding fastest in the USA and this included a 2015 IPO on NASDAQ. Mimecast has also been focussing on new capabilities to detect highly targeted spear phishing and supporting the growing use amongst its customers of Microsoft Office 365 and Google Apps.

Last but not least, Corero is a specialist in DDoS mitigation. In a mirror image of Mimecast, it is US-based but listed on the UK's Alternative Investment Market (AIM). Its appliances are mainly focussed on protecting large enterprises and service providers. Its latest technology initiative has been to move DDoS protection inline, enabling immediate detection and blocking of attacks, as opposed to sampling traffic out of line and diverting network traffic to block attacks only after they have started.
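
The difference is easiest to see in a toy example. A minimal sketch (illustrative only; real appliances use far more sophisticated detection than a fixed packets-per-second threshold) of inline protection counts every packet as it passes and drops a source the moment it exceeds the limit, rather than waiting for out-of-line sampling to catch up:

    import time
    from collections import defaultdict

    THRESHOLD_PPS = 1000   # illustrative per-source packets-per-second limit
    window_start = time.time()
    counts = defaultdict(int)
    blocked = set()

    def inline_check(src_ip):
        """Return True to forward the packet, False to drop it immediately."""
        global window_start
        now = time.time()
        if now - window_start >= 1.0:   # roll the one-second counting window
            counts.clear()
            window_start = now
        if src_ip in blocked:
            return False
        counts[src_ip] += 1
        if counts[src_ip] > THRESHOLD_PPS:
            blocked.add(src_ip)         # blocked as soon as the threshold is crossed
            return False
        return True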

Quocirca's research underlines how attackers are getting more sophisticated. The Eskenzi forum provided a snapshot of how the IT security industry is innovating too. There were no vendors present specifically focussed on responding to successful attacks, although the need for such response plans to be in place for when an attack succeeds is paramount. That said, decreasing the likelihood of being breached with better before and during measures should reduce the need for clearing up after the event.

