Quocirca Insights


July 25, 2016  5:21 AM

There’s more to a container than, well, a container.

Clive Longbottom Profile: Clive Longbottom

It seems that virtual machines (VMs) are so last year. A new ship has sailed in and docked – it’s a container ship and it is full of Dockers (as well as Rkts and Mesos). The idea of containerisation is taking hold – the promise of a lighter means of dealing with applications enabling much higher workload densities and lower storage costs seems to attract developers, sysadmins and line of business people like moths to a flame.

However, as with all relatively young technologies, problems are appearing. Containers are small because they share as much as they can, in particular when it comes to the underlying operating system (OS) on which the containers are provisioned. Any action through to the OS is worked against that single physical copy of the OS. Only the application code is held within the container, keeping it small. Therefore, we can call the likes of Docker, Rkt and Mesos ‘application containers’.

That seems to make sense at first look – why have multiple copies of the same code (i.e. the OS) doing exactly the same function (as is the case with VMs)?

The problem is where privileged access is required by an application to one of these shared functions. As the shared function is shared across all application containers, if that privileged access is used to compromise the underlying function, it compromises all application containers using it.

This, for what I hope are obvious reasons, is not a good thing.

As VMs are completely self-contained systems with the OS and everything held within them, they are less prone to privilege-based security issues. But, back to the initial problem, VMs are large, unwieldy and require a lot of management to ensure that the multiple copies of OSs held across them are continually patched and upgraded.

If only there was some way to bring the best parts of VMs and application containers together, so as to provide secure, lightweight systems that are easy to manage and also highly portable across IT platforms?

Well, luckily, there is.

This is what is called a ‘system container’.


It still has an underlying OS, but it also applies a ‘proxy namespace’ between that OS and the application container itself. Through this means, any calls to the underlying shared services are captured and can be secured in transit. So, an application container that makes a call to a specific port, LUN or network address can have that captured and managed within the proxy namespace. Any action that could be detrimental can be hived off and sent to a virtual copy of the library, port, LUN or whatever, ensuring that it is more effectively sandboxed away from other application containers’ needs.
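As a rough sketch of the interception idea only (not any particular vendor’s implementation; all names below are illustrative), a proxy namespace can be modelled as a per-container lookup that either passes a call through to the shared resource or redirects it to a sandboxed virtual copy:

```go
package main

import "fmt"

// Resource identifies something normally shared with the host OS,
// e.g. a port, LUN or library path.
type Resource string

// ProxyNamespace sits between an application container and the host,
// deciding whether a call reaches the shared resource or a private copy.
type ProxyNamespace struct {
	container string
	sandboxed map[Resource]Resource // shared resource -> virtual copy
}

// Resolve returns the resource the container is actually allowed to touch.
func (p *ProxyNamespace) Resolve(r Resource) Resource {
	if v, ok := p.sandboxed[r]; ok {
		return v // potentially detrimental access hived off to a virtual copy
	}
	return r // harmless calls pass straight through to the shared resource
}

func main() {
	p := &ProxyNamespace{
		container: "app-1",
		sandboxed: map[Resource]Resource{
			"/dev/lun0": "/containers/app-1/virtual-lun0",
		},
	}
	fmt.Println(p.Resolve("/dev/lun0")) // redirected to the sandboxed copy
	fmt.Println(p.Resolve("tcp:8080"))  // passed through unchanged
}
```

The point is that the application container never addresses the host resource directly; the proxy decides what it really touches.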

This also gets around another issue with application containers. On the whole, application containers require that all containers are capable of running on not only the same version of the underlying OS, but also the same patch level.

A system container can ensure that calls that are dependent on a specific version of a library, or even a whole OS, are routed as required. Further, application containers really only perform well with a modern microservice-based architecture – legacy client-server (amazing how we are regarding so many live systems as already being legacy, eh?) applications struggle to gain the advantages of a container-based architecture, but tend to work well within a VM. System containers get around this issue, as they can hold and manage legacy applications in the same way as an application container – all calls made by the legacy app can be dealt with through the proxy namespace. Therefore, system containers enable a mixed set of workloads to be run across an IT platform.

System containers also enable greater workload mobility. As all dependencies are managed within the container and the proxy namespace, moving a workload from one platform to another, whether it be within an organisation or across a hybrid cloud environment, is far easier. This also then feeds into DevOps – the movement of code through from development into test and then into the production environment can be streamlined.

For organisations that are looking at using application containers within their IT strategy, Quocirca strongly recommends that system containerisation is on the shopping list.

Quocirca has written a short report on the subject, commissioned by Virtuozzo, which can be downloaded for free here.

July 14, 2016  9:55 AM

IoT in the enterprise – a pragmatic approach

Rob Bamforth Profile: Rob Bamforth
Facilities management, iot, MITIE, samsung, Wearable devices

There is a degree of hype surrounding the internet of things (IoT), with many wild ideas reminiscent of those floated for internet businesses during the dot com boom. Despite this, the combination of exuberant innovation and pragmatism is already paying off with some practical and tangible business benefits. However, it is important for businesses not to focus solely on the shiny nature of ‘things’ but to take a broader view with their digital connected strategy.

IoT digital transformation

A recent event, hosted by Mitie, in conjunction with Samsung and TBS Mobility, brought together many important aspects that underpin how IoT technology and wearable devices could have a significant impact on businesses.

WIRED magazine gave a presentation on current innovation in this area, which explored the potential for dramatic impact, especially for consumers. Only a couple of years ago many of the ideas would have seemed fanciful and far-fetched. All were based on current concepts ranging from working prototypes to customer-ready products.

From ‘consumerisation’ to industrialisation

Consumerisation is important as it lays the groundwork for increased acceptance of technology in both the home and workplace. Recent Quocirca research of UK company attitudes to IoT and wearable devices (“The many guises of IoT” report) has noted a growing appetite for the use of these technologies, especially if both line of business and IT are working together.

This is something that Mitie has addressed with the application of IoT and wearable devices to aid facilities management tasks. Many of the activities involved in managing, securing and cleaning workplaces and facilities may seem straightforward. They may not appear to be obvious contenders for the use of novel technology, but there are opportunities to streamline processes.

The simple repetitive task of cleaning toilet facilities is one area being addressed and has already provided interesting results. Most people will be familiar with the signed sheets outside on the wall indicating when a facility was last cleaned, usually once per hour. But this routine approach to maintenance ignores actual usage of facilities and consequent requirements.

By use of a simple sensor monitoring ‘traffic’ levels, Mitie has gained an understanding of usage patterns. In the past this sort of checking might have been conducted periodically and analysed after the event to establish new working routines. Now the data can be acted upon immediately and dynamically; if the facilities have not been used, then no work is required, but if usage suddenly rises (perhaps a large meeting or event), a message is sent directly to an operative to act.
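A minimal sketch of that event-driven pattern is shown below. The visit threshold, simulated sensor feed and messaging stub are all hypothetical, standing in for whatever the real sensors and back-end would use:

```go
package main

import "fmt"

// usageThreshold is a hypothetical trigger level: once this many visits
// have been counted since the last clean, an operative is notified.
const usageThreshold = 25

// notifyOperative stands in for the messaging service that would push
// the task to a worker's smartwatch.
func notifyOperative(facility string, visits int) {
	fmt.Printf("dispatch clean for %s: %d visits since last clean\n", facility, visits)
}

func main() {
	visitsSinceClean := 0
	// In a live system these events would come from the door/traffic sensor;
	// here we simply simulate a busy period of 60 visits.
	for i := 0; i < 60; i++ {
		visitsSinceClean++
		if visitsSinceClean >= usageThreshold {
			notifyOperative("toilet-3F", visitsSinceClean)
			visitsSinceClean = 0 // reset once the task has been dispatched
		}
	}
}
```

If the threshold is never reached, no work is dispatched at all – which is exactly the saving over a fixed hourly routine.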

The combination of IoT with the use of devices worn by the operatives – in this case Samsung smartwatches – means that the messaging technology does not get in the way or encumber the worker. Added to this the organisation does not have to provide what would once have been much more expensive IT to its workforce. The process has been streamlined, but service levels are also improved.

Incremental investment

The idea does not need to stop there. With some external data sources and analytics, a more predictive approach could be taken. Additional sensors could be added to check the use of soap from soap dispensers and toilet paper so that intelligent replenishment schemes could be put in place. It might seem unimportant to those not directly involved, but like other areas where processes can be semi-automated, real efficiency savings can be made. In low margin services where much of the cost is people, such as facilities management, efficiency makes a big difference.

The investment cost has not been significant either, and this points to a great way to apply innovative technology to improve a business process:

  • First, identify a business problem that could benefit from incremental improvement by gathering more data or applying some level of automation.
  • Next, look to technologies that are becoming commoditised by consumerisation so that employee acceptance can be readily achieved and propositions can be tested quickly and deployed relatively cheaply.
  • Finally, measure and analyse the return to plot next steps. It might require more investment or enhancement, or even for the current concept approach to just be made more robust. However, if the returns are already demonstrable and the decisions about the next level of investment are based on valid experimentation, then the next small leap is not in the dark.

With current levels of innovation it is clear that there will be many new IoT technologies and concepts over the coming years, but businesses do not need to wait. There are plenty of smart devices and sensors available to use today, and costs have already been driven down to levels that make enterprise applications worthwhile. It doesn’t require special IoT magic or even a CIoTO (Chief IoT Officer), just a bit of business-led thought combined with smart IT application of what’s already available.


June 29, 2016  1:10 PM

IoT design, security and PKI

Bob Tarzey Profile: Bob Tarzey

In a 2015 blog post – Securing the Internet of Things – time for another look at PKI? – Quocirca outlined why Public Key Infrastructure (PKI) is likely to see a new lease of life from the increasing deployment of applications that fit the general heading Internet of Things (IoT). As the first blog pointed out, IoT applications will only be a success if underlying security is ensured.

The assertion made in the first blog – that the use of digital certificates and PKI to manage them is effective for securing the IoT – is supported by a 2015 Quocirca research report (which was sponsored by Neustar). PKI achieves two objectives: the authentication of things, and the security and integrity of the data they send and receive.

On the surface this use case for PKI may not appear that different from ones that have been around for years, for example, securing the communication of a user’s web browser (or smartphone app) with a banking service or confirming a software update is from a given supplier. The big difference with the IoT is that it involves relentless high volume machine-to-machine (M2M) communications, so PKI will only be effective if it is fast enough and cheap enough.

Application design

At first glance the volume problem may look insurmountable; however it can be addressed through application design. If every city, office, factory, home, car etc. is to be equipped with tens, hundreds or thousands of devices, how can they all even have an IP address let alone a digital certificate? True, the slow move to IPv6 does provide a virtually unlimited number of addresses compared to IPv4, but how do you manage them all? Good design means that volume of things need not be a problem at all. Why? You probably have an example in your pocket!

Smartphones are actually agglomerations of sensors and other devices: cameras, GPS receivers, Bluetooth and wireless chips, motion sensors and so on. None of these individual components has an independent IP address; they communicate, when necessary, via the phone’s SIM card or WiFi chip with a service provider. It is at this point that a digital certificate can guarantee a data feed is valid and secure. The phone is acting as a hub that communicates internally and securely with the various components (spokes). This hub and spoke approach can be repeated at any scale and systems may be layered like onion skins, with one hub controlling others. The IoT volume problem is reduced by orders of magnitude and the use of PKI reserved for hub-to-hub and hub-to-central controller communications.
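A sketch of that hub-and-spoke split is below, under the assumption that spokes report only over a local bus while the hub alone holds a certificate and opens a single TLS session to the provider. The endpoint name and readings are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

// reading is a value from one local spoke (camera, GPS, motion sensor...);
// spokes have no IP address or certificate of their own.
type reading struct {
	spoke string
	value float64
}

func main() {
	// Spoke-to-hub traffic stays on the local bus; only the hub speaks PKI.
	local := []reading{{"gps-fix-quality", 0.93}, {"motion", 1.0}}

	// The hub holds the digital certificate trust store and opens one
	// authenticated, encrypted channel to the service provider
	// (hostname is purely illustrative).
	conn, err := tls.Dial("tcp", "iot.example.com:443", &tls.Config{})
	if err != nil {
		log.Fatalf("hub-to-provider connection failed: %v", err)
	}
	defer conn.Close()

	// Aggregate the spoke readings into the one secured feed.
	for _, r := range local {
		fmt.Fprintf(conn, "%s=%.2f\n", r.spoke, r.value)
	}
}
```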

Of course, a hub’s communication with its spokes also needs to be secure. An obvious way is to use hard wired networking which is trickier to interfere with than wireless. However, wireless is a cheap and pragmatic way for implementing many IoT applications; here low cost approaches to security may be sufficient for hub to spoke communications, for example using device signatures based on hardware configuration. In fact, identity and security features are likely to be built more and more into hardware chips and Microsoft Windows 10 has specific features to improve support of IoT security on devices where it is installed. Hub and spoke also helps get around the encryption processing overhead that PKI introduces; this should not be a problem for powerful hubs, but spokes may be small or old devices without much compute power.

Hub and spoke also deals with issues around speed of communications and data volumes that need to be transmitted. A car may have a sensor on each of its 4 tyres, all constantly reporting to the hub every second; the air pressure is OK, the air pressure is OK…. There is no need for the hub to do anything about this until there is an exception: the air pressure is NOT OK. Only then does it need to raise an alert and get guidance from a controller. At this point security is essential, or it would be possible for false guidance to be issued to the car, which is exactly the sort of risk that many flag for the IoT. So, hubs need to be smarter than spokes, and that includes being smart about security.
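A minimal illustration of that exception-based reporting follows; the pressure threshold and the alert callback are made up, standing in for whatever the real sensors and controller interface would be:

```go
package main

import "fmt"

const minPressureBar = 1.8 // hypothetical safe lower limit

// checkTyres is the hub-side logic: it stays silent while every spoke
// reports an acceptable value and only contacts the controller on an exception.
func checkTyres(pressures map[string]float64, alert func(string)) {
	for tyre, bar := range pressures {
		if bar < minPressureBar {
			alert(fmt.Sprintf("%s pressure low: %.2f bar", tyre, bar))
		}
	}
}

func main() {
	latest := map[string]float64{
		"front-left": 2.2, "front-right": 2.2,
		"rear-left": 1.5, "rear-right": 2.1, // one exception
	}
	checkTyres(latest, func(msg string) {
		// Only this exceptional, security-critical exchange needs the
		// PKI-protected hub-to-controller channel.
		fmt.Println("alert controller:", msg)
	})
}
```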

Why PKI?

The arguments in favour of PKI have been laid out many times. In summary, PKI (or asymmetric encryption) is a way of encrypting communications without both parties in the conversation having to know the key to unlock the encryption, as is the case with the alternative, symmetric encryption, where private keys must be shared. Actually, PKI is often used to share the keys that will be used for symmetric encryption (which could also be used in some cases for secure communication of an IoT hub with its spokes).
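The sketch below shows that general pattern using Go’s standard crypto library (not any specific IoT product’s protocol): a symmetric session key is wrapped with a public key so that only the holder of the private key can recover it, after which the cheaper symmetric cipher can carry the bulk traffic. Key sizes are illustrative choices:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// The service provider holds the private key; things only ever see
	// the public half.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// A fresh symmetric session key, generated on the device...
	sessionKey := make([]byte, 32)
	if _, err := rand.Read(sessionKey); err != nil {
		panic(err)
	}

	// ...is wrapped with the public key, so only the provider can unwrap it.
	wrapped, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, &priv.PublicKey, sessionKey, nil)
	if err != nil {
		panic(err)
	}

	// Provider side: recover the session key and switch to symmetric
	// encryption for the bulk of the traffic.
	recovered, err := rsa.DecryptOAEP(sha256.New(), rand.Reader, priv, wrapped, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("session key recovered:", bytes.Equal(sessionKey, recovered))
}
```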

The distribution of keys depends on the type of application. Hubs in cars and mobile phones need public keys to communicate with service providers that hold a private key. More complex situations may arise. For example, a wireless router may act as a hub for a home and need a public key to communicate with a given broadband service provider. However, it may also handle direct communications, over the broadband connection, with a smart TV manufacturer, which will require another set of separately managed keys.

Certificates themselves can be distributed by virtually anyone, shipped with the routers, smartphones, cars, TVs etc. However, they are only useable once validated and that is only done by a trusted certification authority (CA), of which there are many. Wikipedia lists in its Certificate Authority entry the four leading CAs as Comodo, Symantec, GoDaddy and GlobalSign.

The providers of PKI

Once a certificate has been distributed and certified, without the control of PKI systems it has a life of its own. It will expire if a date is set, but there will be no means of renewing, superseding or revoking it without PKI for life-cycle management. Effective PKI systems need to be able to manage certificates from any source as, with so many CAs, there will rarely be a single provider of certificates for any given IoT application. There is also a need to deal with widely varied certificate life-cycles; for example digital payments may be based on single use certificates whilst a road side sensor may require one that is valid for many years.
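To make the life-cycle point concrete, a toy inventory check might look like the sketch below; the devices, CAs and warning window are entirely invented, but the shape of the problem – tracking issuer, expiry and revocation across many sources – is the one PKI platforms have to solve:

```go
package main

import (
	"fmt"
	"time"
)

// certRecord is one entry in a hypothetical certificate inventory; a PKI
// platform has to track issuer, expiry and revocation for every thing.
type certRecord struct {
	device   string
	issuer   string // certificates may come from many different CAs
	notAfter time.Time
	revoked  bool
}

// needsAction flags certificates that must be renewed, superseded or
// replaced: anything revoked or expiring within the warning window.
func needsAction(c certRecord, now time.Time, warn time.Duration) bool {
	return c.revoked || now.After(c.notAfter.Add(-warn))
}

func main() {
	now := time.Now()
	inventory := []certRecord{
		{"roadside-sensor-17", "CA-A", now.AddDate(5, 0, 0), false},    // long-lived
		{"payment-terminal-3", "CA-B", now.Add(12 * time.Hour), false}, // near single-use
		{"home-router-9", "CA-A", now.AddDate(1, 0, 0), true},          // revoked
	}
	for _, c := range inventory {
		if needsAction(c, now, 30*24*time.Hour) {
			fmt.Printf("%s (%s): certificate needs attention\n", c.device, c.issuer)
		}
	}
}
```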

PKI vendors such as EntrustDatacard and GlobalSign are actively repurposing and scaling out their PKI offerings for securing the IoT. Verizon, which ended up with the assets of Baltimore, a onetime star of the dotcom boom, now markets its PKI as Verizon Managed Certificate Services (MCS) and in January 2015 announced a new platform geared to securing IoT deployments.

Symantec has a Managed PKI platform too, as well as its PGP Key Management platform for symmetric keys; it sees a future where these need to be brought together to provide a broad trust capability. Another encryption key management vendor, Venafi, says it can do much of what is offered by PKI vendors to keep the use of certificates secure. Some PKI vendors are less proactive. Quocirca also spoke to RSA, which has a legacy foot in the PKI world as a onetime CA (since spun off as Verisign and now part of Symantec). RSA has put its PKI platform into maintenance mode.

If you think the IoT is going to be relevant to your business, then Quocirca’s 2015 research suggests you will not be alone. PKI is going to be one of the most important ways to secure IoT applications. With good design and a PKI platform provider that is up to the task you can proceed with confidence.


June 28, 2016  1:32 PM

Why EU data protection will still apply to post-Brexit UK

Bob Tarzey Profile: Bob Tarzey

The General Data Protection Regulation (GDPR) is expected to come into force for EU member states in early 2018. It could be some time later that year that the UK finally severs its links with the EU. So, for UK citizens, will the GDPR be a short-lived regulation that can largely be ignored? The answer is no and the reasons are fairly obvious; they are commercial, legal and moral.

Commercially, of course, UK and European businesses will continue to trade, whatever happens to the balance of that trade in the longer term. So, any UK-based organisation that trades in the EU will have to comply with GDPR for at least the data stored about its EU-based customers; there is little point in having two regimes, so many businesses will comply with the GDPR anyway.

The big benefit of GDPR at a high level, regardless of any shortfalls in the detail, is a common regime for multi-national businesses to deal with. A UK government that designed a data protection regime wholly different from the GDPR would just see the UK slide down the list of target destinations for foreign direct investment (FDI). This will be especially true for cloud service providers selecting a location to set up in Europe. In data protection, as in many other regulatory areas, it makes sense for the UK to have a common status with its neighbours.

These commercial necessities lead on to the legal ones. The UK Data Protection Act is already closely aligned with the existing EU Data Protection Directive. It seems unlikely that any future UK government would reduce the protections provided to the privacy of UK citizens. Whatever its faults, the EU has never been an evil empire set on undermining the rights of the individual; it has always sought to improve their protection.

In fact, the most likely scenario is that all existing laws passed down by the EU over the last 40-odd years will be embedded wholesale into the corpus of UK law, as scrapping them all overnight would leave UK business and citizens without much of the protection they have come to take for granted. This includes extant data protection laws.

This leads on to the moral reasons. Whether a UK citizen voted ‘remain’ or ‘leave’ in the June 2016 referendum and whatever their groans about EU law, few are going to turn round and say ‘no, I don’t want to be informed when my data is compromised’ or ‘I don’t want the right to be forgotten’. The GDPR is ultimately about protecting EU citizens and, as with human rights in general, when it comes to the crunch, the majority will recognise we are better off with these aspects of EU legislation than without them.

And there is good news there too: become a victim of a privacy violation and ultimately you will still have the European Court of Human Rights (ECHR) to appeal to. Many do not realise that the UK, along with 46 other countries, is signed up to the ECHR separately from its EU membership. Asking UK citizens to ditch a final court of appeal, should their own nation let them down, may be a harder sell than ditching the EU itself.


June 20, 2016  8:03 AM

Come spy with me: drones and info-sec

Bob Tarzey Profile: Bob Tarzey

UASs (unmanned aircraft systems) or drones, as they are known in common rather than legal parlance, can easily cross physical barriers. As drone use increases, both for commercial applications and for recreational purposes, new challenges are emerging with regard to privacy and information security.

Millions of drones are estimated to have already been sold worldwide; tens of millions are expected to be out there by 2020. As with any easily available new technology, criminals are early innovators, for example getting drugs across borders and mobile phones into prisons; here existing laws are being broken. However, drone operators who wish to remain within the law need to be aware of evolving rules.

The basis for existing UK law lies with the Civil Aviation Authority (CAA) and its Air Navigation Order (ANO), and with the European Aviation Safety Agency (EASA). These bodies have been around for years to regulate commercial aviation as well as dealing with traditional model aircraft. Today they are having to adapt to the rising use and potential of drones.

Section 166 of the ANO (V4.1, republished in 2015) deals with small-UAS. The rules are most lenient for aircraft below 7kg in weight (heavy enough to cause injury, but not big enough to carry a significant bomb); any heavier and things start to get more restrictive. There is an operating limit of 400 feet above ground level (aviators stick with imperial for altitude) and UAS must be piloted by a human, albeit remotely, with visual line of sight (VLOS), which in practice is about 500m. So, for all the blather, the concept of delivering goods by drones is not legally practicable, regardless of technological issues, until the rules change to allow beyond-VLOS (BVLOS) operation.
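Purely as an illustration of how an operator’s pre-flight tooling might encode those limits (a simplified reading of the rules summarised above, not legal advice or the CAA’s actual decision logic), consider this sketch:

```go
package main

import "fmt"

// flightPlan captures the parameters the small-UAS rules hinge on.
type flightPlan struct {
	weightKg   float64
	altitudeFt float64
	rangeM     float64 // distance from the pilot in metres
}

// withinSmallUASRules returns any issues with the plan against the
// simplified limits described above.
func withinSmallUASRules(f flightPlan) []string {
	var issues []string
	if f.weightKg >= 7 {
		issues = append(issues, "7kg or over: stricter rules apply")
	}
	if f.altitudeFt > 400 {
		issues = append(issues, "above the 400 ft operating limit")
	}
	if f.rangeM > 500 {
		issues = append(issues, "beyond practical visual line of sight (~500m)")
	}
	return issues
}

func main() {
	fmt.Println(withinSmallUASRules(flightPlan{weightKg: 2.5, altitudeFt: 450, rangeM: 300}))
}
```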

There are two areas where information security issues overlap with the use of drones. First, the drones may be used for industrial espionage or to breach privacy. Second, drone operation may be interfered with, either to change the instructions sent or to intercept the data stored and/or transmitted.

Many current applications are in well-defined airspaces, for example farmers flying over their own fields, which are of little interest to anyone else, and inspection of infrastructure which, to all but those responsible, is often already covered by no-fly zones designated by the CAA. Other no-fly zones include the regions around airports and military installations. There can also be temporary no-fly zones, for example during the visit of a dignitary to a given area. It is incumbent on the operator to know about and obey restrictions, but in practice it has been hard to find out the current status.
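To make the idea concrete, here is a toy geofence check; the coordinates are invented and a circular zone is only a stand-in for real restriction data, with no connection to any actual service:

```go
package main

import (
	"fmt"
	"math"
)

// zone is a simplified circular restriction, e.g. around an airport or a
// temporarily protected site; real zone shapes and data feeds will differ.
type zone struct {
	name     string
	lat, lon float64
	radiusKm float64
}

// distanceKm is the haversine great-circle distance between two points.
func distanceKm(lat1, lon1, lat2, lon2 float64) float64 {
	const earthRadiusKm = 6371
	toRad := func(d float64) float64 { return d * math.Pi / 180 }
	dLat := toRad(lat2 - lat1)
	dLon := toRad(lon2 - lon1)
	a := math.Sin(dLat/2)*math.Sin(dLat/2) +
		math.Cos(toRad(lat1))*math.Cos(toRad(lat2))*math.Sin(dLon/2)*math.Sin(dLon/2)
	return 2 * earthRadiusKm * math.Asin(math.Sqrt(a))
}

func main() {
	zones := []zone{{"major airport (illustrative)", 51.47, -0.4543, 5}}
	droneLat, droneLon := 51.50, -0.45 // proposed launch position
	for _, z := range zones {
		if distanceKm(droneLat, droneLon, z.lat, z.lon) <= z.radiusKm {
			fmt.Println("restricted:", z.name)
		}
	}
}
```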

This is where a newly launched service from a UK start-up called Altitude Angel helps, a kind of air traffic control system for drones. The basic service is free and anyone can go to www.altitudeangel.com and check on restrictions. The aim is to help operators be safer, legal pilots. It also allows users to register for alerts about manned aviation activity in an area of interest to them and has plans to add in information about UAS activity. Altitude Angel provides real time updates to operators and property owners; the service is dynamic and able to react to short term and long term changes. More advanced services are chargeable.

It is all well and good for governments and the military, which can get no-fly zones set up. However, today there is nothing to stop someone flying a drone near commercially sensitive sites, nor are there any privacy restrictions per se around gardens etc. Ideas have been mooted about changing the default position, making all residential areas no-fly zones; that would protect privacy but make it harder to use drones for building surveys by builders or estate agents. There could be a future scenario where new restrictions can be applied for to protect certain locations or, in more controlled circumstances, temporarily lifted. Such dynamic changes would only work in practice if the information is readily available to drone operators via services such as Altitude Angel.

Of course, criminals will just ignore the rules and currently there is little control over this. Small-UAS do not have to be registered and cannot always be uniquely identified. This is starting to change: the USA and Ireland are putting in place registration processes. Furthermore, drones are quite capable of capturing and storing telemetry data, for example GPS coordinates. This could even be required via a black-box style process, which, alongside registration, would make repudiation harder; you could not deny, when challenged, that your drone had been at a given location.

As commercial use increases, criminals could try to interfere with the systems that control drones, diverting aircraft and stealing goods or data. The data sent back to operators by surveillance drones (for which the ANO already has additional rules) could be intercepted. Ground-to-air communication is in many cases still via unencrypted short distance radio. That is changing as more drones carry 4G mobile receivers/transmitters and many are controllable from smartphone applications. Altitude Angel is working on secure protocols for the 4G exchange of data with drones.

For those who have thought about the problem of the growing number of drones, the obvious concerns are about one dropping on your head or crashing into a commercial airliner. The first of these would be bad luck, perhaps no riskier than the branch of a tree falling on you; the latter should not be possible if existing controls are observed. However, with the number of drones set to grow twenty-fold in the next few years, better systems and rules are going to have to be put in place to protect operators, businesses and consumers.


June 13, 2016  10:10 PM

SD-WAN: Take a good look at the outliers

Bernt Ostergaard Profile: Bernt Ostergaard

Is management cutting your IT budget in 2017?

From recent conversations with UK service providers, CIOs and telco analysts, the general perception is that enterprise IT budgets will face a 30% cut in operating expenses in 2017, counterbalanced by a 20% increase in innovation spending. Such budget realignments require radical rethinking and certainly cannot be achieved unless ops and innovation go hand in hand.

SDN and SD-WAN into the breach

One such win-win strategy is to reduce the reliance on expensive dedicated quality-of-service (QoS) network connections like MPLS and enhance the use of cheaper access technologies. If siloed wide-area network (WAN) access modes like MPLS, dedicated leased lines, DSL and LTE mobile connections can be combined, available bandwidth increases – but the combination must also ensure QoS while lowering overall cost.


Software-defined networking (SDN) is the core WAN evolutionary concept: it gradually removes proprietary hardware constraints and recasts hardware network capabilities as software options. SDN allows network administrators to manage WAN services by decoupling the control plane, which decides where traffic is sent, from the underlying data plane, which forwards data packets to the selected destination. With software-defined access (SD-WAN) at the edges, users can already begin to reap the advantages of the emerging SDN capabilities, and with application program interfaces (APIs) they can tie functions together across different software stacks and hardware boxes.

This breaks down older telecom business models and threatens the legacy vendor-telco relationships where business critical networks rely on proprietary vendor technology operating across dedicated telco WAN connections. SD-WAN opens the market for smaller innovative players who can provide the network intelligence in customer premise equipment (CPE) at the WAN edge.

The reduced dependency on dedicated networks is accompanied by ever-increasing use of 4G LTE mobile data capacity, where better coverage and dropping prices are in stark contrast to the increasing cost of landline services. This makes 4G LTE – and soon 5G – mobile access a very important carrier of corporate data traffic.

There are obvious advantages to combining fixed and mobile WAN access streams. WAN customers should not be tied down by WAN access siloes and forced to use a tangle of incompatible access options: leased lines with QoS, and best-effort fixed, mobile and internet voice and data connections. The IT department has worked for years to virtualise its data centres – now it’s time to virtualise its WAN access modes.

What are the telcos doing?

Many telcos are now launching SD-WAN services, positioning them as an on-ramp to future SDN services. However, the telco SD-WAN approach has been to salvage as much of their MPLS investment as possible by redeploying their MPLS end-point gear for multi-channel connection to their cloud services:

“BT is offering its enterprise customers SD-WAN as a managed service, using Cisco routers that are already in place as MPLS network termination boxes and Cisco’s IWAN technology. Customers benefit from better network performance, and insight into the performance of their applications, without having to spend more on bandwidth. The service is managed through BT’s My Account portal.”

However, this and similar telco offerings from Verizon and Singapore Telecom miss out on some of the intelligent end-point capabilities, notably the inclusion of 3G and 4G LTE connections from any mobile operator and the ability to maintain uninterrupted connectivity across all access channels. Maintaining uninterrupted sessions across multiple access modes optimises availability, cost and reliability. Remote SD-WAN devices have to be able to manage WAN links cleverly, based on their availability and performance profiles – characteristics that also need to be measured locally at the other end of the connection.
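As a rough sketch of that kind of edge intelligence (the metrics and the policy are invented, not any vendor’s algorithm): pick the cheapest link that is up and meets an application’s latency target, re-evaluating as measurements change:

```go
package main

import "fmt"

// wanLink is one access channel (MPLS, DSL, LTE...); the metrics would be
// measured continuously at both ends of the connection in a real device.
type wanLink struct {
	name      string
	up        bool
	latencyMs float64
	costScore float64 // lower is cheaper
}

// pickLink is a deliberately simple policy: prefer the cheapest link that
// is up and meets the application's latency target.
func pickLink(links []wanLink, maxLatencyMs float64) (best wanLink, ok bool) {
	for _, l := range links {
		if !l.up || l.latencyMs > maxLatencyMs {
			continue
		}
		if !ok || l.costScore < best.costScore {
			best, ok = l, true
		}
	}
	return best, ok
}

func main() {
	links := []wanLink{
		{"mpls", true, 20, 10},
		{"dsl", true, 35, 3},
		{"lte", true, 60, 5},
	}
	if l, ok := pickLink(links, 50); ok {
		fmt.Println("route latency-sensitive traffic over:", l.name) // "dsl" with these numbers
	}
}
```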

Look to the disruptive SD-WAN players

Introducing distributed software smarts into WAN technologies is disrupting the incumbent vendor landscape in a couple of ways. New software models are emerging, with vendors such as BMC taking advantage of the cloud to control commodity WAN equipment. Then there are the early market entrants such as Peplink, with strong experience in access channel bonding and wireless networking. They are challenging the traditional hardware WAN sector with low cost but still powerful hardware, tightly coupled with flexible software management.

So, to meet the coming budget cuts in IT operations while increasing their innovation activities, IT departments will need to step up to the SD-WAN challenge. That means finding the dedicated SD-WAN vendors that can demonstrate relevant vertical industry solutions that bond access channels, integrate a wide range of wireless capabilities and maintain multi-channel connectivity seamlessly. There are interesting case studies out there already.


June 13, 2016  10:36 AM

Working with giants – 25 years of IT security

Bob Tarzey Profile: Bob Tarzey

The IT security industry as we know it could be said to be enjoying its 25th anniversary. Of course, there has been a need for IT security for longer than this, but the release of HTML and the birth of the web in 1991, which saw widespread internet use take off, was a game changer. Device-based security measures from existing anti-virus vendors like Norton (acquired by Symantec in 1990) and McAfee (acquired by Intel in 2011) had to adapt from monitoring the occasional arrival of new content via portable media to treating the internet as a major new threat source. Checkpoint was founded in 1993 and released FireWall-1; network security barriers were being put in place.

In those 25 years, the IT security industry has created some giants; multi-billion dollar concerns such as Symantec, Trend Micro, Checkpoint and Intel Security (the former McAfee). These security giants keep adjusting their portfolios, mainly through the acquisition of, and sometimes through divestiture of, companies and assets.

There are many aspects to security but broadly speaking they either address network threats, monitoring stuff in motion, or protect against host threats, monitoring what is happening on a device or platform, which can be anything from a smartphone to a cloud storage service. That the giants want a foot in both camps was made clear by this week’s announcement that Symantec plans to acquire the network security vendor Blue Coat.

As Quocirca wrote in Feb 2016, Blue Coat was already on a rapid expansion curve under the ownership of Bain Capital. Bringing Blue Coat into the fold will add a wide range of network security capabilities to Symantec’s portfolio. Furthermore, Blue Coat was in the process of extending many of its network security capabilities from being appliance-based to cloud-enabled services, an area where Symantec has been flagging. Symantec’s move mirrors Trend Micro’s 2015 acquisition of TippingPoint from HP, which was also an extension into network security.

Can such security giants be a force for good in IT security or do they just close down choice? Over 25 years, the rate of change in IT security has been rapid. This often means organisations end up with a wide range of point security products from many vendors; eventually this can become costly and unmanageable. For some, working with the giants makes sense.

At the InfoSec Europe tradeshow last week, Quocirca met a CISO of a UK regulatory body who took this view. Accumulated point security products had become an expensive and hard-to-manage problem rather than an integrated security solution. It was felt that many core requirements, including anti-virus, port control, vulnerability management, web gateways and email security, could now be single-sourced from one of the broad-portfolio IT security giants.

A shortlist of three vendors was drawn up and, after a two-week test deployment of each vendor’s solution as available at the time, Trend Micro was selected over McAfee and Kaspersky Lab. All three vendors had their merits, not least in reduced licence and maintenance costs. However, Trend Micro scored well on having a single integrated management console and “spectacular” security for virtualised environments. Trend Micro Deep Security operates at the hypervisor level, securing multiple virtual machines, including desktop VDIs. The efficiency of the way Deep Security operates meant the regulator improved the utilisation of its virtual platforms by about 25%.

The savings in licence fees, ease of management and platform capacity more than covered the cost of investment for the organisation, which is faced with government-imposed budget cuts of 15%. Furthermore, public cloud is seen as a likely way to lower the cost of future deployments, and Trend Micro’s Hybrid Cloud Security, which provides a common set of tools for both internal data centre and external cloud platforms, ensures the investment made now can be utilised flexibly in the future.

Small and innovative vendors will continue to emerge and drive the IT security industry forward as new threats emerge. There have been many such pulses of innovation over the years: email filtering, SSL VPNs, data loss prevention, next generation firewalls and so on. One of the most recent has been the rise of cloud access security brokers (CASB) to address the rise of shadow IT, led by new vendors such as Skyhigh Networks, Netskope and Elastica. Oh sorry, Elastica is no more: it was acquired by Blue Coat in November 2015 and is now set to become part of Symantec. The giants will prevail!


June 7, 2016  10:13 AM

NetApp/SolidFire – a new powerhouse, or straws grasping at each other?

Clive Longbottom Profile: Clive Longbottom

In December 2015, NetApp made its bid for SolidFire at $870m. Six months on, with the integration of the two companies and their products still ongoing, what does the future look like for the new company?

In June 2016, SolidFire held its last analyst day as an independent company in Boulder, Colorado. With only the ‘i’s to dot and the ‘t’s to cross, the SolidFire executives were in a position to talk more about the future in many areas – and NetApp also sent across a couple of its people, including CEO George Kurian. Kurian himself has only been in position for a year, having joined NetApp from Cisco in 2011. The previous CEO, Tom Georgens, left under a cloud – NetApp revenues were in decline, and shareholders were beginning to make their feelings felt (from a high of close to $150, NetApp’s shares traded at around $33 when Georgens stepped down). Although NetApp had set the cat amongst the pigeons as it pressurised the big incumbent, EMC, forcing EMC to lower its prices and become less hubristic in its approach to the market, maintaining innovation and market pressure was proving a bit of an issue.

Also, NetApp was not performing well in certain spaces – it was, along with EMC, slow to see how quickly flash was going to take over the storage market. Although it did start to support flash, its first moves were for hybrid flash/spinning disk systems, and its first forays into all-flash arrays were – well – pretty poor. FlashRay was postponed, and when it finally made its way to market in late 2014, its prices were too high and its performance was not up to scratch. Only recently did it come to the market with a better all-flash offering based on its flagship fabric attached storage (FAS) products. However, it did again try to be disruptive here – the starting price for its all-flash FAS8000 systems came in at $25,000. This was meant to put the new kids on the block back in their place – but many of these had already started to make a name for themselves.

Companies such as Pure Storage, Nimble, Violin, Kaminario and SolidFire were making a lot of noise – not all of it based on reality, but they were gaining the focus of attention, somewhat like NetApp did in its earlier days of taking on EMC.

SolidFire had been started up in 2010 by a young David Wright, fresh from having been an engineer at GameSpy, which was acquired by IGN. There, he became chief engineer, overseeing IGN’s integration into Fox. Upon leaving, he set up Jungledisk, which was acquired by Rackspace.

NetApp’s biggest problem, though, was that its ONTAP software and its FAS approach were unsuited to one major sector – the burgeoning cloud provider market. It needed a system that could scale out easily in such environments – and it was pretty apparent that changing FAS to do this was not going to be easy.

Finally, NetApp concluded that it needed a more mature, cloud-capable all-flash system, and decided to acquire SolidFire. This also fitted in quite well with NetApp’s approach – SolidFire believes that its value lies in its software (you can buy SolidFire as a software-only system), which is also pretty much how NetApp sees itself with its ONTAP software.

Does the new company therefore bring a new force to the market, or is it a case of a once-great storage company clutching at straws?

At the event, SolidFire executives were eager to show how the SolidFire products (SolidFire will remain a brand under the NetApp business) were still moving forward. It has released the ninth version of its Element OS (Fluorine), with support for VVOLs, a new GUI, support for up to 40 storage nodes via fibre channel, and an increase in the IOPS limit from 300,000 per fibre channel pair to 500,000 per node or 1,000,000 per fibre channel pair.

NetApp was also keen to talk about its 15TB SSDs for its all flash FAS – these are, in fact, 15.3TB, rounded down for simplicity’s sake. To round down by 300GB – a storage volume that just a year or so ago was the high end of available SSDs – is pretty impressive.

Another major discussion point was SolidFire’s move to a new licensing model – FlashForward. This pulls the hardware and software aspects of the licences apart, creating some interesting usage models. For example, depreciation can be carried out at different rates: hardware depreciating over, say, three years, while software depreciates over five. New ideas can be tried out – an example provided by one of the service providers at the event was entry into a new market.

The cost of the storage hardware itself is reasonably small. Therefore, the service provider can purchase the hardware and have it delivered directly to a datacentre in the new market. It can then use the new software licence model, which is based on paying for the amount of provisioned storage, to try out the new market. If everything works out, it just continues using the hardware and software as it is. If it doesn’t work out, it can stop using the hardware and roll back the software licence, saving money.

Unfortunately, SolidFire’s messaging behind FlashForward left much to be desired, and the volume of questions from the analysts present showed how much work is still required to get this right.

Although SolidFire showed that it is maintaining its own momentum in the market, this does not make life that much easier for the new NetApp. It now has Element OS and ONTAP as storage software systems that it needs to pull together, as well as manage a combined sales force that will still be tempted to sell what it knows best to customers, rather than what from the combined portfolio best suits the customer.

NetApp is still struggling in the market – its latest financials show that, even allowing for the costs of SolidFire’s acquisition, its underlying figures were still not strong. Kurian has stated that he expects the main turnaround to happen in 2018 – a long time for Wall Street to wait.

Meanwhile, the new Dell Technologies will be fighting in the market with hyper-converged (complete systems of server, network and storage for running total IT workloads), converged (intelligent storage systems with server components for running storage workloads) and storage-only systems, and Pure Storage may cross the chasm to become a strong player. Other incumbents, such as IBM, HDS and Fujitsu, have not been standing still and will remain strong competitors to the new NetApp.

Some of the new kids on the block, such as Violin Memory, may well leave the playing field; Kaminario, Nimble and others may have to market themselves more aggressively to get to the critical mass required – and the financial performance – to remain viable in the markets.

Overall, NetApp is still in a fragile position – SolidFire certainly adds strength to its portfolio, but Kurian has a hard job ahead of him in ensuring that this portfolio is played well in the field.


May 17, 2016  1:12 PM

Updates, updates – hares and tortoises in the software vulnerability race

Bob Tarzey Profile: Bob Tarzey

To penetrate a target organisation’s IT systems hackers often make use of vulnerabilities in application and/or infrastructure software. Quocirca research published in 2015 (sponsored by Trend Micro) shows that scanning for software vulnerabilities is a high priority for European organisations in the on-going battle against cybercrime.

Scanning is just one way of identifying vulnerabilities and is of particular importance for software developed in-house. For off-the-shelf software, news of newly discovered vulnerabilities often comes via the suppliers of commercial packages or, in the case of open source software, from some part of the community. This also applies to components embedded in in-house developed software, such as the high profile Heartbleed vulnerability that was identified in OpenSSL in 2014.
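As a sketch of what checking for such embedded components involves (the advisory entries below are illustrative; a real process would consume vendor bulletins or a vulnerability feed, and match version ranges rather than exact strings):

```go
package main

import "fmt"

// advisories maps a component to versions known to be vulnerable; the
// entries here are purely illustrative examples of Heartbleed-era releases.
var advisories = map[string][]string{
	"openssl": {"1.0.1", "1.0.1f"},
}

// component is one entry in an application's bill of materials.
type component struct {
	name, version string
}

// flagVulnerable returns the components whose versions appear in an advisory.
func flagVulnerable(bom []component) []component {
	var hits []component
	for _, c := range bom {
		for _, bad := range advisories[c.name] {
			if c.version == bad {
				hits = append(hits, c)
			}
		}
	}
	return hits
}

func main() {
	bom := []component{{"openssl", "1.0.1f"}, {"zlib", "1.2.11"}}
	for _, c := range flagVulnerable(bom) {
		fmt.Printf("patch required: %s %s\n", c.name, c.version)
	}
}
```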

Software flaws come to the attention of vendors in three main ways. First, an organisation using the software may discover a problem and report it, perhaps having had the misfortune to be an early victim of an exploited vulnerability (when this turns out to be the very first use of an exploit it is termed a zero-day attack). Second, a flaw may be reported by a bug bounty hunter; or third, a vendor may find a flaw itself. Regardless of who discovers a vulnerability, users need to be made aware, and once the news is out there, a race is on.

Software vendors need to provide a patch as soon as possible and will aim to keep publicity to a minimum in the interim whilst the fix is prepared. Meanwhile, any sniff of a vulnerability and hackers will work at hare-speed to see if it can be exploited, either for their own ends or to sell on as an exploit kit on the dark web. All too often the tortoises in this race are end user organisations that are too slow to become aware of flaws and apply patches, thus extending the window of opportunity for hackers.

In principle this should not be the case. Most reputable software vendors have well-oiled routines for getting software updates to their customers, for example Microsoft’s Patch Tuesday. However, the reality is not that simple.

For a start, applying updates is disruptive. In an age where 24-hour, 7-day application availability is required, taking applications down for maintenance can be unacceptable to businesses. Also, as more organisations move to dynamic DevOps-style application development and deployment, software is fast changing and keeping tabs on all applications and components can be tricky. Software patching methods have had to adapt accordingly.

Then there is the problem of legacy software. Older applications are increasingly being targeted by hackers because patching regimes are lax. This applies to software from vendors that have disappeared through long-forgotten acquisitions or have gone out of business; all too often their software still sits at the core of business processes. It also applies to old versions of software from vendors that have made it clear that said software is no longer supported and will not be updated. For example, many of Microsoft’s older server and desktop operating systems remain in use despite repeated prompts to move to more recent versions, the upgrade proving too expensive or complicated.

There are many ways to mitigate all these problems. However, wherever possible the primary way should be to keep software up to date; as one chief information security officer (CISO) put it to Quocirca recently, ‘vulnerability management is the cornerstone of our IT security’. That responsibility can be outsourced, either through the use of managed security service providers (MSSPs) or through the use of cloud services that are responsible for keeping their own software up to date.

There will be advice from CISOs of some leading organisations on the front line of the fight against cybercrime at Infosec Europe this year. These include Network Rail, The National Trust and Live Nation’s Ticketmaster; all are highly dependent on their online infrastructure and see keeping their software up to date as critical. Quocirca will be chairing the panel at 16:30 on June 7th; more detail can be found at the following link: Updates, Updates, Updates! Getting the Basics Right for Resilient Security.


May 12, 2016  5:00 PM

IT Untethered – How Wireless is Changing the World

Bob Tarzey Profile: Bob Tarzey

Not much more than 20 years ago, nearly all local area networks (LANs) involved cables. There had been a few pioneering efforts to eliminate the wires, but for most it was still a wired world. With the advent of client-server computing and more and more employees requiring access to IT, this was becoming a problem. Furthermore, smaller computers meant more mobility; devices were starting to move with their users.

Cables could be hard to lay down in older buildings, and modern buildings became messy to reconfigure as needs changed and users wanted more flexibility. Structured cabling systems and patch panels helped, but going wireless could make things even easier. The race was on to get rid of the wires altogether.

Move forward to today and what we now call Wi-Fi is everywhere. Often used in conjunction with wide area wireless provided by mobile operators over 3G and 4G networks and low power/wide area (LPWA) technologies, wireless has moved beyond the initial use case of flexible LANs to provide the cornerstone of two huge movements in IT: ubiquitous mobile computing, often via pocket-size devices, and the Internet of Things (IoT). Neither would be possible without wireless, and hence wireless is changing the world.

Development of the 802.11x (Wi-Fi) standard has delivered potential throughput capacity thousands of times faster than the earliest wireless LANs. Forthcoming 5G cellular networks will offer a range of improvements over their 4G and 3G predecessors including a huge capacity upgrade. For many organisations the volume of wireless network traffic now exceeds wired.

User sessions can be seamlessly handed off from one Wi-Fi access point to another and from Wi-Fi to cellular. It is estimated that there are 65M Wi-Fi hot spots in the world today and there will be 400M by 2020. High speed cellular data access is ubiquitous, being available in nearly every major city. The mobile user has never been better served and the stage is set for the IoT-explosion that is predicted to lead to many more connected things than there are people on Earth.

Yes, wireless is changing the world, but it is not all good. There are concerns about data privacy, rogue devices joining networks, the expanded attack surface created by the IoT and so on. These security issues are addressable with technologies such as network access control (NAC) and enterprise mobility management.

On May 18th 2016 Quocirca will be giving a presentation on “How Wireless is Changing the World” at St. Paul’s Cathedral with CSA Waverley and Aruba. To learn more about how your organisation can benefit from mobility and the IoT whilst keeping wireless risk to a minimum, you can attend this free event by registering at the following link: http://www.csawaverley.com/aruba-event-st-pauls-cathedral-2/

