In December 2015, NetApp made its $870m bid for SolidFire. Six months in, with the integration of the two companies and their products still ongoing, what does the future look like for the new company?
In June 2016, SolidFire held its last analyst day as an independent company in Boulder, Colorado. With only the ‘i’s to dot and the ‘t’s to cross, the SolidFire executives were in a position to talk more about the future in many areas – and NetApp also sent across a couple of its people, including CEO George Kurian. Kurian himself has only been in position for a year, having joined NetApp from Cisco in 2011. The previous CEO, Tom Georgens, left under a cloud – NetApp revenues were in decline, and shareholders were beginning to make their feelings felt (from a high of close to $150, NetApp’s shares traded at around $33 when Georgens stepped down). Although NetApp had set the cat amongst the pigeons when it put pressure on the big incumbent, EMC, forcing EMC to lower its prices and become less hubristic in its approach to the market, maintaining that innovation and market pressure was proving a bit of an issue.
Also, NetApp was not performing well in certain spaces – it was, along with EMC, slow to see how rapidly flash was going to take over the storage market. Although it did start to support flash, its first moves were for hybrid flash/spinning disk systems, and its first forays into all-flash arrays were – well – pretty poor. FlashRay was postponed, and when it finally made its way to market in late 2014, its prices were too high and its performance was not up to scratch. Only recently did it come to the market with a better all-flash offering based on its flagship fabric attached storage (FAS) products. However, it did, again, try to be disruptive here – the starting price for its all-flash FAS8000 systems came in at $25,000. This was meant to put the new kids on the block back in their place – but many of them had already started to make a name for themselves.
Companies such as Pure Storage, Nimble, Violin, Kaminario and SolidFire were making a lot of noise – not all of it based on reality, but they were gaining the focus of attention, somewhat like NetApp did in its earlier days of taking on EMC.
SolidFire was started in 2010 by a young David Wright, fresh from having been an engineer at GameSpy, which was acquired by IGN. There, he became chief engineer, overseeing IGN’s integration into Fox. Upon leaving, he set up Jungledisk, which was acquired by Rackspace.
NetApp’s biggest problem though, was that its ONTAP software and its FAS approach were unsuited to one major sector – the burgeoning cloud provider market. It needed a system that could scale out easily in such environments – and it was pretty apparent that changing FAS to do this was not going to be easy.
Finally, NetApp concluded that it needed a more mature, cloud-capable all-flash system, and decided to acquire SolidFire. This also fitted in quite well with NetApp’s approach – SolidFire believes that its value lies in its software (you can buy SolidFire as a software-only system), which is pretty much how NetApp sees itself with its ONTAP software.
Does the new company therefore bring a new force to the market, or is it a case of a once-great storage company clutching at straws?
At the event, SolidFire executives were eager to show how the SolidFire products (SolidFire will remain a brand under the NetApp business) were still moving forward. It has released the ninth version of its Element OS (Fluorine), with support for VVOLs, a new GUI, support for up to 40 storage nodes via fibre channel, and an increase in the IOPS limit from 300,000 per fibre channel pair to 500,000 per node, or 1,000,000 per fibre channel pair.
NetApp was also keen to talk about its 15TB SSDs for its all-flash FAS – these are, in fact, 15.3TB, rounded down for simplicity’s sake. To round down by 300GB – a storage volume that just a year or so ago was the high end of available SSDs – is pretty impressive.
Another major discussion point was SolidFire’s move to a new licensing model – FlashForward. This pulls the hardware and software aspects of the licences apart, creating some interesting usage models. For example, depreciation can be carried out at different rates: hardware depreciating over, say, three years, while software depreciates over five. New ideas can also be tried out – an example provided by one of the service providers at the event was entry into a new market.
The cost of the storage hardware itself is reasonably small. Therefore, the service provider can purchase the hardware and have it delivered directly to a datacentre in the new market. It can then use the new software licence model, which is based on paying for the amount of provisioned storage, to try out the new market. If everything works out, it just continues using the hardware and software as it is. If it doesn’t work out, it can stop using the hardware and roll back the software licence, saving money.
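To make the depreciation point concrete, here is a minimal Python sketch using purely hypothetical figures – the costs, currency and the straight-line method are illustrative assumptions, not FlashForward pricing:

```python
# Illustrative sketch only: straight-line depreciation with hypothetical costs,
# showing how separate hardware and software licences allow different schedules.

def straight_line(cost: float, years: int) -> list[float]:
    """Annual depreciation charges for an asset written off evenly over `years`."""
    return [round(cost / years, 2)] * years

# Hypothetical figures: the hardware element is the smaller part of the deal.
hardware_annual = straight_line(30_000, years=3)   # hardware over three years
software_annual = straight_line(100_000, years=5)  # software over five years

print(hardware_annual)  # [10000.0, 10000.0, 10000.0]
print(software_annual)  # [20000.0, 20000.0, 20000.0, 20000.0, 20000.0]
```

The point of the split is that the short-lived hardware can be written off quickly, while the software licence cost is spread over its longer useful life.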
Unfortunately, SolidFire’s messaging behind FlashForward left much to be desired, and the volume of questions from the analysts present showed how much work is still required to get this right.
Although SolidFire showed that it is maintaining its own momentum in the market, this does not make life that much easier for the new NetApp. It now has Element OS and ONTAP as storage software systems that it needs to pull together, as well as manage a combined sales force that will still be tempted to sell what it knows best to customers, rather than what from the combined portfolio best suits the customer.
NetApp is still struggling in the market – its latest financials show that, even allowing for the costs of the SolidFire acquisition, its underlying figures were still not strong. Kurian has stated that he expects the main turnaround to happen in 2018 – a long time for Wall Street to wait.
Meanwhile, the new Dell Technologies will be fighting in the market with hyper-converged (complete systems of server, network and storage for running total IT workloads), converged (intelligent storage systems with server components for running storage workloads) and storage-only systems, and Pure Storage may cross the chasm to become a strong player. Other incumbents, such as IBM, HDS and Fujitsu, have not been standing still and will remain strong competitors to the new NetApp.
Some of the new kids on the block, such as Violin Memory, may well leave the playing field; Kaminario, Nimble and others may have to market themselves more aggressively to get to the critical mass required – and the financial performance – to remain viable in the markets.
Overall, NetApp is still in a fragile position – SolidFire certainly adds strength to its portfolio, but Kurian has a hard job ahead of him in ensuring that this portfolio is played well in the field.
To penetrate a target organisation’s IT systems, hackers often make use of vulnerabilities in application and/or infrastructure software. Quocirca research published in 2015 (sponsored by Trend Micro) shows that scanning for software vulnerabilities is a high priority for European organisations in the on-going battle against cybercrime.
Scanning is just one way of identifying vulnerabilities and is of particular importance for software developed in-house. For off-the-shelf software, news of newly discovered vulnerabilities often comes via the suppliers of commercial packages or, in the case of open source software, from some part of the community. This also applies to components embedded in in-house developed software, such as the high profile Heartbleed vulnerability that was identified in OpenSSL in 2014.
Software flaws come to the attention of vendors in three main ways. First, an organisation using the software may discover a problem and report it, perhaps having had the misfortune to be an early victim of an exploited vulnerability (when this turns out to be the very first use of an exploit it is termed a zero-day attack). Second, a flaw may be reported by a bug bounty hunter or third, a vendor may find a flaw itself. Regardless of who discovers a vulnerability, users need to be made aware and once the news is out there, a race is on.
Software vendors need to provide a patch as soon as possible and will aim to keep publicity to a minimum in the interim whilst the fix is prepared. Meanwhile, any sniff of a vulnerability and hackers will work at hare-speed to see if it can be exploited, either for their own ends or to sell on as an exploit kit on the dark web. All too often the tortoises in this race are end user organisations that are too slow to become aware of flaws and apply patches, thus extending the window of opportunity for hackers.
In principle this should not be the case. Most reputable software vendors have well-oiled routines for getting software updates to their customers, for example Microsoft’s Patch Tuesday. However, the reality is not that simple.
For a start, applying updates is disruptive. In an age where 24-hour, 7-day application availability is required, taking applications down for maintenance can be unacceptable to businesses. Also, as more organisations move to dynamic DevOps-style application development and deployment, software is fast changing and keeping tabs on all applications and components can be tricky. Software patching methods have had to adapt accordingly.
Then there is the problem of legacy software. Older applications are increasingly being targeted by hackers because their patching regimes are lax. This applies to software from vendors that have disappeared through long-forgotten acquisitions or have gone out of business – all too often their software still sits at the core of business processes. It also applies to old versions of software from vendors that have made it clear that said software is no longer supported and will not be updated. For example, many of Microsoft’s older server and desktop operating systems remain in use despite repeated prompts to move to more recent versions, the upgrade proving too expensive or complicated.
There are many ways to mitigate all these problems. However, wherever possible the primary way should be to keep software up to date; as one chief information security officer (CISO) put it to Quocirca recently, ‘vulnerability management is the cornerstone of our IT security’. That responsibility can also be outsourced, either to managed security service providers (MSSPs) or to cloud services that are responsible for keeping their own software up to date.
CISOs from some leading organisations on the front line of the fight against cybercrime will be offering advice at Infosec Europe this year. These include Network Rail, The National Trust and Live Nation’s Ticketmaster; all are highly dependent on their online infrastructure and see keeping their software up to date as critical. Quocirca will be chairing the panel at 16:30 on June 7th; more detail can be found at the following link: Updates, Updates, Updates! Getting the Basics Right for Resilient Security.
Not much more than 20 years ago, nearly all local area networks (LANs) involved cables. There had been a few pioneering efforts to eliminate the wires, but for most it was still a wired world. With the advent of client-server computing, and with more and more employees requiring access to IT, this was becoming a problem. Furthermore, smaller computers meant more mobility – devices were starting to move with their users.
Cables could be hard to lay down in older buildings, and modern buildings became messy to reconfigure as needs changed and users wanted more flexibility. Structured cabling systems and patch panels helped, but going wireless could make things even easier. The race was on to get rid of the wires altogether.
Move forward to today and what we now call Wi-Fi is everywhere. Often used in conjunction with wide area wireless provided by mobile operators over 3G and 4G networks and low power/wide area (LPWA) technologies, wireless has moved beyond the initial use case of flexible LANs to provide the cornerstone of two huge movements in IT: ubiquitous mobile computing, often via pocket-sized devices, and the Internet of Things (IoT). Neither would be possible without wireless, and hence wireless is changing the world.
Development of the 802.11x (Wi-Fi) standard has delivered potential throughput capacity thousands of times faster than the earliest wireless LANs. Forthcoming 5G cellular networks will offer a range of improvements over their 4G and 3G predecessors including a huge capacity upgrade. For many organisations the volume of wireless network traffic now exceeds wired.
User sessions can be seamlessly handed off from one Wi-Fi access point to another and from Wi-Fi to cellular. It is estimated that there are 65M Wi-Fi hot spots in the world today and there will be 400M by 2020. High speed cellular data access is ubiquitous, being available in nearly every major city. The mobile user has never been better served, and the stage is set for the IoT explosion that is predicted to lead to many more connected things than there are people on Earth.
Yes, wireless is changing the world, but it is not all good. There are concerns about data privacy, rogue devices joining networks, the expanded attack surface created by the IoT and so on. These security issues are addressable with technologies such as network access control (NAC) and enterprise mobility management.
On May 18th 2016 Quocirca will be giving a presentation on “How Wireless is Changing the World” at St. Paul’s Cathedral with CSA Waverley and Aruba. To learn more about how your organisation can benefit from mobility and the IoT whilst keeping wireless risk to a minimum, you can attend this free event by registering at the following link: http://www.csawaverley.com/aruba-event-st-pauls-cathedral-2/
The recent EMC World event held in Las Vegas could have been a flop. With the Dell takeover of the EMC Federation (EMC Corp and all its divisions of EMC II, VMware, RSA, Pivotal and Virtustream) in full swing, it would have been easy for the EMC management to claim that they were in a ‘quiet period’ and so refuse to disclose much.
It wasn’t quite like that.
Firstly, Joe Tucci (pictured left with Clive Longbottom and Tony Lock) was on stage saying his farewells (very pointedly, not his goodbyes), and talking about the synergies that were possible between the various parts of EMC and Dell. This was followed by Michael Dell exploding around the stage, looking far more animated and pumped up than I have seen him for many a year. It was obvious that this ‘quiet period’ is anything but – Dell and EMC are working hard to make sure that the new Dell Technologies (the chosen name for the new combined company) will hit the road not just running, but at light speed.
Such energy is laudable – but without evolution in what EMC is doing, ultimately futile. In a world where technology is changing so rapidly, EMC and its divisions have faced the possibility of being the next dinosaur; meeting the extinction event caused by the impact of the new all-flash array storage providers and the web-based, software as a service information and security management players.
In another article, I took a look at some of the possible outcomes from the merger.
I now have had to rethink. Not long after the article went live, VCE was spun into the EMC II portfolio, with Cisco becoming less of an investor and more of a partner. At this year’s EMC World, the main presentations were awash with converged and hyperconverged systems – it is obvious that VCE is unlikely to be sold off, but will take on the mantle of converged/hyperconverged within the new Dell Technologies Dell EMC division (the naming convention for the new ‘family’ – not, very pointedly, a ‘federation’ – is a little long-winded). Indeed, pretty much everywhere, this drive toward hardware convergence was evident – even within the new DSSD offering (a massive, super-fast all-flash box based around direct attached, server-side storage technologies, but offered as a rackable system, named the D5). Whereas DSSD has previously been primarily a server-side storage system, the D5 will be able to be installed in converged and hyperconverged modes.
This enhanced scale-out as well as scale-up capability is a strong differentiator for EMC, and so it will be for Dell Technologies. Some others in the hyperconverged markets have little scale-out capability – if you run out of storage, buy another complete system. All that extra compute and network power is wasted, but at least you have more storage. With EMC VCE systems, each resource can be expanded independently, while still maintaining a hyperconverged architecture.
Noticeable by his absence was Pat Gelsinger, the CEO of the VMware division. VMware has been a focus of the acquisition deal for many reasons – one of which is that it is a publicly quoted company in which EMC owns 80% of the shares. The idea was for Michael Dell to raise a new share class to part-fund the acquisition of EMC: the US SEC frowned on this and wanted full tax to be paid on the shares if this went ahead. This would have left a rather large hole in the financing of the deal. So, how much of the existing public shares should Michael Dell sell off to raise money? Selling off 29% still leaves him with a majority holding – but does not bring in much money. Selling 50% would still give a 30% holding, which is still a board seat and a safety net against a hostile takeover of VMware (Carl Icahn is still a threat, and he is probably not happy since Michael Dell beat him and took Dell private). Selling 59% maximises the incoming funds while still maintaining a seat on the board, but does make it easier for a hostile takeover to happen. How Michael Dell plays this, and the role of Gelsinger going forward, will be interesting.
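The share arithmetic can be sanity-checked with a trivial sketch (the 80% starting stake is from the deal as described; the rest is just subtraction):

```python
# Quick check of the VMware stake arithmetic: EMC (and hence the new Dell)
# holds 80% of VMware's shares; selling a slice of the company reduces it.

def remaining_stake(sold: int, current: int = 80) -> int:
    """Percentage of VMware still held after selling `sold` points of the company."""
    return current - sold

for sold in (29, 50, 59):
    print(sold, "->", remaining_stake(sold))
# 29 -> 51 (still a majority); 50 -> 30; 59 -> 21 (board seat, but takeover risk)
```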
There were lots of other announcements at the event – it was pretty obvious that while there are a lot of discussions and machinations going on at the top of Dell and the various parts of EMC, the message to the EMC staff is ‘full steam ahead’ with continued new products across the board.
Again, all so good – but can the new Dell Technologies make it? I was all for Michael Dell taking Dell private; I have been cooler on the Dell/EMC deal. Why? The existing full ‘solution’ players have not fared well. HP has had to split in two; IBM has divested itself of large parts of its business and is reinventing itself around a cloud model. Regionally, French-based Bull has been acquired by Atos; Japan-headquartered NEC is seeing revenues continue to fall slowly; also Japanese-headquartered Fujitsu has seen its revenues plateau.
Into this landscape of underperformance will emerge the new Dell Technologies – a global one-stop shop for IT platforms. It is pretty dependent on managing the long tail of on-premise data centre installations; on making the most of the continued moves to colocation; and on becoming a platform of choice for the various ‘as a service’ players in the markets.
The first will be an ever-shrinking market – Dell Technologies cannot count on this going forward. The second is a fair target: IBM will be less of a player here, leaving it pretty much a head-to-head between HP and Dell Technologies for a platform sell. In the medium term, this is where the majority of Dell Technologies’ money will come from.
On the third item, there remains a lot of work to be done on cloud product and messaging. The old Dell had tried and failed at being a public cloud operator itself, and had decided instead to become a cloud aggregator, something Quocirca supported. EMC bought Virtustream, and created an infrastructure as a service (IaaS) public cloud using Virtustream’s xStream cloud management software, as well as EMC’s Pivotal Cloud Foundry. From the presentations at EMC World, it is pretty evident that Virtustream cloud is still a strategic platform. However, EMC currently has too many different cloud services and messages – this will only be worse when the Dell Technologies deal goes through. Creating clearer messaging around a less complex portfolio and playing it effectively in the Dell Technologies “Enterprise Hybrid” and “Native Hybrid” cloud messaging will be key in battling AWS, Microsoft, Google, IBM and all the other cloud platforms out there.
Overall, then, it was apparent that the feeling within EMC is that the Dell deal is exciting, and that everyone is up for it. The feeling from the top is that the effort will be put in to make it all work, and that power bases will be dealt with as swiftly as possible to cut down on any internal wars breaking out. End users at the event also seemed positive – the early worries about what it meant to both companies seem to be disappearing.
However, the building of a massive, platform-centric company when others are moving away from the model could be the biggest gamble of Michael Dell’s life. I really hope that it all works out well.
A recent Quocirca report, The trouble at your door, sponsored by Trend Micro, looked at the scale of targeted attacks faced by UK and European businesses and the before, during and after measures in place to mitigate such attacks. Trend Micro has plenty of its own products on offer, not least its recently upgraded Hybrid Cloud Security offering. However, last week, Quocirca got a chance to review ideas from some smaller vendors at the annual Eskenzi PR IT Security Analyst and CISO Forum. The 10 vendors that sponsored the forum were focussed mainly on before and after measures.
Targeted attacks often rely on IT infrastructure vulnerabilities. The best way to protect against these is to find and fix them before the attackers do. White Hat Security discussed the latest developments in its static (before deployment) and dynamic (post deployment) software scanning services and how its focus has extended from web-enabled applications to a significant emerging attack vector – mobile apps. This is backed by White Hat’s global threat research capability, including a substantial security operations centre (SOC) in Belfast, UK.
Cigital is an IT services company also focussed on software code scanning, mainly using IBM’s AppScan. It helps its customers improve the way they develop and deploy software in the first place – as Cigital puts it, it can “do it for you, do it with you or teach you to do it yourself”. The company is based in Virginia but has an established UK presence and customer base.
Tripwire provides a broader vulnerability scanning capability looking for known problems across an organisation’s IT infrastructure. In 2015 Tripwire was acquired by Belden, the US-based manufacturer of networking, connectivity and cabling products. Belden sees much opportunity in the Internet of Things (IoT) and Tripwire extends vulnerability scanning to the multitude of devices involved.
The continual need to interact with third parties online introduces new risk for most organisations; how can the security of third parties’ IT systems and practices be better evaluated? RiskRecon offers a service for assessing the online presence of third parties, for example looking at how up to date web site software and DNS infrastructure are; poor online practice may point to deeper internal problems. RiskRecon is considering extending its US-only operations to Europe.
UK-based MIRACL provides a commercial distribution of the new open source Milagro encryption project, of which it is one of the major backers. Milagro is an alternative to public key encryption that relies on identity-based keys, split across a distributed trust authority, which only the identity owner can reassemble. MIRACL believes the IoT will be a key use case, as confidence in the identity of devices is one of the barriers that needs to be overcome.
Illumio provides a set of APIs for embedding security into workloads, thus ensuring security levels are maintained wherever the workload is deployed, for example when moved from in-house to public cloud infrastructure. This moves security away from the fractured IT perimeter into the application itself; for example, enabling deployments on the same virtualised infrastructure to be ring fenced from each other – in effect creating virtual internal firewalls.
FireEye was perhaps the best-known brand at the forum and one of four vendors more focussed on during measures. Its success in recent years has come from mitigating threats at the network level, using sandboxes that test files before they are opened in the user environment. FireEye’s success has enabled it to expand to offer threat protection on a broad front, including user end-points, email and file stores.
Lastline also mitigates network threats by providing a series of probes that detect bad files and embedded links. Its main development centre is in Cambridge, UK. A key route to market for Lastline is a series of OEM agreements with other security vendors including WatchGuard, Hexis, SonicWall and Barracuda.
UK-based Mimecast was itself a sponsor at the forum. Its on-demand email management services have always had a strong security focus. It has been expanding fastest in the USA and this included a 2015 IPO on NASDAQ. Mimecast has also been focussing on new capabilities to detect highly targeted spear phishing and supporting the growing use amongst its customers of Microsoft Office 365 and Google Apps.
Last but not least, Corero is a specialist in DDoS mitigation. In a mirror image of Mimecast, it is US-based but listed on the UK’s Alternative Investment Market (AIM). Its appliances are mainly focussed on protecting large enterprises and service providers. Its latest technology initiative has been to move DDoS protection inline, enabling immediate detection and blocking of attacks, as opposed to sampling traffic out of line and diverting network traffic to block attacks only after they have started.
Quocirca’s research underlines how attackers are getting more sophisticated. The Eskenzi forum provides a snapshot of how the IT security industry is innovating too. No vendors present were specifically focussed on responding to successful attacks, even though having such response plans in place for when an attack succeeds is paramount. That said, decreasing the likelihood of being breached with better before and during measures should reduce the need for clearing up after the event.
Windows 10 has an unnerving habit of throwing up a screen following certain updates that says “all your files are right where you left them”. Quocirca has not been alone, on first seeing this, in thinking it might be a ransomware message. Microsoft has said it is planning to change the alert following user complaints.
Real ransomware does just as the Windows message says; it leaves your files in place, but encrypts them, demanding a ransom (usually payable in anonymous bitcoins) for the decryption keys. Ransomware is usually distributed via dodgy email attachments or web links, with cash demands low enough that users who are caught out will see coughing up as the easiest way forward. Consumers are particularly vulnerable, along with smaller business users who lack the protection of enterprise IT security. However, in the age of BYOD and remote working, users from larger organisations are not immune.
Ransomware is usually sent out en masse, randomly and many times over. So, traditional signature-based anti-virus products become familiar with common versions and provide protection for those that use them. In response, criminals tweak ransomware to make it look new and avoid file-based signature detection. To counter this, anti-virus products from vendors such as Trend Micro (which has built in specific ransomware protection) detect modified ransomware by looking for suspicious behaviours, such as the sequential accessing of many files and key exchange mechanisms with the command and control servers used by would-be extorters.
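As a rough illustration of that behavioural approach – this is a toy heuristic, not Trend Micro’s or any other vendor’s actual detection logic – flagging a process that touches an unusually large number of distinct files in a short window might look like:

```python
# Toy sketch (not any vendor's real algorithm): flag activity that modifies
# an unusually large number of distinct files within a short time window,
# the kind of pattern bulk file encryption by ransomware tends to produce.
from collections import deque

def is_suspicious(events, window_seconds=10.0, threshold=100):
    """events: time-ordered iterable of (timestamp, path) file-write events.
    Returns True if more than `threshold` distinct files are written
    inside any sliding window of `window_seconds`."""
    window = deque()  # (timestamp, path) pairs currently inside the window
    for ts, path in events:
        window.append((ts, path))
        while window and ts - window[0][0] > window_seconds:
            window.popleft()
        if len({p for _, p in window}) > threshold:
            return True
    return False

# A burst of 200 distinct files written in one second looks like encryption:
burst = [(0.005 * i, f"/home/user/doc{i}.txt") for i in range(200)]
print(is_suspicious(burst))  # True
```

Real products combine many such signals, since a backup job or a compile can also touch many files quickly; a single heuristic like this would generate false positives on its own.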
Avoiding infection in the first place is the best course of action. However, should the worst happen, there is of course another sure way to protect your data from ransomware, one that has been around since electronic storage was invented – data backup. Simple: if a device is encrypted by ransomware, clean it up or replace it and restore the data from backup. Data loss will be limited to changes made since the last recovery point and, if you are using a cloud storage service to continuously back up your data, should be minimal. Or will it?
The trouble with online cloud storage services is that they appear as just another drive on a given device; this makes them easy for an authorised user to access. Unfortunately, that is also true for the ransomware, which has to achieve authorised access before it can execute. So, following an infection, data will likely be encrypted both locally and on the cloud storage system, which just sees the encryption of each file as another user-driven update. So is this back to square one, with all data lost? Not quite.
Whilst cloud storage services from vendors such as Dropbox and Google are not designed to mitigate the problem of ransomware per se, the fact that they provide versioning still enables a recovery of files from a previous state. As a Dropbox user Quocirca took a closer look at how its users could respond to a ransomware infection.
Dropbox is certainly diligent about keeping previous versions of files; by default, it goes back about a month, keeping hundreds of versions of regularly used files if necessary. The user, and therefore the ransomware, cannot see previous versions without issuing a specific request to the Dropbox server. Following an infection, every file will have an immediate previous version that is untouched by the ransomware. Good news: clean up or replace the device and restore your files. However, this may take some time!
With the standard Dropbox service, each file has to be retrieved in turn. However, Dropbox does provide a service for customers hit by ransomware to retrieve entire directory trees, and its API provides file-level access and version history, which programmers and other software applications can use to automate the process.
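As a sketch of what such automation involves – the data shapes here are assumptions for illustration, not the actual Dropbox API response format – the core step is picking, for each file, the newest revision saved before the infection struck:

```python
# Sketch of the recovery logic only. A real script would fetch each file's
# revision history from the cloud provider's API; here we assume it arrives
# as a simple list of (revision_id, modified_timestamp) tuples.

def pre_infection_revision(revisions, infection_time):
    """revisions: list of (rev_id, modified_time) tuples for one file.
    Returns the rev_id of the latest revision saved strictly before
    infection_time, or None if every revision post-dates the infection."""
    older = [(t, rev) for rev, t in revisions if t < infection_time]
    return max(older)[1] if older else None

# rev-c was written by the ransomware at t=400; the infection began at t=300.
history = [("rev-a", 100), ("rev-b", 250), ("rev-c", 400)]
print(pre_infection_revision(history, infection_time=300))  # rev-b
```

A full recovery tool would loop this over every file in the account and then issue a restore request per file, which is exactly the per-file tedium the Dropbox bulk-restore service and API automation are there to remove.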
This is certainly a better position to be in than having no backup at all, and continuous copying to the cloud is one of the surest ways of protecting data against user device loss, theft or failure. Ultimately, Dropbox is protecting its customers from a potential ransomware infection, and anyone relying on a similar continuous cloud backup service should check how their provider operates.
It also underlines the benefit of having a secondary backup process, for example to a detachable disk drive. This would save having to contact a third party for help when all your files have been encrypted by ransomware: the bulk of a file system can quickly be copied back to a cleaned or new device, with just the most recent files recovered from the cloud. However, if you do that, remember to actually detach the drive – otherwise, just like your cloud storage, it will appear as just another device and ransomware will be able to set about its evil work on your secondary backup as well.
The Internet of Things is the latest tech sector ‘must have’, with some even claiming organisations need to have a CIoTO (Chief IoT Officer). However, this is unnecessary and completely misses the point. The IoT is not separate or entirely new (being in essence an open and scaled up evolution of SCADA – supervisory control and data acquisition), but it is something that has to be integrated into the wider set of business needs.
As an example of potential, take a look at something going on in the hospitality sector and how innovative technologies and IoT can be incorporated.
At one time booking hotels was a hit and miss affair conducted pretty much over the phone or via a visit to a third party travel agent. Now we have online direct booking, price comparison aggregators that search widely and the online brokering or ‘sharing economy’ with things like Airbnb.
Technology is starting to make an impact, but queuing for check-in at a desk is still more widespread than receptionists wandering around with tablets, or directing guests to self-serve on touch-screen kiosks. In-room use of phones, internet access and TV (with varying amounts of paid-for movies) has become commonplace, but controlling the lighting and air-conditioning is still a case of hunting for the right switch or dial and hoping – unless you are staying in really expensive places.
‘Things’ appear to be changing, and a small number of (still low-budget) Premier Inn locations are now ‘Hub Hotels’. These have kiosks for checking in and creating a room card key, and while it is still possible to interact with the room facilities through old-fashioned switches or the TV remote control, there is an app. It works on Android, Apple’s iOS and, yes, even on a smart watch. This app wirelessly connects guests to ‘Things’ in their rooms and allows them remote control and information access.
This might only be a first step, but it pulls together several aspects of innovative IT: wearables, mobile, IoT, touch interfaces, self-service and the inevitable cloud of services to back it up. However, rather than focusing on any particular one of these technologies, the focus is quite rightly on customer experience – it is all about hospitality, after all.
There is also the impact on running the business: efficiency, cost savings and therefore improved competitiveness in what is a crowded sector. It has an impact on staff too, and although efficiencies such as self-serve check-in give an opportunity to reduce staff numbers, they also allow progressive-thinking organisations to enable their staff to do more, acting as welcoming hosts rather than performing an administrative role.
It is interesting that a budget hotel would focus on improving service and competitive differentiation rather than just technology and cost-cutting, but this is where the IoT can have a far greater impact than the futuristic scenarios often played out in the PowerPoint decks of IoT vendors.
Organisations looking to benefit from the IoT opportunity need to think a bit differently about what they are doing, and the answer is not to start with a CIoTO, but with more integrated business thinking which can encompass a range of technologies that gather more data, permit more remote control, and are managed centrally.
Some steps any organisation looking into IoT could take would then be:
- Prioritise understanding of the core business processes that are resource intensive or time consuming.
- Identify intelligence gaps – what real-time information is available on how those processes are faring, and how might those processes be better understood?
- Find out where third parties (in particular customers) feel they do not have sufficient control or information.
- Take a holistic approach to understand the whole range of internal systems that would be impacted by the two previous steps – more information, more remote control – and plan a strategy that encompasses this bigger picture and abstracts overall control.
- Implement an element of the strategy to test out a) if customers like it, b) if the organisation can manage and cope with all the extra information generated, and ultimately c) if it makes a difference to the business.
- Refine and repeat.
Too much focus on exotic applications and devices, or a perceived need for specialist or segregated roles, undermines the reality of the benefits that might be available. The IoT, perhaps even more than many other technology advances, needs to be ‘embedded’ – not simply in devices, but in business processes, and that is where the effort needs to go first, not into the technology.
In too many situations, a discussion can end up as an argument about what was originally communicated between two parties. You know the sort of thing: “I only bought this because you promised that”; “You never told me that when we discussed it”; “Go on, then – prove that you said that”.
Many of the mis-selling cases that have cost financial institutions dearly have hinged on such a need to prove what was originally agreed. In many cases, the lack of proof from the seller’s side has been the problem – as long as the customer can claim that they were promised something, or not told about something, the law will tend to err in their favour.
Likewise, credit card ‘friendly fraud’ is a growing issue. With electronically provided goods (e.g. games, video, music), the buyer can claim that they never received the goods and claim back the payment from the credit card issuer. The seller is then out of pocket: the goods were actually delivered, but not provably so, and on top of that come the additional charges the card issuer levies for dealing with the case.
Organisations have tried various ways of dealing with such problems – all of which have been fraught with issues. Anything that depends on the recipient of an electronic communication to take an action – for example, responding to an email, clicking on a button on a web site or whatever – has been shown to fail the legal test of standing up in court in the long run.
However, one very simple tool is still there, even with the media and other commentators having long predicted its demise. Email still rules as the one standardised means of exchanging electronic information between two parties. It makes no odds what email-reading tool the recipient uses or what system the sender uses; this strong standardisation and proven track record as a communications medium make email a great choice as a starting point for evidential communications.
However, when it comes to legal proof of communication, depending on delivery and read receipts no longer works – it is far too easy for users to disable these (indeed, many email clients disable them by default). The actual content of an email cannot be counted on as being sacrosanct – it is just a container of text, and that text can be edited by the recipient or the sender.
Where legal proof of the actual delivered content is required, a different approach is needed.
By placing a proxy between the sender and recipient, an email can be ‘captured’. The proxy takes the form of an intermediary to which emails are sent automatically before being forwarded on to the original recipient. The sender still addresses the email in the normal manner – the proxy can be a completely transparent step in the process. The message can then be preserved as immutable content through the creation of a final-form PDF file, timestamped and stored either by the sender or by the proxy. The message continues on its way to the recipient, who will be unaware that this action has been taken.
If necessary, any responses can also go via the proxy, creating an evidentiary trail of what was communicated between the two parties.
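A minimal sketch of the capture step might look like the code below. It is not any particular vendor's implementation: real services render a final-form PDF and use a trusted timestamping authority, whereas this sketch simply fingerprints the raw message and records when it passed through; the addresses are illustrative.

```python
import hashlib
from datetime import datetime, timezone

def capture_message(raw_message: bytes, evidence_store: dict) -> str:
    """Record an immutable fingerprint of an outbound email before it
    is forwarded on to the real recipient. The stored digest can later
    prove that a produced copy is byte-for-byte what was sent."""
    digest = hashlib.sha256(raw_message).hexdigest()
    evidence_store[digest] = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(raw_message),
    }
    return digest  # the reference the sender keeps as evidence

store = {}
msg = (b"From: seller@example.com\r\n"
       b"To: buyer@example.com\r\n\r\n"
       b"Terms and conditions attached.")
fingerprint = capture_message(msg, store)
print(fingerprint[:16], store[fingerprint]["size_bytes"])
```

Because any change to the message changes the digest, neither party can later edit the text without the tampering being detectable.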
Such records then meet the requirements of legal admissibility – if it ever came to the point where a complaint had to go to court, these records would meet the general needs of any court worldwide. However, the general idea is to avoid court wherever possible.
By being able to find, recover and provide a communication to a recipient along with proof that this is exactly what was sent and agreed to, most issues around proof of information communicated and delivered can be stopped quickly and cost effectively at an early stage.
Legal proof of communications is not the only value of such proxies. Consider the movement of intellectual property across a value chain of suppliers, customers and other necessary third parties. Details of something pre-patent have to be communicated to a third party. By sending them via a proxy, an evidentiary copy is created and timestamped, ensuring that the sender’s rights to the intellectual property are recorded and maintained.
The use cases are many: travel companies can show what was agreed when a travel package was booked; financial companies can prove that terms and conditions were supplied to a customer and agreed by them; electronic goods retailers can show that a message was sent and delivered to a recipient on a certain day and that the recipient then did click and download the product sold to them.
Quocirca has just published a report on the subject, commissioned by eEvidence. The report, “The myth of email as proof of communication”, is available for free download here.
In a previous blog post, ‘The rise and rise of public cloud services’, Quocirca pointed out that the crowds heading for Cloud Security Expo in April 2016 should be more enthusiastic than ever, given the growing use of cloud-based services. The blog looked at the measures organisations can take to enable the use of cloud services whilst ensuring their data is reasonably protected; knowing what users are up to in the cloud rather than just saying no.
However, there is another side to the cloud coin. For many businesses, adopting cloud services will actually be a way of ensuring better protection of data, for example from the growing number of targeted cyber-attacks. A recent Quocirca research report, ‘The trouble at your door’, sponsored by Trend Micro, shows that the greatest concern about such attacks is that they will be used by cybercriminals to steal personal data.
The scale of the problem is certainly worrying. Of the 600 European businesses surveyed, 62% knew they had been targeted (many of the others were unsure) and for 42%, at least one recent attack had been successful. One in five had lost data as a result; for one in ten it was a large or devastating amount of data. One in six said a targeted attack had caused reputational damage to their business.
So how can cloud services reduce the risk? For a start, the greatest concern regarding how IT infrastructure might be attacked is the exploitation of software vulnerabilities. End-user organisations and cloud service providers alike face the flaws that inevitably arise from the software development process. The difference is that many businesses are poor at identifying and tardy in fixing such vulnerabilities, whilst for cloud service providers, their raison d’être means they must have rigorous processes in place for scanning and patching their infrastructure.
Second, when it comes to regulated data, cloud service providers make sure they are able to tick all the check boxes and more. After personal data, the data type of greatest concern is payment card data – a very specific type of personal data. Many cloud service providers will already have implemented the relevant controls for the PCI-DSS standard that must be adhered to when storing payment card data (or of course you could simply outsource collections to a cloud-based payment services provider). They will also adhere to other data security standards such as ISO27001. Cloud service providers cannot afford to claim adherence and then fall short.
If infrastructure security and regulatory compliance is not enough, think of the physical security that surrounds the cloud service providers’ data centres. And of course, it goes beyond security to resilience and availability through backup power supplies and multiple data connections.
No organisation can fully outsource the responsibility for caring for its data, but most can do a lot to make sure it is better protected, and for many a move to a cloud service provider will be a step in the right direction. Quocirca has often posed the question, “think of a data theft that has happened because an organisation was using cloud-based rather than on-premise infrastructure”: no examples have been forthcoming. Sure, data has been stolen from cloud data stores and cloud-deployed applications, but these are usually the fault of the customer, for example a compromised identity or faulty application software deployed on to a more robust cloud platform.
Targeted cyber-attacks are not going to go away; in fact, all the evidence suggests they will continue to increase in number and sophistication. The good news is that cybercriminals will seek out the most vulnerable targets, and if your infrastructure proves too hard to penetrate they will move on to the next target. A cloud service provider may give your organisation the edge that ensures this is the case.
Data centres used to be built with the knowledge that they could, with a degree of reworking, be used for 25 years or more. Now, it is a brave person who would hazard a guess as to how long a brand new data centre would be fit for purpose without an extensive refit.
Why? Power densities have been rapidly increasing, requiring different distribution models. Cooling has changed from the standardised computer room air conditioning (CRAC) model to a range of approaches including free air, swamp, Kyoto Wheel and hot running, while also moving from full-volume cooling to highly targeted contained rows or racks.
The basic approach to IT has changed too – physical, one-application-per-box has been superseded by the more abstract virtualisation. This in turn is being superseded by private clouds, often interoperating with public infrastructure and platform as a service (I/PaaS) systems, which are also continuously challenged by software as a service (SaaS).
Even at the business level, the demands have changed. The economic meltdown in 2008 led to most organisations realising that many of their business processes were far too static and slow to change. Businesses are therefore placing more pressure on IT teams to ensure that the IT platform can respond to provide support for more flexible processes and, indeed, to provide what individual employees are now used to in their consumer world – continuous delivery of incremental functional improvements.
What does this mean for the data centre, then? Quocirca believes that it would be a very brave or foolish (no – just foolish) organisation that embarked on building itself a new general data centre now.
Organisations must start to prioritise their workloads and plan when and how these are renewed, replaced or relocated. If a workload is to be renewed, is it better replaced with SaaS, or relocated onto I/PaaS? If it is supporting the business to the right extent, would it be better placed in a private cloud in a colocation facility, or hived off to I/PaaS?
Each approach has its merits – and its problems. What is clear is that the problem will continue to be a dynamic one, and that organisations must plan for continuous change.
Tools will be required to intelligently monitor workloads and move them and their data to the right part of the overall platform as necessary. This ‘necessary’ may be defined by price, performance and/or availability – but it has to be automated as much as possible so as to provide the right levels of support to the business.
Therefore, the tools chosen must be able to deal with future predictions – when is it likely that a workload will run out of resources; what will be the best way to avoid such issues; what impact could this have on users?
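As a sketch of the simplest possible form of such a prediction, a tool could extrapolate a workload's resource consumption linearly and flag when it will hit capacity. The function and figures below are hypothetical; production tools would use far more sophisticated, seasonality-aware models.

```python
def hours_until_exhaustion(samples, capacity):
    """Naive linear forecast: derive a growth rate from the first and
    last (hour, usage) samples and estimate how long until usage
    reaches capacity. Returns None if usage is flat or shrinking."""
    (t0, u0), (t1, u1) = samples[0], samples[-1]
    rate = (u1 - u0) / (t1 - t0)  # usage units per hour
    if rate <= 0:
        return None  # no exhaustion on the current trend
    return (capacity - u1) / rate

# Hypothetical example: storage use grew from 400 GB to 460 GB over
# 24 hours on a 1,000 GB volume.
print(hours_until_exhaustion([(0, 400), (24, 460)], 1000))  # 216.0
```

A result like 216 hours (nine days) gives the automation time to rebalance the workload – or alert a human – well before users notice.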
These tools need to be able to move things rapidly and seamlessly – this will require use of application containers and advanced data management systems. End-to-end performance monitoring will also be key, along with root cause identification, as the finger pointing of different people across the extended platform has to be avoided at all costs.
If it becomes apparent that the data centre that you own is changing massively, what can you do with the facility? Downsizing is an option – but can be costly. A smaller data centre could leave you with space that could be repurposed for office or other business usage – but this only works if the conversion can be carried out effectively. New walls will be required that run from real floor to real ceiling – otherwise you could end up trying to cool down office workers while trying to keep the IT equipment cool at the same time.
Overall security needs to be fully maintained – is converting a part of the data centre to general office space a physical security issue? It may make sense to turn it into space for the IT department – or it may just not be economical.
A data centre facility is constructed to do one job: support the IT systems. If it finds itself with a much smaller amount of IT to deal with, you could find that replacing UPS, auxiliary power and cooling systems is just too expensive. In this case, colocation makes much better sense – which leaves you with the nuclear option – an empty data centre that needs repurposing.
Repurposing a data centre is probably a good business decision. It could be cost-effectively converted into office space – unlike where only part of it is converted, a full conversion can avoid many of the pitfalls of trying to run a data centre and an office in the same facility. If all else fails, that data centre is valuable real estate. If the business cannot make direct use of it, a decommissioned data centre could be a suitable addition to the organisation’s bottom line through selling it off.