In this guest post, Rafi Azim-Khan, head of data privacy in Europe at legal firm Pillsbury Law, explains how the cloud provider community can side-step the European Court of Justice’s Safe Harbour verdict.
The European Court of Justice (ECJ), ruling on a case brought by Austrian student Maximilian Schrems against Ireland’s Data Protection Commissioner, has confirmed the current Safe Harbour system of data-sharing between EEA states and the US is invalid. It is a conclusion that looks set to have a widespread economic impact, given just how many businesses rely on Safe Harbour to transfer and handle data in the US.
The Court has ruled that Facebook should not have been allowed to save Schrems’ private data in the US – essentially a formal confirmation of the criticism that has been building against the scheme for some time.
The million-dollar question now is: where does this leave US companies that rely heavily on Safe Harbour? And what about US cloud providers that are yet to build a European datacentre?
The facts of the matter
To recap, the case arose from proceedings before the Irish courts brought by Schrems, in which he challenged the Irish Data Protection Commissioner’s decision not to investigate claims that his personal data should have been safeguarded against surveillance by the US intelligence services while it was in the possession of Facebook.
The claim was brought in Ireland, as Facebook’s European operations are headquartered there, but was referred up to the ECJ.
So, given the serious question marks that loom over the future of Safe Harbour, and the threat of significant new fines under the imminent General Data Protection Regulation, what should US businesses, including cloud providers, be doing now to avoid being forced to process their data in the EU?
Handily, there is another legal mechanism that they can turn to.
Binding Corporate Rules (BCRs) are designed to allow multinational companies to transfer personal data from the EEA to their affiliates located outside of the EEA in a compliant manner.
BCRs are increasingly becoming a preferred option for those who have a lot of data flowing internationally and wish to demonstrate compliance, keep regulators at bay and prepare for a world without Safe Harbour.
Companies that put BCRs in place commit to certain data security and privacy standards for their processing activities and, once approved, the “blessed” scheme provides a safe environment within which data transfers can take place.
BCRs also have material long-term benefits, in the sense that some upfront work, via preparing and submitting the application, should reduce the risk of fines and undoubtedly position an applicant in line for a privacy “seal” once the new EU Data Protection Regulation is introduced.
Model contract clauses, which can likewise be used to “adequately safeguard” data transfers from Europe, present a safer route to compliance than Safe Harbour as things stand.
However, they have a number of drawbacks compared to BCRs, including inflexibility, the large number of contracts required in big organisations, and the need for regular updates.
Post-Safe Harbour: Next steps
In short, any US companies, whether big brands or smaller enterprises, that have existing EU offices, customers, marketing or business partners, as well as those which are yet to build an EU datacentre, would be well advised to reassess their procedures, policies and documents regarding how they handle data.
The storm of new laws, much higher fines and tougher enforcement – with more due shortly, when the final draft of the new EU Data Protection Regulation is published – means it would be a false economy not to act now and seek advice.
Boosting the take-up of cloud services across Europe has been the mission statement of both public sector and commercial organisations for several years now.
From the latter point of view, HP has been actively involved in this since the formal launch of its Cloud 28+ initiative in March 2015, which aims to provide European companies of all sizes with access to a federated catalogue that they can use to buy cloud services.
If you’re thinking this sounds spookily like the UK government’s G-Cloud public sector-focused procurement initiative, you would be right. The key principles are more or less the same, except the use of Cloud 28+ isn’t limited to government departments or local authorities. It’s open to all.
That message – during the two years that HP has been talking up its efforts in this area – doesn’t seem to have reached everyone, though, particularly the providers one would assume would be a good fit for it.
Namely, the members of the G-Cloud community, who are already well-versed in how a setup like Cloud 28+ operates, and what is required to win business through it.
However, several key participants in the government procurement framework have privately expressed misgivings to Ahead In the Clouds about whether HP would welcome their involvement because they don’t use its technologies to underpin their services.
Similarly, some said they weren’t sure how they felt about hawking their cloud wares through an HP-branded catalogue, or whether it would mean sharing details of the deals they do through Cloud 28+ with the firm.
The latter has been a long-held concern of cloud resellers, because – once the maker of the service you’re reselling access to knows who’s buying it – what’s to stop them from cutting you out and dealing with the customer direct?
HP seemed intent on addressing all these points during its Cloud 28+ in Action event in Brussels earlier this week, which saw the firm take steps to almost distance itself from the initiative it is supposed to be spearheading.
As such, there were protestations on stage from Xavier Poisson, EMEA vice president of HP Converged Cloud, about how Cloud 28+ belongs to the providers that populate its catalogue, not to HP, and how its future will be influenced by participants.
The attitude seems to be, while HP may have had a hand in inviting people to the Cloud 28+ party, it’s not going to dictate who should be invited, the tunes they should dance to or what food gets served. It’s simply providing a venue and directing people how to get there, before letting everyone get on with enjoying the revelry.
From a governance point of view, it won’t be HP calling the shots. That will be the job of a new, independent Cloud 28+ board, which made its debut at the event.
On the topic of billing, the firm made a point of saying users won’t be able to pay for services through Cloud 28+ itself, and that it will – instead – rely on third parties to handle the payment and settlement side of using the catalogue.
For those worried that being a non-user of HP technologies could preclude them from joining Cloud 28+, the news wasn’t so good.
It emerged that providers will have one year from joining Cloud 28+ to ensure the applications they want to sell through the catalogue run on the Helion-flavoured version of OpenStack – a move HP said is designed to guard users against the risk of vendor lock-in.
Even so, given the firm spent the majority of the event trying to play down its role in the initiative, it’s a stipulation that might leave an odd taste in the mouth of some would-be participants and users. Especially in light of the uncertainty over just how open vendor-backed versions of OpenStack truly are.
HP said this is an area that could be reviewed later down the line by the Cloud 28+ governance board, but it will be interesting to see (once the initial hype around its launch dies down) whether this emerges as a turn-off for some potential participants.
Opening up Europe for business
Admittedly, it would be short-sighted of them to dismiss joining Cloud 28+ out of hand on that basis, in light of the opportunities it could potentially open up for them to do business across Europe.
While the European Commission has stopped short of endorsing the initiative, it has acknowledged that what Cloud 28+ is trying to do shares some common ground with its vision of creating a Digital Single Market (DSM) across Europe, and might be worth paying attention to.
If, for example, Cloud 28+ emerges as the preferred method for enterprises to procure IT once the preparatory work to deliver the DSM is complete, the Helion OpenStack requirement would pale in significance against the amount of business participants could gain through it.
Measuring the success of Cloud 28+
While Cloud 28+ is still under construction, it’s only right the focus has been on the provider side of things, because – without them – there is no service catalogue.
HP is preparing to go live with Cloud 28+ in early December at its Discover event in London, and Poisson said the “client-side” of it will become a bigger focus after that, so it’s likely we’ll hear some momentum announcements around end-user adoption in the New Year.
But, until there is a sizeable amount of business transacted through the catalogue, or some other form of demonstrable end-user interest in it, there will remain a fair few providers who won’t see why it’s worth their while to join.
In this guest post, Frank Denneman, chief technologist of storage management software vendor PernixData, sets out why datacentre management could soon emerge as the main use case for big data analytics.
IT departments can sometimes be slow to recognise the power they wield, and the rise of cloud computing is a great example of this.
Over the last three decades, IT departments have focused on assisting the wider business, through automating activities that could increase output or refine the consistency of product development processes, before turning their attention to the automation of their own operations.
The same needs to happen with big data. A lot of organisations have looked to big data analytics to discover unknown correlations, hidden patterns, market trends, customer preferences and other useful business information.
Many have deployed big data systems, leaving IT teams to look for hidden patterns between these new workloads and the resources they consume within their own datacentre, and to see how this impacts current workloads and future capacity.
The problem is that virtual datacentres are composed of a disparate stack of components, with every system logging and presenting whatever data the vendor deems appropriate.
Unfortunately, variations in the granularity of information, time frames, and output formats make it extremely difficult to correlate data and understand the dynamics of the virtual datacentre.
However, hypervisors are very context-rich information systems, and are jam-packed with data ready to be crunched and analysed to provide a well-rounded picture of the various resource consumers and providers.
Having this information at your fingertips can help optimise current workloads and identify systems better suited to host new ones.
Operations will also change, as users become able to establish a fingerprint of their system. Instead of micro-managing each separate host or virtual machine, they can monitor the fingerprint of the cluster.
For example, how have incoming workloads changed the cluster’s fingerprint over time? Answering that paves the way for deeper trend analysis of resource usage, as the sketch below illustrates.
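To make this concrete, below is a minimal sketch in Python of what building and tracking such a cluster fingerprint might look like. It assumes per-VM hypervisor metrics have already been exported as simple rows of CPU, memory and IO figures; the field names, the sample values and the mean/percentile summary are illustrative assumptions, not any particular hypervisor’s export format or API.

# Minimal sketch: build a cluster "fingerprint" from per-VM hypervisor
# metrics and track how it drifts as new workloads arrive.
# The schema (timestamp, vm, cpu_pct, mem_gb, iops) is an assumption
# for illustration, not a specific vendor's format.
import pandas as pd

samples = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-11-01", "2015-11-01",
                                 "2015-12-01", "2015-12-01"]),
    "vm":      ["web-01", "db-01", "web-01", "db-01"],
    "cpu_pct": [35.0, 60.0, 48.0, 71.0],
    "mem_gb":  [4.2, 12.5, 5.1, 13.0],
    "iops":    [120, 900, 160, 1400],
})

# One row per period: the cluster-level fingerprint (summary statistics
# across all VMs), rather than per-host or per-VM micro-metrics.
fingerprint = (
    samples.groupby(pd.Grouper(key="timestamp", freq="MS"))
           .agg(cpu_mean=("cpu_pct", "mean"),
                cpu_p95=("cpu_pct", lambda s: s.quantile(0.95)),
                mem_total=("mem_gb", "sum"),
                iops_total=("iops", "sum"))
)

# Month-on-month drift shows how incoming workloads reshape the cluster.
print(fingerprint.pct_change())

Run against a real metrics export, the same handful of lines would surface, for example, a steady climb in total IOPS across the cluster long before any single host looks stressed.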
Information like this allows users to manage datacentres differently and – in turn – design them with a higher degree of accuracy.
The beauty of having this set of data all in the same language, structure and format is that it can now start to transcend the datacentre.
The dataset gleaned from each facility can be used to manage the IT lifecycle, improve deployment and operations, and optimise existing workloads and infrastructure, leading to better future designs. But why stop there?
Combining datasets from many virtual datacentres could generate insights that improve the IT lifecycle even more.
By comparing facilities of the same size, or datacentres in the same vertical market, it might be possible to develop an understanding of the TCO of running the same VM on a particular host system, or storage system.
Alternatively, users may discover the TCO of running a virtual machine in a private datacentre versus a cloud offering. And that is exactly the type of information needed in modern datacentre management.
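As a crude illustration of that private-versus-cloud comparison, the sketch below works out a monthly per-VM cost for owned hardware against an always-on cloud instance. Every figure in it is a hypothetical assumption chosen for illustration, not benchmark data or real pricing.

# Minimal sketch of a private-vs-cloud per-VM TCO comparison.
# All inputs are illustrative assumptions, not real pricing.
HOURS_PER_MONTH = 730

def private_vm_cost(capex_per_host, host_lifetime_months,
                    opex_per_host_month, vms_per_host):
    """Approximate monthly cost of one VM on owned hardware."""
    monthly_capex = capex_per_host / host_lifetime_months
    return (monthly_capex + opex_per_host_month) / vms_per_host

def cloud_vm_cost(price_per_hour):
    """Monthly cost of an equivalent always-on cloud instance."""
    return price_per_hour * HOURS_PER_MONTH

# Hypothetical inputs: a $10,000 host amortised over 36 months, with
# $250/month of power, cooling and admin, hosting 20 VMs, versus a
# $0.10/hour cloud instance.
print(f"private: ${private_vm_cost(10000, 36, 250, 20):.2f}/month")
print(f"cloud:   ${cloud_vm_cost(0.10):.2f}/month")

The real value of the cross-datacentre datasets described above is that they would let the inputs to a calculation like this be measured rather than guessed.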
In this guest post Amit Singh, president of Google for Work, explains why enterprises need to start adopting a mobile- and cloud-first approach to doing business if they want to remain one step ahead of the competition.
One of the most exciting things happening today is the convergence of different technologies and trends. In isolation, a trend or a technological breakthrough is interesting, at times significant. But taken together, multiple converging trends and advances can completely upend the way we do things.
Netflix is a classic example. It capitalised on the widespread adoption of broadband internet and mobile smart devices, as well as top-notch algorithmic recommendations and an expansive content strategy, to connect a huge number of people with content they love. The company just announced that it has more than 65 million subscribers.
Other examples of new and improved approaches to existing problems abound. As Tom Goodwin, SVP of Havas Media, said recently: “Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate. Something interesting is happening.”
Each of these companies has capitalised on a convergence of various trends and technological breakthroughs to achieve something spectacular.
Some of the factors I see driving change include exponential technological growth and the democratisation of opportunity, as well as the emergence of public cloud platforms that are fast, secure and easy to use. Together, these trends underpin a powerful formula for rapid business growth: mobile plus cloud.
We know the future of computing is mobile. There are 2.1 billion smartphone subscriptions worldwide, and that number grew by 23% last year.
We spend a lot of time on our mobile devices. Since 2014, more internet traffic has come from mobile devices than from desktop computers. Forward-looking companies are building mobile-first solutions to reach their users and customers, because that’s where we all are.
On the backend, the cost of computing has been dropping exponentially, and thanks to the cloud anyone now has access to massive computing and storage resources on a pay-as-you-go basis. Companies can get started by hosting their data and infrastructure in the cloud for almost nothing.
Hence mobile plus cloud. You can use mobile platforms to reach customers while powering your business with cloud computing. You can build lean and scale fast, and benefit automatically from the exponential growth curve of technology.
As computing power increases and costs decrease, cloud platforms grow more capable and the mobile market expands. For businesses positioned to exploit this, technological change is an opportunity.
How cloud challenges the incumbents to think different
Snapchat is one of the best examples of how this can work. It was founded in 2011. The team used Google Cloud Platform for their infrastructure needs and focused relentlessly on mobile. Just four years later, Snapchat supports more than 100 million active users per day, who share more than 8,000 photos every second.
The mobile plus cloud formula is exciting, but it also poses challenges for established players. According to a study by IBM, some companies spend as much as 80% of their IT budgets on maintaining legacy systems, such as onsite servers.
For these companies, technological change is a threat. Legacy systems don’t incorporate the latest performance improvements and cost savings. They aren’t benefitting from exponential growth, and they risk falling behind their competitors who are.
This can be daunting, since it’s not realistic for most companies to make big changes overnight.
If you run a business with less than agile legacy systems, here’s one practical way to respond to the fast pace of technological change: foster an internal culture of experimentation.
The cost of trying new technologies is very low, so run trials and expand them if they produce results. For example, try using cloud computing for a few data analysis projects, or give a modern browser to employees in one department of the company and see if they work better.
There are no “one size fits all” solutions, but with an open mind, smart leaders can discover what works best for their team.
It’s important to try, especially as technology becomes more capable and more of the world adopts a mobile plus cloud formula. Those who experiment will be best placed to capitalise on future convergences.
Cloud-championing CIOs love to bang on about how ditching on-premise technologies helps liberate IT departments, as it means they can spend less time propping up servers and devote more to developing apps and services that will propel the business forward.
Google has spent the best part of a decade telling firms to ditch on-premise productivity tools and use its cloud-based Google Apps suite instead. So, the news that it’s moving all of the company’s in-house IT assets to the cloud may have surprised some.
Surely a company that spends so much time talking up the benefits of cloud computing should have ditched on-premise technology years ago, right?
Not necessarily, and with so many enterprises wrestling with the what, when and how much questions around cloud, the fact Google has only worked out the answers for itself now is sure to be heartening stuff for enterprise cloud buyers to hear.
Reserving the right
The search giant has been refreshingly open in the past with its misgivings about entrusting the company’s corporate data to the cloud (other people’s clouds, that is) because of security concerns.
Instead, it prefers employees to use its online storage, collaboration and productivity tools, and has shied away from letting them use services that could potentially send sensitive corporate information to the datacentres of its competitors.
This was a view the company held as recently as 2013, but now it’s worked through its trust issues, and made a long-term commitment to running its entire business from the cloud.
So much so, the firm has already migrated 90% of its corporate applications to the cloud, a Google spokesperson told the Wall Street Journal.
What makes this really interesting is the implications this move has for other enterprises. If a company the size of Google feels the cloud is a safe enough place for its data, surely it’s good enough for them too?
Particularly as Google has overcome issues many other enterprises may have grappled with already (or are likely to) during their own move to the cloud.
Walking the walk
What the Google news should serve to do is get enterprises thinking a bit more about how bought into the idea the other companies whose cloud services they rely on really are.
While they publicly talk up the benefits of moving to the cloud, and why it’s a journey all their customers should be embarking on, have they gone (or are they in the throes of going) on a similar journey themselves?
If not, why not, and why should they expect their customers to do so? If they are (or have), then they should talk about it. Not only will doing so add some much-needed credibility to their marketing babble, it will also show customers they really do believe in cloud, and aren’t just talking it up because they’ve got a product to sell.
Myths and misunderstandings around the use and benefits of cloud computing are slowing down IT project implementations, impeding innovation, inducing fear and distracting enterprises from yielding business efficiency and innovation, analyst firm Gartner has warned.
It has identified the top ten common misunderstandings around cloud:
Myth 1: Cloud is always about the money
Assuming that the cloud always saves money can lead to career-limiting promises. Saving money may end up being one of the benefits, but it should not be taken for granted. It doesn’t help that the big daddies of the cloud world – AWS, Google and Microsoft – are tripping over each other to cut prices. But cost savings should be seen as a nice-to-have benefit, while agility and scalability should be the top reasons for adopting cloud services.
Myth 2: You have to do cloud to be good
According to Gartner, this is the result of rampant “cloud washing.” Some cloud washing is based on a mistaken mantra (fed by hype) that something cannot be “good” unless it is cloud, a Gartner analyst said.
Besides, enterprises are labelling many of their IT projects as cloud to tick a box and to secure funding from stakeholders. People are falling into the trap of believing that if something is good, it has to be cloud.
There are many use cases where cloud may not be a great fit – for instance, if your business does not experience many peaks and lulls in demand, then cloud may not be right for you. Also, for enterprises in heavily regulated sectors, or those operating within strict data protection regulations, a highly agile datacentre under IT’s full control may be the best bet.
Myth 3: Cloud should be used for everything
Related to the previous myth, this refers to the belief that the characteristics of the cloud are applicable to everything – even legacy applications or data-intensive workloads.
A legacy application that doesn’t change is not a good candidate for migration unless there are cost savings to be had.
Myth 4: “The CEO said so” is a cloud strategy
Many companies don’t have a cloud strategy and are doing it just because their CEO wants them to. A cloud strategy begins by identifying business goals and mapping the potential benefits of the cloud to them, while mitigating the potential drawbacks. Cloud should be thought of as a means to an end, and the end must be specified first, Gartner advises.
Myth 5: We need one cloud strategy or one vendor
Cloud computing is not one thing, warns Gartner. Cloud services span the IaaS, SaaS and PaaS models, and cloud types include private, public and hybrid clouds. Different applications are the right candidates for different types of cloud. A cloud strategy should be based on aligning business goals with potential benefits. Those goals and benefits differ across use cases, and should be the driving force for businesses, rather than standardising on one strategy.
Myth 6: Cloud is less secure than on-premises IT
Cloud is perceived as less secure. To date, however, there have been very few security breaches in the public cloud – most breaches continue to involve on-premises datacentre environments.
Myth 7: Cloud is not for mission-critical use
Cloud is still seen as being mainly for test and development workloads. But the analyst firm notes that many organisations have progressed beyond early use cases and are using the cloud for mission-critical workloads. There are also many enterprises (such as Netflix or Uber) that are “born in the cloud” and run their business completely in the cloud.
Myth 8: Cloud = Datacentre
Most cloud decisions are not (and should not be) about completely shutting down datacentres and moving everything to the cloud. Nor should a cloud strategy be equated with a datacentre strategy. In general, datacentre outsourcing, datacentre modernisation and datacentre strategies are not synonymous with the cloud.
Myth 9: Migrating to the cloud means you automatically get all cloud characteristics
Don’t assume that “migrating to the cloud” means the characteristics of the cloud are automatically inherited from lower levels (like IaaS), warned Gartner. Cloud attributes are not transitive. Distinguish applications hosted in the cloud from true cloud services. There are “half steps” to the cloud that have some benefits (there is no need to buy hardware, for example) and these can be valuable. However, they do not provide the same outcomes.
Myth 10: Private cloud = virtualisation
Virtualisation is a cloud enabler, but it is not the only way to implement cloud computing, nor is it sufficient on its own. Even if virtualisation is used (and used well), the result is not cloud computing. This is most relevant in private cloud discussions, where highly virtualised, automated environments are common and, in many cases, are exactly what is needed. Unfortunately, these are often erroneously described as “private cloud”, according to the analyst firm.
“From a consumer perspective, ‘in the cloud’ means where the magic happens, where the implementation details are supposed to be hidden. So it should be no surprise that such an environment is rife with myths and misunderstandings,” said David Mitchell Smith, vice president and Gartner Fellow.