On Day 1 of its annual conference VMworld 2014, themed “No Limits”, VMware unveiled its strategies around the open cloud platform OpenStack and the container cluster management technology Kubernetes. It also launched new tools to extend its software-defined datacentre and hybrid cloud offerings.
Open software-defined datacentre
One of the significant announcements was VMware Integrated OpenStack – a distribution that gives enterprises – especially SMBs – the flexibility to build a software-defined datacentre on any technology platform (VMware or not).
The VMware Integrated OpenStack distribution is aimed at helping customers repatriate workloads from “unmanageable and insecure public clouds”. Take that, AWS.
Container technology on VMware infrastructure: the Kubernetes collaboration
VMware is collaborating with Docker, Google and Pivotal to allow enterprises to run and manage container-based applications on its platforms.
At the annual conference, VMware said it has joined the Kubernetes community and will make Kubernetes’ patterns, APIs and tools available to enterprises. Kubernetes, currently in pre-production beta, is an open-source implementation of container cluster management.
With Google, VMware’s efforts will focus on bringing the pod-based networking model of Kubernetes to Open vSwitch, to enable multi-cloud integration of Kubernetes.
“Not only will deep integration with the VMware product line bring the benefits of Kubernetes to enterprise customers, but their commitment to invest in the core open source platform will benefit users running containers,” said Joerg Heilig, VP Engineering, Google Cloud Platform. “Together, our work will bring VMware and Google Cloud Platform closer together as container based technologies become mainstream.”
With Docker, VMware will collaborate to enable Docker Engine in VMware workflows. It will also work to improve interoperability between Docker Hub and VMware vCloud Air, VMware vCenter Server and VMware vCloud Automation Center.
New hybrid cloud capabilities
At VMworld, VMware released new hybrid cloud service capabilities and a new line-up of third-party mobile application services. The new capabilities include vCloud Air Virtual Private Cloud OnDemand, which offers customers on-demand access to vCloud Air. Another capability – VMware vCloud Air Object Storage – is aimed at providing users with scalable storage options for unstructured data. It will enable customers to easily scale to petabytes and only pay for what they use, according to the company.
It also launched mobile development services within the VMware vCloud Air service catalog.
Management as a service offerings
VMware also released two new IT management tools under its vRealize brand – for managing a software-defined datacentre and public cloud infrastructure services (IaaS).
VMware vRealize Air Automation is the cloud management tool that allows users to automate the delivery of application and infrastructure services while maintaining compliance with IT policies.
Meanwhile, VMware vRealize Operations Insight offers performance management, capacity optimisation, and real-time log analytics. The tool also extends operations management beyond vSphere to an enterprise’s entire IT infrastructure – another sign that VMware is opening up its ecosystem to accommodate other virtualisation platforms.
Partnership with Dell on software-defined services
VMware has extended its collaboration with Dell to combine its NSX network virtualisation platform with the latter’s converged infrastructure products.
“Global organisations are adopting the software-defined datacentre as an open, agile, secure and efficient architecture to simplify IT and transition to the hybrid cloud,” said Raghu Raghuram, executive vice president, SDDC division, VMware. “The software-defined datacentre enables open innovation at speeds that cannot be matched in the hardware-defined world. As partners, VMware and Dell will advance networking in the SDDC, and collaborate to make advanced network virtualisation available to mutual customers.”
Partnership with HP on hybrid cloud
VMware and HP have extended their collaboration to give momentum to users’ SDDC and hybrid cloud adoption. As part of the partnership, HP Helion OpenStack will support enterprise-class VMware virtualisation technologies.
The companies will also make a standalone HP-VMware networking solution generally available. Together, these collaborative efforts can help simplify the adoption of the software-defined datacentre and hybrid cloud with less risk, greater operational efficiency and lower costs.
All in all, it looks like VMware is opening up to competitive platforms and warming up to open source technologies, but retains its standoffish traits when it comes to public cloud services.
Just when I thought to myself that cloud services must be improving – there are fewer outages reported this year than there were last year – Microsoft’s Azure cloud service went down for many users, including European ones, earlier this week.
Microsoft’s Azure status page currently displays a chirpy:
Everything is running great.
It also displays a bright green check beside its core Azure platform components such as Active Directory, and popular cloud services including its SQL Databases and storage services.
A snoop into its history page shows that all wasn’t good aboard Azure on Monday and Tuesday. Users experienced full service interruption and performance degradation across several services including StorSimple, storage services, website services, backup and recovery, and virtual machine offerings.
For a brief moment on Tuesday, August 19th, a subset of its customers in West Europe and North Europe using Virtual Machines, SQL Database, Cloud Services, and Storage were unable to access Azure resources or perform management operations. Users accessing Azure’s Websites cloud service in Northern Europe also faced connectivity issues.
WELCOME TO Microsoft® (Photo credit: Wikipedia)
The previous day, some of its customers across multiple regions were unable to connect to Azure Services such as Cloud Services, Virtual Machines, Websites, Automation, Service Bus, Backup, Site Recovery, HDInsight, Mobile Services, and StorSimple.
Some of the services were down for almost five hours.
This week’s global outage follows last week’s (August 14th) Azure outage, where users across multiple regions experienced full service interruption to Visual Studio Online. The news doesn’t bode well for CEO Satya Nadella’s “cloud-first” strategy.
Well, I may have tempted fate. Resilience and reliability are two words I’ll use sparingly to describe public cloud services.
The Internet of Things, big data, and social media are all creating an insatiable demand for scalable, sophisticated and agile IT resources, making datacentres a true utility. This is prompting big tech and telecom companies to drift a bit from their core competency and build their own customised datacentres – take Telefonica’s €420m investment in its new Madrid datacentre.
But the mind-boggling growth of computing infrastructure is occurring amid shocking increases in energy prices. Datacentres consume up to 3% of global electricity and produce 200 million metric tons of carbon dioxide, at an annual cost of $60bn. No wonder IT energy efficiency is a primary concern for everyone from CFOs to climate scientists.
In this guest blog post, Dave Wagner, TeamQuest’s director of market development, who has 30 years of experience in the capacity management space, explains why enterprises must not be too hung up on PUE alone to measure their datacentre efficiency.
Measuring datacentre productivity? Go beyond PUE
– by Dave Wagner
In their relentless pursuit of cost effectiveness, companies measure datacentre efficiency with power usage effectiveness (PUE). The metric measures the total amount of power coming onto the datacentre floor, divided by how much of that power is actually used by the computing equipment.
PUE = Total facility energy ÷ IT equipment energy
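For illustration, here is a minimal sketch of that calculation in code; the meter readings are hypothetical numbers chosen for the example, not figures from this article.

```python
# Minimal sketch of the PUE calculation; the readings below are hypothetical.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

total_kwh = 1060.0   # assumed: all energy drawn by the facility (IT, cooling, lighting)
it_kwh = 1000.0      # assumed: energy actually delivered to the computing equipment

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")                 # 1.06
print(f"Share reaching IT gear = {it_kwh / total_kwh:.0%}")  # ~94%
```

A PUE of 1.06 therefore corresponds to roughly 94% of incoming power reaching the IT equipment – the same arithmetic behind the example discussed below.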
PUE is a necessary but not sufficient indicator for gauging the costs associated with running or leasing datacentres.
While PUE is a detailed measure of datacentre electrical efficiency, it is only one of several elements that actually determine total efficiency. In the bigger picture, the focus should be on more holistic and accurate measures of business productivity, not solely on efficient use of electricity.
Gartner analyst Cameron Haight talked about how a very large technology company owns the most efficient datacentre in the world, with a PUE of 1.06. This basically means that roughly 94% of the power that comes onto the floor actually gets to processing equipment. This remarkably efficient PUE figure does not tell us what they do with all of that power, or how much total work is accomplished. If all that power is going to servers that are switched on but essentially idling and not actually accomplishing any useful work, what does PUE really tell us? Actual efficiency in terms of doing real-world work could be nearly zero even when the PUE metric, viewed in isolation, indicates a well-run datacentre.
Datacenter (Photo credit: Wikipedia)
Boiled down, what companies end up measuring with PUE is how efficiently they are moving electricity around within the datacentre.
By some estimates, many datacentres use only 10-15% of their electricity to power servers that are actually computing something. Companies should minimise costs and energy use, but nobody invests in a company solely based on how efficiently it moves electricity.
Datacentres are built and maintained for their computing capacity, and for the business work that can be done with it. I recommend correlating computing and power efficiency metrics with the amount of useful work done, and with customer or end-user satisfaction metrics. When these factors are optimised in a continuous fashion, true optimisation can be realised.
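To make that concrete, here is a rough, hypothetical sketch of the kind of correlation I have in mind – it pairs the facility’s energy figures with a business-level measure of useful work (transactions served, in this made-up example) rather than stopping at electrical efficiency. None of the numbers or metric names come from a real datacentre.

```python
# Hypothetical sketch: pairing energy metrics with a business measure of useful
# work. The figures and the "transactions" metric are illustrative assumptions.
total_facility_kwh = 1060.0           # assumed meter reading for the period
it_equipment_kwh = 1000.0             # assumed energy delivered to IT gear
transactions_completed = 40_000_000   # assumed business workload in the same period

pue = total_facility_kwh / it_equipment_kwh
work_per_kwh = transactions_completed / total_facility_kwh

print(f"PUE: {pue:.2f}")
print(f"Useful work per facility kWh: {work_per_kwh:,.0f} transactions")
# Two sites can share an identical PUE yet differ wildly on the second number,
# which is the whole point of looking beyond electrical efficiency.
```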
I’ve talked about addressing power and thermal challenges in datacentres for over a decade, and have seen progress made – recent statistics show a promising slowdown in datacentre power consumption rates in the US and Europe due to successful efficiency initiatives. Significant improvements in datacentre integration have helped IT managers control the different variables of a computing system, maximising efficiency and preventing over- or under-provisioning, both having obvious negative consequences.
An integrated approach to planning and managing datacentres enables IT to automate and optimise performance, power, and component management, with the goal of efficiently balancing workloads, response times, and resource utilisation with business changes. Just as the IT side analyses the relationships between the components of the stack – networking, server, compute, and applications – the business side of the equation must always be an integral part of these analyses. Companies should always ask how much work they are accomplishing with the IT resources they have; unfortunately, that is often easier said than done. In the majority of datacentres and connected enterprises, the promise of continuous optimisation has not been fully realised, leaving lots of room for improvement.
As datacentres grow in size and capabilities, so must the tools used to manage them. Advanced analytics have become essential to bridging IT and business demands, starting with relatively simple correlative and descriptive methods and progressing through predictive to prescriptive approaches. Predictive analytics are uniquely suited to understanding the nonlinear nature of virtualised datacentre environments.
These advanced analytic approaches enable enterprises to combine IT and non-IT metrics in such a powerful way that the data generated by the networked computing stack can become the basis for automated and embedded business intelligence. In the most sophisticated scenarios, analytics and machine learning algorithms can be applied in such a way that the datacentre learns from itself and generates insight and models for decision-making approaching the level of artificial intelligence.
I am just back from Oregon where I attended a workshop at Intel’s Hillsboro campus. What amazed me the most – apart from the most delicious Peruvian cuisine I had in Portland, of course – is Intel’s large presence in the area and the number of big datacentres in Oregon.
Intel is the biggest employer in the region and has multiple, vast campuses there. It even has its own airport in Hillsboro, Oregon, from where it operates regular flights to its Santa Clara headquarters for its employees. Several flights, each carrying up to 40 Intel employees, operate every day. The hotel I stayed at in Hillsboro told me that on any given day, about 70% of the people it serves are Intel-related.
Apart from Intel almost hijacking Oregon with its presence, the state is also home to many datacentre facilities. Facebook (Prineville datacentre), Google (its first datacentre – The Dalles), Amazon (Boardman), Apple (also Prineville), and Fortune Datacentres (Hillsboro) – they all have large facilities in Oregon.
One of the primary reasons many tech giants consider Oregon the home for their datacentres is lower costs. Oregon does not have a sales tax, which means computer products, building materials and services are cheaper than elsewhere in the US. In addition, power – a main datacentre money-guzzler – is cheaper in Oregon. Furthermore, the local government lures tech giants with incentives such as tax breaks and subsidies. All these factors attract datacentre investment here.
Prineville, Oregon (Photo credit: Wikipedia)
Because of the tech culture of the region, many professionals develop server management and virtualisation skills. The emphasis on IT skills in the universities and Silicon Valley’s investment in regular training workshops make the workforce in the area more talented and skilled for datacentre management.
Oregon’s weather is comparatively mild. This makes the tricky task of datacentre cooling a little bit easier. It is simpler to devise cooling strategies for a facility when the ambient temperature does not vary widely. Oregon does not get baking hot like Texas or Kansas in the summer, nor does it get overwhelmingly snowed under in winter.
The vast stretches of fibre optic cable that run even across Oregon’s mountains, lakes and deserts provide fast connections with latency of just milliseconds. Its proximity to Silicon Valley is another draw for datacentre investment.
Geography, stability and security
Big cloud and IT service providers love political and economic stability and physical security, and Oregon gives them that. The region is not too prone to natural disasters such as volcanic eruptions, earthquakes or hurricanes – which acts as another big attraction for datacentre builders. Take Iceland, for instance: despite its promise of 100% green geothermal energy and fibre optic connections to mainland Europe, many IT providers hesitate to set up datacentres there because of its vulnerability to natural disasters.
Oregon has seismically stable soil and, as part of the west coast, it has little to no lightning risk – one of the major causes of outages in the US.
As Google, which opened its The Dalles datacentre in 2006 with a $1.2bn investment, says, Oregon has the “right combination of energy infrastructure, developable land, and available workforce for the datacentre”.
I wonder what Oregon’s equivalent in Europe would be?
It may be too early to conclude that the party at AWS towers is over, but the cloud provider is definitely feeling the heat of the competition and the commodity cloud price wars, its quarterly earnings report showed.
I still remember how Amazon founder Jeff Bezos, at the first ever (2012) AWS re:Invent conference in Vegas, said that a high-margin business is not the right one for AWS.
“Operating a low-margin business is harder,” he said, adding that the AWS business model is very similar to the retailer’s Kindle business model – where the money is not made when the device is sold, but when people use it and keep buying services for it.
But the price cuts – which are becoming more frequent and deeper (some as deep as 65%), and driven more by market forces than by internal decisions – are becoming its biggest problem. Since 2008, AWS has slashed cloud services prices 42 times.
AWS has been leading the public cloud price war, almost over-zealously, but other behemoths – including Microsoft and Google, which have equally deep pockets – have been quick to undercut one another in the race to the bottom on cloud services pricing.
Although the cloud market is still growing rapidly, AWS is finding that its share of the larger pie is shrinking, even while its user numbers are still growing. It looks like the growth is not enough to offset the price cuts – and this must be where the problem lies. Customers love discounts and price cuts, but investors don’t.
“With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors,” says Gartner’s public cloud expert Lydia Leong.
In May 2014, Synergy Research Group explained how Microsoft has grown its cloud infrastructure services “remarkably in the last year and is now pulling away from the pack of operators chasing Amazon”.
“AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control,” Leong says.
AWS is no longer the only pretty one in the room. It is having to make space for Google Cloud Platform, Microsoft Azure, OpenStack, and IBM SoftLayer, and also for ferociously emerging players such as DigitalOcean and ProfitBricks.
Satya Nadella is going to be a happy man as his “mobile-first, cloud-first” strategy is gathering momentum. Microsoft’s cloud business has reported triple-digit YoY growth, the company’s earnings report for Q4 ended June 30, 2014 showed.
“There was good news in enterprise business – from SQL Server, from ‘All-up Dynamics’ growth, with CRM nearly doubling, and with a commitment to expand Azure footprint and capacity, launch new services and deliver more hybrid cloud tiering,” Merv thinks.
The Cabinet Office and GDS (Government Digital Service) have issued a service contract notice seeking a private partner that can provide datacentre colocation services to handle UK government information classified as “official”, “secret” and “top secret”.
The government has earmarked up to £700m for the four-year datacentre infrastructure agreement.
“The operating environment is to be capable of housing computer infrastructure that initially handles information with UK Government security classification ‘official’ but there may be a future requirement for Data Centre Colocation Services that handle information with ‘secret’ and ‘top secret’ security classification,” the government document read. “The provision of secret and top secret [information] would be subject to separate security accreditation and security classification,” it added.
The facilities partner must be able to subscribe for a majority shareholding (up to 75% less one share) in the new private limited company established by the Cabinet Office to provide Data Centre Colocation Services – DatacentreCo.
But under the government’s Cloud First policy, many existing and new applications will move to the public cloud over the next few years. The Cabinet Office’s cloud-first strategy, announced last year, means that the cloud is mandated as the first choice for all new IT purchases in government.
The new potentially £700m datacentre will host ‘legacy’ applications “not suitable or not ready for cloud hosting or for which conversion to cloud readiness would be uneconomic,” the document read.
Cabinet Office, 70 Whitehall, London (next to Downing Street) (Photo credit: Wikipedia)
The Cabinet Office wants the full spectrum of datacentre services – rack space, power facilities, network and security. The datacentre hosting the official and secret information will be spread across an area of 350 sq metres and house 150 standard 42U racks. This sounds like a modular datacentre requirement.
And it wants “at least two separate [facility] locations subject to appropriate minimum separation requirements”.
Also on the government wish-list are datacentre compliance with security requirements, scalability, a proven track record over the last three years, performance certificates and specific latency performance requirements (less than 0.5 milliseconds) – to cater to the requirements of the initial users: the Department for Work and Pensions, the Home Office and the Highways Agency.
The main aim is to have a datacentre facility that is high-quality, efficient, scalable and transparent, delivered on a service-based (‘utility’) model – basically cloud-like, but not the cloud.
How long do you reckon we’ll have to wait before the government declares “serious over-capacity in datacentres” like it did in 2011?
For those still wondering if cloud computing is really mainstream – even Hollywood thinks so. Cameron Diaz’s rom-com Sex Tape, releasing next Friday, is all about the dangers of the cloud.
Cameron Diaz (Photo credit: Wikipedia)
The movie stars Diaz and Jason Segel as a couple making a sex tape in an attempt to spice up their boring lives. The video inevitably makes it to the cloud through Segel’s iPad, on which it was filmed. The movie tracks how the couple desperately tries to get the video off the cloud while embarrassingly juggling comments from their parents, bosses and even the mailman, who all see it.
Here’s some of the dialogue between Diaz (as Annie) and Segel (as Jay):
Annie: (walks in) Honey, that sounds familiar, is that our…
Jay: You know the Cloud?
Annie: Stares ominously before yelling F@#$.
Jay: It went up! It went up to the cloud.
Annie: And you can’t get it down from the cloud?
Jay: Nobody understands the cloud. It’s a f#$@ing mystery.
Whether they succeed in wiping their content off the cloud or not, we’ll know only on 18th July. But it looks like a big struggle, with Jay and Annie taking desperate measures like nicking devices belonging to their friends and families and even breaking network infrastructure to get the tape off the cloud.
Maybe Jay and Annie are showing, in a satirical manner, how the cloud is a one-way street – easy to get things up (even inadvertently) but damn hard to get them off!
Here’s the trailer of Sex Tape starring Cameron Diaz, Jason Segel and The Cloud:
Almost 13 years after Microsoft launched the first version of SharePoint, Amazon has launched its own file sharing and collaboration tool, Zocalo, at the AWS Summit in New York today. Some AWS Summit followers on Twitter have billed Zocalo as a Google Drive and Dropbox killer.
Yes, it’s called Zocalo which, according to Wikipedia, is the main plaza or meeting-point in the heart of the historic centre of Mexico City.
A late entrant in the document sharing space (Dropbox took off in 2007), Amazon will offer Zocalo for $5 per user per month for 200GB of storage (Dropbox costs $15), or free with 50GB of storage for users of AWS WorkSpaces – its desktop computing service in the public cloud.
According to Amazon, “document sharing and collaboration is a challenge in today’s enterprise”. Take that SharePoint and Google Drive or even Office 365.
Zocalo has some pretty nifty features such as multi-device support, offline usage, Word and PowerPoint collaboration, and integration with existing corporate directories (Active Directory). But there’s a catch, and it’s about vendor lock-in – users will have to put their data into Amazon S3 first.
Mexico City Zocalo (Photo credit: Wikipedia)
Will Zocalo really tempt users out of Evernote, SharePoint, Google Drive, Box and Dropbox? I don’t know about that, but it is a pretty clear indication of SaaS, PaaS and IaaS convergence in the cloud segment – Zocalo is a purely SaaS service from a primarily IaaS provider. And it also proves how Amazon wants to provide everything that enterprise IT needs (scary?).
Organisations turning to the cloud with the sole intention of cost savings are the ones least happy with their cloud infrastructure and the most likely to give up on cloud adoption.
Recent Cloud Industry Forum research found that in the UK, large enterprises showed the highest rates of adoption, at just over 80%, followed by small and medium-sized businesses. But the public sector’s cloud adoption lagged at around 68%.
The study also explored the drivers of cloud adoption and found that the flexibility of cloud as a delivery model was the primary reason for adoption in the private sector, while operational cost savings were the main motive for the public sector.
It reminds me of an interesting conversation I had at Cloud World Forum a month ago with Photobox CIO Graham Hobson. Photobox was one of the early adopters of public cloud services – AWS. “When we started, cloud cost was just a fraction (20%) of our total IT spend. Today it is almost equal and I won’t be surprised if our cloud costs overtake our on-premises spend soon,” Hobson told me.
But that doesn’t worry Hobson. In fact he says that public cloud has yielded several benefits in terms of scalability, IT responsiveness and efficiencies for Photobox. “If I was starting a company today, I would have adopted more cloud services than I did a few years ago,” he said.
Cloud services operate on a pay-as-you-go model and, although it may look attractively low-cost at the beginning, if your IT requires constant high capacity and high performance, your cloud bill can soar.
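As a back-of-the-envelope illustration of that point – all the rates below are made-up assumptions, not any provider’s actual price list – a workload that runs flat out around the clock quickly erodes the apparent cheapness of on-demand pricing:

```python
# Back-of-the-envelope sketch; every figure here is a made-up assumption,
# not an actual AWS (or any provider's) rate.
on_demand_per_hour = 0.50   # assumed on-demand hourly rate for one large instance
committed_per_hour = 0.30   # assumed effective hourly rate with an upfront commitment
instances = 20              # assumed steady-state fleet running 24x7
hours_per_year = 24 * 365

on_demand_annual = on_demand_per_hour * instances * hours_per_year
committed_annual = committed_per_hour * instances * hours_per_year

print(f"On-demand, always-on: ${on_demand_annual:,.0f} a year")
print(f"Committed capacity:   ${committed_annual:,.0f} a year")
# A constant, high-utilisation workload pays the on-demand premium every hour
# of the year - which is how an 'attractively low-cost' bill ends up soaring.
```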
Like I have argued before, cost savings on the cloud come over time as businesses get the hang of capacity management and scalability, but the main aim of cloud use should be to grow the business and enable newer revenue-generating opportunities.
Just as Netflix or CERN or BP did.
The main advantage of cloud computing isn’t always cost saving. If anything, cost saving is usually the byproduct of IT efficiencies found by running IT in the cloud.