I am just back from Oregon, where I attended a workshop at Intel's Hillsboro campus. What amazed me most – apart, of course, from the delicious Peruvian cuisine I had in Portland – was Intel's large presence in the area and the number of big datacentres in Oregon.
Intel is the biggest employer in the region and has multiple vast campuses there. It even has its own airport in Hillsboro, Oregon, from where it operates regular flights to its Santa Clara headquarters for its employees. Several flights, each carrying up to 40 Intel employees, operate every day. Staff at the hotel I stayed in at Hillsboro told me that on any given day, about 70% of the guests they serve are Intel-related.
Apart from Intel's near-takeover of Oregon, the state is also home to many datacentre facilities. Facebook (Prineville datacentre), Google (its first datacentre – The Dalles), Amazon (Boardman), Apple (also Prineville) and Fortune Datacentres (Hillsboro) all have large facilities in Oregon.
One of the primary reasons many tech giants choose Oregon as the home for their datacentres is lower costs. Oregon has no sales tax, which means computer products, building materials and services are cheaper than elsewhere in the US. In addition, power – a datacentre's main money-guzzler – is cheaper in Oregon. Furthermore, the local government lures tech giants with incentives such as tax breaks and subsidies. All these factors attract datacentre investment here.
Prineville, Oregon (Photo credit: Wikipedia)
Because of the region's tech culture, many professionals develop server management and virtualisation skills. The emphasis on IT skills in the universities, and Silicon Valley's investment in regular training workshops, make the area's workforce talented and skilled in datacentre management.
Oregon's weather is comparatively mild, which makes the tricky task of datacentre cooling a little easier. It is simpler to devise cooling strategies for a facility when the ambient temperature does not vary widely. Oregon does not get baking hot like Texas or Kansas in summer, nor does it get overwhelmingly snowed under in winter.
The vast stretches of fibre optic cable that run even across Oregon's mountains, lakes and deserts provide fast connections with latency of mere milliseconds. The state's proximity to Silicon Valley is another draw for datacentre investment.
Geography, stability and security
Big cloud and IT service providers love political and economic stability and physical security, and Oregon gives them that. The region is not particularly prone to natural disasters such as volcanic eruptions, earthquakes or hurricanes – another big attraction for datacentre builders. Take Iceland, for instance: despite its promise of 100% green geothermal energy and fibre optic connections to mainland Europe, many IT providers hesitate to set up datacentres there because of its vulnerability to natural disasters.
Oregon has seismically stable soil and, as part of the west coast, it has little to no lightning risk – lightning being one of the major causes of outages in the US.
As Google, which opened The Dalles in 2006 by investing $1.2bn, says, Oregon has the “right combination of energy infrastructure, developable land, and available workforce for the datacentre”.
I wonder what Oregon’s equivalent in Europe would be?
It may be too early to conclude that the party at AWS towers is over, but the cloud provider is definitely feeling the heat of the competition and the commodity cloud price wars, its quarterly earnings report showed.
Beautiful Bride Barbie – OOAK reroot (Photo credit: RomitaGirl67)
I still remember how Amazon founder Jeff Bezos, at the first ever (2012) AWS re:Invent conference in Vegas, said that a high-margin business is not the right one for AWS.
"Operating a low-margin business is harder," he said, adding that the AWS business model is very similar to the retailer's Kindle business model – where the money is made not when the device is sold, but when people use it and keep buying services for it.
But the price cuts – which are becoming more frequent and deeper (up to 65%), and are driven more by market forces than by internal decisions – are becoming its biggest problem. Since 2008, AWS has slashed cloud services prices 42 times.
AWS has been leading the public cloud price war almost over-zealously, but other behemoths, including Microsoft and Google, which have equally deep pockets, have been quick to undercut one another in the race to the bottom in cloud services pricing.
Although the cloud market is still growing rapidly, AWS is finding that its share of the larger pie is shrinking, even while its user numbers are still growing. It looks like the growth is not enough to offset the price cuts – and this must be where the problems lie. Customers love discounts and price cuts, but investors don't.
"With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors," says Gartner's public cloud expert Lydia Leong.
In May 2014, Synergy Research Group explained how Microsoft has grown its cloud infrastructure services "remarkably in the last year and is now pulling away from the pack of operators chasing Amazon".
“AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control,” Leong says.
AWS is no longer the only pretty one in the room. It is having to make space for Google Cloud Platform, Microsoft Azure, OpenStack and IBM SoftLayer, as well as for ferociously emerging players such as DigitalOcean and ProfitBricks.
Satya Nadella is going to be a happy man as his "mobile-first, cloud-first" strategy gathers momentum. Microsoft's cloud business reported triple-digit year-on-year growth, the company's earnings report for Q4 ended June 30, 2014 showed.
“There was good news in enterprise business — from SQL Server, from “All-up Dynamics” growth, with CRM nearly doubling, and with a commitment to expand Azure footprint and capacity, launch new services and deliver more hybrid cloud tiering,” Merv thinks.
The Cabinet Office and GDS (Government Digital Service) have issued a service contract notice seeking a private partner that can provide datacentre colocation services to handle UK government information classified as "official", "secret" and "top secret".
The government has earmarked up to £700m for the four-year datacentre infrastructure agreement.
“The operating environment is to be capable of housing computer infrastructure that initially handles information with UK Government security classification ‘official’ but there may be a future requirement for Data Centre Colocation Services that handle information with ‘secret’ and ‘top secret’ security classification,” the government document read. “The provision of secret and top secret [information] would be subject to separate security accreditation and security classification,” it added.
The facilities partner must be able to subscribe for a majority shareholding (up to 75% less one share) in the new private limited company established by the Cabinet Office to provide datacentre colocation services – DatacentreCo.
But under the government's Cloud First policy, many existing and new applications will move to the public cloud over the next few years. The Cabinet Office's cloud-first strategy, announced last year, mandated the cloud as the first choice for all new IT purchases in government.
The new potentially £700m datacentre will host ‘legacy’ applications “not suitable or not ready for cloud hosting or for which conversion to cloud readiness would be uneconomic,” the document read.
Cabinet Office, 70 Whitehall, London (next to Downing Street) (Photo credit: Wikipedia)
The Cabinet Office wants the full spectrum of datacentre services – rack space, power facilities, network and security. The datacentre hosting the official and secret information will be spread across an area of 350 sq metres hosting 150 standard 42U racks. This sounds like a modular datacentre requirement.
And it wants “at least two separate [facility] locations subject to appropriate minimum separation requirements”.
Also on the government wish-list are compliance with security requirements, scalability, a proven track record over the last three years, performance certificates and specific latency requirements (less than 0.5 milliseconds) – to cater to the needs of the initial users: the Department for Work and Pensions, the Home Office and the Highways Agency.
The main aim is a datacentre facility that is high-quality, efficient, scalable and transparent, with a service-based ('utility') model – basically cloud-like, but not the cloud.
How long do you reckon we’ll have to wait before the government declares “serious over-capacity in datacentres” like it did in 2011?
For those still wondering if cloud computing is really mainstream – even Hollywood thinks so. Cameron Diaz's rom-com Sex Tape, released next Friday, is all about the dangers of the cloud.
Cameron Diaz (Photo credit: Wikipedia)
The movie stars Diaz and Jason Segel as a couple who make a sex tape in an attempt to spice up their boring lives. The video inevitably makes it to the cloud through Segel's iPad, on which it was filmed. The movie tracks how the couple desperately try to get the video off the cloud while embarrassingly juggling comments from their parents, bosses and even the mailman, who all see it.
Here's some of the dialogue between Diaz (as Annie) and Segel (as Jay):
Annie: (walks in) Honey, that sounds familiar, is that our…
Jay: You know the Cloud?
Annie: Stares ominously before yelling F@#$.
Jay: It went up! It went up to the cloud!
Annie: And you can’t get it down from the cloud?
Jay: Nobody understands the cloud. It’s a f#$@ing mystery.
Whether they succeed in wiping their content off the cloud, we'll know only on 18 July. But it looks like a big struggle, with Jay and Annie taking desperate measures such as nicking devices belonging to their friends and families, and even breaking network infrastructure, to get the tape off the cloud.
Maybe Jay and Annie are showing, in satirical fashion, how the cloud is a one-way street – easy to get content up (even inadvertently) but damn hard to get it off!
Here’s the trailer of Sex Tape starring Cameron Diaz, Jason Segel and The Cloud:
Almost 13 years after Microsoft launched the first version of SharePoint, Amazon has launched its own file sharing and collaboration tool – Zocalo – at AWS Summit in New York today. Some AWS Summit followers on Twitter have billed Zocalo as a Google Drive and Dropbox killer.
Yes, it’s called Zocalo which, according to Wikipedia, is the main plaza or meeting-point in the heart of the historic centre of Mexico City.
A late entrant in the document-sharing space (Dropbox took off in 2007), Amazon will offer Zocalo at $5 per user per month for 200GB of storage (Dropbox costs $15), or even free (albeit with only 50GB) with AWS WorkSpaces – the desktop computing service in the public cloud.
According to Amazon, “document sharing and collaboration is a challenge in today’s enterprise”. Take that SharePoint and Google Drive or even Office 365.
Zocalo has some pretty nifty features, such as multi-device support, offline usage, Word and PowerPoint collaboration, and integration with existing corporate directories (Active Directory). But there's a catch, and it's about vendor lock-in – users will first have to put their data into Amazon S3.
Mexico City Zocalo (Photo credit: Wikipedia)
Will Zocalo really tempt users away from Evernote, SharePoint, Google Drive, Box and Dropbox? I don't know about that, but it is a pretty clear indication of SaaS, PaaS and IaaS convergence in the cloud segment – Zocalo is a purely SaaS service from a primarily IaaS provider. And it also proves how Amazon wants to provide everything that enterprise IT needs (scary?).
Organisations turning to the cloud with the sole intention of cost savings are the least happy with their cloud infrastructure and the most likely to give up on cloud adoption.
Recent Cloud Industry Forum research found that in the UK, large enterprises showed the highest rates of adoption, at just over 80%, followed by small and medium businesses. But the public sector's cloud adoption lagged at around 68%.
The study also explored the drivers of cloud adoption and found that the flexibility of cloud as a delivery model was the primary reason for adoption in the private sector while operational cost savings were the main motive for the public sector.
It reminds me of an interesting conversation I had at Cloud World Forum a month ago with Photobox CIO Graham Hobson. Photobox was one of the early adopters of public cloud services – AWS. “When we started, cloud cost was just a fraction (20%) of our total IT spend. Today it is almost equal and I won’t be surprised if our cloud costs overtake our on-premises spend soon,” Hobson told me.
But that doesn’t worry Hobson. In fact he says that public cloud has yielded several benefits in terms of scalability, IT responsiveness and efficiencies for Photobox. “If I was starting a company today, I would have adopted more cloud services than I did a few years ago,” he said.
Cloud services operate on a pay-as-you-go model and, although this may look attractively low-cost at the beginning, if your IT requires constant high capacity and high performance, your cloud bill can soar.
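A back-of-the-envelope sketch makes the point. The figures below are invented placeholders, not real provider rates, but they show how a pay-as-you-go bill can overtake a fixed-capacity cost once utilisation stays high:

```python
# Back-of-the-envelope comparison of pay-as-you-go cloud pricing
# against a fixed-capacity alternative. All numbers are invented
# placeholders for illustration, not real provider rates.

HOURLY_ON_DEMAND = 0.50   # hypothetical $/hour for one cloud instance
MONTHLY_FIXED = 200.0     # hypothetical all-in monthly cost of owned kit
HOURS_PER_MONTH = 730     # average hours in a month

def monthly_cloud_cost(utilisation):
    """Monthly bill if the instance runs this fraction of the time."""
    return HOURLY_ON_DEMAND * HOURS_PER_MONTH * utilisation

# A bursty workload (10% utilisation) is far cheaper on demand,
# but a constantly busy one overtakes the fixed cost.
print(monthly_cloud_cost(0.10))  # bursty workload
print(monthly_cloud_cost(1.00))  # constant high capacity
```

With these made-up rates, the bursty workload costs a fraction of the owned kit, while the always-on one costs nearly double – which is exactly the trap businesses with constant high-capacity needs fall into.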
As I have argued before, cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability – but the main aim of cloud use should be to grow the business and enable new revenue-generating opportunities.
Just as Netflix or CERN or BP did.
The main advantage of cloud computing isn't always cost saving. If anything, cost saving is usually a byproduct of the IT efficiencies found by running IT in the cloud.
Last Thursday, I met AWS to learn how users are building supercomputers in the cloud and also to see one being created right in front of me!
Unfortunately, the demo didn't succeed. I don't know if it was buggy code or something else, but Ian Massingham, technology evangelist at AWS, wasn't able to create a supercomputer – and he was at least as disappointed about it as I was.
But Ian had created one the previous evening — “Just ran up my first HPC cluster on AWS using the newly released cfncluster demo,” read Ian’s tweet from the previous day. The link to a demo video AWS sent me subsequently also showed how to get started on cfncluster in 5 minutes.
Amazon cfncluster is a sample code framework, available for free, to help users run high-performance computing (HPC) clusters on AWS infrastructure.
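For the curious, getting started looks roughly like this. This is a sketch based on the cfncluster demo: the exact prompts and options may differ between releases, you need AWS credentials configured, and "mycluster" is just a placeholder name.

```shell
# Install the cfncluster command-line tool (a Python package)
pip install cfncluster

# Interactive setup: AWS keys, region, key pair and instance types
cfncluster configure

# Spin up a cluster; behind the scenes, AWS CloudFormation
# provisions the head node, compute nodes and a job scheduler
cfncluster create mycluster

# Tear it down when the job is done, to stop paying for the nodes
cfncluster delete mycluster
```

The pay-as-you-go angle is the whole appeal: the cluster exists only for as long as the computation runs.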
I got to hear how enterprise customers, pharmaceutical companies, scientists, engineers and researchers are building cluster computers on AWS for some pretty serious tasks, such as medical research and assessing companies' financial standing, all while saving money. (My feature article on how enterprises are exploiting public cloud capabilities for HPC will appear on the ComputerWeekly site soon.)
And having spent the last two days at the International Supercomputing Conference (ISC 2014) in Leipzig, I feel that high-performance computing, hyperscale computing and supercomputers form the fastest growing subset of IT. HPC is no longer restricted to science labs: even enterprises such as Rolls-Royce and Pfizer are building supercomputers – to analyse jet engine compressors and to research diseases, respectively.
Tianhe-2 (Photo credit: sam_churchill)
Take Tianhe-2, a supercomputer developed for research by China's National University of Defense Technology, which retained its position as the world's fastest supercomputer. It has 3,120,000 cores, delivers 33.86 petaflops (quadrillions of calculations per second) and draws 17,808kW of power. Or the US DoE's Titan – a Cray XK7 system running more than 560,000 cores – or any of the UK's top 30 supercomputers. They are all mind-boggling in their size, compute performance and uses.
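To put those headline figures in perspective, a quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Tianhe-2 headline figures as quoted above
peak_flops = 33.86e15     # 33.86 petaflops
cores = 3_120_000         # total processor cores
power_watts = 17_808_000  # 17,808 kW

# Roughly 10.9 gigaflops squeezed out of every single core...
per_core_gflops = peak_flops / cores / 1e9

# ...and about 1.9 gigaflops delivered for every watt drawn
per_watt_gflops = peak_flops / power_watts / 1e9

print(round(per_core_gflops, 1), round(per_watt_gflops, 1))  # → 10.9 1.9
```

That power draw, incidentally, is in the same ballpark as a small town's – one reason energy efficiency dominates supercomputing conversations.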
Whether on the cloud or on-premises, I didn’t hear a single HPC use-case in the last two days that wasn’t cool or awe-inspiring. Imperial College London, Tromso University, Norway, US Department of Energy, Edinburgh University and AWE all use supercomputers to do research and computation around things that matter to you and me. As one analyst told me, “From safer cars to shinier hair, supercomputers are used to solve real-life problems”.
Now I know why Ian was having a hard time picking his favourite cloud HPC project – they're all cool.
The sixth annual Cloud World Forum wrapped up yesterday and here’s what the event tells us about the state of cloud IT in the enterprise world.
OpenStack is gaining serious traction
OpenStack's big users and providers claimed the cloud technology is truly enterprise-ready because of its freedom from vendor lock-in and its portability features. Big internet companies such as eBay are running mission-critical workloads on the OpenStack cloud. Even smaller players, such as German company Centralway, are using the open-source cloud to power their infrastructure when TV adverts create load peaks.
HP says it is "all in" when it comes to OpenStack. It is investing over $1bn in cloud-related products and services, including an investment in the open-source cloud. Red Hat has just acquired eNovance, a leader in OpenStack integration services, for $95m. Rackspace and VMware are ramping up their OpenStack services, and IBM has built its cloud strategy around OpenStack.
A skills shortage around developing OpenStack APIs into a cloud infrastructure seems to be the only big barrier hindering its wide-scale adoption.
Rise of the cloud marketplace
The cloud marketplace is fast becoming an important channel for cloud transactions. According to Ovum analyst Laurent Lachal, Jaspersoft gained 500 new customers in just six months through the AWS Marketplace. Oracle, Rackspace, Cisco, Microsoft and IBM have all recently launched cloud services marketplaces.
What does this mean for users? Browsing the full spectrum of cloud services will become as easy as browsing apps in the Apple App Store or Google Play. "As cloud matures, an established marketplace seems like a logical evolution. It is a new trend, but it gives users a wealth of options in a one-stop-shop kind of way," said Lachal.
Vendor skepticism on the rise
Bank of England CIO John Finch, in his keynote, warned users of “pesky vendors” and cloud providers’ promises around “financial upside of using the cloud”. Legal experts and top enterprise users urged delegates to understand the SLAs and contract terms very clearly before shaking hands with the cloud providers.
Changing role of CIOs
Cloud is leading to the rise of shadow IT, and it became apparent at the event that CIOs must take on the role of technology broker, educating enterprise users on compliance and security. Technology integration, IT innovation and service brokerage are some of the skills CIOs need to develop in the cloud era.
Questions around compliance, data protection, security on the cloud remain unanswered
Most speakers focusing on the challenges of cloud adoption cited security, data sovereignty, privacy, compliance and vendor-friendly SLAs as its biggest barriers.
Not all enterprises using cloud are putting mission-critical apps on public cloud
Lack of trust seems to be the main reason enterprises are not putting mission-critical workloads on the public cloud. Bank of England's Finch stopped just short of saying "never" to public cloud. Take Coca-Cola bottling company CIO Onyeke Nchege, for instance – he's planning to put mission-critical ERP systems on the cloud, but a private one. eBay runs its website on the OpenStack cloud – but a private version it built for itself. One reason customers cite is that mission-critical apps tend to be more static and don't need fast provisioning or high scalability.
“It is not always about the technology though. In our case our metadata is not sophisticated enough for us to take advantage of public cloud,” said Charles Ewan, IT director at the Met Office.
But there are some enterprises, such as AstraZeneca (running payroll workloads on the public cloud) and News UK, which manages its flagship newspaper brands on the AWS cloud.
Urgent need for cloud standards in the EU
The lack of standards and regulations around cloud adoption, data protection and sovereignty, and cloud exit strategies is making cloud adoption messy. Legal technology experts urged users to be "wise" in their cloud adoption until regulations are developed. But regulators and industry bodies, including the European Commission, the FCA and the Bank of England, are inching closer to developing guidelines and regulatory advice to protect cloud users.
Everyone’s trying to get their stamp on the cloud
A more crowded Cloud World Forum than ever saw traditional heavyweights (IBM, HP, Dell, Cisco) rub shoulders with a slew of new, smaller entrants, as well as public cloud poster boys such as AWS, Google and Microsoft Azure. Technology players, ranging from chip providers to sellers of datacentre cooling services, were all there to claim their place in the cloud world.
I was at a roundtable earlier this week discussing the findings of an enterprise cloud research study. The findings are embargoed until June 24, but what struck me most was the numbers around failed or stalled cloud projects.
And that led me to discuss it more with industry insiders. Here are a few reasons why cloud projects might fail:
- Using cloud services but not using them to address business needs
One joke doing the rounds in the industry goes a bit like this: the IT head tells his team, "You lot start coding; I'll go out and ask them what they want".
But the issue of not aligning business objectives with IT is still prevalent. The latest study by Vanson Bourne found that as many as 80% of UK CIOs admit to significant gaps between what the business wants and when IT can deliver it. While the average gap cited was five months, it ranged between seven and 18 months.
- Moving cloud to production without going through the SLAs again and again. And again
If one looks at the contracts of the major cloud providers, it becomes apparent that the risk is almost always pushed onto the user rather than the provider – be it around downtime, latency, availability or data regulations. It is one thing to test cloud services and quite another to put them into actual production.
- Hasty adoption
Moving cloud to production hastily, without testing and piloting the technology enough and without planning management strategies, will also lead to failure or disappointment with cloud services.
- Badly written apps
If your app isn't configured correctly, it shouldn't be on the cloud. Simply migrating badly written apps to the cloud will not make them work. And if you are not a marquee customer, your cloud provider will not help you with this either.
- Being obsessed with cost savings on the cloud
One expert puts it this way: those who adopt cloud for cost savings fail; those who use it to do things they couldn't do in-house succeed. Cost savings on the cloud come over time, as businesses get the hang of capacity management and scalability, but the primary reason for cloud adoption should be to grow the business and enable new revenue-generating opportunities. For example, News UK adopted cloud services with the aim of transforming its IT and managing its paywall strategy. Its savings were a byproduct.
- Early adoption of cloud services… Or leaving it too late
Ironic as it may sound, if you are one of the earliest adopters of cloud, chances are that your cloud is an early iteration that may not be as rich in features as newer versions. It may even be more complex than current cloud services. For instance, there is a lot of technical difference between the pre-OpenStack Rackspace cloud and its OpenStack version.
If you've left it too late, then your competitors are ahead of the curve and other business stakeholders are influencing IT's cloud buying decisions.
- Biased towards one type of cloud
Hybrid IT is the way forward. Being too obsessed with private cloud services will lead to deeper vendor lock-in, while adopting too much public cloud will lead to compliance and security issues. Enterprises should not develop a private cloud or a public cloud strategy, but use the cloud elements that best solve their problems. Take Betfair, for instance: it uses a range of different cloud services – the AWS Redshift data warehouse service for data analytics, but VMware vCloud for automation and orchestration.
- Relying heavily on reference architecture
Cloud services are meant to be unique to suit individual business needs. Replicating another organisation’s cloud strategies and infrastructure is likely to be less helpful.
- Lack of skills and siloed approach
Cloud may indeed have entered mainstream computing, but the success of cloud directly depends on the skills and experience of the team deploying it. Hiring engineers and cloud architects with AWS experience to build a private cloud may backfire. Experts have also called on enterprises to embrace DevOps and cut down on siloed approaches to succeed in the cloud. British Gas hired IT staff with the right skills for its Hive project, built on the public cloud.
- Viewing it as in-house datacentre infrastructure or traditional IT
Cloud calls for new ways of thinking about IT. Simply replacing internal infrastructure with cloud services while using the same IT strategies and policies to govern the cloud might result in cloud failure.
There may be other enterprise problems – such as lack of budget, cultural challenges or legacy IT – that result in failed or stalled cloud projects, but more often the strategy (or the lack of it) is to blame, rather than the technologies.