On the 8th day, the Internet was created. And with it came the (basically) seamless ability to access and move information around the big tangled mess of Intertubes. Apparently those TCP/IP inventors were pretty smart. And apparently ever since that day, or whenever Cloud Day 1 was, people have believed they could coordinate to do something similar for applications, application containers and all of their associated data. They like to draw analogies to how email works, or how your mobile phone roams from carrier to carrier. And hence, there is the dream of a Federated InterCloud of Hybridness.
This past week, the market rumbled about how Cisco was planning to spend $1B to create the InterCloud. Since $1B is the new starting point for making cloud announcements or investments, this caught people’s attention. It also caught attention because some media misunderstood it to mean that Cisco was actually going to get into the Cloud business and directly compete against AWS, GCP, Azure, Rackspace, etc. That’s not actually the case. This is about selling equipment to lots of SPs, with the hope that they will interconnect and allow companies to freely move between them. InterCloud’ing at the network layer, with a means of interconnecting with public cloud APIs.
But Cisco is by no means the first to drive this concept. Let’s take a look at who else has been driving this.
One of the most interesting things about doing The Cloudcast (podcast) is the variety of topics we get to discuss and the perspectives from people across our industry. Over the past few weeks, we’ve done shows focused on the evolution of PaaS, trends with AWS and public cloud, and how large-scale web companies scale. Regardless of whether the perspective was that of large Enterprises, Public Cloud providers or large-scale operations teams, every one of the guests repeatedly brought up the growing demand for DevOps skills.
Let me step back and clarify a little bit. Saying a company needs “DevOps skills” is sort of like saying that a company needs to buy “Cloud products”. Both Cloud and DevOps are terms used to describe operational models. And in most cases, they are intertwined. Cloud tends to be more focused on self-service, on-demand resources. DevOps is focused on agile development and agile operational models that are tightly integrated. Both are focused on increasing the velocity at which applications can be deployed (and updated) to help a business reach its goals.
There were a few interesting/insightful comments from those shows that got me thinking quite a bit:
“While many companies will use tools like Puppet or NewRelic (or whatever) to deploy to AWS, they have a lot of skepticism that their tool vendors will be around in a few years. So these companies are encouraging their people to really understand the mechanics of how they deal with operations and deployments. Over the next 2yrs, DevOps skills will be the most highly in demand, especially for anyone that considers themselves in IT Ops today.” – Mat Ellis (Cloudability)
“The problems within an Enterprise are no different than the problems of a start-up, especially if you buy into the notion that every successful company over the next 10yrs will have to have a significant involvement with software (across any industry).” – Mark Imbriaco (GitHub)
When I got into the technical side of the IT industry back in the 1990s, there was a company just down the street that was starting to gain some traction – RedHat – and they were pushing this new Unix-like operating system that some of my more experienced colleagues thought was interesting. Most of them already had deep UNIX backgrounds, so they were excited about the idea of a free version that ran on x86 hardware. I was less interested at the time because I thought the things a Cisco IOS box could do were pretty interesting and gave me more than enough to study for my pending CCIE.
Flash forward 15+ years and all that Linux stuff is all over the place, as more capabilities that used to live in dedicated devices have now moved out into various Linux-based products and open-source distributions. Whether it’s in the networking space (OpenDayLight, Open vSwitch, OpenStack Neutron, Cumulus Linux, Quagga) or the storage space (Ceph, OpenStack Swift/Cinder, RedHat Gluster, etc.) or the Cloud management space (CloudStack, OpenStack, Cloud Foundry, OpenShift, etc.) or the underpinnings of modern application development, Linux is driving the pace of change in Cloud Computing.
Before getting into any technology discussions/definitions about Hybrid Cloud, it’s important to understand what value it potentially provides to businesses. In the most basic definition, “hybrid” means multiple things brought together to work as a unified system. For most businesses, this means that every function (Finance, Sales, Marketing, Engineering, etc.) will utilize whatever mix of resources provides them the most efficient combination of agility, cost and risk-management. IT should be no different. In fact, given the popularity of SaaS applications, most IT organizations have already deployed a variation of a “Hybrid Cloud” or at least “Hybrid IT”, where it makes sense for them to utilize outside resources for functionality that might not be core to their business (Core vs. Competency). From a business perspective, IT organizations that look for Hybrid Cloud solutions are seeking ways to maximize their ability to deliver technology solutions for their business, utilizing the most appropriate global resources available to them at any given time. IT organizations need this flexibility to be able to align with how the rest of their peer groups within an organization operate.
Evolving Hybrid Cloud Definition
- The industry needs a definition that isn’t just about Public + Private = Hybrid, but one where Hybrid is also about Platform 2 + Platform 3 apps. (Platform 2 = existing, traditional applications; Platform 3 = modern, next-gen, web-scale, mobile, social, big data applications)
- The industry needs a definition that allows customers to Build + Buy Cloud (CAPEX + OPEX) to deliver the best overall cost (long-term apps = BUILD; short-term, variable apps = BUY)
- Businesses want to seamlessly manage both Platform 2 and Platform 3 applications, within an integrated framework of SW + HW
- Businesses want to mix and match HW-defined and SW-defined across clouds to deliver Hybrid Cloud services across any cloud
- Businesses want SaaS simplicity with Enterprise security & control.
Earlier this week, Andi Mann (@andimann) posted a simple question on Twitter about how we should treat our data. It evolved out of the commonly used phrase that “you should treat servers like cattle and not pets”, a reference to more modern applications that are modular and designed around the failure characteristics of commodity hardware.
[NOTE: There is some debate over who originated or popularized this phrase]
While choosing how to treat your servers can be closely aligned to the types of applications being used, it’s a little more complicated when trying to align an analogy to data. First of all, some people think data (by itself) isn’t very valuable until you apply some context around it and turn it into information – the heart of the “Data Gravity” theories (podcast). Others focus primarily on how data is organized and manipulated within various databases as the critical element to address. Still others are focused on the complexity and variability of the storage mechanisms for the data (lots of architectures and form factors).
So why would I say that we should treat the data like “grandparents”?
Let’s start with some basic analogies and comparisons.
Value of Data – Most people would not argue with the idea that their data is important, and they are willing to spend large sums of money to keep it protected from being lost (or corrupted). This is true not only in the storage industry, but also in human life. In data, just as in life, we will often spend 3-10x the cost to retain and protect the data as we did to create it. Backups, Clones, Snapshots, Archives, Cloud Storage, Home Storage, etc. All to make sure that data is around for a long, long time.
The first one was an article by Ben Kepes (@benkepes) that called out a recent JPMC report focused on the transformation of networking equipment. It highlighted that, in comparisons between ‘whitebox’ (or ‘bare-metal’) networking equipment and Cisco equipment, support costs were adding a significant level of cost to the overall price.
“….However, one of the insights we come away with is that Cisco’s services fees and licenses significantly inflate their pricing. Over time we would expect these fees to come under the most pressure as Cisco is forced to compete with lower cost solutions….” – JPMC
The second one came from EMC with their announcement that the ViPR storage software would be offered for free, with community support. EMC is obviously not the first company (or community project) to release free software, but this is definitely a different path for a traditional Enterprise vendor to take with a new product. Make it easy to download, trial and use the technology. Update the wiki-based documentation as problems are found. Consult with other users of the software, in real-time, in the community. It’s not your grandfather’s Enterprise vendor engagement model anymore.
Still other activities are starting to take the place of traditional tech-support. Sites like StackOverflow are there to help software developers. Projects like OpenStack are building community efforts around documentation and training, augmenting existing tools like IRC for community-based support. And programs like #EMCElect, #CiscoChampions, and VMware vExperts are trying to identify their best and brightest, with some people taking this a step further and setting up various fee-based activities to capitalize on their knowledge and community building.
Instead of continuing to build (or outsource) large tech-support organizations, will we begin to see Support-as-a-Service groups start to emerge? For a fee, the best-of-the-best, knowledge repositories and discussions to supplement or replace traditional support over time? Can this be done in real-time, as tools/software/products report their status and intelligent analytics systems seek remediation on the fly?
What do you see as the future of support in the Enterprise?
Many years ago, NIST set out to define Cloud Computing. In doing so, they laid out a stacked model of IaaS, PaaS and SaaS. The leading public cloud provider is lumped into the IaaS category, but offers many PaaS-like services. Some people are saying that maybe PaaS is dead, or at least needs to be redefined.
But recently, we’ve seen more and more people saying that maybe there will be a layer in between the IaaS and PaaS definitions. As the ability to offer “everything as a service” has evolved, so too has the thinking around the concept of “IaaS+”. IaaS+ is the idea that it’s more than just provisioning VMs and the underlying infrastructure, but also a rich set of services that can be used in conjunction with the apps in those VMs. And it implies a more modular approach than the platforms/frameworks of PaaS.
- Gartner talks about IaaS+: http://cloudpundit.com/2014/01/28/the-end-of-the-beginning-of-cloud-computing/
- Moving from IaaS to IaaS+: http://www.cumulogic.com/making-the-move-from-iaas-to-iaas/
- Defining why developers like AWS: “It’s the Services”: http://blog.ingineering.it/post/47329908147/why-aws-is-so-far-ahead
There are already companies like Cumulogic (podcast), Transcend Computing and others that are building these IaaS+ layers on top of various IaaS stacks. We’re also seeing OpenStack start to consider this approach with projects like Trove, Marconi, and Savanna.
“To me, the main problem is that IT assumes that VMs “as a service” is enough for developers. But watch AWS, Google and MS. What “cloud services” are they innovating with these days? They’re focused on more than just VMs, that’s for sure. DB services, Queues, CDN, etc… Developers want these higher value services, and they don’t get that from basic IaaS (or even worse, virtualization farms called “clouds”). Until IT departments (1) understand the difference and (2) find a way to deliver these services, the developers will continue to use the clouds that *do* provide them.” – Chip Childers
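To make the “more than just VMs” point concrete, here is a minimal sketch of what consuming those higher-value services looks like from a developer’s seat. It uses AWS’s boto3 Python SDK purely as an illustration; the queue and table names are hypothetical, and the same idea applies to any IaaS+ stack that exposes managed queues and databases alongside raw compute.

```python
# Minimal sketch (illustrative only): a developer consuming managed queue
# and database services instead of standing up VMs to run them.
# Assumes AWS credentials are configured; resource names are hypothetical.
import boto3

# Managed queue service - no VM running a message broker to patch or scale
sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="order-123")

# Managed database service - no VM running a database to back up or tune
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.create_table(
    TableName="orders",
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
)
table.wait_until_exists()
table.put_item(Item={"order_id": "order-123", "status": "received"})
```

That is the gap Chip is describing: basic IaaS (or a virtualization farm called a “cloud”) hands the developer a VM, while IaaS+ and the big public clouds hand them the queue and the database as a service.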
There are obvious trade-offs between using a platform-centric approach (PaaS) and a decoupled-services-centric approach like IaaS+, especially for the groups that have to operate and maintain the systems. It will be interesting to see which approach accelerates and emerges over the next few years, especially as more of this technology gets adopted by Enterprises for new and modern applications.
Will an OpenStack Leader Emerge?
We’ve all watched the evolution of OpenStack for the past 3+ years, from incubation to massive open-source trend. Sometimes it’s been good and other times less so. Lots of commentary on how much code has been contributed and which vendors are part of the OpenStack Foundation. But will a true leader emerge in 2014, like RedHat did from the Linux wars of the late 1990s? Will we see strategic consolidation (e.g. companies like Piston Cloud or Cloudscaling being acquired)? Will someone buy the experience of Mirantis? Will an OpenStack powered cloud begin to challenge AWS in services, scale and pricing? Will Oracle or VMware be the catalysts to bring OpenStack to the mainstream Enterprise, or mostly just make announcements?
What Segment of SDN will Emerge?
“SDN” has been through so many evolutions over the last 3-4 years that sometimes it’s hard to even know what the terminology means anymore. Recent articles from various vendors highlight this (here, here). But just like the networking industry, SDN will have applicability in many segments of the network, and many layers of a network stack. Beyond the L2 “virtual switch” in servers, virtual networking still hasn’t established a strong foothold. Does it begin to emerge in the L4-L7 services space, or L2-L3 overlays in the datacenter? Do the NFV-driven business motivations of Service Providers move faster than Enterprise IT changes its operational models? Or will Cisco’s market dominance prevent SDN from gaining significant market traction?
What has a Bigger Impact on Storage – HyperConvergence, Flash or Software-Defined?
Pure Storage gets $150M, Nutanix gets $100M, Nimble has a successful IPO and EMC/VMware launches ViPR, ScaleIO, XtremIO and VSAN. That’s a lot of VC funding for emerging start-up companies, balanced against the market leaders driving disruptive models of both the industry and their own portfolios.
Do Clouds become the new IT Silos?
In the past, Enterprise data centers were filled with application-centric silos. They were consolidated into a small number of locations, but they were still often unique, segmented and isolated. And then IaaS offerings like AWS began showing the market how to build environments that could host many types of applications on shared infrastructure environments. Within the last year, Adobe, Apple, Cisco UC, VMware, Microsoft, SAP and Oracle have all built their own Cloud offerings. Do Enterprise customers continue to build Private Clouds and leverage AWS, or do they flock to the vendor-specific public clouds? Or does better technology emerge to help IT organizations manage the hybrid offerings across all these clouds?
How do we measure PaaS Success?
Only a few years into its existence and some people are already calling it “dead” (or not dead), or misdefined. But yet, we’re still seeing large companies publicly put their support behind PaaS initiatives, both in the public cloud and within the Enterprise space. Or maybe PaaS is really old and we should be doing a better job keeping a scorecard of its progress. Will we see PaaS become a trackable market segment in 2014?
Another year, another set of predictions and opinions about Private Cloud. The list below is just a small spectrum of how people believe the market is going to vote with their actions. The reality is that this was never really a debate. Even before the term “Private Cloud” got introduced in 2009-2010, that game was over. IT organizations were already using a broad mix of internal and external resources to deliver technology to their business. It was SaaS applications and packaged applications and homegrown applications. Inside the firewall and outside the firewall.
And it’s true that lots of IT organizations either have built or will build a variation on what they believe a Private Cloud should be. For some it’s about efficiency and cost-reductions. For others it’s about agility and ever-growing business demands. But the reality is that no IT organization is going to only have a Private Cloud strategy. They never did before and the competitive world won’t let them going forward.
The reality is we live in a hybrid world. When the CIO sits down at C-suite meetings and looks around the table, they find that every other organization has already adopted a hybrid model. Finance, Marketing, HR, Sales, Engineering. Every one of those organizations has already optimized their segment of the business to leverage the best resources for each departmental requirement, whether they are internal resources or external resources. In essence, they have already built a Hybrid Cloud organization, just not with servers, storage and networks.
The real challenge for IT organizations going forward is how to best manage the Hybrid Clouds they have created.
- How to continue to have visibility across clouds?
- How to continue to reduce friction between business ideas and execution?
- How to create flexibility so that they are at least marginally prepared for the next technology shift (what comes after mobile?)
You hear that?
It’s the sound of frustrated developers and SysAdmins. The groups that are on the shop floors of 21st century bit factories, trying to keep things running more quickly and more smoothly. The groups that have been in the trenches of the evolution known as DevOps. And they are frustrated. Not at the tools that are available to help them do their jobs. Those are plentiful and improving all the time (Github, Jenkins, Vagrant, Docker, Puppet, Chef, Ansible, Zookeeper, etc.). Not at the people that have been in the trenches sharing their experiences (Jez Humble, Patrick Dubois, Gene Kim, etc.). They are frustrated with the marketers that are trying to manipulate DevOps and turn it into another round of Cloudwashing.
It wasn’t great when the great Cloudwashing era of 2009-2012 came over us. Cloud-in-a-Box, Personal Clouds and all the other variations of “whatever we sell is now Cloud”. But at least IT Admins could point to basic elements of virtualization or automation and prove that some things in IT cost less and could be accomplished faster than before. It wasn’t really Cloud, but it was an improvement.
The same thing won’t happen for those trying to push Big DevOps or New DevOps or DevOps is the new Orange. Developers don’t suffer fools. They will ignore you. They will shun you and shame you.
Leave DevOps alone. The developers and sysadmins are making progress, learning on the fly and continuously improving. They don’t need your DevOps training wheels.