On-Demand, Self-Service access to IT resources. That’s the promise of Cloud Computing. It’s a reality that has been coming together for the past 5-7 years. At times we marvel at the impact cloud computing has had on the entire IT industry and the pace of that change; at other times we grow frustrated that broad adoption and the availability of new services aren’t happening fast enough. So as I think about the year ahead, these are some areas where I believe we need to see improvements in 2014 to keep this industry shift moving at the pace we’re all expecting.
Move from Building to Buying Private Cloud
We all lived through the “Cloud in a Box” era of HW+SW packaging (sometimes just HW) that targeted Private Cloud. While this market has continued to grow rapidly (and been renamed “Converged Infrastructure”), it’s only a piece of the puzzle. Those are building blocks. What we need to see is an evolution to “buying blocks”, where the entire solution can be purchased in a way that better aligns to the services that will be consuming those underlying resources. This improvement will require some basic assumptions (or economic changes):
- We may need to have consumption-based pricing of on-premise HW+SW
- We will need to see on-premise resources priced in a way that makes comparisons with public cloud services fairly straightforward (hourly, daily, monthly)
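To make that second point concrete, here’s a minimal sketch (with entirely made-up numbers, not real vendor pricing) of how an on-premise HW+SW purchase could be amortized into an hourly rate that lines up against a public cloud’s hourly price:

```python
# Hypothetical sketch: normalize an on-premise purchase into a cost per
# *used* hour, so it can be compared to a public cloud's hourly rate.
# All figures below are invented assumptions for illustration.

def on_prem_hourly_rate(capex, annual_opex, years, avg_utilization):
    """Amortized cost per used hour of an on-premise resource."""
    total_cost = capex + annual_opex * years
    used_hours = years * 365 * 24 * avg_utilization
    return total_cost / used_hours

# Example: $100k of HW+SW, $20k/yr to operate, 3-year life, 60% utilized
rate = on_prem_hourly_rate(100_000, 20_000, 3, 0.60)
print(f"${rate:.2f} per used hour")
```

The interesting variable is utilization: the same hardware used only 20% of the time costs three times as much per used hour as it does at 60% utilization, which is exactly the comparison consumption-based pricing would surface.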
Enable the IT Product Manager by Default
While there is still some debate about the appropriate (or inappropriate) HW+SW to use to build a cloud computing environment, most people agree that making changes to the associated people and process is the more difficult side of the equation. One way to improve this is to enable the IT-as-a-Service “Product Management” function by default. The service catalogs and portals need to be on by default. The basics of the menu need to be delivered as templates, inclusive of the service offering, the pricing models and the basic policies for usage/security/etc. Not everyone can become a master chef, but many people can successfully learn to make acceptable (and often delicious) meals if you give them a good cookbook filled with recipes. This is also needed for the groups that will be operating the new cloud services. Don’t just give them the tools, but also give them the recipes to enable successful consumption and usage of the services.
Until a few years ago, you could almost guarantee that every technology vendor pitch would include a claim about how they could help a customer avoid lock-in. It could be through open standards, open APIs or an SDK that would allow customization. And then the industry changed: avenues to acquire technology expanded to include public cloud computing and the mainstream use of open-source technologies. These new “vendors” weren’t the same names that had dominated previous generations of IT, so they could use vendor lock-in as FUD to highlight why their new capabilities were worth a look from the IT department (or directly from the lines of business).
Back in the day, technologists would wait until the committees at IETF, IEEE, SNIA, W3C or some other standards body had decided what was officially a standard. Standards were intended as a way to manage interoperability, but also to control costs so that no single vendor could dictate a technology and hence hold a market hostage on prices and profit margins. It was the leverage that customers had against the technology vendors. Interoperability was the customers’ leverage against lock-in.
Somewhere along the way, many people seem to have forgotten what lock-in really means. They have forgotten that it happens every time any new technology is downloaded, deployed or implemented. Lock-in is the cost of making an initial decision + the cost of learning that new technology + the cost of maintaining that technology + some (potential) future cost of changing that technology. There are other “opportunity costs” that can also be factored in, such as CAPEX vs. OPEX (eg. public cloud) or the agility cost of being able to modify the technology directly (eg. open-source), but those are really just variations on maintenance costs.
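That definition can be written down directly. Here’s a rough sketch of the lock-in cost model described above; every component value in the example is a placeholder I’ve invented for illustration:

```python
# Lock-in as a sum of real costs, per the definition above. The numbers
# in the example are invented placeholders, not real figures.

def lockin_cost(decision, learning, maintenance_per_year, years, switching):
    """Total cost of committing to a technology: initial decision +
    learning + ongoing maintenance + (potential) future switching cost."""
    return decision + learning + maintenance_per_year * years + switching

# Hypothetical comparison: a "free" technology still carries lock-in
# via its learning, maintenance and switching terms.
proprietary = lockin_cost(decision=50, learning=100,
                          maintenance_per_year=40, years=3, switching=80)
open_source = lockin_cost(decision=10, learning=180,
                          maintenance_per_year=50, years=3, switching=60)
print(proprietary, open_source)
```

The point of the exercise: zeroing out the acquisition cost doesn’t zero out lock-in; it just shifts the weight onto the other terms.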
It’s that time of year. Time for that time-honored technology tradition of pontificating bloggers making their brilliant 2014 predictions. The time when we dazzle you with our prognostication skills and crystal clear view of the future.
But the big challenge is we’ve all heard these before. This same problem happened to everyone’s favorite game show, Wheel of Fortune, as people got tired of contestants guessing the same letters in the Final Bonus Round. So to spice things up, they automatically gave the contestants the letters R-S-T-L-N-E, allowing the viewers to bypass the monotony of repetition.
For you, my loyal readers, I’m going to give you the R-S-T-L-N-E of 2014 predictions, so you can bypass all those other blogger predictions. You can thank me later…
R – 2014 will be the Year of VDI. Write it down. Tattoo it on your back. The technology has finally come around and it’s ready to move the ball to the finish line.
S – <Insert_Blogger’s_Company_Focus> will lead the industry in a radical technology shift. In the process, the legacy competitor will have no way to keep up with their world-class technology and overall company nimbleness.
T – Software-Defined OpenAwesomesauce will signal the death of 2005’s leading technology. It’s finally dead. Carve the tombstone.
L – Open Source will win. It’s free, like beer, or puppies, or other free stuff like lollipops at the dry cleaners, and free always wins. And if your company doesn’t open-source all of its technology then you will lose.
N – There’s a 50/50 chance that your company will abandon ITIL and embrace DevOps, while converting all of your infrastructure to whitebox hardware and you’ll only hire people in IT that can write production code on Day 1, just like they do at Facebook. All of your internal company meetings will be replaced by hackathons.
E – Adrian Cockcroft (@adrianco, Netflix) will be highlighted (or keynote) at many Cloud Computing events. This will be balanced against Oracle announcing that they are now supporting a technology or partnership that they condemned just 2 years ago.
After 10+ years of limited change in the networking industry, we entered 2013 with a great amount of fanfare and promise with a new set of networking technologies to address the challenges brought about by server virtualization and web-scale data centers. The $1.2B acquisition of Nicira by VMware, considered one of the leading pioneers of SDN, got people thinking that maybe there was a chance that leading architectures and marketshare could be changing over the next few years. It also signaled that major companies believed in this new networking paradigm and were willing to put significant dollars behind the SDN trend.
Then things got interesting…
Big Switch Networks had attempted to capture early marketshare by open-sourcing their Floodlight controller, but this was muted when the OpenDaylight project was launched by the Linux Foundation. Major vendors including Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Red Hat and VMware joined as founding members. Both the Floodlight and OpenDaylight controllers were options, but Big Switch has since dropped out of the project. Juniper eventually filled the gap of alternative technology when it contributed the OpenContrail project.
And then things got even more interesting…
Nothing creates more opinions in the Cloud Computing industry than OpenStack. More than predictions about “Year of VDI” or “What is SDN?” or “Cloud vs. Cloudwashing”. Nothing. In my 20+ years in the IT industry, I’ve never seen anything like this before.
During the OpenStack Summit, the talk is almost unanimously positive. # of developers, # of committers, # of new projects, training projects, documentation projects and a growing list of customers from companies that you’ve actually heard of. On the surface, the OpenStack community appears to be well organized and making large strides in delivering a massive set of projects via a community model.
Then we start seeing commentary like this from the experienced OpenStack engineers. Soon after, we see commentary from Gartner (here, here) about several major challenges that the OpenStack community and OpenStack-centric companies face in 2014. Those two led to several well written responses, from both vendors and analysts.
- How RedHat is attempting to overcome the Enterprise Challenges – http://tentenet.net/2013/11/21/the-4th-tenet-of-openstack-open-source-projects-are-not-the-same-as-products/
- 451 Group – Setting Expectations for OpenStack – http://coteindustries.com/post/67662219186/theres-big-expectations-mis-alignment-in
- Viewpoints on where OpenStack fits (or is being deployed) – http://www.speakingofclouds.com/?p=353
The difference in tone between the week of OpenStack Summit and the following weeks was significant. It would have been one thing if the negativity had come from the CloudStack community, or AWS directly. It would have been understandable. But this was coming from within the OpenStack community, or analysts with deep understanding of Enterprise customers and the overall Cloud Management space.
Regardless of your opinion on OpenStack, it’s hard not to wonder where the truth really resides.
And then Solum happens…
Just when you thought the OpenStack community had begun to hit their stride in terms of hype vs. projects vs. execution, somebody swapped the “I” for a “P” and introduced a PaaS-like project (“Solum”) into the mix. OpenStack already had commitments and projects to work with both Cloud Foundry and OpenShift, well-defined PaaS projects. And then for some reason, a faction within the OpenStack community decided that it needed its own variation on a PaaS. Confused? I don’t blame you. You wouldn’t be alone.
One of the interesting slides from the keynote at AWS re:Invent showed the pace of new features/capabilities over the past few years. When building on a platform, the ability to rapidly climb the learning curve is tremendous, and AWS is obviously increasing the pace at which it can innovate: collectively seeing customer problems, sharing best practices for service delivery across teams, and building a culture of services-led delivery for each new service.
We’re seeing this ability to rapidly innovate in the SaaS Management companies as well. They simplify the ability to consume their services, and they take their day-to-day learnings and use them to constantly improve their products. From a software development perspective they are leveraging Agile principles to increase their pace of delivery, but they are also increasing their operational learning-curve, which is equally important.
I’ve explored before whether more Cloud Management Platform companies should be moving to a SaaS model. At the time, I was thinking that it made quite a bit of sense from a financial perspective: allow companies to grow their cloud environment at a pace that makes sense for them. Moving to a cloud operations model (eg. on-demand, self-service) can often be challenging, and it involves more process change than technology change.
The more I think about it, the more I believe the learning-curve is the most important part of the equation for both the cloud management vendors and their customers. As Bernard Golden wrote earlier this week, the core software platform that controls your cloud is “Magic”. When systems are complex, the ability to capture learnings about the system is critical.
Is there a need for a Hybrid/Private Cloud?
If a company wants to keep their data on-premise, wants network response times that allow them to create known SLAs, and is still concerned about public cloud security – then this tends to lead them to look at Private Cloud solutions. But then there’s the challenge of getting their people and process aligned to properly deliver cloud services. The operations team typically would need to get trained on new tools. They might need to be re-organized to allow them to focus on “service delivery” instead of just running technology silos (compute, storage, networking). They may even need assistance in how to think about “building services” (eg. creating “products” that cloud users can get from self-service portals).
We’ve always talked about Hybrid Cloud as being a mix of Private Cloud resources and Public Cloud resources, potentially with a unified management framework that interconnects them. But what about the IT organization that wants Private Cloud characteristics for performance, security and data retention, but has been struggling to get re-organized or have a functioning Cloud Management Platform?
Isn’t this an opportunity a CIO might consider to bring agility to their IT organization and their business users? Isn’t it a win-win situation for both companies and vendors, increasing the learning-curve for everyone?
We’re still seeing survey data that says most CIOs want to build Private Clouds (with or without additional Hybrid/Public Cloud resources), but the major success stories have been few – lots of virtualization, not as much self-service. Is 2014 the year where we begin to see vendors bring new options to fill these opportunities?
One of the biggest takeaways I had from the 2013 AWS re:Invent conference was the increasing number of companies that were delivering various forms of IT management (network monitoring, application monitoring, cost modeling, etc.) as SaaS applications. Back in March I wrote about how this is a challenging market for individual companies, especially with the rise of platforms. Regardless of this, the number of companies that are entering and succeeding continues to grow.
NOTE: It’s important to keep in mind that this isn’t a new phenomenon, as companies like Meraki and Aerohive have been doing this successfully for networking infrastructure for quite a while.
Where are the Focus Areas?
These companies tend to focus around a few core areas:
- Cloud Costs and Cost Management – (Cloudability, Cloudyn, Cloud Checkr, Rightscale) Trying to make sense of Cloud Computing costs, even within a single cloud, can be complicated. On-Demand, Spot Instances, Reserved Instances, Inbound vs. Outbound Bandwidth, IOPS vs. Provisioned IOPS. It’s not as simple as buying a server or some storage capacity. The best tools let companies do per-group/project tracking; recommend when to best use on-demand vs. reserved instances; highlight the cost of adding redundancy across regions; and offer other advanced capabilities to help reduce current costs and better forecast future costs.
- Network Monitoring – (Thousand Eyes, Boundary) – It can often be difficult to map paths through a cloud network, or identify performance bottlenecks. These companies are focused on correlating network traffic with topologies, both for proactive and reactive monitoring. They are able to track traffic across multiple clouds (public and private) as well as link to application deployment activities (eg. new code deployed to web/database servers) to track how changes impact network traffic.
- Application Monitoring – (New Relic, Data Dog) Just as it can be complicated to monitor a cloud network, it can be equally complex to monitor applications in the cloud. How are resources shared with other applications/customers (compute, storage)? Do some code packages have known performance or security issues? How have things changed since new code was deployed, or a new level of redundancy added?
- Application Migration – (ElasticBox, Ravello Systems) – These companies are focused on helping companies take existing code or applications, package or encapsulate them, and migrate them (“as is”) to the cloud. These are great tools for companies looking to leverage public cloud without having to change existing applications. They are also great for ALM (Application Lifecycle Management), especially when it might span both public and private cloud.
- Security Monitoring – (CloudPassage, Adallom) – Whether this includes the ability to add security functionality to an IaaS cloud (eg. Firewall, IDS/IPS, Authentication) or to monitor traffic across multiple SaaS platforms, these services allow security to be as flexible as compute in the cloud.
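As one illustration of the cost-management category above, here’s a sketch of the break-even math a tool might run when recommending on-demand vs. reserved instances. The rates are assumptions for illustration, not real cloud prices:

```python
# Break-even point between on-demand and reserved pricing. The rates
# used in the example are assumptions, not real provider prices.

def breakeven_hours(on_demand_rate, reserved_upfront, reserved_rate):
    """Hours of use at which reserved becomes cheaper than on-demand.
    Solves: on_demand_rate * h == reserved_upfront + reserved_rate * h."""
    return reserved_upfront / (on_demand_rate - reserved_rate)

h = breakeven_hours(on_demand_rate=0.10, reserved_upfront=200.0,
                    reserved_rate=0.04)
print(f"Reserved wins if the instance runs more than ~{h:.0f} hours")
```

A workload running steadily (8,760 hours a year) clears that bar easily; a dev/test instance that runs a few hours a day may not, which is why per-group/project tracking matters.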
In the past, many companies have tried to pull many or all of these areas together into a single, monolithic system. In talking to these companies, they say that many times their early customers had purchased those large systems in the past, but never got them fully installed and operational due to costs or complexities in integration.
Going Beyond Monitoring
Many of these vendors are taking monitoring a step further by integrating with critical tools in the common workflow of IT operations or developers. For example, many integrate directly with GitHub to track changes to applications or code, especially as they impact performance issues or new security threats. Others integrate collaboration models to allow group-based troubleshooting and correlation between cloud performance and patches/changes. Still others are simplifying how data/trends/maps are shared (just send a URL) so that multiple teams can see either real-time or historical information.
It’s been a couple of hectic weeks since the AWS re:Invent conference, enough time to process what was announced at what has become one of the major cloud computing events in our industry (some would say “THE” event). The event has grown to ~9000 attendees, and estimates have AWS now delivering $1B per quarter to Amazon. Considering that AWS delivers compute, storage, database, and analytics, that $4B annual run rate would make it a formidable competitor to many established IT vendors and Systems Integrators. Because Amazon does not actually break out AWS numbers, it’s difficult to know their actual profit margins, but estimates are anywhere from 40%-65% (gross margin).
I had a chance to speak about the show with the SpeakingInTech (@SpeakingInTech) podcast, hosted by The Register – http://www.theregister.co.uk/2013/11/27/speaking_in_tech_episode_86/ (AWS discussion starts at around the 24:00 mark)
AWS Wants to be the New CIO
Listening to both Andy Jassy’s and Werner Vogels’ keynote addresses, it was striking not to hear them mention the term “CIO”. That term is a mainstay of traditional IT tech conferences, but AWS made it clear that they want to disrupt the existing supply chain and focus on business groups and developers coming directly to the AWS service. They took it a step further and bifurcated the world into “IT” and “Cloud”, where “Cloud” is the thing focused on business growth.
They identified six common use-cases that bring customers to AWS:
- Dev/Test of Traditional Applications
- New Applications for the Cloud
- Supplement On-Prem with Off-Prem (Cloud) – typically analytics and batch processing – make daily adjustments, but not on production systems (overlapping utilization)
- Cloud Applications that reach back to On-Prem for services (eg. payment handling on-prem)
- Migrating traditional applications to the Cloud (websites, research simulations) – faster setup, faster performance, lower costs
- All-In (eg. NetFlix)
Nobody is Safe from AWS
By my guesstimates, the attendance at the show was 33:33:33 (%) developers, systems-integrators, customers. If you deliver IT services (VARs, SIs, Service Providers), AWS is trying to change the supply chain and where you potentially fit (or no longer fit). If you build IT equipment, AWS is trying to change the pricing and consumption model of your customers by moving them from CAPEX to OPEX and from long budget cycles to on-demand. At the 2013 event, they announced “services” that overlap VDI, Flash Storage, Monitoring, Backup, Disaster Recovery and Real-Time Analytics.
There’s really only one constant in IT (or technology) and that’s CHANGE. Technology changes, company strategies and partnerships change, and eventually best practices change. But we often get one concept wrong (or confused), because we tend to focus and obsess on the pace of change in the consumer space (see: AT&T Next). In the Enterprise, we also see perpetual change in technology, but it takes a long time before the “rules” of the technology industry change. Capital investments take time to depreciate. Technology skills and retraining can take many years to evolve. Sales channels are built over time, not to mention the maturity of the business models across various parts of the value-chain.
But we’re in the early innings of one of those significant rule changing shifts.
Technology is Changing
Whether directly or indirectly affected, cloud computing and open-source are having a significant impact on today’s IT technology. It may not be generating the direct multi-billion dollar revenues that Wall Street loves to see, but the open-source movement is having an impact in every area of technology. Whether it’s being used by companies like Google, Facebook or Amazon, or whether it’s driving projects like OpenStack, CloudStack, OpenDaylight, CloudFoundry, OpenShift, various NoSQL databases, etc., the shift in community involvement by individuals and companies is significant. It’s pushing the pace of innovation and it’s forcing companies to add developer resources towards these projects. But figuring out how or if open-source will disrupt your business or your competitors is still TBD.
But More Importantly, Business Models are Changing
Long standing partners are rethinking the value of those relationships. We saw this start happening a few years ago as Cisco and HP parted ways over servers and networks, and Dell and EMC over storage. But today’s changes aren’t just about vendors moving into new technology categories. This is about them not only disrupting their technology partners, but also their go-to-market partners and sometimes even themselves.
I’ve written before about how Cloud Computing can be confusing (here, here, here). New vendors, legacy vendors, cloudwashing, free software, automation skills to learn, etc. Whenever there is chaos and confusion, many people look for something familiar to give them a sense of direction and proximity to their existing world. And while many pundits like to talk about how Hardware and Software are becoming commoditized, or certain services (such as “Infrastructure as a Service, or IaaS”) are becoming commonplace and non-differentiated, we still have confusion about some of the most basic building block elements. Let me illustrate this with a couple examples of activities you might undertake soon.
Lesson 1 – Not all apples are created equal
This past week, a couple different groups (NetworkComputing, CloudSpectator) attempted to do baseline testing on various IaaS cloud services, in an attempt to compare them in an apples-to-apples format.
In 2013, if someone wanted to compare the cost, performance and features of a given IaaS service, you’d think that this would be a relatively simple task. Just pick a common unit of measure (CPU, RAM, Storage, maybe network bandwidth) and run some tests. Sounds simple enough, right? Think again.
The CloudSpectator report attempted to compare performance and price across 14 different IaaS providers. They used an entry-level “unit of measure” (1 VM, 2 vCPU, 4GB RAM, 50GB Storage) and ran their benchmark tests. The results were shown both in terms of raw performance and as a performance/price metric. Across a set of 60+ tests, the results showed that some Cloud providers scored better than others. The results also showed that certain providers were optimized for certain types of tests far more than for others. Some of the results were hardware-centric, while cloud architecture or the associated cloud-management software influenced others. Big deal, you might say; that’s to be expected.
But what you might not expect is that not all of the Cloud providers even offered a 2+4 configuration. Some offered 1+4, 4+4 or slightly different variations, without the ability to customize. Still others only offered higher-performance “unit of measure” on systems with much larger CPU/RAM footprints. So now the arguments started about whether or not the results were skewed because the “correct” platform may not have been chosen for each Cloud provider to deliver optimal test results.
The arguments about whether Price/Performance is a relevant measurement for Cloud offerings are valid. Sometimes services are more important to applications than raw performance or available infrastructure. Sometimes they aren’t. It depends on the application; one size does not fit all. And as we saw, one size isn’t always available to all, so end-users may have to do some re-calculations to compare Cloud services.
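The normalization problem is easy to see in code. A sketch with invented figures: each provider’s closest fit to the 2+4 spec differs, so raw benchmark scores have to be divided by price, and read alongside the actual instance shape, before any comparison means much:

```python
# Hypothetical benchmark data (all figures invented): each provider's
# closest available shape to a "2 vCPU + 4GB" unit of measure differs,
# so we normalize raw score by hourly price and keep the shape visible.

providers = {
    # name: (benchmark_score, vcpus, ram_gb, hourly_price_usd)
    "CloudA": (820, 2, 4, 0.10),  # exact 2+4 match
    "CloudB": (990, 4, 4, 0.18),  # closest offering is 4+4
    "CloudC": (410, 1, 4, 0.05),  # closest offering is 1+4
}

for name, (score, vcpus, ram, price) in providers.items():
    perf_per_dollar = score / price
    print(f"{name}: {perf_per_dollar:.0f} score-units per $/hr "
          f"({vcpus} vCPU / {ram}GB)")
```

Even in this toy example, the provider with the highest raw score is not the one with the best score per dollar, and neither number tells you whether the tested shape was the right one for your application.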