As the market for Cloud Computing products and services evolves, the stakes for success or failure (for companies, vendors, integrators, etc.) continue to rise. With that in mind, the amount of research coming to market will continue to grow. For anyone analyzing this data, or using it to help make future strategic or tactical decisions, it’s important to keep several factors in mind. Being able to read between the lines and understand what might be below the surface can make the difference between leading and spotting trends, or simply following the crowd.
- Audience – Who is the target audience of the survey? Are they IT professionals who currently work in IT operations or IT architecture, or are they application developers? It’s especially important to understand whether they come from IT, or from the groups trying to work around IT.
- Area of Focus – Do the survey results come from people focused on existing IT systems or future-looking systems (e.g., Mobile, Big Data, SDN, Automation, Open Source)? IT silos can create unique viewpoints about what problems exist and how they can be solved.
- Decision-Making / Budget-Owner – Which group(s) within the organization have responsibility for IT budget? Which groups are able to obtain funding for IT services outside the existing IT organization?
- Length and Scope of Projects – Is the research focused on the length or scope of projects? Long-term projects have a completely different framework (planning, strategic alignment, project management, budgeting, etc.) than short-term projects, which are primarily driven by immediate needs.
Every day we get bombarded by technical acronyms (BYOD, CoIT, MDM, APIs, IaaS, etc.) and vendor-speak about new ways that IT can bring agility to business. IT organizations need to mobile-enable their workforce to harness the power of Big Data to uncover new insights that will unlock differentiation and agility. And after a while, the market begins to tune out because the noise-to-signal ratio gets overwhelming.
Too often we hear technology vendors say that if all IT organizations would just operate like Google or Facebook or Twitter, then IT costs would be reduced and business productivity increased. Except this leaves many companies saying that they don’t have a “deliver digital ads” problem, so how does that approach make sense for them?
Two years ago, I was introduced to Christian Reilly (@reillyusa), who is part of the IT organization at construction leader Bechtel. Bechtel had been looking at how to solve some massive business challenges (global workforce, complex projects, internal and external employees, etc.) by better leveraging their technology investment. It required them to transform how they thought about technology, as well as implementing a new set of technologies to enable new applications. As I quickly learned from Reilly, this set of changes wasn’t something they could buy shrink-wrapped in a box, but rather it was a multi-year transformation that involved people, process and technology changes.
It had been a while since I last caught up with Reilly, but this past week I saw a very interesting video that Bechtel jointly created with Apple about their iPad rollout. While the video is produced in typical high-production-value Apple manner, under the covers it highlights the implementation of tons of very interesting technology. Their solution is not being used to serve ads or update a social network, but instead is focused on things that aren’t sexy but are critical for Bechtel to solve their business challenges and bring value to their customers. Let’s take a look at some of the things behind the scenes.
One of the more interesting aspects of public Cloud Computing, beyond all the elements of on-demand (pricing, scaling, etc.), is the number of add-on services that have emerged from the ecosystem to add value around core platforms like Amazon AWS, Rackspace, Azure, Google Compute Engine, etc. Some of these services include Boundary, New Relic, enStratius, Rightscale, Cloudability, ShopForCloud, Cloud Checkr, Newvem, Cloudyn, CloudPassage and many others. These services are allowing customers to not only fill in gaps with the service offerings from those platforms, but also consume these add-on services in the same on-demand manner as the underlying IaaS, PaaS or SaaS platforms.
But an interesting thing tends to happen with software platforms, both on-premise and in the cloud. Over time, they tend to eat their ecosystems. We’ve all experienced it with platforms such as Windows, where things like TCP/IP stacks, web browsers, media players and all sorts of other functionality used to require 3rd-party add-on capabilities. And now we’re beginning to experience it with Cloud Computing platforms. We saw it over the past couple of weeks with announcements from Amazon AWS – the OpsWorks and TrustedAdvisor services. It’s a classic case of the platform provider wanting to deliver an end-to-end experience to the customer, as well as adding stickiness to the platform. For the 3rd-party tools vendors, it becomes an inflection point where they have to decide if they now want to compete on price, features, unique technology, or just fold up shop. We discussed some of this on The Cloudcast Eps.77 (starting at the 19:30 mark).
So if you’re a customer of any of these services, what should you do?
With the Spring version of the OpenStack Summit coming up in just a few weeks, I’ve been thinking about the key indicators or questions that I have about OpenStack as 2013 continues.
1. Who are the major OpenStack customers?
While each OpenStack summit highlights a new set of users or use-cases, the majority of them are either small-scale or only using a limited number of OpenStack services. This would align to the modular nature of the projects, and to some extent their competitive goal vs. AWS, but it doesn’t align to a complete “stack” solution. When is it realistic to see Enterprise customers that were previously VMware-centric move to a complete OpenStack environment?
2. Are there already too many distributions? Should they be considered competitive, similar to Linux distributions in the 1990s and 2000s?
For a project that is three years old, what is a reasonable number of distributions to have appeared on the market? How are customers supposed to keep track of all the variations? Does the OpenStack community expect this number to grow (slightly or significantly) before it begins to pare down?
- Rackspace (2 versions – Private, Public)
- HP Cloud (public cloud)
- Piston Cloud
- Nebula (shipping details TBD)
- Red Hat
- IBM (shipping details TBD)
- Various Linux distributions
3. What is the “Open” goal for OpenStack these days? (open-source, multi-cloud)
One of the main goals of OpenStack is to allow open interoperability between clouds to (potentially) facilitate the free movement of applications or data. We’re already seeing the early Service Providers (Rackspace, HP Cloud) shipping incompatible versions. Is the open cloud still a goal, or have market priorities made that almost impossible?
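One way customers hedge against incompatible provider APIs is a thin abstraction layer, which is the approach libraries like Apache Libcloud take. Here’s a minimal sketch of the idea; the driver classes, method names and ID formats are hypothetical, purely for illustration:

```python
from abc import ABC, abstractmethod


class CloudDriver(ABC):
    """Minimal provider-agnostic interface (illustrative only)."""

    @abstractmethod
    def create_server(self, name: str, size: str) -> str:
        """Provision a server and return its provider-specific ID."""

    @abstractmethod
    def list_servers(self) -> list:
        """Return the names of running servers."""


class RackspaceDriver(CloudDriver):
    # Stub standing in for a real provider API client.
    def __init__(self):
        self._servers = []

    def create_server(self, name, size):
        self._servers.append(name)
        return f"rax-{name}"

    def list_servers(self):
        return list(self._servers)


class HPCloudDriver(CloudDriver):
    # Stub for a second, incompatible provider.
    def __init__(self):
        self._servers = []

    def create_server(self, name, size):
        self._servers.append(name)
        return f"hp-{name}"

    def list_servers(self):
        return list(self._servers)


def migrate(src: CloudDriver, dst: CloudDriver, size: str = "m1.small") -> list:
    # Re-create every server from one provider on another; application
    # code never touches a provider-specific API directly.
    return [dst.create_server(name, size) for name in src.list_servers()]
```

The point is not the stub code itself, but that portability lives in the abstraction: as long as each provider maps cleanly onto the interface, moving workloads is a loop, not a rewrite.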
For the past 3-4 years, we’ve seen tremendous growth in the level of virtualization that has been adopted within Enterprise and Mid-Market data centers. Statistics show that we reached the tipping point for Virtual Machines vs. Physical Machines in 2009, with that lead expected to grow to nearly 2x by end of this year.
And as VMware’s CEO told us during his VMworld 2012 keynote, virtualized workloads now account for 60% of all workloads in the data center.
So we have lots and lots of VMs being created, yet we seem to be somewhat stuck in terms of which applications are getting virtualized. And in case it’s not clear which applications make up the “other 40%”, it’s the business-critical ones: ERP, CRM, HCM, Exchange, and a bunch of other nasty applications that cost a lot of money to operate and which don’t immediately save money when they get consolidated.
VMware has been going after this market for the last couple years, by adding advancements to their ESX hypervisor to handle larger VMs (more RAM, more vCPUs, new clustering and HA mechanisms) and more granular I/O capabilities (Storage I/O Control, Network I/O Control, QoS). It would appear, on the surface, that the pieces should be in place to virtualize those next 40% of applications. So what’s holding this back from gaining mainstream adoption?
Here’s a list of considerations:
Almost every aspect of both our personal and professional lives has evolved to the point where a variety of choice is the expected norm. We buy things how we want; we work where it makes the most sense; we personalize how we appear and communicate; and we’re partnering with a greater number of organizations than ever before. Just look at how many apps are on your smartphone or open tabs are in your browser, and it doesn’t take long to realize that we have internalized how to find the right fit for each challenge.
When it comes to IT organizations, we haven’t been nearly as flexible. While SaaS adoption has grown for many non-differentiated services, the adoption of Cloud Computing is often considered the 3rd option after internal data-center resources or outsourcing contracts. But this way of thinking is beginning to change. We’re starting to see large organizations become frustrated with their outsourcing contracts (here, here). We’re quickly seeing a significant change in the companies identified as leaders and visionaries (2010, 2011, 2012) in the cloud service provider market, especially towards those that offer differentiated services. Throw in the emergence of several viable PaaS platforms (Heroku, CloudFoundry, Apprenda, etc.) and we’re on the cusp of that 3rd option, variations of Hybrid Cloud, becoming more and more mainstream for IT organizations.
So when is the right time to consider either migrating existing applications, or beginning a journey with new application models? Here are some triggers to consider:
- The end of existing outsourcing contracts that haven’t kept up with technology trends, especially those longer than 3yrs.
- Uncertainty over the longevity of existing/legacy hardware platforms, such as Itanium or RISC-based servers.
- Uncertainty about the longevity of existing/legacy hardware providers, such as Dell or HP.
- The opportunity to truly change the economics of business-critical applications by moving to both a virtualized environment and OPEX-based cloud deployment model.
- Shifting business environments, driven by mergers, globalization, or evolving industry regulation (HIPPA, FedRAMP, PCI-DSS, etc.).
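The economics trigger above is easy to sanity-check with back-of-the-envelope math. A quick sketch, where every number is made up purely for illustration (substitute your own hardware quotes and provider pricing):

```python
def three_year_capex_cost(purchase_price: float, annual_maintenance: float) -> float:
    """Total 3-year cost of buying hardware up front (CAPEX model)."""
    return purchase_price + 3 * annual_maintenance


def three_year_opex_cost(monthly_fee: float) -> float:
    """Total 3-year cost of renting equivalent capacity (OPEX model)."""
    return 36 * monthly_fee


# Hypothetical figures for one business-critical application server:
owned = three_year_capex_cost(10_000, 1_500)   # purchase + maintenance
rented = three_year_opex_cost(350)             # monthly cloud fee x 36
```

Even this crude comparison ignores the harder-to-quantify side of OPEX: capacity you can shrink or cancel when the business shifts, which a depreciating asset never lets you do.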
For the past few years, there has been greater recognition that a few major trends are invading the IT landscape – smarter business users, challenging IT budgets (here, here), and greater availability of Cloud Computing services (especially IaaS and SaaS). Unfortunately, in parallel to those realizations, there is a growing desire by some to classify this as “Shadow IT”, as if this new desire to drive productivity were the equivalent of an illegal black market.
As analyst Ben Kepes points out, there is quite a bit of demand from end-users to leverage new services to help them drive productivity and better compete in their markets.
So who are the good guys in the Shadow IT discussion?
Who are the bad guys?
And does it do anyone any good to draw a definitive line between productivity and risk?
Does it do IT organizations any good not to consider leveraging every potential resource that can help give their business an advantage, the same way every other direct report to the CEO does? Does it do line-of-business owners any good not to consult their technology experts?
If we didn’t work in IT and one of our employees came up to us with a great idea about how to drive productivity, would we call it “Shadow Worker Productivity”? I doubt it.
I completely understand that this evolution of IT service, delivered in-house or via Cloud Service Providers, introduces a whole new set of technology, process and cultural changes. But they are being driven by productivity. They are being driven by risk management (time-to-market vs. following existing rules). And they are being driven by excess DEMAND for the use of technology to solve business problems.
In all reality, “Shadow IT” has very little to do with traditional IT. It’s Economics 101 – supply and demand. Traditional IT isn’t structured or funded to keep up with today’s new demand models. But that demand isn’t a black market. It’s not illegal goods and services. It is an opportunity. Actually, it’s many opportunities.
But if our industry keeps calling it “Shadow IT”, keeps trying to make it about Good Guys vs. Bad Guys, then we’ll miss the opportunity to actually define how impactful technology can be in accelerating the cycle from great idea to great execution.
Every few months (or weeks), the Cloud Computing industry seems to pick a topic and beat it to death from a technology or religious point of view. The concept of “Cloud SLAs” has been doing the rounds lately. Conveniently, these particular discussions came up after a few well publicized Public Cloud outages.
Lydia Leong (Gartner, @cloudpundit) recently got the pot stirring with her piece about HP and Amazon AWS SLAs. Lydia is very well respected in the industry, and she does a nice job of digging into the details of various vendors’ SLAs. She obviously has a deep understanding of this space, especially as it relates to Enterprise customers, as she leads the Gartner IaaS Magic Quadrant program.
There is some interesting back and forth in the comments about what is a proper definition of an SLA. That would be all well and good if Cloud Computing used lawyers or auditors to solve business problems. But it doesn’t. It uses technology. And quite honestly, the business leaders that are paying for various Cloud Computing services don’t care about the legalese or the underlying technology. They care about the business. They care about moving the business forward and managing business risks. Cloud SLAs, in their current form (in most cases), don’t align the business risk and the technology risk very well.
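Part of the misalignment is that availability percentages sound impressive while promising very little in business terms. A quick calculation shows how much downtime a given SLA number actually permits (assuming a 30-day month for simplicity):

```python
def allowed_downtime_minutes(availability: float, days_in_month: int = 30) -> float:
    """Minutes of downtime per month that still satisfy a given availability SLA."""
    total_minutes = days_in_month * 24 * 60
    return (1 - availability) * total_minutes


# A 99.95% SLA permits roughly 21.6 minutes of downtime per month,
# while 99% permits over 7 hours -- yet both get marketed as "highly available".
monthly_9995 = allowed_downtime_minutes(0.9995)
monthly_99 = allowed_downtime_minutes(0.99)
```

Whether 21 minutes of outage costs the business a rounding error or a quarter’s revenue is exactly the business-risk question the legalese tends to skip.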
Let’s step back a second and look at this in a slightly different context…
While the discussion about Cloud Computing has evolved over the past few years, far too often it still devolves into a semi-religious debate about Private Cloud vs. Public Cloud. Traditional IT viewpoints say that security and reliability should rule the day, while more progressive viewpoints argue that this old thinking is slowing innovation and the pace of business growth. Not surprisingly, these viewpoints tend to align to either a Private or Public slant.
What is somewhat surprising is that IT organizations have not followed a strategy that has been proven over many years and against various ROI calculations: the practice of “tiering” their applications. In the past this meant applying various levels of resources (typically faster CPUs, more RAM, faster networks, various levels of redundancy, etc.) to different classes of applications. While some will say that dragging any legacy concepts into the new cloud world is a disaster waiting to happen, the reality is that most Enterprises have a huge variety of application needs and application types. Expecting them all to run in a similar manner, with similar SLAs or costs, is not realistic. It would be like saying that since everyone can wear a suit and tie to the office, they should all be paid executive compensation.
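The tiering idea translates directly into a simple policy table mapping application classes to resource and SLA profiles. A sketch, where the tier names, profiles and classification are all made up for illustration; real values would come from your own capacity planning and SLA negotiations:

```python
# Illustrative tier definitions -- the profiles are hypothetical examples.
TIERS = {
    "tier1": {"availability": 0.9999, "redundancy": "active-active",  "cpu": "fastest"},
    "tier2": {"availability": 0.999,  "redundancy": "active-passive", "cpu": "standard"},
    "tier3": {"availability": 0.99,   "redundancy": "none",           "cpu": "standard"},
}


def tier_for(app_class: str) -> str:
    """Map an application class to a tier (hypothetical classification rules)."""
    mapping = {
        "business-critical": "tier1",  # ERP, CRM, Exchange, etc.
        "internal": "tier2",           # departmental apps
        "dev-test": "tier3",           # disposable environments
    }
    return mapping.get(app_class, "tier3")
```

The same table works whether a tier lands on premium internal infrastructure, a commodity Private Cloud, or a Public Cloud provider, which is the point: the tier, not the religion, decides where the application runs.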
Even though I have deeply ingrained networking DNA from having worked many years at Cisco, I’ve tried to avoid writing about SDN too much. Does it get a lot of hype? Yes. Is it still in the early stages with lots of room for innovation and new ideas? Yes.
But over the past few weeks, I’ve come across a few “SDN Use-Cases” that are pretty straightforward, so I thought I’d write about them. Now keep in mind, this won’t be your typical blog about SDN, because I promise not to do any of the following:
- Discuss why SDN means the death of Cisco &/or Juniper
- Discuss why SDN will immediately build networks using commodity x86 boxes, because they have fast chips (btw: listen to Packet Pushers #88 if you want good insight into why x86 servers don’t work exactly like switches/routers)
- Discuss how SDN is only applicable to “web-scale” networks and “web 2.0 scale-out, share-nothing” applications
- Mention “OpenFlow” (in a good way or bad way)
- Make a list of which SDN start-ups will get acquired in 2013
Backstory: Due to economic uncertainty, new regulations, and maturity of the Cloud SP markets, 2013 and 2014 are expected to see a significant rise in the number of applications, both existing and new, that are run in SP Cloud environments.
These businesses are going to be looking for flexibility in how they onboard applications, how applications are protected, and how they can add, remove or change the environments.
So if you’re one of these viable Cloud SPs, you’re going to have a couple of use-cases that need fairly immediate attention.