From Silos to Services: Cloud Computing for the Enterprise


March 11, 2013  6:11 PM

Cloud Computing – Platforms vs. APIs vs. Tools vs. Features

Brian Gracely

One of the more interesting aspects of public Cloud Computing, beyond all the on-demand elements (pricing, scaling, etc.), is the number of add-on services that have emerged from the ecosystem to add value around core platforms like Amazon AWS, Rackspace, Azure and Google Compute Engine. Some of these services include Boundary, New Relic, enStratius, RightScale, Cloudability, ShopForCloud, CloudCheckr, Newvem, Cloudyn, CloudPassage and many others. They allow customers not only to fill gaps in the platforms' native service offerings, but also to consume the add-ons in the same on-demand manner as the underlying IaaS, PaaS or SaaS platforms.

But an interesting thing tends to happen with software platforms, both on-premise and in the cloud: over time, they tend to eat their ecosystems. We’ve all experienced it with platforms such as Windows, where things like TCP/IP stacks, web browsers, media players and all sorts of other functionality used to require 3rd-party add-ons. And now we’re beginning to experience it with Cloud Computing platforms. We saw it over the past couple of weeks with announcements from Amazon AWS – the OpsWorks and Trusted Advisor services. It’s a classic case of the platform provider wanting to deliver an end-to-end experience to the customer, as well as adding stickiness to the platform. For the 3rd-party tools vendors, it becomes an inflection point where they have to decide whether to compete on price, features or unique technology, or just fold up shop. We discussed some of this on The Cloudcast Eps.77 (starting at the 19:30 mark).

So if you’re a customer of any of these services, what should you do? Continued »

February 26, 2013  11:11 PM

10 Questions for the 2013 OpenStack Summit in Portland

Brian Gracely

With the Spring version of the OpenStack Summit coming up in just a few weeks, I’ve been thinking about the key indicators or questions that I have about OpenStack as 2013 continues.

1. Who are the major OpenStack customers?

While each OpenStack Summit highlights a new set of users or use-cases, the majority of them are either small-scale or only using a limited number of OpenStack services. This aligns with the modular nature of the projects, and to some extent with their competitive goal vs. AWS, but it doesn’t align with a complete “stack” solution. When is it realistic to expect Enterprise customers that were previously VMware-centric to move to a complete OpenStack environment?

2. Are there already too many distributions? Should they be considered competitive, similar to Linux distributions in the 1990s and 2000s?

For a project that is three years old, what is a reasonable number of distributions to have appeared on the market? How are customers supposed to keep track of all the variations? Does the OpenStack community expect this number to grow (modestly or significantly) before it begins to pare down?

  • Rackspace (2 versions – Private, Public)
  • HP Cloud (public cloud)
  • Piston Cloud
  • Nebula (shipping details TBD)
  • Cloudscaling
  • Cisco
  • Dell
  • Red Hat
  • IBM (shipping details TBD)
  • Various Linux distributions

3. What is the “Open” goal for OpenStack these days? (open-source, multi-cloud)

One of the main goals of OpenStack is to allow open interoperability between clouds, to (potentially) facilitate free movement of applications and data. We’re already seeing the early Service Providers (Rackspace, HP Cloud) running incompatible versions. Is the open cloud still a goal, or have market priorities made that almost impossible? Continued »


February 17, 2013  3:57 PM

Is a New Journey Needed for Business-Critical Applications?

Brian Gracely

For the past 3-4 years, we’ve seen tremendous growth in the level of virtualization adopted within Enterprise and Mid-Market data centers. Statistics show that we reached the tipping point for Virtual Machines vs. Physical Machines in 2009, with that lead expected to grow to nearly 2x by the end of this year.

And as VMware’s CEO told us during his VMworld 2012 keynote, virtualized workloads now account for 60% of all workloads in the data center.

So we have lots and lots of VMs being created, and yet we seem to be somewhat stuck in terms of which applications are getting virtualized. In case it’s not clear which applications make up the “other 40%”, it’s the business-critical ones: ERP, CRM, HCM, Exchange, and a number of other demanding applications that cost a lot of money to operate and don’t immediately save money when they get consolidated.

VMware has been going after this market for the last couple of years by adding advancements to its ESX hypervisor to handle larger VMs (more RAM, more vCPUs, new clustering and HA mechanisms) and more granular I/O capabilities (Storage I/O Control, Network I/O Control, QoS). It would appear, on the surface, that the pieces should be in place to virtualize that next 40% of applications. So what’s holding this back from gaining mainstream adoption?

Here’s a list of considerations: Continued »


February 17, 2013  2:30 PM

Unlocking the 3rd Option – Hybrid Cloud

Brian Gracely

Almost every aspect of both our personal and professional lives has evolved to the point where a variety of choice is the expected norm. We buy things how we want; we work where it makes the most sense; we personalize how we appear and communicate; and we’re partnering with a greater number of organizations than ever before. Just look at how many apps are on your smartphone or how many tabs are open in your browser, and it doesn’t take long to realize that we have internalized how to find the right fit for each challenge.

When it comes to IT organizations, we haven’t been nearly as flexible. While SaaS adoption has grown for many non-differentiated services, the adoption of Cloud Computing is often considered the third option, after internal data-center resources or outsourcing contracts. But this way of thinking is beginning to change. We’re starting to see large organizations become frustrated with their outsourcing contracts (here, here). We’re quickly seeing a significant change in the companies identified as leaders and visionaries (2010, 2011, 2012) in the cloud service provider market, especially towards those that offer differentiated services. Throw in the emergence of several viable PaaS platforms (Heroku, CloudFoundry, Apprenda, etc.) and we’re on the cusp of the third option, variations of Hybrid Cloud, becoming more and more mainstream for IT organizations.

So when is the right time to consider either migrating existing applications, or beginning a journey with new application models? Here are some triggers to consider:

  • The end of existing outsourcing contracts that haven’t kept up with technology trends, especially those longer than 3 years.
  • Uncertainty over the longevity of existing/legacy hardware platforms, such as Itanium or RISC-based servers.
  • Uncertainty about the longevity of existing/legacy hardware providers, such as Dell or HP.
  • The opportunity to truly change the economics of business-critical applications by moving to both a virtualized environment and OPEX-based cloud deployment model.
  • Shifting business environments, driven by mergers, globalization, or evolving industry regulation (HIPAA, FedRAMP, PCI-DSS, etc.).

Continued »


February 10, 2013  11:43 PM

Shadow IT is not Good Guys vs. Bad Guys

Brian Gracely

For the past few years, there has been greater recognition that a few major trends are invading the IT landscape – smarter business users, challenging IT budgets (here, here), and greater availability of Cloud Computing services (especially IaaS and SaaS). Unfortunately, in parallel to those realizations, there is a growing desire by some to classify this as “Shadow IT”, as if this new desire to drive productivity were the equivalent of an illegal black market.

As analyst Ben Kepes points out, there is quite a bit of demand from end-users to leverage new services to help them drive productivity and better compete in their markets.

So who are the good guys in the Shadow IT discussion?

Who are the bad guys?

And does it do anyone any good to draw a definitive line between productivity and risk?

Does it do IT organizations any good not to consider leveraging every potential resource that can help give their business an advantage, the same way every other direct report to the CEO does? Does it do the line-of-business owners any good not to consult their technology experts?

If we didn’t work in IT and one of our employees came to us with a great idea about how to drive productivity, would we call it “Shadow Worker Productivity”? I doubt it.

I completely understand that this evolution of IT service, delivered in-house or via Cloud Service Providers, introduces a whole new set of technology, process and cultural changes. But they are being driven by productivity. They are being driven by risk management (time-to-market vs. following existing rules). And they are being driven by excess DEMAND for the use of technology to solve business problems.

In all reality, “Shadow IT” has very little to do with traditional IT. It’s Economics 101 – supply and demand. Traditional IT isn’t structured or funded to keep up with today’s new demand models. But that demand isn’t a black market. It’s not illegal goods and services. It is an opportunity. Actually, it’s many opportunities.

But if our industry keeps calling it “Shadow IT”, and keeps trying to make it about Good Guys vs. Bad Guys, then we’ll miss the opportunity to actually define how impactful technology can be in accelerating the cycle from great idea to great execution.


January 31, 2013  10:48 AM

Most Cloud SLAs don’t Cover Your *aaS

Brian Gracely

Every few months (or weeks), the Cloud Computing industry seems to pick a topic and beat it to death from a technology or religious point of view. The concept of “Cloud SLAs” has been doing the rounds lately. Conveniently, these particular discussions came up after a few well publicized Public Cloud outages.

Lydia Leong (Gartner, @cloudpundit) recently stirred the pot with her piece about HP and Amazon AWS SLAs. Lydia is very well respected in the industry and does a nice job of digging into the details of various vendors’ SLAs. She obviously has a deep understanding of this space, especially as it relates to Enterprise customers, as she leads the Gartner IaaS Magic Quadrant program.

There is some interesting back and forth in the comments about what the proper definition of an SLA is. That would be all well and good if Cloud Computing used lawyers or auditors to solve business problems. But it doesn’t. It uses technology. And quite honestly, the business leaders who are paying for various Cloud Computing services don’t care about the legalese or the underlying technology. They care about the business. They care about moving the business forward and managing business risks. Cloud SLAs, in their current form (in most cases), don’t align the business risk and the technology risk very well.
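To make that gap concrete, here’s a minimal sketch in Python (the credit schedule, outage length and revenue figures are all hypothetical) that converts an SLA percentage into allowed downtime per month and compares the provider’s service credit against a notional business loss:

```python
# Rough sketch: compare what a cloud SLA credit pays back vs. what an
# outage actually costs the business. All numbers are hypothetical.

HOURS_PER_MONTH = 730  # average month

def allowed_downtime_minutes(sla_percent):
    """Minutes of downtime per month still within the stated SLA."""
    return HOURS_PER_MONTH * 60 * (1 - sla_percent / 100.0)

def sla_gap(sla_percent, outage_minutes, monthly_bill, credit_percent,
            revenue_per_minute):
    """Compare the provider's service credit to the business impact."""
    credit = monthly_bill * credit_percent / 100.0        # what the provider refunds
    business_loss = outage_minutes * revenue_per_minute   # what the outage cost
    return {
        "allowed_minutes": allowed_downtime_minutes(sla_percent),
        "service_credit": credit,
        "business_loss": business_loss,
        "uncovered_risk": business_loss - credit,
    }

# Example: a 99.95% SLA allows roughly 22 minutes/month of downtime.
# A 4-hour outage on a $5,000/month bill with a 10% credit returns $500,
# while a business losing $200/minute is out $48,000.
print(sla_gap(99.95, outage_minutes=240, monthly_bill=5000,
              credit_percent=10, revenue_per_minute=200))
```

With those made-up numbers, the credit covers about 1% of the business impact – exactly the kind of misalignment between business risk and technology risk described above.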

Let’s step back a second and look at this in a slightly different context…

Continued »


January 22, 2013  9:50 AM

Extending the Concept of “Application Tiering” to Cloud

Brian Gracely

While the discussion about Cloud Computing has evolved over the past few years, far too often it still devolves into a semi-religious debate about Private Cloud vs. Public Cloud. Traditional IT viewpoints say that security and reliability should rule the day, while more progressive viewpoints argue that this old thinking is slowing innovation and the pace of business growth. Not surprisingly, these viewpoints tend to align with either a Private or Public slant.

What is somewhat surprising is that IT organizations have not followed a strategy that has been proven over many years and against various ROI calculations: the practice of “tiering” their applications. In the past this meant applying various levels of resources (typically faster CPUs, more RAM, faster networks, various levels of redundancy, etc.) to different classes of applications. While some will say that dragging any legacy concepts into the new cloud world is a disaster waiting to happen, the reality is that most Enterprises have a huge variety of application needs and application types. Expecting them all to run in a similar manner, with similar SLAs or costs, is not realistic. It would be like saying that because everyone can wear a suit and tie to the office, they should all be paid executive compensation. Continued »
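As a rough illustration of how tiering could carry over to cloud placement, here’s a minimal sketch (the tier names, thresholds and placement rules are invented for illustration, not an established framework):

```python
# Minimal sketch of extending "application tiering" to cloud placement.
# Tier definitions and classification rules below are illustrative only.

TIERS = {
    # each tier maps to an availability target, an infrastructure profile
    # and a default placement
    "tier-1": {"availability": "99.99%", "profile": "dedicated, redundant", "placement": "private cloud"},
    "tier-2": {"availability": "99.9%",  "profile": "shared, HA pair",      "placement": "private or hybrid"},
    "tier-3": {"availability": "99.5%",  "profile": "shared, best effort",  "placement": "public cloud"},
}

def classify(app):
    """Pick a tier from simple business attributes of the application."""
    if app["revenue_impacting"] and app["data_sensitivity"] == "high":
        return "tier-1"
    if app["revenue_impacting"] or app["data_sensitivity"] == "high":
        return "tier-2"
    return "tier-3"

crm = {"name": "CRM", "revenue_impacting": True, "data_sensitivity": "high"}
wiki = {"name": "team wiki", "revenue_impacting": False, "data_sensitivity": "low"}

for app in (crm, wiki):
    tier = classify(app)
    print(app["name"], "->", tier, TIERS[tier]["placement"])
```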


January 22, 2013  1:01 AM

Some Simple SDN Use-Cases

Brian Gracely

Even though I have deeply ingrained networking DNA from having worked many years at Cisco, I’ve tried to avoid writing about SDN too much. Does it get a lot of hype? Yes. Is it still in the early stages with lots of room for innovation and new ideas? Yes.

But over the past few weeks, I’ve come across a few “SDN use-cases” that are pretty straightforward, so I thought I’d write about them. Keep in mind, this won’t be your typical blog about SDN, because I promise not to do any of the following:

  • Discuss why SDN means the death of Cisco &/or Juniper
  • Discuss why SDN will immediately build networks using commodity x86 boxes, because they have fast chips (btw: listen to Packet Pushers #88 if you want good insight into why x86 servers don’t work exactly like switches/routers)
  • Discuss how SDN is only applicable to “web-scale” networks and “web 2.0 scale-out, share-nothing” applications
  • Mention “OpenFlow” (in a good way or bad way)
  • Make a list of which SDN start-ups will get acquired in 2013

Backstory: Due to economic uncertainty, new regulations, and maturity of the Cloud SP markets, 2013 and 2014 are expected to see a significant rise in the number of applications, both existing and new, that are run in SP Cloud environments.

These businesses are going to be looking for flexibility in how they onboard applications, how applications are protected, and how they can add, remove or change the environments.

So if you’re one of these viable Cloud SPs, you’re going to have a couple of use-cases that need fairly immediate attention.
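As a sketch of what “onboarding an application” might look like against a controller-driven network, here’s a hypothetical example; the controller URL, endpoints and payload shapes are invented for illustration and don’t reflect any particular SDN product’s API:

```python
# Hypothetical sketch of tenant onboarding against an SDN controller API.
# Endpoints and payloads are invented for illustration only; consult your
# controller's actual API documentation.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"  # placeholder URL

def onboard_tenant(tenant, segment_id, allowed_ports):
    """Create an isolated network segment and a basic security policy."""
    net = requests.post(f"{CONTROLLER}/networks", json={
        "tenant": tenant,
        "segment": segment_id,
        "isolated": True,
    }).json()

    policy = requests.post(f"{CONTROLLER}/policies", json={
        "network_id": net["id"],
        "ingress": [{"port": p, "action": "allow"} for p in allowed_ports],
        "default": "deny",
    }).json()

    return net["id"], policy["id"]

# Example: onboard a new customer app, exposing only HTTPS.
# net_id, policy_id = onboard_tenant("acme-webapp", 2101, [443])
```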

Continued »


January 6, 2013  1:16 PM

The Dream of Multi-Cloud is Alive in Portland?

Brian Gracely

As I was getting back into the swing of things after the holidays and reviewing a bunch of news sources, a couple of things caught my eye. The first was the return of Portlandia, which made me think of one of my favorite lines: “Portland is a city where young people go to retire.”

The next item I noticed was the announcement of the OpenStack Summit 2013 (Spring) being held in Portland. OpenStack is the collection of open-source cloud computing projects with the goal of creating an open alternative to existing cloud computing environments. It promises to allow customers to avoid lock-in and to support multi-cloud environments. So, coincidentally, the dream of the open multi-cloud is going to be alive in Portland. Tattoos and clowning are optional.

Or is it?

I was somewhat surprised to see that Dell claims OpenStack is being “dramatically forked”, causing them to delay their public cloud offering until late 2013 or 2014. This came only a few weeks after they announced support for OpenStack for their Private Cloud offering. While this is only one data point, it also appears to be confirmed by Dell’s OpenStack lead Rob Hirschfeld. These comments don’t imply any significant problems with OpenStack, but they do raise the question of how various companies will balance the tradeoffs of open projects and quarterly revenue demands. Innovation vs. Operations vs. Implementation. How much of the multi-cloud interoperability burden will be placed on customers, and how easy will it be for them to know where multi-cloud is possible? And does any vendor or Cloud Provider really have any incentive to help customers move their application workloads from one cloud to another?

The last item I saw was a knowledgeable SysAdmin questioning whether OpenStack actually prevents lock-in. He reinforced my prior statement that “open” doesn’t always make things less expensive or less complicated. Whether it’s IT Operations or Developers, almost any decision made about technology comes with some level of cost (people, hardware, software, licenses, integration), so “lock-in” is nothing more than the risk of making a decision, plus ongoing costs, plus the cost of future changes.
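That framing lends itself to a back-of-the-envelope model. Here’s a minimal sketch that treats lock-in exposure as decision risk plus ongoing costs plus the expected cost of a future change; all of the figures in the example are made up:

```python
# Back-of-the-envelope lock-in model: decision risk + ongoing costs +
# expected cost of a future change. All example figures are made up.

def lockin_exposure(decision_risk_cost, annual_running_cost, years,
                    future_migration_cost, migration_probability):
    """Expected total cost of committing to a platform for `years`."""
    ongoing = annual_running_cost * years
    expected_migration = future_migration_cost * migration_probability
    return decision_risk_cost + ongoing + expected_migration

# Compare a proprietary stack vs. an "open" stack that still needs
# integration work; "open" lowers some costs but rarely makes them zero.
proprietary = lockin_exposure(50_000, 120_000, 3, 300_000, 0.3)
open_stack = lockin_exposure(80_000, 100_000, 3, 150_000, 0.3)
print(proprietary, open_stack)
```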

How important is multi-cloud interoperability to your future IT plans?

How confident are you that multi-cloud technology will be available to your business when the time comes for those capabilities?


December 24, 2012  8:00 AM

Be Aware of Your Cloud Computing Costs

Brian Gracely

Several years ago, it wasn’t unusual to hear people say that public cloud computing was significantly cheaper than trying to build a private cloud computing environment. This was mostly because people would see the cost per hour (in pennies) and immediately conclude it was significantly less than the large CAPEX bill they had recently paid for racks of equipment in their own data center.

But over time, as more applications were run in cloud computing environments, people began to understand a few basic cost principles:

  • For short-term projects, public costs are often less.
  • For high-capacity projects (100s-1000s of servers), or highly variable projects, public costs are often less.
  • For long-running projects, private costs are often less.
  • For limited variability projects, private costs are often less.

There will be plenty of people who can come back with examples of similar projects where the costs are higher (or lower) in one environment or the other. In fact, people using cloud computing often find that it doesn’t significantly reduce costs over time. Instead, the prevailing ROI is beginning to be measured in levels of agility, better application performance, or time to market for new ideas (or applications).

Regardless of whether a company runs its applications in public clouds or private clouds, it’s important to understand how costs are incurred. Today, it’s still difficult to make an apples-to-apples comparison between environments, as there is not a consistent unit of measurement or a consistent list of which costs are included. [See this video for a short explanation of why cloud computing still isn’t priced like commodity markets, or listen to this podcast] Continued »
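One way to move toward an apples-to-apples view is to normalize both environments to a cost per VM-hour over the life of the project. Here’s a minimal sketch of that comparison; every rate, utilization figure and amortization period below is hypothetical:

```python
# Rough sketch: normalize public vs. private cloud to cost per VM-hour.
# Every rate, utilization figure and amortization period is hypothetical.

HOURS_PER_YEAR = 8760

def public_cost_per_vm_hour(on_demand_rate, mgmt_overhead_rate=0.0):
    """Public cloud: pay-per-hour plus any add-on tooling overhead."""
    return on_demand_rate + mgmt_overhead_rate

def private_cost_per_vm_hour(capex, amortization_years, annual_opex,
                             vm_capacity, utilization):
    """Private cloud: amortized CAPEX plus OPEX, spread over used VM-hours."""
    annual_cost = capex / amortization_years + annual_opex
    used_vm_hours = vm_capacity * HOURS_PER_YEAR * utilization
    return annual_cost / used_vm_hours

# Example: a $500k build amortized over 3 years, $150k/year to operate,
# 500-VM capacity at 60% utilization vs. a $0.12/hour public instance
# with $0.02/hour of management tooling on top.
print(public_cost_per_vm_hour(0.12, 0.02))
print(private_cost_per_vm_hour(500_000, 3, 150_000, 500, 0.60))
```

With these made-up numbers the two options land within a couple of cents of each other per VM-hour, which is exactly why the decision tends to hinge on utilization and project duration rather than the sticker price.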


