From Silos to Services: Cloud Computing for the Enterprise


January 6, 2013  1:16 PM

The Dream of Multi-Cloud is Alive in Portland?

Brian Gracely

As I was getting back into the swing of things after the holidays and reviewing a bunch of news sources, a couple of things caught my eye. The first was the return of Portlandia, which made me think of one of my favorite lines, “Portland is a city where young people go to retire.”

The next item I noticed was the announcement of the OpenStack Summit 2013 (Spring), being held in Portland. OpenStack is the collection of open source cloud computing projects with the goal of creating an open alternative to existing cloud computing environments. It promises to let customers avoid lock-in and support multi-cloud environments. So, coincidentally, the dream of the open multi-cloud is going to be alive in Portland. Tattoos and clowning are optional.

Or is it?

I was somewhat surprised to see that Dell claims OpenStack is being “dramatically forked”, causing them to delay their public cloud offering until late 2013 or 2014. This came only a few weeks after they announced support for OpenStack for their Private Cloud offering. While this is only one data point, it also appears to be confirmed by Dell’s OpenStack lead Rob Hirschfeld. These comments don’t imply any significant problems with OpenStack, but they do raise the question of how various companies will balance the tradeoffs between open projects and quarterly revenue demands. Innovation vs. Operations vs. Implementation. How much of the multi-cloud interoperability burden will be placed on customers, and how easy will it be for them to know where multi-cloud is possible? And does any vendor or Cloud Provider really have any incentive to help customers move their application workloads from one cloud to another?

The last item I saw was a knowledgeable SysAdmin questioning whether OpenStack actually prevents lock-in. He reinforced my prior statement that “open” doesn’t always make things less expensive or less complicated. Whether it’s IT Operations or Developers, almost any decision made about technology comes with some level of cost (people, hardware, software, licenses, integration), so “lock-in” is nothing more than the risk of making a decision, plus ongoing costs, plus the cost of future changes.
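
To make that framing concrete, here is a toy calculation; the function is just one way to express “decision risk plus ongoing costs plus the cost of future change,” and every number is a hypothetical placeholder rather than real pricing.

```python
# Illustrative only: a toy model of the lock-in framing above.
# All figures are hypothetical placeholders, not vendor pricing.

def lock_in_exposure(decision_risk, annual_run_cost, years, switching_cost):
    """Rough exposure = risk of the decision itself
    + ongoing costs over the commitment period
    + estimated cost of a future change (migration, retraining, rework)."""
    return decision_risk + (annual_run_cost * years) + switching_cost

# Compare two hypothetical platform choices over three years.
open_source = lock_in_exposure(decision_risk=50000, annual_run_cost=120000,
                               years=3, switching_cost=80000)
commercial = lock_in_exposure(decision_risk=20000, annual_run_cost=150000,
                              years=3, switching_cost=60000)
print(open_source, commercial)  # "open" is not automatically the cheaper path
```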

How important is multi-cloud interoperability to your future IT plans?

How confident are you that multi-cloud technology will be available to your business when the time comes for those capabilities?

December 24, 2012  8:00 AM

Be Aware of Your Cloud Computing Costs

Brian Gracely

Several years ago, it wasn’t unusual to hear people say that public cloud computing was significantly cheaper than trying to build a private cloud computing environment. This was mostly because people would see the cost per hour (in USD pennies) and immediately assume it was significantly less than the large CAPEX bill they recently paid for racks of equipment in their own data center.

But over time, as more applications were run in cloud computing environments, people began to understand a few basic cost principles:

  • For short-term projects, public costs are often less.
  • For high-capacity projects (100s to 1,000s of servers), or highly variable projects, public costs are often less.
  • For long-running projects, private costs are often less.
  • For limited variability projects, private costs are often less.

There will be plenty of people who can come back with examples of similar projects where the costs are higher (or lower) in one environment or the other. In fact, people using cloud computing often find that it doesn’t significantly reduce costs over time. Instead, the prevailing ROI is beginning to be measured in levels of agility, better application performance, or time to market for new ideas (or applications).

Regardless of whether a company runs its applications in public clouds or private clouds, it’s important to understand how costs are incurred. Today, it’s still difficult to make an apples-to-apples comparison between environments, as there is not a consistent unit of measurement or a consistent list of which costs are included. [See this video for a short explanation of why cloud computing still isn’t priced like commodity markets, or listen to this podcast] Continued »
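
As a rough illustration of the principles above (and of why the comparison is hard), here is a minimal break-even sketch. Every figure is an assumption invented for the example, and it deliberately ignores power, staff, licensing, data transfer and the other categories that make real comparisons so difficult.

```python
# Illustrative only: a rough break-even comparison between renting public
# cloud capacity by the hour and amortizing a private environment.
# All prices are made-up assumptions, not a real rate card.

HOURS_PER_MONTH = 730

def public_cost(instances, price_per_hour, months):
    # Pay-as-you-go: cost scales linearly with usage.
    return instances * price_per_hour * HOURS_PER_MONTH * months

def private_cost(capex, monthly_opex, months):
    # Up-front capital plus a steady operating cost.
    return capex + monthly_opex * months

for months in (3, 12, 36):
    pub = public_cost(instances=20, price_per_hour=0.50, months=months)
    priv = private_cost(capex=150000, monthly_opex=2500, months=months)
    cheaper = "public" if pub < priv else "private"
    print("%2d months: public $%8.0f   private $%8.0f   -> %s cheaper"
          % (months, pub, priv, cheaper))
# With these assumptions the break-even lands around month 31:
# short-term favors public, long-running favors private.
```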


December 17, 2012  10:31 PM

Is IT the Last Major Open-Source Project?

Brian Gracely

Traditionally, internal IT organizations have leveraged proprietary software, while public Internet services have been built using open-source software. Some of this behavior was driven by legacy technology decisions: continuing to work with previous vendors because of existing applications. Some of it was driven by the availability and cost of technical skill-sets. And some of it was driven by the actual pace of change that could be managed by a large organization vs. an external provider.

As I wrote last week, there is a looming interest from Enterprise IT organizations to consider using some of the open-source Cloud Management frameworks to help them deploy Private or Hybrid Cloud services for their organizations.

Whether or not IT organizations actually deploy any of those open-source frameworks for their Cloud Computing environments remains to be seen. But regardless of whether that happens, I might suggest that IT should look at its operational model the same way that open-source projects are operated. By embracing many of the core principles of open-source projects, IT organizations gain the possibility not only of delivering more responsive services to the business, but also of keeping pace with the improving technical skill-set of their user base.

I’ve discussed before how some of these functions will be skills of people in your IT organization; let’s look at some of those principles and see how they could be applied to IT operations: Continued »


December 11, 2012  12:06 AM

Enterprise Cloud Management – Still a Wide-Open Game

Brian Gracely

Sometimes it’s easy to get caught up in the Twitter echo chamber, believing that whatever is good for Google, Facebook or Netflix must also be good for your Enterprise environment. All new applications, built to scale and running on commodity hardware. But the other end of the spectrum is the conversations that take place at the Gartner Data Center conference, where Fortune 500 companies and less bleeding-edge IT organizations look for guidance on how to keep up with their business demands.

From those discussions, I noticed a couple of interesting polls/images.

The first one was posted by Chris Wolf (@cswolf, VP @ Gartner). While it only used about 100 data points, it does highlight that Enterprises are expecting their Cloud Management platform to come from someone other than the traditional “Big 4” vendors (IBM, HP, CA, BMC).

The second one was summarized by Lori MacVittie (@lmacvittie). This asked the same question, but with a slightly different set of potential responses. It’s not completely unusual to see VMware leading, given their dominant market-share for the hypervisor (one of the new control points in the data center). But what was interesting was that almost 30% of the respondents said they were considering open-source options (OpenStack, CloudStack). This survey also showed even less interest in the incumbent vendors.

Neither of these surveys explicitly called out some of the start-up Cloud Management vendors, such as Cloudability, enStratus, Rightscale or ScaleXtreme, which tend to be focused on multi-cloud management, with public cloud being the primary deployment model. I suspect that is because the Development audience is still different from the IT Operations audience, and awareness of the multi-cloud management platforms is lower there. Shadow IT doesn’t typically become a management problem until the number of applications, or the amount of critical corporate data, grows to larger levels.

I’ve written before that the Cloud Providers are coming, and that Enterprise IT organizations would have many, many choices when it comes to Cloud Management. Just looking at the frequency of changes in the Gartner 2012 IaaS Magic Quadrant (from Lydia Leong, @cloudpundit, VP @ Gartner) reinforces the pace of change happening in this market segment.

While both of these surveys have small data sets, they do highlight that the Enterprise Cloud Management game is still wide open to many vendors and many open-source projects. IT organizations are still trying to separate the solutions from all the noise, and they are trying to determine how they will best be able to deliver IT services from a broad range of available resources.


December 3, 2012  10:00 AM

Enterprise IT: The 80/20 Budget Dilemma (Part II)

Brian Gracely

[This is the second in a series of blogs focused on strategies to address the 80/20 Budget Dilemma in Enterprise IT]

Last week, I talked about the challenge of demand outpacing capacity and how IT departments could push for business-unit-funded projects. That could be viewed as outsourcing IT, or as leveraging all available resources. Either way, it’s letting the business get things done on its own schedule.

Today I want to explore an alternative that actually enables “Shadow IT“, while continuing to allow traditional IT departments to have visibility, governance and cost-awareness. I’ve used the phrase “Cloud Concierge” in the past, but the concept and technology continue to evolve and have potential.

The concept goes like this (a rough sketch of what a portal request might capture follows the list):

  • The IT department sets up a framework that allows any business unit to consume IT resources via a centralized portal. That portal might have a set of pre-packaged applications, or it could provide access to *aaS resources from multiple clouds. Companies like Cisco, VMware, Rightscale, enStratus and others provide these types of products.
  • The business units would be buying the services (internally or externally) with direct funding. This allows them to align costs to potential revenue streams.
  • The business units have access to (virtually) unlimited capacity, whether it’s internal or external cloud services, to drive their projects or just experiment on new innovations.
  • The business units can leverage new development models on top of those cloud resources, especially for new social, mobile or analytics applications.
  • The IT department has the option to enable virtual-private-cloud functionality (or “hybrid cloud“, depending on your definition) if the application requires access to internal data or authentication mechanisms.
  • Because the centralized portal, not the actual services, is maintained by the IT department, IT retains a level of visibility into new projects and new resource demands. It can use that for future capacity planning, future budget planning, or future skills-training needs. It can also monitor which resources might need additional assistance for governance or compliance.
  • For external services (eg. from public clouds), IT departments can educate the business units about additional services they should leverage to maintain security, cost-awareness or connect to 3rd-party applications.
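
A minimal sketch of the idea, assuming a made-up catalog and request model; real products from the vendors mentioned above expose far richer catalogs, approval workflows and cost controls.

```python
# Illustrative only: a toy "Cloud Concierge" portal model. The catalog,
# fields and prices are hypothetical, not any vendor's actual product.
from dataclasses import dataclass, field
from datetime import datetime

CATALOG = {
    "web-app-stack":  {"cloud": "internal", "unit_cost_per_month": 400},
    "analytics-iaas": {"cloud": "public",   "unit_cost_per_month": 900},
    "crm-saas":       {"cloud": "public",   "unit_cost_per_month": 55},
}

@dataclass
class ServiceRequest:
    business_unit: str      # who is paying (direct funding)
    cost_center: str        # where the charge lands
    catalog_item: str       # what they are consuming
    quantity: int = 1
    requested_at: datetime = field(default_factory=datetime.now)

REQUEST_LOG = []            # IT keeps visibility without owning the service

def submit(request):
    """Record the request for capacity/budget planning and return the
    estimated monthly charge-back to the business unit."""
    item = CATALOG[request.catalog_item]
    REQUEST_LOG.append(request)
    return item["unit_cost_per_month"] * request.quantity

monthly = submit(ServiceRequest("Marketing", "CC-1042", "analytics-iaas", quantity=3))
print("Estimated monthly charge-back: $%d" % monthly)
```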

Continued »


November 26, 2012  10:00 AM

Enterprise IT: The 80/20 Budget Dilemma (Part I)

Brian Gracely

[This is the first in a series of blogs focused on strategies to address the 80/20 Budget Dilemma in Enterprise IT]

It’s widely acknowledged that the majority of IT organizations spend about 80% of their budget maintaining existing applications, leaving only 20% for new projects or innovations to improve the business. In the past this was acceptable because the pace at which technology impacted business was slower, but over the last 5 years that pace has radically increased. The flexibility, or inflexibility, of the 20% is often cited as the reason that many business leaders are growing increasingly frustrated with their IT organizations.

In today’s world, every new business project is an “IT project”, requiring new functionality that enables web, collaboration, mobility, analytics or social services. Each of these new IT demands is built on a business model that expects the new IT services to be delivered in a short timeframe, often with the expectation that IT has infinite levels of available resources. These projects are creating a surplus of demand against the perception of limited supply.

Given that many organizations are looking at relatively flat budgets in 2013, what are some of the alternative strategies that IT organizations can explore to better address these challenges?

Business Unit Funded Projects

While an internal IT organization may be a finite resource (capacity, budget, people), this does not mean that IT (technology) resources are not available to address these shortcomings. In fact, the variety of public cloud computing services might be the ideal alternative once internal IT reaches its capacity to serve the business. Many public cloud resources can be obtained on-demand (OPEX-only), meaning that they can start when the business project requires it, and they can be directly aligned to the usage objectives of the business. These public cloud services could be Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS) offerings from various providers.

So how might this type of strategy help with the 80/20 budget dilemma? Continued »


November 22, 2012  12:21 AM

“Lock-In”: More than just Vendors, Standards and Open-Source

Brian Gracely

A funny thing happened on the way to the next wave in IT technology – Cloud Computing – people seemed to want to combine the portability of a time machine with the anything-goes freedom of the 1960s. Not only do people want the benefit of unlocking IT service delivery from capital expense budgets, but more and more they also want the flexibility to move the source of those services to any location or any provider.

Then reality sets in and they are faced with the challenges of Data Gravity, multiple variations of APIs, multiple variations of projects, and other nuances that highlight that we’re still quite a long way from being able to switch between any cloud computing service as simply as we plug into any Ethernet port.

So given that we’re still in the early days of this next wave, what is an appropriate level of concern to have about “lock-in” vs. focusing on bringing new value to the business via technology? And are there specific areas where lock-in might be more of a concern than others?

In the past, we often looked to standards bodies (e.g. IEEE, IETF, W3C, etc.) to define interoperability. But in today’s world, there are as many best practices being defined in open-source projects as in any standards body, so companies now need to decide whether they value pace of change over standardization. IT organizations have traditionally leaned towards risk mitigation as the higher priority, which is often in conflict with their business-side counterparts, who want results now and aren’t thinking about the management and maintenance of legacy systems years into the future.

In some cases, we’re seeing levels of standardization being defined by loosely organized committees (e.g. Open Data Center Alliance; podcast) or by groups of providers looking to establish market baselines (e.g. Cloud Foundry Core; podcast). Some companies view efforts like this as a good thing, as it moves them farther along towards some level of interoperability, but there are still some in the industry who are never satisfied with the level of “openness” or who look for ulterior motives behind committees or vendors.

With bleeding edge technologies, we’re often faced with the challenge of no existing standards since the technologies are so new that a focus on standardization has not been anyone’s priority. Technologies like Software-Defined Networking (SDN) fit this category today.

Ultimately the answer to the trade-off between complete standardization and keeping up with change should be driven by business needs. Waiting for a piece of technology to become completely standardized or be “completely open” means trading off business opportunity now. Demanding the freedom of portability, whether it’s at the application or provider level, typically means giving up some of the advanced functionality that made the technology unique for the business needs at the time.

Inevitably, change creates costs: either opportunity costs, capital costs or operational costs. Whether that change is high in the stack (applications, development frameworks, middleware) or down in the technology weeds (management software, operational process, worker retraining, etc.), there is always going to be a cost. It’s critical that both the business and technology “costs” are evaluated and considered when trying to determine whether to focus on standardization or openness. Neither comes for free.


November 12, 2012  11:00 AM

Why the Hype about SDN?

Brian Gracely

So far this year, we’ve seen a number of major announcements that have led with the term “Software-Defined Networking” (SDN).

After years of being a critical, albeit some would say boring “plumbing” technology, the industry seems to be abuzz about networking again. “Networking is sexy”. But why?

First, let’s start with the economics and then we’ll move to the technology.

There’s an old saying that “one man’s profit margin is another man’s business model.” Loosely translated, many companies would like a piece of the business model that Cisco has been successful with for 20+ years, at very high margins. This strategy kicks in when we have major technology transitions (1Gb to 10Gb Ethernet; data center consolidation; Big Data applications, etc.).

Companies that tried to compete with them using custom-ASIC-based solutions have failed. It’s too expensive to build your own ASICs unless you have the scale of Cisco. Other companies are trying to leverage “merchant silicon” (Broadcom, Intel) and add differentiation via software. This can reduce the acquisition cost of the hardware, especially as designs move from System-on-Chip (SoC) parts to full platform designs.

But the real cost of networking has always been in the “intelligence”. In the past, this intelligence was always tied to specific vendor hardware. But this trend is beginning to change. SDN is about moving some (or all) of the intelligence into independent software functions that don’t reside on the networking hardware. And when the intelligence moves into software, it changes the economic model because it now becomes about flexible licensing or open-source instead of purely capital expense (CapEx) purchases.

So SDN is about changing the economics of networking, whether that’s for a new networking vendor or for companies that are looking to change the percentage of IT spending that goes to networking hardware. But what about the technology side of the equation? Does SDN really do something different than the “virtualized” networking technologies of the past?

This is where the industry is still debating many details and definitions. To start, we’ve had networking functionality in software for quite a while. Beyond the jungle of “networking appliances” (network functionality running on an x86 platform), there have been “virtual switches” running in hypervisor-enabled servers for almost a decade. VMware vSwitch, Open vSwitch, and Cisco Nexus 1000v are just a few of those “products”.

New networking technology wouldn’t be hype-worthy if it didn’t involve some new protocols. Protocols are the lifeblood of networking, so let’s add a few new ones to the stew. How about OpenFlow, VXLAN, NVGRE, STT, and LISP just to name a few (note: you can thank me later for the great sleep you’ll get after reading all those drafts). These new architectures and protocols allow IT organizations to overlay new logical networks over the existing physical networks to enable things like VM mobility. They also offer the ability to separate the control plane (“the intelligence”) from the data plane (“the packet plumbing”). In theory, this means that the control plane software could come from one vendor (or open-source project) and the data plane hardware from another vendor. This means the potential for customization and flexibility.
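
As a conceptual sketch only (not any particular controller’s API), the control/data plane split looks roughly like this: the controller holds the policy and pushes simple match/action entries down to switches that do nothing but look them up.

```python
# Illustrative only: a toy separation of control plane and data plane.
# Real SDN controllers (e.g. OpenFlow-based) use much richer match/action
# structures; the point is simply that the forwarding "intelligence" can
# live in software, apart from the device moving the packets.

class Switch:
    """Data plane: matches packets against whatever flows were installed."""
    def __init__(self, switch_id):
        self.switch_id = switch_id
        self.flow_table = {}                 # destination -> output port

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        return self.flow_table.get(dst, "drop")

class Controller:
    """Control plane: decides where traffic should go and programs switches."""
    def __init__(self):
        self.policies = {}                   # (switch_id, dst) -> output port

    def program(self, switch, dst, out_port):
        self.policies[(switch.switch_id, dst)] = out_port
        switch.install_flow(dst, out_port)   # push the decision downstream

ctrl = Controller()
edge = Switch("edge-1")
ctrl.program(edge, dst="10.0.0.5", out_port=2)   # software decision
print(edge.forward("10.0.0.5"))                  # data-plane lookup -> 2
```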

But ultimately, the question is whether the hype is really hype-worthy. Does SDN solve problems that were previously unsolvable, or solve them in a way that makes a significant economic change to how IT delivers services to the business? This is an area that is still open for debate, and it will be one of the uber-hot topics in 2013. If you’re currently a greenfield project, a very large Enterprise or a warehouse-scale data center operator, then SDN is probably near the top of your list to explore, both for the technology and the economics. If you’re a developer-centric IT organization looking to create better synergy between your development and operations groups by using more automated software technologies, then you’re probably exploring SDN. If you’re a company like VMware that wants to compete with public cloud services like AWS and believes that all the functionality around a VM (or application) must be centrally controlled and automated, then SDN is a top-of-mind strategy for survival. But for the rest of the industry, SDN is a new shiny rock that might be gold, or might just be a distraction from time spent on other business-impacting technologies.

Software is beginning to eat the world and the evolution of the network is a natural extension to this trend. But how fast it will happen still has a long way to play out.


November 5, 2012  11:00 AM

Two Views of OpenStack – From Beginners to Experts

Brian Gracely

This past week I had the opportunity to have two very distinct conversations about OpenStack, the evolving open-source cloud computing project. One from the beginner’s perspective, and another one from the expert’s perspective. Or another way to look at it, one from the Enterprise perspective and one from the open-source community perspective. The first conversation was with Scott Lowe, a recognized VMware expert and noted author, who is beginning to shift his technology focus from traditional Enterprise tools to more emerging infrastructure technologies. The second conversation was with Jesse Andrews, one of the founding developers of the OpenStack project and an expert in deploying open-source software. There were several key points that I took away from these two conversations:

  1. Learning Curves  – With any new technologies, there are distinct learning curves to grasp new concepts, uncover unexpected problems (or deficiencies), and come up with best-practices for solving common problems. From the Enterprise perspective, OpenStack involves an understanding of different terminology from what is used by vendors like VMware, managing through a lack of documentation and getting a working environment up and running from the bits of multiple projects. From the open-source perspective, it was the evolution of the project from simple compute and storage projects to a series of loosely interconnected projects that needed to incorporate complex functionality for identity, networking, authentication and storage.
  2. Loosely Coupled vs. Tightly Coupled Architectures - While OpenStack contains the word “Stack”, which some might take to mean that all the elements should be deployed together, this is not necessarily the case. Unlike some other cloud computing projects or products that are either a single entity, or tightly coupled and dependent, OpenStack is designed to be loosely coupled. While it is possible to deploy all the elements of OpenStack to deliver a set of services, it is not required*. In fact, some early users have deployed certain OpenStack elements as standalone services (eg. Storage as a Service; see the sketch after this list). While this loose coupling means that it could be more complex to initially test or deploy, it also has the possibility for greater extensibility with external products and services in the future. It’s a classic design trade-off that will be interesting to watch as more companies decide which cloud computing architectures best solve their business challenges.
  3. Virtual Data Center vs. Cloud Computing – When you talk to people with experience in deploying hypervisor-centric architectures, such as VMware, you often hear them talk about deploying a “Private Cloud”. These architectures take existing applications, place them on a Guest OS within a Virtual Machine (VM), and then leverage the capabilities of the hypervisor to provide various levels of availability for the application. This aligns very well with traditional Enterprise architectures, where there is an application expectation that the infrastructure will be robust and highly available. On the other hand, when talking to people deploying open-source cloud computing software, they are often looking to do something with an application that wasn’t possible before. Their architectural mindset is that the application should be able to survive infrastructure failures. They are building new applications and are shifting the failure profile from infrastructure to application, expecting the infrastructure to use lower-cost hardware that can fail more frequently. This group often calls the hypervisor-centric approach “Virtual Data Center”, and the application-centric approach “Cloud Computing”.
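
To make the standalone-service point concrete, here is a minimal sketch of consuming only the object-storage piece over plain HTTP in the Swift style. The endpoint, token and file names are placeholders rather than a real deployment, which would hand out the storage URL and token via its identity service.

```python
# Illustrative only: using the object-storage service by itself, without
# deploying compute or the rest of the stack. URL and token are assumed
# placeholders; a real deployment issues both through its identity service.
import requests

STORAGE_URL = "https://storage.example.com/v1/AUTH_demo"   # assumed endpoint
HEADERS = {"X-Auth-Token": "replace-with-a-real-token"}    # assumed credential

# Create a container, upload an object, then read it back.
requests.put(STORAGE_URL + "/backups", headers=HEADERS)
with open("app.cfg", "rb") as f:
    requests.put(STORAGE_URL + "/backups/app.cfg", headers=HEADERS, data=f)
resp = requests.get(STORAGE_URL + "/backups/app.cfg", headers=HEADERS)
print(resp.status_code, len(resp.content))
```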

I’ve been following the evolution of OpenStack for the past couple of years. It’s interesting to watch as technology experts from various backgrounds begin to explore if  these technologies make sense for their business. In some cases they are making an “all-in” bet, while others are exploring to see if it offers functionality they can’t already get from commercial offerings. I’m curious to hear if our readers have interest in OpenStack and how they view this emerging cloud computing project.

* OpenStack "distributions": TryStack (online), DevStack, Rackspace "Alamo", Piston Cloud "Airframe", Ubuntu OpenStack, SUSE OpenStack, Red Hat OpenStack, Debian OpenStack, Fedora OpenStack, Cisco OpenStack, Cloudscaling (apologies if I missed any…)


October 29, 2012  9:00 AM

What 21st Century Bit Factories can learn from 20th Century Widget Factories

Brian Gracely

I’ve written before that in today’s digital economy, I consider data centers to be “21st Century Bit Factories”.  They are this century’s engine that drives knowledge, commerce, communications, education and entertainment. And as these modern factories become critical elements in our global economy, it’s important to look back at the evolution of these environments to see how companies can leverage public and private data centers (those “cloud” things) to drive greater business opportunity and competitive advantage.

[NOTE: I'd recommend watching the two videos from Simon Wardley before continuing, as it's important to understand that what we're seeing in the evolution of computing is not unique to the computing industry. It happens to most products and markets over time.]

I believe there to be four critical elements to consider in looking at the parallels between 20th and 21st century factories:

  1. Original Factory Designs – Flexible Containers and Assembly Lines
  2. Supply Chains – Getting In and Out of the Factory
  3. Evolving Factory Operations – Automation and Quality Improvement
  4. Evolving People-Centric Process – Skills and Feedback Loops

Just as manufacturing companies leveraged Deming’s teachings to drive process efficiency for the cost and quality of their physical products, so are modern data center operators applying similar thinking to reduce cost and improve response times for their services. PUE is replacing JIT as the input metric. DevOps is replacing Lean as the organizing structure. Fail-fast is replacing Six Sigma as the quality measurement.
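
For readers who haven't run into the metric, PUE (Power Usage Effectiveness) is simply total facility energy divided by the energy delivered to the IT equipment; a quick sketch with made-up numbers:

```python
# Illustrative only: PUE = total facility energy / IT equipment energy.
# An "ideal" facility approaches 1.0; the inputs below are invented.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(round(pue(total_facility_kw=1500.0, it_equipment_kw=1000.0), 2))  # 1.5
```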

Continued »


