From Silos to Services: Cloud Computing for the Enterprise


June 14, 2013  10:09 AM

Should Cloud Management be delivered as a SaaS application?

Brian Gracely

At times I’m a little bit disjointed in how I collect and process information. A nugget here, a news story there, and then a comment to tie together a few fleeting thoughts.

A few weeks ago, Dell acquired Enstratius. They make excellent software for managing and governing multiple cloud environments (via the APIs), and we’ve had their leadership team on the podcast a few times (here, here). They primarily delivered their software as a SaaS application, although it could also be run on-premise. Since the acquisition, Dell has shifted their Public Cloud strategy, and Enstratius products are now the core of a plan to let customers leverage resources from multiple cloud platforms. And I believe the fact that it can be delivered as a SaaS application was key to making that shift. It made it much simpler for customers to begin the process of consuming cloud resources, instead of having to set up tons of equipment (hardware/software/security) on-premise.

Last week, a friend that works quite a bit with VMware environments sent me this “BOM” (Bill of Materials) for a reasonably sized setup to create the full vCloud Suite (vCenter, vCloud Director, vChargeback, vCAC, vCNS, vCO). What jumped out at me was the breadth of things that had to be in place to get a Cloud environment up and running: Windows, Linux, web servers, multiple databases. This isn’t uncommon for any Cloud Management Platform (CMP) – OpenStack, CloudStack, etc. – and would typically require teams with a variety of skills to coordinate getting it all configured properly.

  • 20 Management servers: 2x vCenter, 2x DB servers for vCenter/vCloud/vChargeback, 2x vCloud Cells, 2x vCNS Manager, 2x DB for vCAC, 2x WebServers for vCAC, 2x vCAC DEM Orchestrators, 2x vCAC DEM Workers, 2x vCAC Agent Machines, 1x vChargeback server, 1x vCO
  • 8 Databases: 2x vCenter Update Service, 2x vCenter, 1x vCloud, 1x vCAC, 1x vChargeback, 1x vCO
  • 7 Management Interfaces: 2x vCenter, vCloud, vCNS, vChargeback, vCAC, vCO

And again last week, Gartner analyst Alessandro Perilli (@a_perilli) tweeted:

Perilli is one of many people at Gartner that covers the Cloud Management Platform space, so he gets to see the breadth of offerings in the market from many vendors.

So this ultimately begs the question, “Should Cloud Management be delivered as a SaaS application?” Continued »

June 5, 2013  11:00 AM

Are Cloud Operations Transferable?

Brian Gracely

Remember “Cloud in a Box”? It has come in various iterations over the past 4-5 years:

  • “Pre-Defined” or “Pre-Validated” all-in-one racks of equipment
  • Hardware Reference Architectures
  • Hardware + Software Reference Architectures
  • Software-Only Design Docs that are “hardware agnostic”

Lots of vendors and systems integrators have promised to deliver a cloud to their customers in a pre-defined package, but it is typically missing one critical component – OPERATIONS. And when you think about it, cloud computing is fundamentally an operational model, so it begs the question – are cloud operations transferable?

Recently GigaOm wrote about the latest round of Cloud Providers trying to bring their version of cloud to a new set of customers. Others have tried this or are currently using a similar strategy for both IaaS and PaaS platforms, including Apprenda, Joyent, Rackspace, Virtustream, and VMware (both CloudFoundry and vCloud).

While operations does include elements of technology, the vast majority is driven by people skills and company-specific processes. This is why you’ll see cloud pioneers like Netflix open-source many of their internal tools, or Rackspace give up leadership of OpenStack to the OpenStack community: their expertise and learning-curve advantages are in the operations of their cloud environments. The AWS APIs are open (documented), but they don’t expose much of Amazon’s internal operations.

But what makes the operations so difficult to transfer? Continued »


May 26, 2013  12:04 PM

Transforming IT or Transforming the Business?

Brian Gracely

An interesting discussion took place on Twitter yesterday, spurred by one of my favorite industry analysts (Simon Wardley, @swardley). I’ve written about his ideas, analysis and outstanding blog before.

[Embedded screenshot of the Twitter discussion]

While it was an excellent discussion and did surface a few fringe industries that might fall into this category (undertaker, local hair salon, etc.), the general consensus was that every business today is essentially a tech business. It’s fairly easy to highlight this with companies whose core product is technology (e.g. Netflix, Facebook), and it’s also not difficult to see how technology is core to businesses that don’t make technology-centric products (e.g. tractors or farm equipment).

For example, if you’re John Deere (just an example; I don’t have insider information on their operations), you obviously have a complex supply chain in place to pull together all the elements that make up a tractor. You also need sophisticated analytics systems to forecast sales, raw-material costs, currency exchange rates and other macro-level factors that could be affected by the economy, government policy, etc. Then within the tractors themselves, there is an ecosystem building around tools and applications that can help farmers better manage their fields. Somewhere, all the data being collected could be creating new “big data” knowledge that improves crop yields, tractor fuel efficiency, etc.

But in an example like this, what is being transformed by technology? Is this transformation of IT, or transformation of the product/ecosystem, or transformation of the business?

When I think about “transforming IT”, I tend to think about the adoption of new technologies, reducing costs and improving worker productivity.

When I think about “transforming the product/ecosystem”, I tend to think about making data accessible via open APIs, or expanding areas where value can be added to a product (customization, etc.).

When I think about “transforming the business”, I tend to think about using technology to eliminate an element of the existing supply chain. Netflix is a great example of this (removing the need to obtain physical media at a store or kiosk). iTunes is another great example (remember record stores??). Sports has been going through this for the last 10-15 years. But it’s harder to think of examples of these types of transformations where the business isn’t entirely based around technology. It does create some blurred lines, such as how Bechtel is using technology to enhance how they manage contractor relationships and project management.

So this is one of those areas where definitions might actually be more of a hindrance than a structure to help companies understand how to plan for transitions or transformations. Regardless of what definitions people use, I’d suggest that there are a few areas that need serious consideration:

  1. Do you understand your supply chain, and have you gone back and examined it lately with an eye towards how technology (or technology partners) might be able to help reduce links? Have you looked at how new technology could enhance the existing portfolio? [Example: With airlines struggling so much, why haven't any of them leveraged their footprint at major airports plus telepresence technologies to augment the output of a large percentage of travel - business meetings?]
  2. Do you understand the potential ecosystem that could be created around your business, your product or (most importantly) your data? We explored the data element on the podcast. This is actually another variant of the supply-chain discussion, but it involves thinking about the value-chain for the business and considering the opening of new doors and some loss of control of the outcome.
  3. Can you build a better mousetrap? Most people thought Apple was crazy to begin building physical stores when the Internet had supposedly proven that brick & mortar retail was dead. But they just did retail better than everyone else on the planet. They drew a direct line from the customer’s experience (which involved convenience, repair, physical touch/feel) and aligned it to their goals (don’t sell at massive discounts). Zappos took a similar approach with low-margin shoes – focus on the experience and the inconveniences of the past, and leverage technology to solve those customer pain points.
  4. Can you measure every element of the business, not just the things that are reported on the financial reports? Do you understand the things that influence the direct results or the buying process or satisfaction levels? Given that every aspect of our lives is now recorded digitally, there is an extremely good chance that the information is available (directly or through external services).
  5. The underlying technology is obviously important, but as we’ve seen time and time again, it’s the process and people that need to embrace the change more than the technology. Technology change/transformation is a given. It might take 5yrs or 10yrs, but it’ll happen. Process and business model change doesn’t have EoL dates, it has Chapter 11.

So as usual, Simon Wardley is right about this. But the question becomes: are you transforming technology, or transforming how the business leverages technology? They aren’t the same, no matter how many times companies tell you the CIO deserves a seat at the table with the decision makers in the company.


May 25, 2013  7:47 PM

Will DevOps fix Enterprise Clouds?

Brian Gracely

I’ve written before that I’m not entirely convinced that the “Build Your Own Cloud” movement is going to be successful, especially if the goal is to enable IT as a Service instead of just virtualizing applications (or the network, or storage, or whatever is the newest Software-Defined *). The amount of change needed to get to the operational state of IT as a Service is just massive, and the majority of it has very little to do with new technology. It breaks ITIL. It breaks existing technology silos in IT. It breaks the current IT budgeting models. And it involves the intersection of change and people, which we all know can have a few challenges.

But as a good disciple of Cloud Computing, I went and read Gene Kim’s excellent book on DevOps, The Phoenix Project. We also had him on the podcast. Travel permitting, I attend the excellent sessions at the Triangle DevOps meetups, run by a bunch of people who do DevOps stuff every day (operationally, or developing products like Chef, Ansible, etc.). When possible, we grab some of them for the podcast as well.

So here’s where I keep seeing the disconnect. For a while, the discussion about Enterprise Cloud was always focused around Private Cloud, which oftentimes meant something using VMware vSphere (and maybe vCloud Director). But when I listen to all the people doing DevOps today, it’s rarely on VMware (or even VM-based) environments; it’s almost entirely using various Linux-based tools and packages, and it’s primarily for web-based or web-facing applications. Not the things you’d typically associate with IT-supplied applications. We explored this a while ago, but I don’t know if it’s changed much, even with things like Windows support being added to tools like Puppet. Continued »


May 12, 2013  10:23 AM

New Hybrid Cloud Models Emerging

Brian Gracely

Back in 2007-2008, when the concept of “Private Cloud” began to emerge as a DIY model for evolving IT, there was concern that companies would be locked into a Public-Only or Private-Only decision. Given the maturity of the technologies and IT skills at the time, this created a strategic problem. But then, like a double rainbow made of Skittles and fruity drinks, the concept of “Hybrid Cloud” magically appeared as the unicorn that would provide “the best of both worlds” for long-term IT strategy.

  • “Own the Base and Rent the Spike”
  • “Cloudbursting”
  • “Application migration”
  • “Application Portability”
  • “Avoid Lock-in”

Pick your favorite slogan; they were all there. Throw in the ability to dynamically migrate workloads between physical locations, and there was a frenzy of excitement over the possibilities of a Hybrid environment.

And then reality set in and people began to realize that the limitations far outweighed the possibilities. Limited Bandwidth. Security Concerns. Ownership Issues. Consistency of Operations. Early offerings such as Amazon AWS could provide a VPC (Virtual Private Cloud), but it had limitations (or “wish-lists“). CloudSwitch did some cool things (since acquired by Verizon/Terremark), but it was cloaked in a security story and hence didn’t get as much visibility as it could have from “Cloud Architects” at the time.

It also led to an explosion of definitions of Hybrid Cloud, mostly to match the needs of a vendor selling their HW/SW, or an Enterprise Architect trying to justify their design to their CIO. Either way, it’s evolved to where Hybrid Cloud can mean any mix of offerings or architectures where the resources and applications are both on-premise and off-premise. And if I squint my eyes just right, my borrowed concept of “Cloud Concierge” might even fit one of these definitions.

Fast forward to 2013 and we’re beginning to see a new set of Hybrid Cloud offerings emerge that are backed by both evolving technology and vendors large and small. They tend to fall into these categories: Continued »


May 5, 2013  8:28 AM

What is OpenStack?

Brian Gracely

The OpenStack Summit took place a couple weeks ago in Portland to announce the “Grizzly” (G) release and to begin the design activities for the “Havana” (H) release (due in Fall 2013). Much has been written about the technology and vendor trends, including a few from my colleagues Aaron Delp and Jeramiah Dooley, which I believe do a good job of highlighting the transition that’s happening in the community and the technology.

Since then, I’ve had a number of business leaders and financial analysts reach out to ask, “What is OpenStack?”. My first reaction is to give them a little bit of history and an overview of the technology landscape. But I’ve quickly come to realize that this isn’t what they are looking for. From their perspective, they want to really understand where OpenStack fits into the broader IT hierarchy and whether it should become part of their strategic thinking. Here’s a sampling of the follow-up questions they tend to ask:

What does OpenStack actually do?

In the most basic sense, OpenStack is a software framework that coordinates the services needed to provide on-demand computing/storage resources for applications. Those services include computing, hypervisors, storage, networking and security. From a user perspective, if OpenStack is implemented correctly, it should just look like a few menus and clicks that let the application owner say, “I need this many resources to start, then I’ll want to grow or shrink that number over time, and I’d also like a few other services to augment my application (backup, geographic resiliency, load-balancing, etc.).” If OpenStack is used as part of a more complex system, those menu items would be replaced with programmable APIs. [NOTE: This same description could be used for almost all “cloud management platforms” in the market today.]
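
To make that description a bit more concrete, here is a minimal sketch of the “programmable APIs” path for the most basic request of all: boot a server. It uses the current openstacksdk Python library rather than the command-line clients of this era (python-novaclient and friends), and the cloud, image, flavor and network names are placeholders, not anything from a real deployment.

import openstack

# Credentials come from a clouds.yaml entry; the cloud name here is a placeholder
conn = openstack.connect(cloud='my-private-cloud')

# Look up the same building blocks an application owner would otherwise pick from menus
image = conn.compute.find_image('ubuntu-server')    # placeholder image name
flavor = conn.compute.find_flavor('m1.small')       # placeholder flavor name
network = conn.network.find_network('app-net')      # placeholder network name

# "I need this many resources to start" - boot one server on the chosen network
server = conn.compute.create_server(
    name='app-server-01',
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{'uuid': network.id}],
)

# Block until the compute, storage and networking services have all done their part
server = conn.compute.wait_for_server(server)
print(server.name, server.status)

Growing or shrinking that number over time is just more calls against the same APIs, which is why the menu-driven and programmatic views of OpenStack are really the same thing underneath.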

OpenStack is not a company (e.g. Rackspace), although some companies are using OpenStack as part of their commercial services, and some companies are trying to sell packaged versions of OpenStack.

Continued »


April 23, 2013  4:48 PM

Software-Defined vs. Services-Defined

Brian Gracely

In the fall of 2012, VMware announced their “Software Defined Data Center” strategy. It articulated a new plan to help IT organizations become more <agile, nimble, responsive, frugal, insert buzzword> and evolve to delivering “IT-as-a-Service”, with software elements serving as the critical building blocks for infrastructure (VMs, Storage, Networking, Security). It is being targeted at the same buyers that made VMware vSphere purchases in the past – centralized IT organizations and IT infrastructure teams. It’s a strategy that plays to their existing installed base of hypervisors, but it leaves several VMware experts asking “Does VMware know Cloud is all about Developers?”. The “Software-Defined” mantra has since been picked up by many companies in the IT industry as a way to refresh their products or align to their potential buyers.

In 2006, Amazon launched the first of their AWS (Amazon Web Services) services, EC2 (compute) and S3 (storage). AWS was targeted at development organizations looking to change the pace and economics of how applications were developed for the web. Since then, they have rapidly grown the number of services to include databases, long-term storage, DNS, CDN, queuing and many other capabilities. The quantity of services has grown to the point where many people ask if AWS is still an IaaS (Infrastructure as a Service) offering or has moved up to become a PaaS (Platform as a Service). Where it fits into the NIST definition seems to be irrelevant to the architects of AWS, who are focused on delivering a set of scalable services for developers looking to build next-generation applications (web, mobile, analytics, etc.). It’s this structure that recently had Jeff Sussna (@jeffsussna) writing “Services-Dominant Logic: Why AWS is So Far Ahead”.
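
As a rough illustration of that services-centric model, here is a small sketch of a developer composing a few AWS services from a single script. It uses today’s boto3 SDK rather than the boto library of this era, and the region, AMI ID, bucket and queue names are all placeholders.

import boto3

# One SDK, many services; the region and all resource names below are placeholders
ec2 = boto3.client('ec2', region_name='us-east-1')
s3 = boto3.client('s3', region_name='us-east-1')
sqs = boto3.client('sqs', region_name='us-east-1')

# Compute: launch a single instance (placeholder AMI ID and instance type)
ec2.run_instances(ImageId='ami-12345678', InstanceType='t2.micro',
                  MinCount=1, MaxCount=1)

# Storage: put an object into an existing bucket
s3.put_object(Bucket='example-app-assets', Key='reports/latest.json',
              Body=b'{"status": "ok"}')

# Queuing: create a work queue and hand it a message
queue = sqs.create_queue(QueueName='example-app-work-queue')
sqs.send_message(QueueUrl=queue['QueueUrl'], MessageBody='process-report')

Nothing in a script like this cares whether AWS is labeled “IaaS” or “PaaS”; the developer simply consumes whichever services the application needs, which is the services-dominant point referenced above.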

While both of these approaches are being marketed under the umbrella term “cloud computing”, it’s becoming increasingly clear that they are targeting very different groups with very different value propositions.

Continued »


April 14, 2013  4:35 PM

What Do Enterprises Expect from OpenStack?

Brian Gracely

As OpenStack has begun to mature over the past 18 months, there has been some debate amongst the leading developers about the focus of the projects. On one side are those that believe OpenStack is competing with VMware. On another side are those that believe OpenStack is an alternative to Amazon’s AWS. Still others focus on a group of services that could create an open system for interconnecting many clouds.

One of the powerful aspects of an open source project is that developers or companies can take the code and use it any way they choose. Target a certain market. Target certain use cases. Target certain business models.

And in return, users of the software can decide what they want the software to do. They can modify the software if they have a specific need. They can buy packaged versions and use the embedded functionality.

For a project like OpenStack, which is maturing during a time when the market is already full of competing offers, it will often be compared to an existing expectation (or experience) that users have of other products/services.

An example of this is a simple question I posted on Twitter yesterday. In the “Grizzly” release, support for the VMware ESXi hypervisor was added. So I asked:

[Embedded screenshot of the tweet]

The reason for my question is that I’ve heard a number of Enterprise IT organizations say that they are planning to explore OpenStack in the coming year for their Private Cloud (or Virtualized Data Center) environments. Given that VMware vSphere has 60-80% market share in that segment, many of them are also curious about reusing existing investments in hypervisor licenses, and Live Migration has become a standard capability for Enterprise IT organizations and legacy applications. Continued »


April 11, 2013  11:21 PM

IT Evolution follows Historical Patterns

Brian Gracely

This past week, a colleague asked a commonly heard question these days:

[Embedded screenshot of Jason Edelman’s tweet]

Jason Edelman works for a well-known VAR (Value-Added Reseller), with a deep technical focus on emerging networking technologies (SDN, OpenStack Quantum, Open vSwitch, etc.). Not only is he trying to stay ahead of the technology evolutions, but he’s also trying to forecast how changes in consumption models (e.g. cloud computing) and open-source models (free, paid support, etc.) might impact his company.

To give Jason some guidance, I sent him a couple links (here, here) that seemed relevant to VARs. It seemed like a simple way to share some knowledge in 140 characters.

But the more I thought about it, his question really does hit on much larger system-level evolutions. The good news is that IT is like many industries, and we can look to history for how it will likely evolve.

Let’s start with a few very good reads; these two sources should be on everyone’s reading list:

  • Simon Wardley’s blog: Simon (@swardley) is a scientist with a deep understanding of technology, economics and industry modeling. Just start by reading the “Popular Posts” on the right side of the page and you’ll quickly realize that the changes we are seeing align to models that many industries have seen in the past. Simon is an excellent follow on Twitter, and has been a guest on the podcast.
  • Porter’s Five Forces: The classic strategy model provides useful frameworks for understanding supply-chains, competitive strengths and weaknesses, buyer vs. seller leverage and competitors.

Continued »


April 8, 2013  11:12 PM

Is “Build Your Own Cloud” the new IT Gym Membership?

Brian Gracely

Every year, as the New Year’s ball drops and people around the world make their resolutions, health clubs and gyms fire up their marketing machines. Shed those unwanted pounds! Get in great shape! Get your swimsuit body ready for Spring Break!

All people need to do is show up at their gym and they’ll quickly become the envy of their friends and neighbors. Just buy the right clothes, the right shoes and the right electronics. Lured by the promise of a smaller waistline, greater flexibility and improved health, customers line up with their checkbooks for an improved life.

The first couple gym visits go OK. It’s painful, but they lose a couple pounds. They believe a lifestyle change is possible. Then February comes along, and work or travel or family make it tough to get to the gym. The weight loss plateaus, because losing the next 10-15 lbs would require both gym visits AND dietary changes. Being able to look like that athletic guy or girl, doing extra reps each day, would require a full-on lifestyle change. And by May or June, the enthusiasm is gone and you’ve fallen back into your old ways. Sure, you visit the gym from time to time, but getting significantly better is a lot more work than you expected. Maybe next year you’ll follow through with your goals.

Sound familiar, IT folks? Even though we continue to see studies claiming that Enterprise IT organizations are prioritizing their Private Cloud build-outs, the reality is that successful deployments are far fewer than expected, and they’re taking much longer than pontificated.

But how is this possible? You’ve bought all the latest hardware from vendors claiming to have the right “journey to cloud”. You saw some initial cost savings and faster provisioning times for virtual machines. Things were feeling good, but then something happened. Your cost savings began to plateau, and your users continued to ask for services faster than you’ve been able to deliver with the new “cloud”. Continued »


