From Silos to Services: Cloud Computing for the Enterprise


August 17, 2013  1:54 PM

Cloud Evolution – USB 2.0 is a Long Way Off

Brian Gracely

Back when we started The Cloudcast (.net) podcast in 2011, our first guest was Christian Reilly from Bechtel. At the time, he was a couple years into a multi-year process of evolving his internal IT architecture to a private cloud. One of the most interesting comments he made about their evolution was the lack of interoperability between technologies and platforms claiming to be “cloud”. The way he explained it, this new paradigm was at a crossroads. It could either emulate the early Internet walled-gardens of AOL and CompuServe, or it could embrace the open standards that allowed the Internet to expand into every aspect of our lives.

More than two years later, the debate about cloud interoperability is still raging.

These days, there seem to be three camps of thought about how to deal with interoperability:

Same Cloud Everywhere 

The simplest way to think about how to leverage multiple clouds is to run the same technology everywhere, in theory ensuring interoperability across multiple environments. This is the approach being taken by VMware vCloud Hybrid Service, vCloud Service Providers, Virtustream xStream and various implementations of OpenStack. Some of these offerings are also beginning to offer alternative API support (often AWS API capabilities).

Same API Everywhere

Other projects are attempting to take the path of having similar APIs available on multiple cloud instances. This is the approach being taken by The Amazon-Eucalyptus Partnership (Lydia Leong, @cloudpundit), as well as the OpenStack Foundation. It’s important to note that early implementations of OpenStack were not all the same, but the OpenStack Foundation is attempting to remedy some of this through the RefStack program. Even with these efforts, some people are still concerned that OpenStack will become too fragmented and needs a dominant leader (or vendor) to drive its success.

Cross-Cloud APIs

The third camp is interested in creating a more unified set of APIs that would work across multiple clouds. This is the area that has created the most contention and debate, for obvious reasons – complexity, differentiation, competitive markets, etc. Leading cloud user Netflix (AWS’s largest customer) has been actively open-sourcing many of the tools it uses today, in hopes that other cloud providers will be able to create competitive offerings that give it flexibility for its business. Other leading cloud visionaries are looking to drive cross-cloud interoperability – An Open Letter to the OpenStack Community: Our Future Depends on Embracing Amazon (Randy Bias, @randybias). Within the OpenStack community, not everyone believes this is a good idea. This debate seems to be heavily divided between the “innovation” crowd and those that claim that ecosystems can be overtaken through commoditization (Can OpenStack dominate IaaS? (Simon Wardley, @swardley)).

NOTE: It’s important to remember that just emulating or copying an API doesn’t ensure interoperability between clouds. Those APIs must also be built on top of similarly architected systems, otherwise one API call won’t deliver the expected API result of another call. This can be especially challenging if the underlying infrastructure (compute, network, storage) has different capabilities between clouds.

As you can probably see, we’re still a long way away from the possibility of “USB 2.0”-like compatibility between clouds. Technology, politics, competition and money are getting in the way. It will be interesting to see if the vendors and open-source projects find ways to work more closely together, or if customers decide to use various third-party tools (Enstratius, RightScale, Ravello Systems, Cloud Velocity, etc.) to get around the lack of interoperability.
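As a rough illustration of what those third-party abstraction layers do, here is a minimal sketch using Apache Libcloud – an open-source library that isn’t one of the tools named above, but plays in the same space. The credentials are placeholders, and the exact provider constants and constructor arguments vary between Libcloud versions:

    # Minimal sketch: one client-side abstraction over two very different cloud APIs.
    # Assumes Apache Libcloud is installed; keys below are placeholders, not real accounts.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    ec2 = get_driver(Provider.EC2)('EC2_ACCESS_KEY', 'EC2_SECRET_KEY')
    rax = get_driver(Provider.RACKSPACE)('RACKSPACE_USER', 'RACKSPACE_API_KEY')

    # The same method call works against either provider, even though the
    # underlying REST APIs (and the semantics behind them) are not the same.
    for cloud in (ec2, rax):
        for node in cloud.list_nodes():
            print(cloud.name, node.name, node.state)

Note that this only papers over the API differences – it doesn’t make the underlying architectures equivalent, which is exactly the caveat in the note above.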

NOTE: While quite a bit of architecture-level work would need to be done to create interoperability between systems, it’s important to point out some excellent work being done to educate the market – Architecting OpenStack for VMware vSphere (Kenneth Hui, @hui_kenneth).

August 4, 2013  8:34 PM

What is “Enterprise Ready” in Cloud Computing?

Brian Gracely

Probably more than any other question, I get asked whether I believe that Amazon AWS is “enterprise ready“. Sometimes the question comes from analysts trying to determine the extent to which the IT industry is shifting. Sometimes it comes from vendors trying to determine the pace of change/transformation/disruption. Other times it comes from IT organizations trying to determine what their future strategies look like for procurement, service offerings and future skills evolution.

“Enterprise Ready” is one of those loaded phrases that you really need to be careful about using, because the person you’re speaking with typically has a preconceived notion about what it means. For many people, it means that the service essentially emulates all the aspects of an existing Enterprise IT data center – including all the elements of performance, redundancy, security, compliance, etc. In essence, they expect the new environment to functionally be like the world they are used to. What they don’t want are the long delays to get things provisioned, the long meetings with security and compliance teams telling them everything is insecure, or the long budgeting process to procure the required technology. [Insert analogy about eating cake here]

What I try to explain to people when answering this is to think about “Enterprise IT” in two buckets:

  • Bucket 1 – Applications that you typically associate with IT – Email, ERP, HR, Unified Communications, SharePoint, etc.
  • Bucket 2 – Business requests for technology that typically get turned down by IT

Bucket 1 is all about applications that have long, relatively stable life-cycles, where IT is usually trying to balance cost vs. performance. This is a technology bucket. Known equipment. Known capacity needs. Aligns to depreciation cycles. These applications might be a fit to migrate to a public cloud if the business is facing some ‘change event’ (e.g. M&A, equipment EoL, licensing upgrades pending, new CIO, budget challenges, IT skills challenges, etc.).

Bucket 2 is all about the pace of today’s business world. The world where winning and losing is often measured in how quickly we can transition from a great idea to a great implementation of that idea, via technology + business models. These ideas are responsive to the market, to competition and to changes that weren’t planned for in the annual budgeting meeting. At least at the time of the ask, they are often the complete opposite of the Bucket 1 applications – unknown capacity needs, shorter usage duration, unknown scalability.

So what might fit into Bucket 2?

  • VP of Marketing would like a smartphone app for the sales-kickoff or annual tradeshow. They aren’t sure if it’ll get 1,000 or 15,000 downloads (unknown capacity, unknown scale). They would like it to be available 2 weeks before the event, and collect data for up to 1 month afterwards. Beyond that it’s not needed (short duration).
  • VP of Operations just got back from a conference discussing “Big Data” and would like to prototype ways to better analyze sales trends and how they are affected by weather, gas prices, seasonality and a few other sources of publicly available data. He needs the prototype completed in 60 days, as he needs to demonstrate an ROI (e.g. better sales insight) to justify a more expansive project. If the ROI doesn’t materialize, the bigger project might be cancelled – quick timeline, potentially wasted capacity beyond 60 days.
  • CIO tells the lines of business that the existing annual IT budget has been exceeded by Q3, as a major project has gone over budget (it occasionally happens), but one of the lines of business has a major opportunity if a new system can be put in place in time. The opportunity is $5-10M in Q4, with a follow-up of $10-20M in Q1. Pace of implementation is of the essence, but where to do it? Sometimes this is called ‘Shadow IT’, but it’s just a reality of doing business in the 21st century. Global resources exist, so why shouldn’t a business try and leverage them?

At this point you might be asking why I didn’t explicitly mention AWS and “Enterprise Ready”. Hopefully you’ve figured out that there is more to “Enterprise Ready” than just the underlying technology. In today’s world, there is a place in the public cloud (or in existing data centers) for applications with those characteristics. But there is also unmet Enterprise demand for solving business challenges with technology, now. Those Enterprises, those applications are “Enterprise Ready” too. They are just focused on a different characteristic being the most important element of their success.

So how big could Bucket 2 be? It’s tough to tell (long-term), because we often don’t know how big a new technology segment could be until people and companies understand just what is possible without prior constraints. The Client-Server market was 10x the Mainframe market. It’s not unusual for people to have 3-4 connected devices (smartphones, multiple tablets, laptops, etc.).

Cloud Computing helps level out the shortfall between supply and demand for Enterprise IT. Whether or not the unmet demand is “Enterprise Ready” is now as much about pace of implementation as it is about SLAs and IOPS. The forward-looking CIOs are trying to figure out how to deliver both to their Enterprises.


July 30, 2013  11:37 PM

The Beginning or The End of Cloud Computing? It’s confusing…

Brian Gracely

Computing eras tend to last 10-15 years at their high point, and then something else takes over. Each transition leaves a few leaders that adapt and continue, while a large number of the leaders fail and new companies (or open-source projects) emerge to take their place and lead the new era.

  • Mainframes: 1960s-70s
  • Minis: 1970s-80s
  • PCs / Client-Server / LANs: 1980s-90s
  • “Web 1.0” / Commercial Internet: 1990s-2000s

Some might argue that the Cloud era got started after the Internet bubble burst (2000-2001) and early SaaS applications started to emerge. Others might say that it really took the next step when Amazon.com introduced Amazon Web Services in 2006-2007 and brought the idea of utility computing into reality (with an H/T to Douglas Parkhill for his early thinking back in the 1960s). Another group might point to the 2009 emergence of the concept of “Private Cloud” as the tipping point where it became a reality for many IT organizations (“shadow IT in my company?”) and signaled that traditional IT vendors were concerned about protecting their existing installed base (which apparently isn’t gaining much functional traction).

While it doesn’t really matter when this new era started, it is useful to try and figure out where in the transition the industry is today. As people like to ask, “are we in the 2nd inning or the 7th-inning stretch?”

Some would argue that we’ve begun to hit the tipping point, as the legacy vendors are beginning to show the strain and starting to fail. While it’s easy to argue that those applications aren’t going away anytime soon (note: IBM still does $1B+ in mainframes, in the 2010s!!), many quarters of misses do begin to signal that they might have missed the big shifts around cloud and open source and could eventually join the boneyards occupied by DEC, Bull, Sun, etc.

Some would argue that we’re beginning to see the makings of “Cloud 2.0”, where standards need to evolve, interoperability needs to evolve, and we may begin to see the classic battles between two technologies that set the tone for a decade to come (e.g. Ethernet vs. Token Ring, IP vs. ATM, VHS vs. Beta, Blu-ray vs. HD-DVD) – can you say AWS APIs vs. OpenStack APIs?

Still others think the Cloud 1.0 wars are over and it’s time to shift from an industry driven by innovation and vendor-led profit models to one that’s driven by commoditization and the next phase of ideas, economic growth and potential that comes from lower costs and easier access to resources (see: Jevons Paradox).

Confused yet?

Continued »


July 13, 2013  5:54 PM

Change Culture or Move Elsewhere – The IT Decision of the 201x’s

Brian Gracely

In the real world, there are the seven George Carlin words that, if said, will make people uncomfortable, especially if used with the wrong audience or in the wrong context. In the IT world, those words are CHANGE, VALUE-ADD, COMMODITY and AGILITY (in no particular order). Use those words and somebody in the room is going to cringe, or potentially ask you to leave. They are the words that draw the dividing line between vendors, since we mistakenly believe that IT is a zero-sum game in which there are only “winners” and “losers” and the new will always vanquish the old. You do worship at the altar of The Innovator’s Dilemma, don’t you?

They are the words that make IT operators sweat, thinking about the 2 a.m. pager notice they’ll get because the new system, which requires new skills, is operating less than optimally and somebody wants it fixed…now!!

Up until a few years ago, they were words that both the IT sellers and IT buyers knew how to balance to keep the ecosystem fairly healthy and constantly evolving. But a few significant changes have come along – namely “cloud computing” and “open source” – and opened up new options that are disrupting the balance that previously existed. The previous two IT options of build-it-yourself or outsource, both of which used similar technologies and skills, have now been joined by a third – various Public Cloud options (IaaS, PaaS, SaaS, *aaS).

And these new options are forcing the IT conversation to distinctly change from one centered around new technologies to one that’s centered around the pace of change of either IT economics or IT skills/process, and sometimes both.

Let’s just look at a few recent articles:

Both of these articles center around the idea of either building “abstractions” above layers that are deemed “less valuable” (see: don’t VALUE-ADD), or of a less valuable group changing (see: CHANGE) before it becomes irrelevant – oh wait, maybe they could be “strategic”, as long as they quickly learn skills that they never needed before, and fast!! Continued »


July 7, 2013  10:24 PM

Will Hardware Vendors Adapt to Changing Expectations?

Brian Gracely

Earlier this week, GigaOm wrote a post discussing the possibility that the largest web companies would begin designing their own chips (CPUs, etc.). This followed the trend of companies like Facebook, Google, and Amazon designing their own servers and networking equipment, and of efforts like the Open Compute Project open-sourcing designs that could be delivered by ODMs.

While articles like this are interesting for getting a peek into the 0.01% of companies where this is feasible (and needed), since they are running 21st-century bit-factories, the question that seemed to emerge was, “how will this affect the companies that sell hardware for a living?“. I’ve written before about how hardware has been rapidly evolving, especially for cloud computing.

When I read articles like this, I tend to think that this is a macro-level trend that is inevitable. The components within hardware are evolving rapidly, and the net value of the hardware (by itself) is decreasing vs. the associated (or decoupled) software, which is increasing. You can have valid arguments about the timeline over which this evolution will occur, but I believe most rational people will generally agree that the trend in value is shifting towards software and away from hardware. System integration of the two still has its place, but even where that occurs (in the supply chain) is changing as well.

The more interesting question to me is how the hardware companies will respond to this. Of course they will claim that hardware still matters, especially for performance. They may also claim that visibility is needed at both the hardware and software layers. Fine, that’s to be expected. But will we see any actual changes in how they do business, or how they go to market?

The reason I ask this question is that I’m constantly looking to the manufacturing sector to give me clues about the future of IT, since the two are running on parallel tracks, albeit with IT 10-15 years behind.

When Pivotal publicly launched, there was an interesting discussion between Paul Maritz (CEO, Pivotal) and Bill Ruh (VP Global Software, GE). They were talking about how the airlines (directly or via Boeing/Airbus) were now buying engines. The discussion centered around the idea that engines were no longer purchased as capital assets, but rather were now being paid for based on usage. One of the initiatives at GE was to do a much better job of collecting real-time data about the engines to better manage downtime and associated maintenance costs (all things that would affect GE’s ability to collect revenue for the engines’ usage).

This got me thinking – will we begin to see hardware vendors take a cue from manufacturing and charge usage-based pricing for their equipment rather than selling it purely as capital assets (paid directly or via lease)? Will we see them begin to add capabilities to better track how their systems are being used, in near real-time?
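To make the usage-based idea concrete, here is a deliberately simplified sketch of what metered hardware billing might look like. The hourly utilization samples and the per-core-hour rate are invented purely for illustration and don’t reflect any vendor’s actual pricing:

    # Hypothetical usage-based pricing: bill on metered consumption instead of
    # selling the asset outright. All numbers here are made up for illustration.
    from datetime import datetime, timedelta

    RATE_PER_CORE_HOUR = 0.04  # invented rate, not any vendor's actual pricing

    # One month of hourly samples: (timestamp, cores in use) -- busier during
    # the business day, mostly idle overnight.
    samples = [
        (datetime(2013, 7, 1) + timedelta(hours=h),
         24 if 8 <= h % 24 <= 18 else 6)
        for h in range(24 * 30)
    ]

    core_hours = sum(cores for _, cores in samples)  # each sample covers one hour
    monthly_charge = core_hours * RATE_PER_CORE_HOUR

    print("core-hours consumed: %d" % core_hours)
    print("usage-based charge:  $%.2f" % monthly_charge)

The arithmetic is trivial; the interesting part is that the vendor now needs near real-time telemetry from the gear just to be able to send an invoice – the same motivation GE described for its engines.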

The challenge of capital (CAPEX) purchases and long depreciation cycles is one of the biggest barriers to companies being able to successfully deploy “Private Cloud”, as they don’t have the ability to create “agile, dynamically scalable” resource pools when they can’t budget and buy in that manner.

Will the lack of overall success with Private Cloud deployments, plus the specter of lower-margin hardware sales eventually force the hardware-centric vendors to change their selling models in the future? Do they actually expect to fight Moore’s Law without trying to reinvent the business model at the same time?


July 7, 2013  9:57 PM

Thoughts from an Enterprise Start-Up

Brian Gracely

According to this piece in the NY Times, I’m old. I’m not yet an average/median worker in America, but apparently I’d be “Grandpa Gracely” amongst the hipster tech world.

And when you’re “old”, especially in the technology industry, the expectation is that you’re looking for stability, not change. A nice paycheck, generous benefits and maybe a set of responsibilities that are challenging but won’t have you working late nights and weekends. Squeezing in meetings between checking the status of your 401(k).

But I’ve also worked at some of the larger companies mentioned in the article, and the days of never ending meetings and delayed decision-making had me frequently thinking about leaving behind the bigger brands and making my mark with something smaller. About seven months ago, I made the leap. I joined a small company, backed by venture-capital funding, that was somewhere between maturing start-up and early-stage growth company. For a number of my colleagues, the reaction was “are you sure about that?” or “shouldn’t you have done that years ago?“. Maybe, but this was the right time for me to do this. It’s been an interesting ride so far. I get asked about it all the time, especially from people trying to make next-step decisions in their careers, so I thought I’d share some of my learnings.

Jack of Many Trades – In general, the smaller the company, the more it will be expected that you can play multiple roles and leverage multiple skills. That’s definitely the case for me. I was hired to drive Solutions and Technical Marketing, but that quickly evolved to include running Product Management, managing Strategic Partner relationships, helping to shape future strategy and being able to do day-to-day Field Enablement. If you like wearing many hats, smaller companies can be a great place to stay challenged and to grow. It can also mean that at times you have overlapping priorities and you may be asked to lead something that is beyond your comfort zone.

Long Days and Long Nights – Smaller companies have fewer resources. Smaller companies have less brand awareness. Smaller companies don’t have the luxury of outsourcing the tasks that larger companies take for granted. This means the work is on you. This means the hours will be long. Know what you’re signing up for. This is where self-motivation comes into play, because you’ll have to make some personal sacrifices.

Always Be Closing (ABC) – Whether you like it or not, everyone at a small company is part of the selling process. You may not directly carry a quota, but you’ll most likely be interacting with your customers on nearly a daily basis. With a smaller company, those customers are constantly testing anyone they can to see if you’re really able to deliver what you say you can. So while you might not be directly selling the product or service, you’re definitely selling “confidence” and “trust” and “commitment”, the intangible things that customers of smaller companies are evaluating above and beyond the technology. Continued »


June 26, 2013  8:47 PM

New Cloud Ops Model: GUIs vs. APIs

Brian Gracely

The past 4-5 years have forced IT organizations to go through some significant changes as the pace of technology has accelerated faster than ever before: virtualized resources, converged infrastructures, advanced automation tools, new development frameworks and business users that are smarter and willing to consider new consumption models.


But as the pace of business needs has increased, the common thinking is that more and more IT tasks need to be automated in order to keep up. And hence begins the challenge for IT organizations. For years, the device-specific GUI was the interface of choice for many IT professionals (or the CLI for the networking crowd), often augmented by various types of scripts. GUIs worked fine, aside from browser headaches, but they limited the ability to scale operations. Scripts increased the pace, slightly, but script maintenance often became a full-time job. Far too often, IT organizations collected a mountain of GUI + script debt as they added various products to their environments, each with a different preferred operations model.

Modern IT operations tools – Git, Razor, Chef, Puppet, Ansible, Jenkins, Vagrant, Nagios – are now built around modern programming languages and assume that users will have more advanced skills in programming the tools to drive advanced levels of automation. They replace the traditional GUI with a set of APIs (typically based on REST). And herein begin the challenges for IT organizations: Continued »
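As a hedged sketch of what “replacing the GUI with an API” looks like in practice, here is a request against a hypothetical REST endpoint – the URL, token and payload are placeholders, not any specific product’s API:

    # Instead of clicking through a GUI, the operator (or a tool like Jenkins,
    # Chef or Puppet) drives the platform through a REST call. Everything below
    # the imports is a hypothetical placeholder.
    import json
    import requests

    API = "https://cloud.example.com/api/v1"            # hypothetical endpoint
    HEADERS = {"X-Auth-Token": "REPLACE_ME",            # placeholder credentials
               "Content-Type": "application/json"}

    # The desired VM is declared once; the same request can be templated,
    # version-controlled and replayed by a pipeline -- things a GUI can't do.
    payload = {"name": "web-01", "cpu": 2, "ram_gb": 4, "image": "ubuntu-12.04"}

    resp = requests.post(API + "/servers", headers=HEADERS, data=json.dumps(payload))
    resp.raise_for_status()
    print("created:", resp.json().get("id"))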


June 23, 2013  9:08 AM

Migrating Apps to the Cloud Requires ‘BizOps’

Brian Gracely

This past week, at the Structure 2013 conference, I had the opportunity to be part of a panel with Ben Kepes (@benkepes) and Rodney Rogers (@rjrogers87) entitled “Mission Possible: Moving Business-Critical Apps to the Cloud”. The focus of the discussion was real-life examples of Enterprise companies that had migrated their mission-critical applications (e.g. ERP, CRM, HCM, SQL Databases, Unified Communications) to public clouds.

One of the topics that we covered, based on a question from the audience, was about the bifurcation of existing applications and new applications, specifically in migration scenarios. The question was “Will this lead to faster adoption of ‘DevOps’?”

While I’m a believer in DevOps and have discussed it quite a few times on the podcast (here, here, here, here), I thought it was important to highlight that there is a non-technical element that is critical to the success of migrating business-critical applications. DevOps is a fundamental operational requirement for anyone building modern, scale-out web applications, but those applications are typically very different from existing (legacy) applications. And in most cases, those existing applications make up 80%+ of the IT portfolio in terms of resource usage and ongoing costs.

Just as DevOps is ultimately about providing greater transparency between Developers and Operations, I believe that application migration is ultimately about something I call “BizOps“. Far too often, Cloud Providers talk about the simplicity of migrating applications to their Cloud, focusing primarily on the transition of a VM from on-premise to off-premise. But problems arise when that off-premise cloud is fundamentally a black box to the business (or the existing IT staff). Unless a business is migrating to a SaaS application or completely outsourcing the business-critical application, it can’t be overstated how important it is to maintain a level of transparency between the Cloud Provider, the Business and IT. This is “BizOps”. Continued »


June 14, 2013  10:09 AM

Should Cloud Management be delivered as a SaaS application?

Brian Gracely

At times I’m a little bit disjointed in how I collect and process information. A nugget here, a news story there, and then a comment to tie together a few fleeting thoughts.

A few weeks ago, Dell acquired Enstratius. They make excellent software for managing and governing multiple cloud environments (via the APIs), and we’ve had their leadership team on the podcast a few times (here, here). They primarily delivered their software as a SaaS application, although it could also be run on-premise. Since the acquisition, Dell has shifted its Public Cloud strategy, and Enstratius products are now the core of a plan to let customers leverage resources from multiple cloud platforms. And I believe the fact that it can be delivered as a SaaS application was key to making that shift. It made it much simpler for customers to begin the process of consuming cloud resources, instead of having to set up tons of equipment (hardware/software/security) on-premise.

Last week, a friend who works quite a bit with VMware environments sent me this “BOM” (Bill of Materials) for a reasonably sized setup to create the full vCloud Suite (vCenter, vCloud Director, vChargeback, vCAC, vCNS, vCO). What jumped out at me was the breadth of things that had to be in place to get a Cloud environment up and running: Windows, Linux, web servers, multiple databases. This isn’t uncommon for any Cloud Management Platform (CMP) – OpenStack, CloudStack, etc. – and it would typically require teams with a variety of skills to coordinate getting this configured properly.

  • 20 Management servers: 2x vCenter, 2x DB servers for vCenter/vCloud/vChargeback, 2x vCloud Cells, 2x vCNS Manager, 2x DB for vCAC, 2x WebServers for vCAC, 2x vCAC DEM Orchestrators, 2x vCAC DEM Workers, 2x vCAC Agent Machines, 1x vChargeback server, 1x vCO
  • 8 Databases: 2x vCenter Update Service, 2x vCenter, 1x vCloud, 1x vCAC, 1x vChargeback, 1x vCO
  • 7 Management Interfaces: 2x vCenter, vCloud, vCNS, vChargeback, vCAC, vCO

And again last week, Gartner analyst Alessandro Perilli (@a_perilli) tweeted:

Perilli is one of many people at Gartner that covers the Cloud Management Platform space, so he gets to see the breadth of offerings in the market from many vendors.

So this ultimately begs the question, “Should Cloud Management be delivered as a SaaS application?” Continued »


June 5, 2013  11:00 AM

Are Cloud Operations Transferable?

Brian Gracely

Remember “Cloud in a Box”? It has come in various iterations over the past 4-5 years:

  • “Pre-Defined” or “Pre-Validated” all-in-one racks of equipment
  • Hardware Reference Architectures
  • Hardware + Software Reference Architectures
  • Software-Only Design Docs that are “hardware agnostic”

Lots of vendors and systems integrators have promised to deliver a cloud to their customers in a pre-defined package, but it is typically missing one critical component – OPERATIONS. And when you think about it, cloud computing really is an operational model, so it begs the question – are cloud operations transferable?

Recently GigaOm wrote about the latest round of Cloud Providers trying to bring their version of cloud to a new set of customers. Others have tried this or are currently using a similar strategy for both IaaS and PaaS platforms, including Apprenda, Joyent, Rackspace, Virtustream, and VMware (both CloudFoundry and vCloud).

While operations does include elements of technology, the vast majority is driven by people skills and company-specific processes. This is why you’ll see cloud pioneers like Netflix open-source many of their internal tools, or Rackspace hand over leadership of OpenStack to the OpenStack community, because their expertise and learning-curve advantages are in the operations of their cloud environments. The AWS APIs are open (documented), but AWS doesn’t expose much of its internal operations.

But what makes the operations so difficult to transfer? Continued »

