From Silos to Services: Cloud Computing for the Enterprise

Page 9 of 13« First...7891011...Last »

August 4, 2013  8:34 PM

What is “Enterprise Ready” in Cloud Computing?

Brian Gracely

Probably more than any other question, I get asked whether I believe that Amazon AWS is “enterprise ready”. Sometimes the question comes from analysts trying to determine the extent to which the IT industry is shifting. Sometimes it comes from vendors trying to gauge the pace of change/transformation/disruption. Other times it comes from IT organizations trying to determine what their future strategies look like for procurement, service offerings and skills evolution.

“Enterprise Ready” is one of those loaded phrases that you really need to be careful about using, because the person you’re speaking with typically has a preconceived notion about what it means. For many people, it means that the service essentially emulates all the aspects of an existing Enterprise IT data center – including all the elements of performance, redundancy, security, compliance, etc. In essence, they expect the new environment to functionally be like the world they are used to. What they don’t want are the long delays to get things provisioned, the long meetings with security and compliance teams telling them everything is insecure, or the long budgeting process to procure the required technology. [Insert analogy about eating cake here]

What I try to explain to people when answering this is to think about “Enterprise IT” in two buckets:

  • Bucket 1 – Applications that you typically associate with IT – Email, ERP, HR, Unified Communications, SharePoint, etc.
  • Bucket 2 – Business requests for technology that typically get turned down by IT

Bucket 1 is all about applications that have long, relatively stable life-cycles and IT is usually trying to balance cost vs. performance of these applications. This is a technology bucket. Known equipment. Known capacity needs. Aligns to depreciation cycles. These applications might be a fit to migrate to a public cloud, if the business is facing some ‘change event’ (eg. M&A, equipment EoL, licensing upgrades pending, new CIO, budget challenges, IT skills challenges, etc.).

Bucket 2 is all about the pace of today’s business world – the world where winning and losing is often measured in how quickly we can move from a great idea to a great implementation of that idea, via technology + business models. These ideas are responsive to the market, to competition and to changes that weren’t planned for in the annual budgeting meeting. At the time of the ask, they are often the complete opposite of the Bucket 1 applications – unknown capacity needs; shorter usage duration; unknown scalability.

So what might fit into Bucket 2?

  • VP of Marketing would like a smartphone app for the sales-kickoff or annual tradeshow. They aren’t sure if it’ll get 1,000 or 15,000 downloads (unknown capacity, unknown scale). They would like it to be available 2 weeks before the event, and collect data for up to 1 month afterwards. Beyond that it’s not needed (short duration).
  • VP of Operations just got back from a conference discussing “Big Data” and would like to prototype ways to better analyze sales trends and how they are affected by weather, gas prices, seasonality and a few other sources of publicly available data. He needs the prototype completed in 60 days in order to justify an ROI (eg. better sales insight) for a more expansive project. If the ROI doesn’t materialize, the bigger project might be cancelled – quick timeline, potentially wasted capacity beyond 60 days.
  • CIO tells the lines of business that the existing annual IT budget has been exceeded by Q3, as a major project has gone over budget (it occasionally happens), but one of the lines of business has a major opportunity if a new system can be put in place in time. The opportunity is $5-10M in Q4, with a follow-up of $10-20M in Q1. Pace of implementation is of the essence, but where to do it? Sometimes this is called ‘Shadow IT’; it’s just a reality of doing business in the 21st century. Global resources exist, so why shouldn’t a business try to leverage them?
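The economics behind these Bucket 2 examples can be sketched with a quick back-of-envelope comparison. Every number below is an illustrative assumption (not any vendor’s actual pricing), but it shows why short-duration, unknown-capacity workloads favor pay-per-use:

```python
# Back-of-envelope comparison of pay-per-use cloud vs. capital purchase
# for a short-lived event app. All prices are illustrative assumptions.

hourly_rate = 0.50    # assumed on-demand cost per server-hour
servers = 4           # assumed peak footprint for the event app
weeks_needed = 6      # 2 weeks before the event + ~1 month of data collection

# Pay only for the hours the app actually exists.
cloud_cost = hourly_rate * servers * weeks_needed * 7 * 24

# Buy the same 4 servers outright; they keep depreciating long after
# the 6 weeks of real use are over.
capex_cost = 4 * 5000  # assumed $5,000 purchase price per server

print(f"cloud: ${cloud_cost:,.0f}")   # cost ends when the app ends
print(f"capex: ${capex_cost:,.0f}")   # cost lingers on the books
```

The exact ratio depends entirely on the assumed rates, but the structural point holds: when the duration is short and the capacity unknown, the capital purchase carries all of the risk of being wrong.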

At this point you might be asking why I didn’t explicitly mention AWS and “Enterprise Ready”. Hopefully you’ve figured out that there is more to “Enterprise Ready” than just the underlying technology. In today’s world, there is a place in the public cloud (or in existing data centers) for applications with those characteristics. But there is also unmet Enterprise demand for solving business challenges with technology, now. Those Enterprises and those applications are “Enterprise Ready” too. They are just focused on a different characteristic being the most important element of their success.

So how big could Bucket 2 be? It’s tough to tell (long-term), because we often don’t know how big a new technology segment could be until people and companies understand just what is possible without the prior constraints. The Client-Server market was 10x the Mainframe market. It’s not unusual for people to have 3-4 connected devices (smartphones, multiple tablets, laptops, etc.).

Cloud Computing helps level out the shortfall between supply and demand for Enterprise IT. Whether or not the unmet demand is “Enterprise Ready” is now as much about pace-of-implementation as it is about SLAs and IOPS. The forward-looking CIOs are trying to figure out how to deliver both to their Enterprises.

July 30, 2013  11:37 PM

The Beginning or The End of Cloud Computing? It’s confusing…

Brian Gracely

Computing eras tend to last 10-15 years at their high point, and then something else takes over. Along with those changes, a few leaders adapt and continue, while a large number fail and new companies (or open-source projects) emerge to take their place and lead the new era.

  • Mainframes: 1960s-70s
  • Minis: 1970s-80s
  • PCs / Client-Server / LANs: 1980s-90s
  • “Web 1.0” / Commercial Internet: 1990s-2000s

Some might argue that the Cloud era got started after the Internet bubble burst (2000-2001), as early SaaS applications started to emerge. Others might say that it really took the next step when Amazon introduced Amazon Web Services in 2006-2007 and brought the idea of utility computing into reality (with a H/T to Douglas Parkhill for his early thinking back in the 1960s). Another group might point to the 2009 emergence of the concept of “Private Cloud” as the tipping point where it became a reality for many IT organizations (“shadow IT in my company?”) and signaled that traditional IT vendors were concerned about protecting their existing installed base (which apparently isn’t gaining much functional traction).

While it doesn’t really matter when this new era started, it is useful to try and figure out where in the transition the industry is today. As people like to ask, “are we in the 2nd inning or the 7th inning stretch?”

Some would argue that we’ve begun to hit the tipping point when the legacy vendors are beginning to show their strains and are starting to fail. While it’s easy to argue that those applications aren’t going away anytime soon (note: IBM still does +$1B in mainframes, in the 2010s!!), many quarters of misses do begin to signal that they might have missed the big shifts around cloud and open source and could eventually join the boneyards occupied by DEC, Bull, Sun, etc.

Some would argue that we’re beginning to see the makings of “Cloud 2.0”, where standards need to evolve, interoperability needs to evolve, and we may begin to see the classic battles between two technologies that set the tone for a decade to come (eg. Ethernet vs. Token Ring, IP vs. ATM, VHS vs. Beta, Blu-Ray vs. HD-DVD) – can you say AWS APIs vs. OpenStack APIs?

Still others think the Cloud 1.0 wars are over and it’s time to shift from an industry driven by innovation and vendor-led profit models to one that’s driven by commoditization and the next phase of ideas, economic growth and potential that comes from lower costs and easier access to resources (see: Jevons Paradox).

Confused yet?


July 13, 2013  5:54 PM

Change Culture or Move Elsewhere – The IT Decision of the 201x’s

Brian Gracely

In the real world, there are the seven George Carlin words that, if said, will make people uncomfortable, especially if used with the wrong audience or in the wrong context. In the IT world, those words are CHANGE, VALUE-ADD, COMMODITY and AGILITY (in no particular order). Use those words and somebody in the room is going to cringe, or potentially ask you to leave. They are the words that draw the dividing line between vendors, since we mistakenly believe that IT is a zero-sum game where there are only “winners” and “losers” and the new will always vanquish the old. You do worship at the altar of The Innovator’s Dilemma, don’t you?

They are the words that make IT operators sweat, thinking about the 2 a.m. pager notice they’ll get because the new system, which requires new skills, is operating less than optimally and somebody wants it fixed…now!!

Up until a few years ago, they were words that both the IT sellers and IT buyers knew how to balance to keep the ecosystem fairly healthy and constantly evolving. But a few significant changes have come along – namely “cloud computing” and “open source” – and opened up new options that are disrupting the balance that previously existed. The previous two IT options of Build-it-Yourself or Outsource, both of which used similar technologies and skills, have now been joined by a third option – various Public Cloud offerings (IaaS, PaaS, SaaS, *aaS).

And these new options are forcing the IT conversation to distinctly change from one centered around new technologies to one that’s centered around the pace of change of either IT economics or IT skills/process, and sometimes both.

Let’s just look at a few recent articles:

Both of these articles center around the idea of either building “abstractions” above layers that are deemed “less valuable” (see: don’t VALUE-ADD), or a less valuable group needing to change (see: CHANGE) before they become irrelevant – oh wait, maybe they could be “strategic”, as long as they quickly learn skills that they never needed before, and fast!!

July 7, 2013  10:24 PM

Will Hardware Vendors Adapt to Changing Expectations?

Brian Gracely

Earlier this week, GigaOm wrote a post discussing the possibilities that the largest web companies would begin designing their own chips (CPUs, etc.). This was following up on the trend of companies like Facebook, Google, and Amazon designing their own servers and networking equipment, or efforts like Open Compute Project open-sourcing designs that could be delivered by ODMs.

While articles like this are interesting for getting a peek into the 0.01% of companies where this is feasible (and needed), since they are running 21st century bit-factories, the question that seemed to emerge was, “how will this affect the companies that sell hardware for a living?”. I’ve written before about how hardware has been rapidly evolving, especially for cloud computing.

When I read articles like this, I tend to think that this is a macro-level trend that is inevitable. The components within hardware are evolving rapidly, and the net-value of the hardware (by itself) is decreasing vs. the associated (or decoupled) software, which is increasing. You can have valid arguments about the timeline over which this evolution will occur, but I believe most rational people will generally agree that the trend in value is shifting towards software and away from hardware. System integration of the two still has its place, but even where that occurs (in the supply chain) is changing as well.

The more interesting question to me is how the hardware companies will respond to this. Of course they will claim that hardware still matters, especially for performance. They may also claim that visibility is needed at both the hardware and software layers. Fine, that’s to be expected. But will we see any actual changes in how they do business, or how they go to market?

The reason I ask this question is that I’m constantly looking to the manufacturing sector to give me clues about the future of IT, since the two are running on parallel tracks, albeit with IT 10-15 years behind.

When Pivotal Labs publicly launched, there was an interesting discussion between Paul Maritz (CEO, Pivotal) and Bill Ruh (VP, Global Software, GE). They were talking about how the airlines (directly or via Boeing/Airbus) were now buying engines. The discussion centered around the idea that engines were no longer purchased as capital assets, but rather were now being paid for based on usage. One of the initiatives by GE was to do a much better job of collecting real-time data about the engines to better manage downtime and associated maintenance costs (all things that would affect GE’s ability to collect revenue for the engines’ usage).

This got me thinking – will we begin to see hardware vendors take a clue from manufacturing and charge usage-based pricing for their equipment, rather than selling it as capital assets (paid directly or via lease)? Will we see them add capabilities to better track how their systems are being used, in near real-time?
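A minimal sketch of what such usage-based billing could look like, analogous to GE charging for engine hours rather than selling the engine. All device names and rates here are hypothetical assumptions, not anything a real vendor offers:

```python
# Sketch of usage-based billing for hardware: meter hours of use per
# device, then invoice on consumption instead of a capital sale.
# Device IDs and the per-hour rate are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class UsageRecord:
    device_id: str
    hours: float

def monthly_bill(records, rate_per_hour):
    """Aggregate metered hours per device into a usage-based invoice."""
    totals = {}
    for r in records:
        totals[r.device_id] = totals.get(r.device_id, 0.0) + r.hours
    # Convert total hours into dollars, rounded to cents.
    return {dev: round(hrs * rate_per_hour, 2) for dev, hrs in totals.items()}

# Two telemetry reports for one switch, one for a storage array.
records = [UsageRecord("switch-01", 300.0),
           UsageRecord("switch-01", 420.0),
           UsageRecord("array-07", 720.0)]

invoice = monthly_bill(records, rate_per_hour=0.12)
print(invoice)
```

The interesting part isn’t the arithmetic – it’s that this model requires the near real-time telemetry the post describes, because the vendor can’t invoice hours it never measured.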

The challenge of capital (CAPEX) purchases and long depreciation cycles is one of the biggest barriers to companies being able to successfully deploy “Private Cloud”, as they don’t have the ability to create “agile, dynamically scalable” resource pools when they can’t budget and buy in that manner.

Will the lack of overall success with Private Cloud deployments, plus the specter of lower-margin hardware sales eventually force the hardware-centric vendors to change their selling models in the future? Do they actually expect to fight Moore’s Law without trying to reinvent the business model at the same time?

July 7, 2013  9:57 PM

Thoughts from an Enterprise Start-Up

Brian Gracely

According to this piece in the NY Times, I’m old. I’m not yet an average/median worker in America, but apparently I’d be “Grandpa Gracely” amongst the hipster tech world.

And when you’re “old”, especially in the technology industry, the expectation is that you’re looking for stability, not change. A nice paycheck, generous benefits and maybe a set of responsibilities that are challenging but won’t have you working late nights and weekends. Squeezing in meetings between checking the status of your 401(k).

But I’ve also worked at some of the larger companies mentioned in the article, and the days of never-ending meetings and delayed decision-making had me frequently thinking about leaving behind the bigger brands and making my mark with something smaller. About seven months ago, I made the leap. I joined a small company, backed by venture-capital funding, that was somewhere between maturing start-up and early-stage growth company. For a number of my colleagues, the reaction was “are you sure about that?” or “shouldn’t you have done that years ago?”. Maybe, but this was the right time for me. It’s been an interesting ride so far. I get asked about it all the time, especially by people trying to make next-step decisions in their careers, so I thought I’d share some of what I’ve learned.

Jack of Many Trades – In general, the smaller the company, the more it will be expected that you can play multiple roles and leverage multiple skills. That’s definitely been the case for me. I was hired to drive Solutions and Technical Marketing, but that quickly evolved to include running Product Management, managing Strategic Partner relationships, helping to shape future strategy and doing day-to-day Field Enablement. If you like wearing many hats, smaller companies can be a great place to stay challenged and to grow. It can also mean that at times you have overlapping priorities, and you may be asked to lead something that is beyond your comfort zone.

Long Days and Long Nights – Smaller companies have fewer resources. Smaller companies have less brand awareness. Smaller companies don’t have the luxury of outsourcing the tasks that larger companies take for granted. This means the work is on you. This means the hours will be long. Know what you’re signing up for. This is where self-motivation comes into play, because you’ll have to make some personal sacrifices.

Always Be Closing (ABC) – Whether you like it or not, everyone at a small company is part of the selling process. You may not directly carry a quota, but you’ll most likely be interacting with your customers on nearly a daily basis. With a smaller company, those customers are constantly testing anyone they can to see if you’re really able to deliver what you say you can. So while you might not be directly selling the product or service, you’re definitely selling “confidence” and “trust” and “commitment” – the intangible things that customers of smaller companies are evaluating above and beyond the technology.

June 26, 2013  8:47 PM

New Cloud Ops Model: GUIs vs. APIs

Brian Gracely

The past 4-5 years have forced IT organizations to go through some significant changes, as the pace of technology has accelerated faster than ever before. Virtualized resources, converged infrastructures, advanced automation tools, new development frameworks and business users that are smarter and willing to consider new consumption models.


But as the pace of business needs has increased, the common thinking is that more and more IT tasks need to be automated in order to keep up. And hence begins the challenge for IT organizations. For years, the device-specific GUI was the interface of choice for many IT professionals (or the CLI for the networking crowd), often augmented by various types of scripts. GUIs worked fine, aside from browser headaches, but they limited the ability to scale operations. Scripts increased the pace, slightly, but script maintenance often became a full-time job. Far too often, IT organizations collected a mountain of GUI + Script debt as they added various products to their environments, each with a different preferred operations model.

Modern IT operations tools – Git / Razor / Chef / Puppet / Ansible / Jenkins / Vagrant / Nagios – are now built around modern programming languages and assume that users will have more advanced skills in programming the tools to drive advanced levels of automation. They replace the traditional GUI with a set of APIs (typically based on REST). And herein begin the challenges for IT organizations:
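The shift these tools represent can be illustrated with a minimal sketch of the declarative, desired-state pattern that configuration-management tools broadly share: you declare what the system should look like, and an engine computes only the changes needed. This is illustrative Python, not any specific tool’s API:

```python
# Minimal sketch of the declarative pattern behind tools like Chef,
# Puppet and Ansible: compare current state to desired state and emit
# only the operations needed to converge. Names are illustrative.

def converge(current, desired):
    """Return the operations needed to move `current` state to `desired`."""
    ops = []
    for key, want in desired.items():
        if current.get(key) != want:
            ops.append(("set", key, want))     # create or update
    for key in current:
        if key not in desired:
            ops.append(("remove", key))        # no longer declared
    return ops

# What's on the box today vs. what we declare it should run.
current = {"nginx": "1.2", "ntp": "enabled"}
desired = {"nginx": "1.4", "ntp": "enabled", "collectd": "enabled"}

plan = converge(current, desired)
for op in plan:
    print(op)
```

Running the same plan twice is a no-op once the state matches – that idempotency is what lets these tools scale past the GUI-click and one-off-script models described above.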

June 23, 2013  9:08 AM

Migrating Apps to the Cloud Requires ‘BizOps’

Brian Gracely

This past week, at the Structure 2013 conference, I had the opportunity to be part of a panel with Ben Kepes (@benkepes) and Rodney Rogers (@rjrogers87) entitled “Mission Possible: Moving Business-Critical Apps to the Cloud”. The focus of the discussion was real-life examples of Enterprise companies that had migrated their mission-critical applications (eg. ERP, CRM, HCM, SQL Databases, Unified Communications) to public clouds.

One of the topics that we covered, based on a question from the audience, was about the bifurcation of existing applications and new applications, specifically in migration scenarios. The question was “Will this lead to faster adoption of ‘DevOps’?”

While I’m a believer in DevOps and have discussed it quite a few times on the podcast, I thought it was important to highlight that there is a non-technical element that is critical to the success of migrating business-critical applications. DevOps is a fundamental operational requirement for anyone building modern, scale-out web applications, but those applications are typically very different from existing (legacy) applications. And in most cases, those existing applications make up 80%+ of the IT portfolio in terms of resource usage and on-going costs.

Just as DevOps is ultimately about providing greater transparency between Developers and Operations, I believe that application migration is ultimately about something I call “BizOps”. Far too often, Cloud Providers talk about the simplicity of migrating applications to their Cloud, focusing primarily on the transition of a VM from on-premise to off-premise. But problems arise when that off-premise cloud is fundamentally a black box to the business (or the existing IT staff). Unless a business is migrating to a SaaS application or completely outsourcing the business-critical application, it can’t be overstated how important it is to maintain a level of transparency between the Cloud Provider, the Business and IT. This is “BizOps”.

June 14, 2013  10:09 AM

Should Cloud Management be delivered as a SaaS application?

Brian Gracely

At times I’m a little bit disjointed in how I collect and process information. A nugget here, a news story there, and then a comment to tie together a few fleeting thoughts.

A few weeks ago, Dell acquired Enstratius. They make excellent software for managing and governing multiple cloud environments (via their APIs), and we’ve had their leadership team on the podcast a few times. They primarily delivered their software as a SaaS application, although it could also be run on-premise. Since the acquisition, Dell has shifted its Public Cloud strategy, and the Enstratius products are now the core of a plan to let customers leverage resources from multiple cloud platforms. And I believe the fact that the software can be delivered as a SaaS application was key to making that shift. It made it much simpler for customers to begin the process of consuming cloud resources, instead of having to set up tons of equipment (hardware/software/security) on-premise.

Last week, a friend who works quite a bit with VMware environments sent me this “BOM” (Bill of Materials) for a reasonably sized setup to create the full vCloud Suite (vCenter, vCloud Director, vChargeback, vCAC, vCNS, vCO). What jumped out at me was the breadth of things that had to be in place to get a Cloud environment up and running: Windows, Linux, web servers, multiple databases. This isn’t uncommon for any Cloud Management Platform (CMP) – OpenStack, CloudStack, etc. – and it would typically require teams with a variety of skills to coordinate getting everything configured properly.

  • 20 Management servers: 2x vCenter, 2x DB servers for vCenter/vCloud/vChargeback, 2x vCloud Cells, 2x vCNS Manager, 2x DB for vCAC, 2x WebServers for vCAC, 2x vCAC DEM Orchestrators, 2x vCAC DEM Workers, 2x vCAC Agent Machines, 1x vChargeback server, 1x vCO
  • 8 Databases: 2x vCenter Update Service, 2x vCenter, 1x vCloud, 1x vCAC, 1x vChargeback, 1x vCO
  • 7 Management Interfaces: 2x vCenter, vCloud, vCNS, vChargeback, vCAC, vCO

And again last week, Gartner analyst Alessandro Perilli (@a_perilli) tweeted:

Perilli is one of many people at Gartner that covers the Cloud Management Platform space, so he gets to see the breadth of offerings in the market from many vendors.

So this ultimately raises the question, “Should Cloud Management be delivered as a SaaS application?”

June 5, 2013  11:00 AM

Are Cloud Operations Transferable?

Brian Gracely

Remember “Cloud in a Box”? It has come in various iterations over the past 4-5 years:

  • “Pre-Defined” or “Pre-Validated” all-in-one racks of equipment
  • Hardware Reference Architectures
  • Hardware + Software Reference Architectures
  • Software-Only Design Docs that are “hardware agnostic”

Lots of vendors and systems integrators have promised to deliver a cloud to their customers in a pre-defined package, but it is typically missing one critical component – OPERATIONS. And when you think about it, the operational model is cloud computing, so it raises the question – are cloud operations transferable?

Recently, GigaOm wrote about the latest round of Cloud Providers trying to bring their version of cloud to a new set of customers. Others have tried this or are currently using a similar strategy for both IaaS and PaaS platforms, including Apprenda, Joyent, Rackspace, Virtustream, and VMware (both CloudFoundry and vCloud).

While operations does include elements of technology, the vast majority is driven by people skills and company-specific processes. This is why you’ll see cloud pioneers like Netflix open-source many of their internal tools, or Rackspace hand leadership of OpenStack over to the OpenStack community – because their expertise and learning-curve advantages are in the operations of their cloud environments. The AWS APIs are open (documented), but they don’t expose much of AWS’s internal operations.

But what makes the operations so difficult to transfer?

May 26, 2013  12:04 PM

Transforming IT or Transforming the Business?

Brian Gracely

An interesting discussion took place on Twitter yesterday, spurred by one of my favorite industry analysts (Simon Wardley, @swardley). I’ve written about his ideas, analysis and outstanding blog before.

[Screenshot of the Twitter discussion]

While it was an excellent discussion and did surface a few fringe industries that might fall into this category (undertaker, local hair-salon, etc.), the general consensus was that every business today is essentially a tech business.  While it’s fairly easy to highlight this with companies where technology is their core product (eg. Netflix, Facebook, etc.), it’s also not difficult to see how technology is core to businesses that don’t make technology-centric products (eg. tractors or farm equipment).

For example, if you’re John Deere (just an example – I don’t have insider information on their operations), you obviously have a complex supply chain in place to be able to pull together all the elements that make up a tractor. You also need sophisticated analytics systems to be able to forecast sales, costs of raw materials, currency exchanges and other macro-level things that could be affected by the economy, government policy, etc. Then within the tractors, there is an ecosystem building around tools and applications that can help farmers better manage their fields. Somewhere, all the data being collected could be creating new “big data” knowledge that could be improving crop yields, fuel efficiency of tractors, etc.

But in an example like this, what is being transformed by technology? Is this a transformation of IT, a transformation of the product/ecosystem, or a transformation of the business?

When I think about “transforming IT”, I tend to think about the adoption of new technologies, reducing costs and improving worker productivity.

When I think about “transforming the product/ecosystem”, I tend to think about making data accessible to open APIs, or expanding areas where value can be added to a product (customization, etc.).

When I think about “transforming the business”, I tend to think about using technology to eliminate an element of the existing supply chain. Netflix is a great example of this (remove the need to obtain physical media at a store or kiosk). iTunes is another great example (remember record stores??). Sports has been going through this for the last 10-15 years. But it’s harder to think of examples of these types of transformations where the business isn’t entirely based around technology. It does create some blurred lines, such as how Bechtel is using technology to enhance how they manage contractor relationships and project management.

So this is one of those areas where definitions might actually be more of a hindrance than a structure to help companies understand how to plan for transitions or transformations. Regardless of what definitions people use, I’d suggest that there are a few areas that need serious consideration:

  1. Do you understand your supply chain, and have you gone back and examined it lately with an eye towards how technology (or technology partners) might be able to help reduce links? Have you looked at how new technology could enhance the existing portfolio? [Example: With airlines struggling so much, why haven’t any of them leveraged their footprint at major airports, plus telepresence technologies, to offer an alternative to the output of a large percentage of travel – business meetings?]
  2. Do you understand the potential ecosystem that could be created around your business, your product or (most importantly) your data? We explored the data element on the podcast. This is actually another variant of the supply-chain discussion, but it involves thinking about the value-chain for the business and considering the opening of new doors and some loss of control of the outcome.
  3. Can you build a better mousetrap? Most people thought Apple was crazy to begin building physical stores when the Internet had supposedly proven that brick & mortar business was dead. But they just did retail better than everyone else on the planet. They drew a direct line from the customer’s experience (which involved convenience, repair, physical touch/feel) and aligned it to their goals (don’t sell at massive discounts). Zappos took a similar approach with low-margin shoes – focus on the experience and inconvenience of the past and leverage technology to solve those customer pain points.
  4. Can you measure every element of the business, not just the things that are reported on the financial reports? Do you understand the things that influence the direct results or the buying process or satisfaction levels? Given that every aspect of our lives is now recorded digitally, there is an extremely good chance that the information is available (directly or through external services).
  5. The underlying technology is obviously important, but as we’ve seen time and time again, it’s the process and people that need to embrace the change more than the technology. Technology change/transformation is a given. It might take 5yrs or 10yrs, but it’ll happen. Process and business model change doesn’t have EoL dates, it has Chapter 11.

So as usual, Simon Wardley is right about this. But the question becomes: are you transforming technology, or transforming how the business leverages technology? They aren’t the same, no matter how many times companies tell you the CIO deserves a seat at the table with the decision makers in the company.

