From Silos to Services: Cloud Computing for the Enterprise

Page 3 of 10

June 7, 2014  10:47 AM

Managing Conflicts between Personal and Professional Activities

Brian Gracely

This post is going to be a little bit off-topic (not Cloud Computing centric), but since this post, I’ve seen quite a bit of discussion by people trying to figure out some guidelines, or career guidance, in this new world of self-publishing, social media and other outlets for learning and sharing. It’s definitely not simple, since there are no longer any defined lines between work and life, or between media and non-media.

Earlier today, a friend started a thread on Facebook because they host a technical blog which allows ad placement by sponsors. He had been approached by a certain company and wanted some guidance on if he should allow that company to sponsor his blog. The central point was that there were some inherent conflicts of interest between the advertising company and this person (and his employer). After a bunch of feedback, advice and suggestions from friends, the issue was resolved. But one of the commenters asked if some of us that live in this new media world would discuss how we deal with the potential conflicts that can arise when you have a full-time job and also have a presence on external media sites (blogs, podcasts, etc.).

This is my experience. Others’ mileage will vary, and I expect others will have different situations depending on their employers and how they handle “gray areas”.

For full disclosure, my employer details are here. In addition, I’ve co-hosted a podcast for almost 4 years, as well as written on several technology blogs. Before getting into any “guidance” to others, I’ve always found that it’s important to establish some personal rules to guide how I manage the interaction between personal and professional. These are MY PERSONAL rules; others will have different opinions. I’ve written about a few of these opinions previously (here, here, here). Continued »

May 31, 2014  10:17 AM

A Parallel Path for VMware?

Brian Gracely
Cloud Computing, Cloud Foundry, KVM, Open vSwitch, OpenStack, Public Cloud, SDN, VMware

[Disclosure: While my employer, EMC, is a majority stakeholder in VMware, I have no insight into VMware's long-term strategic direction. That's way above my pay grade. All information contained in this blog is based on publicly available information - sometimes you just have to know where to look.]

Evolution, transformation, disruption. Regardless of what segment of the IT industry you follow, these buzzwords are dominating the conversation. And a common thread is that all “legacy” (read: “existing”) vendors and technology will struggle with these transitions.  But what happens when vendors actively invest in technology and expertise that run in parallel to those disruptions? Isn’t that the classic playbook from The Innovator’s Dilemma?

We all know that VMware took a bold step away from their traditional business when they launched the VMware vCloud Hybrid Service (vCHS), offering an alternative way for companies to consume IT resources from the cloud – instead of within their own data centers. They became a Cloud Service Provider, beginning to disrupt the ecosystem that had been built up around their hypervisor business. It allowed them to leverage their existing technology and installed base to expand into a different segment of the IT market. With vCHS, they not only offered on-demand IaaS services, but also Desktop-as-a-Service, DR-as-a-Service and many other services on the roadmap.

But what about the technologies that didn’t have VMware’s logo on them?  Continued »


May 27, 2014  7:50 PM

The Gap between Software-Defined and Developer-Adopted

Brian Gracely
API, Cisco, Code, Developer, DevOps, EMC

Software, software, software….

It’s hard to go anywhere in tech these days without someone highlighting the virtues of software. Software is eating the world.  It’s a Software-Defined Economy. It’ll run in your Software-Defined Data Center.

But it feels like there is a gap between the things that developers are using to change the economy, and the tools that fall into the definition of “Software-Defined”. There are the tools of the Devs, and then the tools of the Ops crowd. In the past, the IT community often thought of them as isolated, but that thinking is beginning to change. Examples of “Infrastructure as Code” and “Operations at Scale” are beginning to become more visible and common.
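The “Infrastructure as Code” idea mentioned above boils down to this: describe the desired state of your infrastructure as data, and let software compute the actions needed to get there. A toy sketch of that pattern is below; the resource names and fields are hypothetical illustrations, not any real tool’s API:

```python
# Toy "infrastructure as code" reconciler -- illustrative only.
# Desired state is plain data; code diffs it against actual state
# and produces a plan of actions, the way IaC tools converge state.

desired = {
    "web": {"count": 3, "image": "nginx:1.25"},
    "db":  {"count": 1, "image": "postgres:16"},
}

actual = {
    "web": {"count": 1, "image": "nginx:1.25"},
}

def plan(desired, actual):
    """Diff desired vs. actual state into a list of (action, name, n) tuples."""
    actions = []
    for name, spec in desired.items():
        have = actual.get(name, {"count": 0})
        delta = spec["count"] - have["count"]
        if delta > 0:
            actions.append(("create", name, delta))
        elif delta < 0:
            actions.append(("destroy", name, -delta))
    return actions
```

The key design point is that the plan is reviewable data before anything runs, which is what makes infrastructure changes testable and repeatable like application code.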

Even the definition of a “developer” is beginning to change. In the past, “developer” only applied to the people writing the applications that we interacted with as business users. Now it applies to infrastructure and operations teams as well. Think of it as the evolution of the SysAdmin, but across all the functional areas of infrastructure and operations.

But there is still opportunity to do better: making it easier for all kinds of developers to use these new software-defined tools and environments, and eliminating the friction between getting the software and making it useful – integrating it into their environments and making it part of their continuous application builds. Just making software free to download is not enough.

  • Is the software available via GitHub?
  • Are the APIs well documented?
  • Does sample code exist across a broad range of languages and frameworks?
  • Can a developer run the software locally?
  • Can the software be up and running in minutes, using the native deployment tools of the developer?
  • Is there an online community to answer questions for developers?
  • Is there an online environment that allows the developers to validate that the software can integrate and scale beyond their local machine?
  • Does the software creator actively participate in the community, both online and in real life?

Continued »


May 25, 2014  10:38 AM

OpenStack Creates a Marketplace

Brian Gracely
Canonical, Cisco, Cloud Foundry, HP, OpenStack, Rackspace, Red Hat, Ubuntu, VMware

Maybe you clicked on this link and thought that I was going to write about the OpenStack Marketplace that was launched during the OpenStack Summit in Atlanta. It’s a natural progression for the OpenStack community to drive awareness of applications and services. But that’s not the marketplace I observed in Atlanta last week.

btw – In case you missed OpenStack Summit, here are all the videos – lots of great technical content and discussions.

The marketplace I observed was the reality of the IT marketplace, with vendors beginning to make moves and announcements which show that OpenStack is no longer a dream about “interoperable open clouds”, but instead is just another set of tools, products and APIs that will vigorously attempt to compete for the hearts, minds and wallets of IT professionals, developers and systems integrators.

This isn’t to say that the OpenStack dream is fading. It was alive and well in keynotes which drew analogies to the Star Wars Rebel alliance, fighting against the Evil Empire and the Death Star. But diving deeper, we saw that there is still division amongst contributors and developers about what should really be considered “Core OpenStack”. Will developers still actively and passionately work on projects if they are not considered “core”? Will some vendors try and claim to be “more core” in their distributions or offerings than others? Continued »


May 19, 2014  12:51 PM

The Challenges of Calculating Cloud TCO

Brian Gracely
Cloud Computing, Hybrid cloud, Private Cloud, Public Cloud, ROI, TCO, VMware

The big trend with Cloud Computing providers these days is a focus on “Enterprise” customers and workloads. We see this from AWS, VMware, EMC, Cisco, Microsoft, IBM, HP and others. Regardless of how each of those companies defines “Enterprise”, this means that they are now subject to the rigors of TCO calculations prior to major purchases being made. Welcome to the wonderful world of Enterprise IT.

Now for the fun part: building a TCO model that properly explains how much the offering will cost over a standard period of time – typically 3-5 years.  Having built several of these models in the past, I know that they are always held up to huge amounts of scrutiny. Whether customers think you’re trying to hide or mislead them on costs, or whether customers believe you don’t understand realistic use-cases, no two TCO models are ever the same. They all make assumptions, and they all have to make tradeoffs between completeness of information and usability (e.g. too many inputs, too many calculations). With that in mind, let’s take a look at a recent attempt at a TCO calculator by AWS, comparing their offering vs. traditional data centers.

Let’s begin by asking a few basic questions:

  1. What type of use-cases or applications does the TCO tool allow to be modeled? (e.g. redundancy, security, availability, performance, etc.)
  2. What assumptions does the TCO tool make about how costs are calculated? (e.g. cost of equipment; cost of IT resources; efficiency of IT resources – both people and equipment)
  3. Does the TCO tool make reasonable comparisons between each model? Too many tools compare one approach’s “best” vs. another approach’s “worst” scenario.
  4. Does the TCO tool surface all the assumptions of both approaches, or just those of the tool’s originator?
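To make the completeness-vs-usability tradeoff concrete, here is a deliberately tiny TCO sketch. Every number in it is a made-up, illustrative assumption (hardware prices, admin ratios, utilization), not data from any vendor’s calculator – which is exactly the point: even a toy model forces you to pick assumptions that dominate the answer.

```python
# Toy 3-year TCO comparison. All prices and ratios below are
# illustrative assumptions, not vendor data.

YEARS = 3

def on_prem_tco(servers, server_cost=8000, power_cooling_per_server=1200,
                admin_cost=100_000, servers_per_admin=50):
    """Capex up front plus annual opex; assumes 100% of capacity is paid for."""
    capex = servers * server_cost
    opex_per_year = (servers * power_cooling_per_server
                     + (servers / servers_per_admin) * admin_cost)
    return capex + YEARS * opex_per_year

def cloud_tco(servers, hourly_rate=0.25, utilization=0.6):
    """Pay-per-use; assumes instances run only while actually utilized."""
    hours_per_year = 365 * 24
    return servers * hourly_rate * hours_per_year * utilization * YEARS

if __name__ == "__main__":
    for n in (50, 500):
        print(f"{n} servers: on-prem ${on_prem_tco(n):,.0f} "
              f"vs. cloud ${cloud_tco(n):,.0f}")
```

Change the utilization assumption from 0.6 to 1.0 (or the admin ratio from 50:1 to 200:1) and the comparison can flip – which is why question #4 above, surfacing all the assumptions, matters more than the headline number.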

Continued »


April 27, 2014  9:25 PM

Looking Forward to OpenStack Summit (Spring 2014)

Brian Gracely

With the announcement of the release of OpenStack “Icehouse”, the 9th version in the series of OpenStack releases, it’s now time for the community to focus their attention on the 2014 (Spring) OpenStack Summit in Atlanta, GA. This is an opportunity for the OpenStack Foundation to provide their version of the “State of the Stack”, by highlighting customers using the software and interesting trends in the marketplace. It’s also a multi-day set of design sessions for engineers involved in the “Juno” (J) release, which is targeted for Fall 2014. Last but not least, it’s a massive recruiting and networking event, where every company has a vacancy sign out front for anyone with OpenStack-centric skills or experience.

This year, I’ll be interested to see a few things:

The Marquee Customer

There have been rumors for over a year that a Fortune 10 customer would be announced as a huge OpenStack user, for production applications. That name hasn’t emerged yet. I’ll be looking for big names building internal Private Clouds on OpenStack, and at least a few names using Service Provider clouds powered by OpenStack.

Interoperability

OpenStack has long been touted as the alternative to AWS (and often VMware), with the promise of open clouds for customers. I’ll be looking for progress on overall interoperability, both between cloud providers and between distributions of OpenStack. This is still enough of an issue that Red Hat’s General Manager of Open Hybrid Cloud Programs (Alessandro Perilli, @giano) took to Twitter to highlight his belief that start-up OpenStack distributions (or products) should not be trusted by customers because the risk of start-up failure is too high. Indirectly, this is also a statement about the interoperability of various OpenStack distributions, if he’s claiming that switching costs could run into the millions of dollars. Continued »


April 6, 2014  4:23 PM

Fear and Loathing in Cloudvegas

Brian Gracely

[A better title might be, "How to Drive Yourself Crazy Thinking About Your Career in Tech"]

I wasn’t around when the horse and buggy era ended, but sometimes I feel like I’m having the same conversations that the blacksmiths, saddle makers and stable sweeps had back in the early 1900s. Fast forward a hundred years and this 3rd platform transition has a lot of people wondering what to do next.

Let me step back and put some of this in context. During the late 1990s and early 2000s, the Internet changed everything. Where we work. How we work. The importance of technology for business success. And because of technology, very few of us have the same career path that our parents had before us.

  • Companies are no longer centrally located, with their workforces being globally dispersed.
  • Companies no longer maintain all their own internal services, often using resources available over the Internet to achieve the best mix of price and service.
  • Companies no longer provide long-term pension plans (except some government agencies), so employees rarely stay at the same companies for their entire career.
  • Customers (buyers) no longer get all of their knowledge from their suppliers. The Internet allows thousands of voices and opinions to impact their knowledge and buying decisions.
  • Customers (buyers) are increasingly looking for new sources and consumption models for the technology that will help their businesses.
  • Public Cloud and Open Source Software are moving the margins and control away from vendors and onto parties that can more directly control their future. The supply chain is shifting and with it is the location of value-capture.

So what does this have to do with Cloud Computing? Most people would tend to think that the majority of my days and nights are filled with deep, enlightening conversations about the topic. I wish. In fact, I am spending more and more of my time with people asking what’s next, specifically as it relates to their career in technology.

The Rise of the Rockstars

No phrase in our industry drives me crazier than “He/She is a Rockstar!” Why? A couple reasons:

  • Most bands aren’t made up of dozens of people on stage; instead it’s just a few. So most people are either roadies or groupies.
  • Rockstars travel. A lot. That’s part of the job. The big money is in being close to the customer. No travel, no rockstar.
  • Not all rockstars make the big money. There are plenty that are only headlining at small venues in 2nd or 3rd-tier cities.
  • With a few rare exceptions, Rockstars burn out in a short period of time.
  • Rockstars begin to expect indulgences that lead to unrealistic expectations. Continued »


April 6, 2014  3:24 PM

Cloudy Musings and Random Thoughts

Brian Gracely

The “Five Computer” Theory

In 1943, Thomas Watson reportedly said, “I think there is a world market for maybe five computers.” Then in the early 2000s, when the Web 2.0 era was starting, GigaOm resurrected the idea (sorry, can’t find the link, circa 2001-2003), but this time it was five clouds – Google (Search/Ads), Salesforce (Business), Amazon (Retail) and two others. In the first case, it was well before the PC and Smartphone era. In the second case, it was believed that we’d see massive consolidation of web services as companies struggled to figure out business models beyond freemium or being backed by Google Ads.

Since then we’ve seen the rise of many “giant computers” – AWS, Google Compute Platform (and all the Google properties – Search, YouTube, Gmail, Maps, etc.), Microsoft/Windows Azure (and their other online properties), Twitter, Facebook, WebEx, eBay – as well as the rise of many businesses built on top of those computers – Dropbox, Snapchat, Zynga, Box, Netflix, etc.

We know from the Gartner IaaS MQ that it can be very difficult for smaller companies to catch up to AWS. But this year, I suspect the chart will be somewhat different as Google Compute Platform and VMware vCHS are added, and Microsoft Azure has expanded. Will we begin to see rapid consolidation of the “giant computer” market, with more businesses being built on top of the leaders? Or is it still too early to see how the evolution to the 3rd platform will play out?

Opportunities for DevOps Consultants

My friend Jeff Schneider (MomentumSI, Transcend Computing) recently wrote a nice blog on how to evaluate consulting companies that focus on Cloud Computing.  More and more, I get questions from my friends working at VARs (and vendors) trying to figure out how to get better prepared for DevOps (tools, methodology, skills, etc.). I believe there is a huge opportunity for DevOps-focused consulting companies to target VARs and existing vendors with training that would accelerate the understanding of this space. Some might argue that this could eventually eliminate the consultants’ existing business, but I believe the opposite would happen. They’d see an acceleration of companies looking to adopt the skills/tools/methodologies, and they would continue to be the leading experts. The VARs (and vendors) would bring them into customers at a pace they have never seen before, because of their local presence and existing relationships. The gap between the consulting experts and the average Dev or IT team is huge. There is opportunity there to shrink the gap and significantly expand the market demand.

Can anything slow down AWS?

I get asked this all the time. Watching them continue to innovate and reduce prices, it can often be difficult to imagine a scenario where this happens. But there are plenty of possibilities: Continued »


March 29, 2014  9:03 AM

The New Cloud Marketplace

Brian Gracely

The week of March 23rd, 2014 might go down in history as 1 AG (After Google). With the launch (or re-launch, who can really keep up?) of the Google Cloud Platform (GCP), Google officially created a public utility cloud marketplace.

What? How can that be, considering that there are lots of recognized Cloud providers? While it’s true that all of these companies exist, and many run excellent businesses for specific market segments, most of them aren’t focused on the utility elements of the market.


This really shouldn’t surprise anybody, if you’ve been paying attention. The experienced segment of the #clouderati have been predicting this for quite a while (here, here). Continued »


March 28, 2014  8:54 PM

Federated InterCloud of Hybridness

Brian Gracely

On the 8th day, the Internet was created. And with it came the (basically) seamless ability to access and move information around the big tangled mess of Intertubes. Apparently those TCP/IP inventors were pretty smart. And apparently, ever since that day, or whenever Cloud Day 1 was, people have believed they could coordinate to do something similar for applications, application-containers and all of their associated data. They like to draw analogies to how email works, or how your mobile phone roams from carrier to carrier. And hence the dream of a Federated InterCloud of Hybridness.

This past week, the market rumbled about how Cisco was planning to spend $1B to create the InterCloud. Since $1B is the new starting point for making cloud announcements or investments, this caught people’s attention. It also caught attention because some media misunderstood the news to mean that Cisco was going to get into the Cloud business and compete directly against AWS, GCP, Azure, Rackspace, etc. That’s not actually the case. This is about selling equipment to lots of SPs, with the hope that they will interconnect and allow companies to move freely between them: InterCloud’ing at the network layer, with a means of interconnecting with public cloud APIs.

But Cisco is by no means the first to drive this concept. Let’s take a look at who else has been driving this.

Continued »


