From Silos to Services: Cloud Computing for the Enterprise


November 22, 2012  12:21 AM

“Lock-In”: More than just Vendors, Standards and Open-Source

Brian Gracely

A funny thing happened on the way to the next wave in IT technology – Cloud Computing – people seemed to want to combine the portability of a time machine with the freedom-to-change spirit of the 1960s. Not only do people want the benefit of unlocking IT service delivery from capital expense budgets, but more and more they also want the flexibility to move the source of those services to any location or any provider.

Then reality sets in and they are faced with the challenges of Data Gravity, multiple variations of APIs, multiple variations of projects, and other nuances that highlight that we’re still quite a long way from being able to switch between any cloud computing service as simply as we plug into any Ethernet port.

So given that we’re still in the early days of this next wave, what is an appropriate level of concern to have for “lock-in” vs. focusing on bringing new value to the business via technology? And are there specific areas where lock-in might be more of a concern than others?

In the past, we often looked to standards bodies (e.g., IEEE, IETF, W3C) to define interoperability. But in today’s world, there are as many best practices being defined in open-source projects as in any standards body, so companies now need to decide whether they value pace-of-change over standardization. IT organizations have traditionally leaned towards risk-mitigation as the higher priority, which is often in conflict with their business-side counterparts who want results now and aren’t thinking about the management and maintenance of legacy systems years into the future.

In some cases, we’re seeing levels of standardization being defined by loosely organized committees (e.g., Open Data Center Alliance; podcast) or by groups of providers looking to establish market baselines (e.g., Cloud Foundry Core; podcast). Some companies view efforts like this as a good thing, since it moves them farther along towards interoperability, but there are still some in the industry who are never satisfied with the level of “openness” or who suspect ulterior motives from committees or vendors.

With bleeding edge technologies, we’re often faced with the challenge of no existing standards since the technologies are so new that a focus on standardization has not been anyone’s priority. Technologies like Software-Defined Networking (SDN) fit this category today.

Ultimately the answer to the trade-off between complete standardization and keeping up with change should be driven by business needs. Waiting for a piece of technology to become completely standardized or be “completely open” means trading off business opportunity now. Demanding the freedom of portability, whether it’s at the application or provider level, typically means giving up some of the advanced functionality that made the technology unique for the business needs at the time.

Inevitably, change creates costs: opportunity costs, capital costs or operational costs. Whether that change is high in the stack (applications, development frameworks, middleware) or down in the technology weeds (management software, operational process, worker retraining, etc.), there is always going to be a cost. It’s critical that both the business and technology “costs” are evaluated and considered when trying to determine whether to focus on standardization or openness. Neither comes for free.

November 12, 2012  11:00 AM

Why the Hype about SDN?

Brian Gracely

So far this year, we’ve seen a number of major announcements that have led with the term “Software-Defined Networking” (SDN).

After years of being a critical, albeit some would say boring “plumbing” technology, the industry seems to be abuzz about networking again. “Networking is sexy”. But why?

First, let’s start with the economics and then we’ll move to the technology.

There’s an old saying that “one man’s profit margin is another man’s business model.” Loosely translated, many companies would like a piece of the business model that Cisco has been successful with for 20+ years, at very high margins. This strategy kicks in when we have major technology transitions (1Gb to 10Gb Ethernet; data center consolidation; Big Data applications, etc.).

Companies that tried to compete with them using custom-ASIC-based solutions have failed. It’s too expensive to build your own ASICs unless you have the scale of Cisco. Other companies are trying to leverage “merchant silicon” (Broadcom, Intel) and add differentiation via software. This can reduce the acquisition cost of the hardware, especially as merchant silicon moves from System-on-Chip (SoC) designs to full platform designs.

But the real cost of networking has always been in the “intelligence”. In the past, this intelligence was always tied to specific vendor hardware. But this trend is beginning to change. SDN is about moving some (or all) of the intelligence into independent software functions that don’t reside on the networking hardware. And when the intelligence moves into software, it changes the economic model because it now becomes about flexible licensing or open-source instead of purely capital expense (CapEx) purchases.

So SDN is about changing the economics of networking, whether that’s for a new networking vendor or for companies that are looking to change the percentage of IT spending that goes to networking hardware. But what about the technology side of the equation? Does SDN really do something different from the “virtualized” networking technologies of the past?

This is where the industry is still debating many details and definitions. To start, we’ve had networking functionality in software for quite a while. Beyond the jungle of “networking appliances” (network functionality running on an x86 platform), there have been “virtual switches” running in hypervisor-enabled servers for almost a decade. VMware vSwitch, Open vSwitch, and Cisco Nexus 1000v are just a few of those “products”.

New networking technology wouldn’t be hype-worthy if it didn’t involve some new protocols. Protocols are the lifeblood of networking, so let’s add a few new ones to the stew. How about OpenFlow, VXLAN, NVGRE, STT, and LISP, just to name a few (note: you can thank me later for the great sleep you’ll get after reading all those drafts). These new architectures and protocols allow IT organizations to overlay new logical networks on top of the existing physical networks to enable things like VM mobility. They also offer the ability to separate the control plane (“the intelligence”) from the data plane (“the packet plumbing”). In theory, this means that the control plane software could come from one vendor (or open-source project) and the data plane hardware from another vendor, which opens the door to customization and flexibility.
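
To make that control plane/data plane split a little more concrete, here is a minimal sketch in Python: the “intelligence” is just an ordinary program holding a forwarding policy, and it pushes OpenFlow rules down into an Open vSwitch data plane. It assumes a host with Open vSwitch installed; the bridge name and the MAC-to-port table are hypothetical placeholders.

```python
# Minimal sketch of SDN's control/data plane split: the forwarding policy
# lives in ordinary software, and the switch (here, Open vSwitch) just
# forwards packets according to the rules pushed into it.
# Assumes Open vSwitch is installed; the bridge name "br0" and the
# MAC-to-port table below are hypothetical placeholders.
import subprocess

# The "intelligence": which MAC address should be reachable via which port.
MAC_TO_PORT = {
    "aa:bb:cc:00:00:01": 1,
    "aa:bb:cc:00:00:02": 2,
}

def push_flows(bridge="br0"):
    """Translate the policy into OpenFlow rules and push them to the switch."""
    for mac, port in MAC_TO_PORT.items():
        flow = "priority=100,dl_dst=%s,actions=output:%d" % (mac, port)
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow])

if __name__ == "__main__":
    push_flows()
```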

But ultimately, the question is whether the hype is really warranted. Does SDN solve problems that were previously unsolvable, or solve them in a way that makes a significant economic change to how IT delivers services to the business? This is an area that is still open for debate, and it will be one of the uber-hot topics in 2013. If you’re running a greenfield project, a very large Enterprise or a warehouse-scale data center, then SDN is probably near the top of your list to explore for both the technology and the economics. If you’re a developer-centric IT organization looking to create better synergy between your development and operations groups by using more automated software technologies, then you’re probably exploring SDN. If you’re a company like VMware that wants to compete with public cloud services like AWS and believes that all the functionality around a VM (or application) must be centrally controlled and automated, then SDN is a top-of-mind strategy for survival. But for the rest of the industry, SDN is a new shiny rock that might be gold, or might just be a distraction from time spent on other business-impacting technologies.

Software is beginning to eat the world and the evolution of the network is a natural extension to this trend. But how fast it will happen still has a long way to play out.


November 5, 2012  11:00 AM

Two Views of OpenStack – From Beginners to Experts

Brian Gracely

This past week I had the opportunity to have two very distinct conversations about OpenStack, the evolving open-source cloud computing project. One from the beginner’s perspective, and another one from the expert’s perspective. Or another way to look at it, one from the Enterprise perspective and one from the open-source community perspective. The first conversation was with Scott Lowe, a recognized VMware expert and noted author, who is beginning to shift his technology focus from traditional Enterprise tools to more emerging infrastructure technologies. The second conversation was with Jesse Andrews, one of the founding developers of the OpenStack project and an expert in deploying open-source software. There were several key points that I took away from these two conversations:

  1. Learning Curves  – With any new technologies, there are distinct learning curves to grasp new concepts, uncover unexpected problems (or deficiencies), and come up with best-practices for solving common problems. From the Enterprise perspective, OpenStack involves an understanding of different terminology from what is used by vendors like VMware, managing through a lack of documentation and getting a working environment up and running from the bits of multiple projects. From the open-source perspective, it was the evolution of the project from simple compute and storage projects to a series of loosely interconnected projects that needed to incorporate complex functionality for identity, networking, authentication and storage.
  2. Loosely Coupled vs. Tightly Coupled Architectures – While OpenStack contains the word “Stack”, which some might take to mean that all the elements should be deployed together, this is not necessarily the case. Unlike some other cloud computing projects or products that are either a single entity, or tightly coupled and dependent, OpenStack is designed to be loosely coupled. While it is possible to deploy all the elements of OpenStack to deliver a set of services, it is not required*. In fact, some early users have deployed certain OpenStack elements as standalone services (e.g., Storage as a Service; see the sketch after this list). While this loose coupling means that it could be more complex to initially test or deploy, it also creates the possibility of greater extensibility with external products and services in the future. It’s a classic design trade-off that will be interesting to watch as more companies decide which cloud computing architectures best solve their business challenges.
  3. Virtual Data Center vs. Cloud Computing – When you talk to people with experience in deploying hypervisor-centric architectures, such as VMware, you often hear them talk about deploying a “Private Cloud”. These architectures take existing applications, place them on a Guest OS within a Virtual Machine (VM), and then leverage the capabilities of the hypervisor to provide various levels of availability for the application. This aligns very well with traditional Enterprise architectures, where applications expect the infrastructure to be robust and highly available. On the other hand, when talking to people deploying open-source cloud computing software, they are often looking to do something with an application that wasn’t possible before. Their architectural mindset is that the application should be able to survive infrastructure failures. They are building new applications and shifting the failure profile from infrastructure to application, expecting the infrastructure to use lower-cost hardware that can fail more frequently. This group often calls the hypervisor-centric approach “Virtual Data Center”, and the application-centric approach “Cloud Computing”.
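
As a quick illustration of that standalone, “Storage as a Service” idea, the sketch below uses the python-swiftclient library to talk to OpenStack Swift on its own, with no other OpenStack components involved. The endpoint, account and key are hypothetical placeholders, and the exact connection arguments will vary with how authentication is set up.

```python
# Rough sketch: using OpenStack Swift by itself as standalone object storage.
# Requires the python-swiftclient library; the auth URL, user and key below
# are hypothetical placeholders, not a real deployment.
import swiftclient

conn = swiftclient.Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",  # placeholder endpoint
    user="myaccount:myuser",                            # placeholder credentials
    key="mysecretkey",
)

conn.put_container("reports")                                   # create a container
conn.put_object("reports", "q3-summary.txt", contents="hello")  # upload an object
headers, body = conn.get_object("reports", "q3-summary.txt")    # read it back
print(body)
```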

I’ve been following the evolution of OpenStack for the past couple of years. It’s interesting to watch as technology experts from various backgrounds begin to explore whether these technologies make sense for their business. In some cases they are making an “all-in” bet, while others are exploring to see if it offers functionality they can’t already get from commercial offerings. I’m curious to hear if our readers have interest in OpenStack and how they view this emerging cloud computing project.

* OpenStack “distributions”: TryStack (online), DevStack, Rackspace “Alamo”, Piston Cloud “Airframe”, Ubuntu OpenStack, SUSE OpenStack, Red Hat OpenStack, Debian OpenStack, Fedora OpenStack, Cisco OpenStack, Cloudscaling (apologies if I missed any…)


October 29, 2012  9:00 AM

What 21st Century Bit Factories can learn from 20th Century Widget Factories

Brian Gracely

I’ve written before that in today’s digital economy, I consider data centers to be “21st Century Bit Factories”.  They are this century’s engine that drives knowledge, commerce, communications, education and entertainment. And as these modern factories become critical elements in our global economy, it’s important to look back at the evolution of these environments to see how companies can leverage public and private data centers (those “cloud” things) to drive greater business opportunity and competitive advantage.

[NOTE: I’d recommend watching the two videos from Simon Wardley before continuing, as it’s important to understand that what we’re seeing in the evolution of computing is not unique to the computing industry. It happens for most products and markets over time.]

I believe there to be four critical elements to consider in looking at the parallels between 20th and 21st century factories:

  1. Original Factory Designs – Flexible Containers and Assembly Lines
  2. Supply Chains – Getting In and Out of the Factory
  3. Evolving Factory Operations – Automation and Quality Improvement
  4. Evolving People-Centric Process – Skills and Feedback Loops

Just as manufacturing companies leveraged Deming’s teachings to drive process efficiency for the cost and quality of their physical products, so are modern data center operators applying similar principles to reduce cost and improve response times for their services. PUE is replacing JIT as the input metric. DevOps is replacing Lean as the organizing structure. Fail-fast is replacing Six Sigma as the quality measurement.
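
For readers who haven’t run into it before, PUE (Power Usage Effectiveness) is simply the ratio of total facility power to the power delivered to the IT equipment. A back-of-the-envelope sketch (with made-up numbers, not measurements from any real data center) looks like this:

```python
# Power Usage Effectiveness: total facility power divided by IT equipment power.
# A PUE of 1.0 would mean every watt reaches the IT gear; the inputs below are
# illustrative placeholders only.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# 1500 kW into the facility, 1000 kW reaching servers/storage/network
print(pue(total_facility_kw=1500.0, it_equipment_kw=1000.0))  # -> 1.5
# i.e. 0.5 W of cooling/power-distribution overhead per watt of IT load
```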



October 22, 2012  11:00 AM

Consumerization Killed the IT Rockstar

Brian Gracely

For many decades, there was a level of respect afforded to people who had put in thousands of hours of work (or training) to achieve a level of expertise in a given field. Even if you weren’t an outlier, you possessed unique knowledge that enabled business success. But in today’s world, whether the expert is a doctor, a lawyer, a chef, or a homebuilder, that level of respect is fading because of all the online resources available to everyone. More often than not, people now use the web to educate themselves before engaging with any of those experts.

While this may be frustrating across many professions, it creates a particular challenge for IT professionals. Unlike doctors, who might have to hear about a patient’s research into an ache or pain, many IT professionals are having to manage expectations from end-users who have set up networks, storage, or applications for themselves. They don’t want your opinion, they just want you to enable their needs. They don’t want you to “enhance” productivity using traditional IT models, they want you to get out of the way so they can be productive in a way that best suits their business needs.

Not only are they familiar with SaaS applications (e.g., WebEx, Salesforce, Concur) used for day-to-day business operations, but many of them use “consumer” SaaS applications (e.g., Dropbox, Google Apps, Facebook) in the other aspects of their life. In addition to those cloud computing applications, it’s highly likely that many of them are the unofficial “IT Administrator” of their home network of media devices, home routers and storage devices. They have some understanding of how the technology works, and they have experience with online support forums. In their minds, traditional IT is the equivalent of a broadband ISP, providing network access and bandwidth…and not much else.


October 15, 2012  9:01 AM

Five New Skills Needed for Enterprise Cloud

Brian Gracely
Future-Past image via Shutterstock

The last few years have been filled with discussions about how IT organizations will need to change, adapt or acquire new technology skills as they look to deploy cloud computing services for their companies. Technologies like virtualization, converged infrastructure and new automation tools have been blurring the lines between existing technology groups, forcing IT organizations to reevaluate how to best evolve architectures and operations.

1 – Cloud Concierge – I’ve written about this before, but I believe that it’s going to become an even more critical role as new parts of the business look to leverage technology in new ways. This could be mobile applications; needs for on-demand resources for short-term projects; or integrations between multiple SaaS services. It’s one thing to say that IT and the Business groups need to communicate better, but it’s much more difficult to expect the Business groups to understand the technical complexities. Being able to provide a centralized service, similar to a hotel concierge, that can help direct groups to the best resources will become more and more important as the boundaries around IT and technology resources expand and dissolve.

2 – Cloud Licensing Translator (Category: Open-Source) – Open-source tools, cloud platforms and developer platforms continue to rapidly accelerate the pace at which new services can be delivered to the business, either directly (in-house IT) or indirectly (through cloud service providers): automation tools like Chef and Puppet; cloud platforms such as CloudStack and OpenStack; development frameworks such as Spring and Ruby on Rails; and Big Data environments using Hadoop, Pig, and Hive. Maybe you think you understand the difference between the community (free) version and the supported (paid) version, but do you understand how the various open-source license options could affect your decisions? Do you understand the differences between GPL and Apache license structures? Having this knowledge may prevent your business from getting caught in a costly or non-interoperable situation in the future.

3 – Cloud Markets Trader or Cloud Cost Management – Whether it’s a large or small portion of the technology resources the business consumes, there is a good chance that some percentage will reside in public cloud services (e.g., Amazon AWS, Salesforce.com, Microsoft Azure, Rackspace Cloud). Understanding how these services are priced, both now and for future usage, will become a critical aspect of making sure projects stay on budget. Today this is complicated because there is no consistent unit of measure to assign pricing (e.g., a barrel of oil or an ounce of gold). Being able to work between the CFO, CIO and Business groups with clear, transparent pricing requires a mix of technical and financial skills.
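
To show what that role looks like in practice, here’s a rough sketch of normalizing a few differently priced services into a single monthly estimate. Every rate below is a hypothetical placeholder rather than any provider’s published pricing; the point is the normalization, not the numbers.

```python
# Rough sketch of cloud cost normalization: convert services priced in
# different units into one monthly estimate. All rates are hypothetical
# placeholders, not published pricing from any provider.

HOURS_PER_MONTH = 730  # common approximation used for monthly estimates

OFFERINGS = {
    "provider-a-small-vm": {"unit": "instance-hour", "rate": 0.08},  # hypothetical
    "provider-b-small-vm": {"unit": "instance-hour", "rate": 0.09},  # hypothetical
    "provider-a-storage":  {"unit": "GB-month",      "rate": 0.10},  # hypothetical
}

def monthly_cost(name, quantity):
    """Estimate the monthly cost of `quantity` units of a given offering."""
    item = OFFERINGS[name]
    multiplier = HOURS_PER_MONTH if item["unit"] == "instance-hour" else 1
    return quantity * item["rate"] * multiplier

# e.g. four small VMs plus 500 GB of object storage for one month
total = monthly_cost("provider-a-small-vm", 4) + monthly_cost("provider-a-storage", 500)
print("Estimated monthly spend: $%.2f" % total)
```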

4 – Cloud Services Marketing – Every great technology owes part of its success to the persuasive evangelists who helped the marketplace understand its value and desire its use. Being able to translate that evangelism to internal projects and groups is as much about salesmanship and marketing as it is about ROI or technical value. Business groups and end-users have dozens of choices in today’s world, so helping them see the value in a new cloud service is as important internally as it is externally in a competitive marketplace.

5 – Cloud Data Analyst – Sometimes this is called “Data Scientist”, but it’s probably more of a shared function between the Data Science and Business Analyst groups. Whether or not business leaders want to admit it, almost every business today is a data-centric business. Being able to bring together the evolving technologies, internal and external data sources, and the ability to connect it back to business models and customer impact is the difference between market leaders and market laggards.

How quickly these roles will emerge, or evolve, will be a factor of how quickly IT organizations are able to change to satisfy business needs. But for IT professionals looking for the next skills and opportunities to explore for career advancement, I’d suggest this list contains a set of future rock stars to help drive your business to new levels of success.


October 15, 2012  9:00 AM

Hello World

Brian Gracely

I’ve been fortunate to write a few guest posts (here and here), but the editors at IT Knowledge Exchange are now letting me loose with my own blog. So Hello World! This blog will be covering cloud technology, data center architecture and the shifting role of IT. It may also include links to my cloud computing podcast, for those of you who prefer listening to content instead of always reading. My goal is to get these out every Monday morning, to bring some unicorns and rainbows to the beginning of your week.

I’m looking forward to some great conversations with this active community of IT experts. Comments, discussions, disagreements and suggestions are always welcomed and encouraged.


