From Silos to Services: Cloud Computing for the Enterprise


May 12, 2013  10:23 AM

New Hybrid Cloud Models Emerging

Brian Gracely

Back in 2007-2008, when the concept of “Private Cloud” began to emerge as a DIY model for evolving IT, there was concern that companies would be locked into a Public-Only or Private-Only decision. Given the maturity of the technologies and IT skills at the time, this created a strategic problem. But then, like a double rainbow made of Skittles and fruity drinks, the concept of “Hybrid Cloud” magically appeared as the unicorn that would provide “the best of both worlds” for long-term IT strategy.

  • “Own the Base and Rent the Spike”
  • “Cloudbursting”
  • “Application migration”
  • “Application Portability”
  • “Avoid Lock-in”

Pick your favorite slogan; they were all there. Throw in the ability to dynamically migrate workloads between physical locations, and there was a frenzy of excitement over the possibilities of a Hybrid environment.

And then reality set in and people began to realize that the limitations far outweighed the possibilities. Limited Bandwidth. Security Concerns. Ownership Issues. Consistency of Operations. Early offerings such as Amazon AWS could provide a VPC (Virtual Private Cloud), but it had limitations (or “wish-lists”). CloudSwitch did some cool things (since acquired by Verizon/Terremark), but it was cloaked in a security story and hence didn’t get as much visibility as it could have from “Cloud Architects” at the time.

It also led to an explosion of definitions of Hybrid Cloud, mostly to match the needs of a vendor selling their HW/SW, or an Enterprise Architect trying to justify their design to their CIO. Either way, it’s evolved to where Hybrid Cloud can mean any mix of offerings or architectures where the resources and applications are both on-premise and off-premise. And if I squint my eyes just right, my borrowed concept of “Cloud Concierge” might even fit one of these definitions.

Fast forward to 2013 and we’re beginning to see a new set of Hybrid Cloud offerings emerge that are backed by both evolving technology and vendors large and small. They tend to fall into these categories:

Continued »

May 5, 2013  8:28 AM

What is OpenStack?

Brian Gracely

The OpenStack Summit took place a couple of weeks ago in Portland to announce the “Grizzly” (G) release and to begin the design activities for the “Havana” (H) release (due in Fall 2013). Much has been written about the technology and vendor trends, including a few posts from my colleagues Aaron Delp and Jeramiah Dooley, which I believe do a good job of highlighting the transition that’s happening in the community and the technology.

Since then, I’ve had a number of business leaders and financial analysts reach out to ask, “What is OpenStack?”. My first reaction is to give them a little bit of history and an overview of the technology landscape. But I’ve quickly come to realize that this isn’t what they are looking for. From their perspective, they really want to understand where OpenStack fits into the broader IT hierarchy and whether it should become part of their strategic thinking. Here’s a sampling of the follow-up questions they tend to ask:

What does OpenStack actually do?

In the most basic sense, OpenStack is a software framework that coordinates the services needed to provide on-demand computing/storage resources for applications. Those services include computing, hypervisors, storage, networking and security. From a user perspective, if OpenStack is implemented correctly, it should just look like a few menus and clicks that let the application owner say, “I need this many resources to start, then I’ll want to grow or shrink that number over time, and I’d also like a few other services to augment my application (backup, geographic resiliency, load-balancing, etc.).” If OpenStack is used as part of a more complex system, those menu items would be replaced with programmable APIs. [NOTE: This same description could be used for almost all “cloud management platforms” in the market today.]
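To make the “programmable APIs” point concrete, here’s a minimal sketch of what that API-driven path can look like, using the python-novaclient library from the OpenStack ecosystem. The auth endpoint, credentials, image and flavor names below are all hypothetical placeholders, not a definitive implementation:

```python
# A minimal sketch of requesting on-demand compute through OpenStack's
# programmable API, using the python-novaclient library. The auth URL,
# credentials, image name and flavor names are hypothetical placeholders.
from novaclient import client

nova = client.Client("2", "demo-user", "demo-password", "demo-project",
                     "http://openstack.example.com:5000/v2.0")

# "I need this many resources to start..."
flavor = nova.flavors.find(name="m1.small")
image = nova.images.find(name="ubuntu-12.04")
server = nova.servers.create(name="app-server-01", image=image, flavor=flavor)

# "...then I'll want to grow or shrink that number over time."
nova.servers.resize(server, nova.flavors.find(name="m1.medium"))
```

The same pattern applies to the other services (storage, networking, load-balancing, etc.); each OpenStack project exposes a similar API that a menu-driven portal simply wraps.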

OpenStack is not a company (e.g., Rackspace), although some companies are using OpenStack as part of their commercial services, and some companies are trying to sell packaged versions of OpenStack.

Continued »


April 23, 2013  4:48 PM

Software-Defined vs. Services-Defined

Brian Gracely

In the fall of 2012, VMware announced their “Software Defined Data Center” strategy. It articulated a new plan to help IT organizations become more <agile, nimble, responsive, frugal, insert buzzword> and evolve toward delivering “IT-as-a-Service”, with software elements serving as the critical building blocks for infrastructure (VMs, Storage, Networking, Security). It is being targeted at the same buyers that made VMware vSphere purchases in the past – centralized IT organizations and IT infrastructure teams. It’s a strategy that plays to their existing installed base of hypervisors, but it leaves several VMware experts asking “Does VMware know Cloud is all about Developers?“. The “Software-Defined” mantra has since been picked up by many companies in the IT industry as a way to refresh their products or align to their potential buyers.

In 2006, Amazon launched the first of their AWS (Amazon Web Services) offerings, EC2 (compute) and S3 (storage). AWS was targeted at development organizations looking to change the pace and economics of how applications were developed for the web. Since then, they have rapidly grown the number of services to include databases, long-term storage, DNS, CDN, queuing and many other capabilities. The quantity of services has grown to the point where many people ask if AWS is still an IaaS (Infrastructure as a Service) offering or has moved up to become a PaaS (Platform as a Service). Where it fits into the NIST definition seems to be irrelevant to the architects of AWS, who are focused on delivering a set of scalable services for developers looking to build next-generation applications (web, mobile, analytics, etc.). It’s this structure that recently had Jeff Sussna (@jeffsussna) writing “Services-Dominant Logic: Why AWS is So Far Ahead“.
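As a rough illustration of that developer-centric, API-first model, here’s a small sketch using the boto library, the common Python binding for AWS in this era. Credentials are assumed to be configured in the environment, and the bucket name and AMI ID are hypothetical placeholders:

```python
# A sketch of consuming AWS storage and compute as on-demand services,
# using the boto 2.x library. The bucket name and AMI ID are hypothetical;
# credentials are assumed to be configured in the environment.
import boto
import boto.ec2

# Storage as a service (S3): create a bucket and store an object.
s3 = boto.connect_s3()
bucket = s3.create_bucket("example-app-assets")
key = bucket.new_key("hello.txt")
key.set_contents_from_string("Hello, cloud.")

# Compute as a service (EC2): launch a virtual server on demand.
ec2 = boto.ec2.connect_to_region("us-east-1")
reservation = ec2.run_instances("ami-12345678", instance_type="t1.micro")
print(reservation.instances[0].id)
```

The point isn’t the specific calls; it’s that a developer can assemble compute, storage, databases, DNS and queuing from the same service catalog with a credit card and a few lines of code.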

While both of these approaches are being marketed under the umbrella term “cloud computing”, it’s becoming increasingly clear that they target very different groups with very different value propositions.

Continued »


April 14, 2013  4:35 PM

What Do Enterprises Expect from OpenStack?

Brian Gracely

As OpenStack has begun to mature over the past 18 months, there has been some debate amongst the leading developers about the focus of the projects. On one side are those who believe that OpenStack is competing with VMware. On another side are those who believe that OpenStack is an alternative to Amazon’s AWS. Still others focus on a group of services that could create an open system interconnecting many clouds.

One of the powerful aspects of an open source project is that developers or companies can take the code and use it any way they choose. Target a certain market. Target certain use cases. Target certain business models.

And in return, users of the software can decide what they want the software to do. They can modify the software if they have a specific need. They can buy packaged versions and use the embedded functionality.

For a project like OpenStack, which is maturing during a time when the market is already full of competing offers, it will often be compared to an existing expectation (or experience) that users have of other products/services.

An example of this is a simple question I posted on Twitter yesterday. In the “Grizzly” release, support for the VMware ESXi hypervisor has been added. So I asked:

[Screenshot of the tweet asking about OpenStack’s new VMware ESXi support]

The reason for my question is that I’ve heard a number of Enterprise IT organizations say that they are planning to explore OpenStack in the coming year for their Private Cloud (or Virtualized Data Center) environments. Given that VMware vSphere has 60-80% market share in that segment, many of them are also curious about reusing existing investments in hypervisor licenses, and Live Migration has become a standard capability for Enterprise IT organizations and legacy applications.

Continued »


April 11, 2013  11:21 PM

IT Evolution follows Historical Patterns

Brian Gracely

This past week, a colleague asked a commonly heard question these days:

[Screenshot of Jason Edelman’s tweet asking how cloud computing and open source will impact VARs]

Jason Edelman works for a well-known VAR (Value-Added Reseller), with a deep technical focus on emerging networking technologies (SDN, OpenStack Quantum, Open vSwitch, etc.). Not only is he trying to stay ahead of the technology evolutions, but he’s also trying to forecast how the changes in consumption models (e.g., cloud computing) and open-source (free, paid-support, etc.) might impact his company.

To give Jason some guidance, I sent him a couple links (here, here) that seemed relevant to VARs. It seemed like a simple way to share some knowledge in 140 characters.

But the more I thought about it, the more his question hit on much larger system-level evolutions. The good news is that IT is like many industries, and we can look to history for how it will likely evolve.

Let’s start with two very good reads – both sources should be on everyone’s reading list:

  • Simon Wardley’s blog: Simon (@swardley) is a scientist with a deep understanding of technology, economics and industry modeling. Just start by reading the “Popular Posts” on the right side of the page and you’ll quickly realize that the changes we are seeing align to models that many industries have seen in the past. Simon is an excellent follow on Twitter, and has been a guest on the podcast.
  • Porter’s Five Forces: The classic strategy model provides useful frameworks for understanding supply-chains, competitive strengths and weaknesses, buyer vs. seller leverage and competitors.

Continued »


April 8, 2013  11:12 PM

Is “Build Your Own Cloud” the new IT Gym Membership?

Brian Gracely

Every year, as the New Year’s ball drops and people around the world make their resolutions, health clubs and gyms fire up their marketing machines. Shed those unwanted pounds! Get in great shape! Get your swimsuit body ready for Spring Break!

All people need to do is show up at their gym and they’ll quickly become the envy of their friends and neighbors. Just buy the right clothes, the right shoes and the right electronics. Lured by the promise of a smaller waistline, greater flexibility and improved health, customers line up, checkbooks in hand, for an improved life.

The first couple of gym visits go OK. It’s painful, but they lose a couple pounds. They believe a lifestyle change is possible. Then February comes along, and work or travel or family make it tough to get to the gym. The weight loss plateaus, because losing the next 10-15 lbs would require both gym visits AND dietary changes. Being able to look like that athletic guy or girl doing extra reps each day would require a full-on lifestyle change. And by May or June, the enthusiasm is gone and they’ve fallen back into their old ways. Sure, they visit the gym from time to time, but getting significantly better is a lot more work than they expected. Maybe next year they’ll follow through with their goals.

Sound familiar, IT folks? Even though we continue to see studies claiming that Enterprise IT organizations are prioritizing their Private Cloud build-outs, the reality is that successful deployments are far fewer than expected, and they’re taking much longer than pontificated.

But how is this possible? You’ve bought all the latest hardware from vendors claiming to have the right “journey to cloud”. You saw some initial cost savings and faster provisioning times for virtual machines. Things were feeling good, but then something happened. Your cost savings began to plateau, and your users continued to ask for services faster than you’ve been able to deliver with the new “cloud”.

Continued »


March 23, 2013  3:43 PM

Big Data Thoughts from Structure:Data

Brian Gracely

This past week I had the opportunity to attend the GigaOm Structure:Data conference in NYC. Unlike many industry conferences, which are sponsored by a single vendor or have an agenda dictated by a specific technology, this show did an excellent job of bringing together a broad mix of technologies, vendors, customers and thought leaders. While the hype of the conference was “Big Data”, the technology and its deployability are still in the early stages for all but the top 1-2% of the industry. There is a summary from GigaOm here, as well as broad media coverage. Going back through my notes, I found the following thoughts most worthy of follow-up.

  1. Big Data is Difficult.
  2. Data Huggers are the New Server Huggers – Company after company I spoke with highlighted that existing organizational structures are their #1 challenge to Big Data strategy success. Organizations love their data. Organizations don’t love sharing their data with other groups, even within the same business.
  3. Forget the Economy, Big Data is the 1% Club – While Business Intelligence and Data Warehousing have been around for quite a while and are deployed at many companies, the set of companies that are able to leverage the newer technologies (Hadoop, NoSQL databases, R, etc.) to unlock business insight in real-time is still extremely small.
  4. Big Data != Fast Data – It’s becoming clear that there is a big difference between Big Data and Fast Data, both in technologies and use-cases.
  5. Hadoop is the Foundation, but beyond that… – While the Hadoop market is competitive (Apache Hadoop, Cloudera, Hortonworks, IBM, MapR, Oracle, Pivotal and SAP are all trying to sell a Hadoop-centric product), the real wars will be over the tools, frameworks and extensions that are layered on top of Hadoop.
  6. “Telemetry” will make its way into your vocabulary – Whether it’s called “Internet of Everything” or “Sensor Data” or something else, you will begin to hear a massive push about how telemetry data will be attached to people and machines to drive real-time fast data and unlock new markets.
  7. Connecting to the legacy is key – Many companies are focused not only on integrating legacy datastores into Hadoop-based “Data Lakes” or “Data Reservoirs”, but also on integrating existing SQL tools and skills into a Hadoop environment. The SQL aspect attempts to overcome the shortage of Data Scientists and extend Big Data out to more generalist business users.
  8. Data Scientists are in massive demand – This has been highlighted before, but there is still a massive shortage in our industry. Not only is there demand for people to analyze the data, but also massive demand for people who can set up and run Hadoop environments and integrate legacy systems with Hadoop.
  9. Huge Opportunities for Big Data On-Demand – While many Cloud Service Providers offer various types of on-demand IaaS resources or on-demand Database services, the opportunity to experiment with Big Data or Fast Data use-cases is massive. With setup being (still) complicated, there are huge opportunities for Cloud SPs to expand their offerings to be turn-key, at various sizes, to accelerate the time to analysis and action.
  10. Bandwidth is Still a Problem – While Big Data might be a big deal, it still hasn’t overcome that pesky little physics issue – the speed of light. It will be interesting to watch how the location of data (on-premise vs. in public clouds) shapes the industry over the next 3-5 years.
  11. Get familiar with Open-Source Frameworks – Whether you’re deploying with Puppet or Chef, coordinating resources with ZooKeeper, or developing tools that leverage Pig or Hive, it’s time to start familiarizing yourself with open-source frameworks and community-based knowledge sharing (see the sketch after this list). Big Data (or Fast Data) is attempting to solve challenges that are beyond a single organization, so using the tools and frameworks of the community will help accelerate your chance at success.
  12. Your Data is Your Next Product/Market – It was interesting to hear how many side conversations involved companies that currently possess massive amounts of industry-specific data and are now looking to unlock (and sell) it to other industries. For example, intelligent weather data could be extremely valuable to dozens of companies (finance, insurance, farming, transportation, grocery stores, airlines, etc.) that may be able to make better decisions from data that was never previously available to them.
  13. Big Brother Knows About You – You’re welcome to keep fooling yourself into believing that you have a level of privacy or information security. Think again. Every device you interact with, every transaction you make and every location you visit is being tracked, correlated, analyzed and acted upon by someone.
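As a small taste of what “getting familiar” looks like in practice, here is the classic word-count exercise written for Hadoop Streaming, which lets you write MapReduce jobs in any language that reads stdin and writes stdout. This is a minimal sketch; the file name and the paths in the usage note are hypothetical.

```python
#!/usr/bin/env python
# wordcount.py -- a minimal Hadoop Streaming word-count sketch.
# Run with "map" as the argument for the mapper phase and "reduce"
# for the reducer phase. Hadoop sorts mapper output by key, so the
# reducer sees all counts for a given word on consecutive lines.
import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t%d" % (word, 1))

def reducer():
    # Sum consecutive counts for each word and emit the totals.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.strip().split("\t")
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, count))
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print("%s\t%d" % (current_word, count))

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

You can test it locally with `cat input.txt | ./wordcount.py map | sort | ./wordcount.py reduce`; on a cluster, you would submit it through the hadoop-streaming jar with `-mapper "wordcount.py map"` and `-reducer "wordcount.py reduce"` (paths and file names illustrative).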


March 23, 2013  12:12 PM

Understanding Cloud Computing Forecasts

Brian Gracely

As the market for Cloud Computing products and services evolves, the stakes for success or failure (for companies, vendors, integrators, etc.) continue to rise. With that in mind, the amount of research coming to market will continue to grow. For anyone analyzing this data, or using it to help make future strategic or tactical decisions, it’s important to keep several factors in mind. Being able to read between the lines and understand what might be below the surface can make the difference between leading the market, spotting trends early, or simply following the crowd.

  1. Audience – Who is the target audience of the survey? Are they IT professionals who currently work in IT operations or IT architecture, or are they application developers? It’s especially important to understand whether they come from IT, or from the groups trying to work around IT.
  2. Area of Focus – Do the survey results come from people focused on existing IT systems or future-looking systems (e.g., Mobile, Big Data, SDN, Automation, Open-Source, etc.)? IT silos can create unique viewpoints about what problems exist and how they can be solved.
  3. Decision-Making / Budget-Owner – Which group(s) within the organization have responsibility for IT budget? Which groups are able to obtain funding for IT services outside the existing IT organization?
  4. Length and Scope of Projects – Is the research focused on the length or scope of projects? Long-term projects have a completely different framework (planning, strategic alignment, project management, budgeting, etc.) than short-term projects, which are primarily driven by immediate needs.

Continued »


March 17, 2013  10:01 AM

Bringing Big Data to Big Projects

Brian Gracely

Every day we get bombarded by technical acronyms (BYOD, CoIT, MDM, APIs, IaaS, etc.) and vendor-speak about new ways that IT can bring agility to the business. IT organizations need to Mobile-enable their workforce to harness the power of Big Data to uncover new insights that will unlock differentiation and agility. And after a while, the market begins to tune out because the noise-to-signal ratio gets overwhelming.

Too often we hear technology vendors say that if all IT organizations would just operate like Google or Facebook or Twitter, then IT costs would be reduced and business productivity increased. Except this leaves many companies saying that they don’t have a “deliver digital ads” problem, so how does that approach make sense for them?

Two years ago, I was introduced to Christian Reilly (@reillyusa), who is part of the IT organization at construction leader Bechtel. Bechtel had been looking at how to solve some massive business challenges (global workforce, complex projects, internal and external employees, etc.) by better leveraging their technology investments. It required them to transform how they thought about technology, as well as to implement a new set of technologies to enable new applications. As I quickly learned from Reilly, this set of changes wasn’t something they could buy shrink-wrapped in a box; rather, it was a multi-year transformation that involved people, process and technology changes.

It had been a while since I last caught up with Reilly, but this past week I saw a very interesting video that Bechtel jointly created with Apple about their iPad rollout. While the video is produced in typical high-production-value Apple fashion, under the covers it highlights the implementation of tons of very interesting technology. Their solution is not being used to serve ads or update a social network; instead, it is focused on things that aren’t sexy but are critical for Bechtel to solve their business challenges and bring value to their customers. Let’s take a look at some of the things behind the scenes.

Continued »


March 11, 2013  6:11 PM

Cloud Computing – Platforms vs. APIs vs. Tools vs. Features

Brian Gracely

One of the more interesting aspects of public Cloud Computing, beyond all the elements of on-demand (pricing, scaling, etc.), is the number of add-on services that have emerged from the ecosystem to add value around core platforms like Amazon AWS, Rackspace, Azure, Google Compute Engine, etc. Some of these services include Boundary, New Relic, enStratius, Rightscale, Cloudability, ShopForCloud, Cloud Checkr, Newvem, Cloudyn, CloudPassage and many others. These services allow customers not only to fill in gaps in the offerings from those platforms, but also to consume these add-on services in the same on-demand manner as the underlying IaaS, PaaS or SaaS platforms.

But an interesting thing tends to happen with software platforms, both on-premise and in the cloud. Over time, they tend to eat their ecosystems. We’ve all experienced it with platforms such as Windows, where things like TCP/IP stacks, web browsers, media players and all sorts of other functionality used to require 3rd-party add-ons. And now we’re beginning to experience it with Cloud Computing platforms. We saw it over the past couple of weeks with announcements from Amazon AWS – the OpsWorks and Trusted Advisor services. It’s a classic case of the platform provider wanting to deliver an end-to-end experience to the customer, as well as adding stickiness to the platform. For the 3rd-party tools vendors, it becomes an inflection point where they have to decide if they want to compete on price, features, or unique technology, or just fold up shop. We discussed some of this on The Cloudcast Eps. 77 (starting at the 19:30 mark).

So if you’re a customer of any of these services, what should you do?

Continued »


