From Silos to Services: Cloud Computing for the Enterprise


December 8, 2013  6:11 PM

The Ebbs and Flows of SDN

Brian Gracely

After 10+ years of limited change in the networking industry, we entered 2013 with a great amount of fanfare and promise around a new set of networking technologies to address the challenges brought about by server virtualization and web-scale data centers. VMware's $1.2B acquisition of Nicira, considered one of the leading pioneers of SDN, got people thinking that maybe leading architectures and market share could change over the next few years. It also signaled that major companies believed in this new networking paradigm and were willing to put significant dollars behind the SDN trend.

Then things got interesting…

Big Switch Networks had attempted to capture early market share by open-sourcing its Floodlight controller, but this was muted when the OpenDaylight project was launched by the Linux Foundation. Major vendors including Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Red Hat and VMware joined as founding members. Both the Floodlight and OpenDaylight controllers were options, but Big Switch has since dropped out of the project. Juniper eventually filled the gap with an alternative technology when it contributed the OpenContrail project.

Along the way, we also saw the launch of interesting start-ups like Plumgrid, Nuage Networks, and Embrane, while Arista Networks continues to march towards an IPO.

And then things got even more interesting… Continued »

November 30, 2013  11:36 AM

The Ebbs and Flows of OpenStack

Brian Gracely

Nothing creates more opinions in the Cloud Computing industry than OpenStack. More than predictions about "Year of VDI" or "What is SDN?" or "Cloud vs. Cloudwashing". Nothing. In my 20+ years in the IT industry, I've never seen anything like this before.

During the OpenStack Summit, the talk is almost unanimously positive: the number of developers, the number of committers, the number of new projects, training projects, documentation projects, and a growing list of customers from companies you've actually heard of. On the surface, the OpenStack community appears to be well organized and making large strides in delivering a massive set of projects via a community model.

Then we start seeing commentary like this from experienced OpenStack engineers. Soon after, we see commentary from Gartner (here, here) about several major challenges that the OpenStack community and OpenStack-centric companies face in 2014. Those two pieces led to several well-written responses from both vendors and analysts.

The difference in tone between the week of OpenStack Summit and the following weeks was significant. It would have been one thing if the negativity had come from the CloudStack community, or from AWS directly; that would have been understandable. But this was coming from within the OpenStack community, or from analysts with a deep understanding of Enterprise customers and the overall Cloud Management space.

Regardless of your opinion on OpenStack, it’s hard not to wonder where the truth really resides.

And then Solum happens…

Just when you thought the OpenStack community had begun to hit its stride in terms of hype vs. projects vs. execution, somebody swapped the "I" for a "P" and introduced a PaaS-like project ("Solum") into the mix. OpenStack already had commitments and projects to work with both Cloud Foundry and OpenShift, well-defined PaaS projects. And then, for some reason, a faction within the OpenStack community decided that it needed its own variation on a PaaS. Confused? I don't blame you. You wouldn't be alone.  Continued »


November 30, 2013  10:51 AM

Can Private Cloud Management be Hybrid?

Brian Gracely

One of the interesting slides from the keynote at AWS re:Invent was one that showed the pace of new features/capabilities over the past few years. When building on a platform, the ability to rapidly move up the learning curve is tremendous, and AWS is obviously increasing the pace at which it can innovate: collectively seeing customer problems, sharing best practices for service delivery across its teams, and building a culture of services-led delivery for each new service.

We're seeing this ability to rapidly innovate in the SaaS Management companies as well. They simplify the ability to consume their services, and they take their day-to-day learnings and use them to constantly improve their products. From a software development perspective they are leveraging Agile principles to increase their pace of delivery, but they are also increasing their operational learning curve, which is equally important.

I've explored before whether more Cloud Management Platform companies should be moving to a SaaS model. At the time, I was thinking that it made quite a bit of sense from a financial perspective: allow companies to grow their cloud environment at a pace that makes sense for them. Moving to a cloud operations model (e.g. on-demand, self-service) can often be challenging, and it involves more process change than technology change.

The more I think about it, the more I believe the learning curve is the most important part of the equation for both the cloud management vendors and their customers. As Bernard Golden wrote earlier this week, the core software platform that controls your cloud is "Magic". When systems are complex, the ability to capture learnings about the system is critical.

Is there a need for a Hybrid/Private Cloud?

If a company wants to keep its data on-premises, wants network response times that allow it to create known SLAs, and is still concerned about public cloud security, then this tends to lead it to look at Private Cloud solutions. But then there's the challenge of getting its people and processes aligned to properly deliver cloud services. The operations team typically needs to get trained on new tools. They might need to be re-organized so they can focus on "service delivery" instead of just running technology silos (compute, storage, networking). They may even need assistance in how to think about "building services" (e.g. creating "products" that cloud users can get from self-service portals).

We've always talked about Hybrid Cloud as being a mix of Private Cloud resources and Public Cloud resources, potentially with a unified management framework that interconnects them. But what about the IT organization that wants Private Cloud characteristics for performance, security and data retention, but has been struggling to get re-organized or to get a functioning Cloud Management Platform in place?

Isn't this an opportunity a CIO might consider to bring agility to their IT organization and their business users? Isn't it a win-win situation for both companies and vendors, increasing their learning curve?

We're still seeing survey data that says most CIOs want to build Private Clouds (with or without additional Hybrid/Public Cloud resources), but the major success stories have been scarce – lots of virtualization, not as much self-service. Is 2014 the year when we begin to see vendors bring new options to fill these gaps?

 


November 29, 2013  5:50 PM

The Rise of the SaaS Management Tools

Brian Gracely

One of the biggest takeaways I had from the 2013 AWS re:Invent conference was the increasing number of companies that were delivering various forms of IT management (network monitoring, application monitoring, cost modeling, etc.) as SaaS applications. Back in March I wrote about how this is a challenging market for individual companies, especially with the rise of platforms. Regardless, the number of companies that are entering and succeeding continues to grow.

NOTE: It’s important to keep in mind that this isn’t a new phenomenon, as companies like Meraki and Aerohive have been doing this successfully for networking infrastructure for quite a while.

Where are the Focus Areas?

These companies tend to focus around a few core areas:

  1. Cloud Costs and Cost Management – (Cloudability, Cloudyn, CloudCheckr, RightScale) Trying to make sense of Cloud Computing costs, even within a single cloud, can be complicated: On-Demand, Spot Instances, Reserved Instances, inbound vs. outbound bandwidth, IOPS vs. Provisioned IOPS. It's not as simple as buying a server or some storage capacity. The best tools let companies do per-group/project tracking, recommend when it's best to use on-demand vs. reserved instances, highlight the cost of adding redundancy across regions, and offer other advanced capabilities to help save current costs and better forecast future costs (a rough sketch of the on-demand vs. reserved break-even arithmetic follows this list).
  2. Network Monitoring – (ThousandEyes, Boundary) – It can often be difficult to map paths through a cloud network or identify performance bottlenecks. These companies are focused on correlating network traffic with topologies, for both proactive and reactive monitoring. They are able to track traffic across multiple clouds (public and private) as well as link to application deployment activities (e.g. new code deployed to web/database servers) to track how changes impact network traffic.
  3. Application Monitoring – (New Relic, Datadog) Just as it can be complicated to monitor a cloud network, it can be equally complex to monitor applications in the cloud. How are resources shared with other applications/customers (compute, storage)? Do some code packages have known performance or security issues? How have things changed since new code was deployed, or a new level of redundancy was added?
  4. Application Migration – (ElasticBox, Ravello Systems) – These companies are focused on helping customers take existing code or applications, package or encapsulate them, and migrate them ("as is") to the cloud. These are great tools for companies looking to leverage public cloud without having to change existing applications. They are also great for ALM (Application Lifecycle Management), especially when it spans both public and private cloud.
  5. Security Monitoring – (CloudPassage, Adallom) – Whether this includes the ability to add security functionality to an IaaS cloud (e.g. firewall, IDS/IPS, authentication) or to monitor traffic across multiple SaaS platforms, these platforms allow security to be as flexible as compute in the cloud.
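
To make the cost-management category concrete, here is a minimal sketch of the on-demand vs. reserved break-even arithmetic that these tools automate. The hourly and upfront rates are invented placeholders, not any provider's actual pricing.

    # Rough sketch: when does a reserved instance beat on-demand pricing?
    # The rates below are illustrative placeholders, not real provider pricing.

    ON_DEMAND_HOURLY = 0.10      # $/hour, pay-as-you-go
    RESERVED_UPFRONT = 350.00    # $ one-time fee for a 1-year reservation
    RESERVED_HOURLY = 0.04       # $/hour once the reservation is purchased

    def annual_cost(hours_per_month):
        """Compare the yearly cost of both purchase models at a given usage level."""
        hours_per_year = hours_per_month * 12
        on_demand = hours_per_year * ON_DEMAND_HOURLY
        reserved = RESERVED_UPFRONT + hours_per_year * RESERVED_HOURLY
        cheaper = "reserved" if reserved < on_demand else "on-demand"
        return on_demand, reserved, cheaper

    # Break-even: utilization above which the reservation pays for itself.
    breakeven_hours = RESERVED_UPFRONT / (ON_DEMAND_HOURLY - RESERVED_HOURLY)
    print(f"Reservation pays off above ~{breakeven_hours:.0f} hours/year")

    for hours in (100, 300, 600):   # light, moderate, near-continuous monthly usage
        od, res, cheaper = annual_cost(hours)
        print(f"{hours} hrs/month: on-demand ${od:.0f} vs reserved ${res:.0f} -> {cheaper}")

The real tools layer per-group tagging and multi-region pricing on top of this, but the underlying question is the same: at what utilization does the upfront commitment pay off?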

In the past, many vendors have tried to pull many or all of these areas together into a single, monolithic system. In talking to these newer companies, they say that their early customers had often purchased those large systems in the past, but never got them fully installed and operational due to cost or integration complexity.

Going Beyond Monitoring

Many of these vendors are taking monitoring a step further by integrating with critical tools in the common workflows of IT operations or developers. For example, many integrate directly with GitHub to track changes to applications or code, especially as they impact performance or introduce new security threats. Others integrate collaboration models to allow group-based troubleshooting and correlation between cloud performance and patches/changes. Still others are simplifying how data/trends/maps are shared (just send a URL) so that multiple teams can see either real-time or historical information.
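
As a rough illustration of that deploy-to-performance correlation, here is a minimal sketch that flags deployments followed by a latency jump. The timestamps and latency samples are invented; real tools would pull deploy events from GitHub webhooks and metrics from their own collection agents.

    # Sketch: flag deployments that were followed by a latency regression.
    # Timestamps and latency samples are invented for illustration.
    from datetime import datetime, timedelta

    deploys = [datetime(2013, 11, 25, 14, 0), datetime(2013, 11, 26, 9, 30)]

    # (timestamp, p95 latency in ms) samples from a monitoring feed
    latency = [
        (datetime(2013, 11, 25, 13, 0), 120),
        (datetime(2013, 11, 25, 15, 0), 310),   # spike after the first deploy
        (datetime(2013, 11, 26, 9, 0), 130),
        (datetime(2013, 11, 26, 10, 0), 135),
    ]

    def avg(samples):
        return sum(v for _, v in samples) / len(samples) if samples else None

    WINDOW = timedelta(hours=2)
    for d in deploys:
        before = [(t, v) for t, v in latency if d - WINDOW <= t < d]
        after = [(t, v) for t, v in latency if d <= t < d + WINDOW]
        b, a = avg(before), avg(after)
        if b and a and a > b * 1.5:
            print(f"Deploy at {d}: p95 latency {b:.0f}ms -> {a:.0f}ms (possible regression)")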


November 28, 2013  10:29 AM

Thoughts from AWS re:Invent 2013

Brian Gracely

It's been a couple of hectic weeks since the AWS re:Invent conference, enough time to process what was announced at what has become one of the major cloud computing events in our industry (some would say "THE" event). The event has grown to ~9,000 attendees, and estimates have AWS now delivering $1B per quarter to Amazon. Considering that AWS delivers compute, storage, database, and analytics, that $4B annual run rate would make it a formidable competitor to many established IT vendors and Systems Integrators. Because Amazon does not break out AWS numbers, it's difficult to know their actual profit margins, but estimates range anywhere from 40% to 65% (gross margin).

I had a chance to speak about the show with the SpeakingInTech (@SpeakingInTech) podcast, hosted by The Register - http://www.theregister.co.uk/2013/11/27/speaking_in_tech_episode_86/ (AWS discussion starts at around the 24:00 mark)

AWS Wants to be the New CIO

Listening to both Andy Jassy's and Werner Vogels' keynote addresses, it was unusual to not hear them mention the term "CIO". The CIO is a mainstay of traditional IT tech conferences, but AWS made it clear that they want to disrupt the existing supply chain and focus on business groups and developers coming directly to the AWS service. They took it a step further and bifurcated the world into "IT" and "Cloud", where "Cloud" is the thing that is focused on business growth.

They identified six common use-cases that bring customers to AWS:

  1. Dev/Test of Traditional Applications
  2. New Applications for the Cloud
  3. Supplement On-Prem with Off-Prem (Cloud) – typically analytics and batch processing – make daily adjustments, but not on production systems (overlapping utilization)
  4. Cloud Applications that reach back to On-Prem for services (eg. payment handling on-prem)
  5. Migrating traditional applications to the Cloud (websites, research simulations) – faster setup, faster performance, lower costs
  6. All-In (e.g. Netflix)

Nobody is Safe from AWS

By my guesstimate, attendance at the show was roughly one-third developers, one-third systems integrators, and one-third customers. If you deliver IT services (VARs, SIs, Service Providers), AWS is trying to change the supply chain and where you potentially fit (or no longer fit). If you build IT equipment, AWS is trying to change the pricing and consumption model of your customers by moving them from CAPEX to OPEX and from long budget cycles to on-demand. At the 2013 event, they announced "services" that overlap VDI, Flash Storage, Monitoring, Backup, Disaster Recovery and Real-Time Analytics. Continued »


October 31, 2013  1:53 PM

The New Rules of IT are New Business Models

Brian Gracely

There’s really only one constant in IT (or technology) and that’s CHANGE. Technology changes, company strategies and partnerships change, and eventually best practices change. But we often get one concept wrong (or confused), because we tend to focus and obsess on the pace of change in the consumer space (see: AT&T Next). In the Enterprise, we also see perpetual change in technology, but it takes a long time before the “rules” of the technology industry change. Capital investments take time to depreciate. Technology skills and retraining can take many years to evolve. Sales channels are built over time, not to mention the maturity of the business models across various parts of the value-chain.

But we’re in the early innings of one of those significant rule changing shifts.

Technology is Changing

Whether directly or indirectly affected, cloud computing and open-source are having a significant impact on today’s IT technology. It may not be generating the direct multi-billion dollar revenues that Wall Street loves to see, but the open-source movement is having an impact in every area of technology. Whether it’s being used by companies like Google, Facebook or Amazon, or whether it’s driving projects like OpenStack, CloudStack, OpenDaylight, CloudFoundry, OpenShift, various NoSQL databases, etc., the shift in community involvement by individuals and companies is significant. It’s pushing the pace of innovation and it’s forcing companies to add developer resources towards these projects. But figuring out how or if open-source will disrupt your business or your competitors is still TBD.

But More Importantly, Business Models are Changing

Long-standing partners are rethinking the value of those relationships. We saw this start happening a few years ago as Cisco and HP parted ways over servers and networks, and Dell and EMC over storage. But today's changes aren't just about vendors moving into new technology categories. This is about them disrupting not only their technology partners, but also their go-to-market partners and sometimes even themselves. Continued »


October 28, 2013  1:05 PM

Is it really possible to compare Cloud offerings?

Brian Gracely

I’ve written before about how Cloud Computing can be confusing (here, here, here). New vendors, legacy vendors, cloudwashing, free software, automation skills to learn, etc. Whenever there is chaos and confusion, many people look for something familiar to give them a sense of direction and proximity to their existing world. And while many pundits like to talk about how Hardware and Software are becoming commoditized, or certain services (such as “Infrastructure as a Service, or IaaS”) are becoming commonplace and non-differentiated, we still have confusion about some of the most basic building block elements. Let me illustrate this with a couple examples of activities you might undertake soon.

Lesson 1 – Not all apples are created equal

This past week, a couple different groups (NetworkComputing, CloudSpectator) attempted to do baseline testing on various IaaS cloud services, in an attempt to compare them in an apples-to-apples format.

In 2013, if someone wanted to compare the cost, performance and features of a given IaaS service, you’d think that this would be a relatively simple task. Just pick a common unit of measure (CPU, RAM, Storage, maybe network bandwidth) and run some tests. Sounds simple enough, right? Think again.

The CloudSpectator report attempted to compare performance and price across 14 different IaaS providers. They used an entry-level "unit of measure" (1 VM, 2 vCPU, 4GB RAM, 50GB storage) and ran their benchmark tests. The results were shown both in terms of raw performance and as a performance/price metric. Across a set of 60+ tests, the results showed that some Cloud providers scored better than others. The results also showed that certain providers were optimized for certain types of tests much more than for others. Some of the results were hardware-centric, while cloud architecture or the associated cloud-management software influenced others. Big deal, you might say, that's to be expected.
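
If you wanted to reproduce that kind of performance/price comparison yourself, the arithmetic is simple enough to sketch. The benchmark scores and hourly prices below are invented placeholders, not CloudSpectator's data.

    # Rough sketch: rank providers by benchmark points per dollar-hour.
    # Scores and prices are invented placeholders, not CloudSpectator's data.

    providers = {
        # name: (composite benchmark score, $/hour for a ~2 vCPU / 4GB instance)
        "Provider A": (1450, 0.12),
        "Provider B": (1800, 0.21),
        "Provider C": (1100, 0.08),
    }

    ranked = sorted(
        ((name, score / price) for name, (score, price) in providers.items()),
        key=lambda item: item[1],
        reverse=True,
    )

    for name, perf_per_dollar in ranked:
        print(f"{name}: {perf_per_dollar:,.0f} benchmark points per dollar-hour")

The hard part, as the next paragraph shows, isn't the division; it's finding instance sizes that are actually comparable in the first place.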

But what you might not expect is that not all of the Cloud providers even offered a 2+4 configuration. Some offered 1+4, 4+4 or slightly different variations, without the ability to customize. Still others only offered a higher-performance "unit of measure" on systems with much larger CPU/RAM footprints. So the arguments started about whether the results were skewed because the "correct" platform may not have been chosen for each Cloud provider to deliver optimal test results.

The arguments about whether price/performance is a relevant measurement for Cloud offerings are valid. Sometimes the available services are more important to applications than performance or raw infrastructure. Sometimes they aren't. It depends on the application; one size does not fit all. And as we saw, one size isn't always available to all, so end-users may have to do some re-calculations to compare Cloud services. Continued »


October 28, 2013  12:35 PM

In 2013, It’s Time for a Moonlighting Revolution

Brian Gracely

Back in the late 1980s, when we only had a handful of TV channels, one of the most popular shows was "Moonlighting". Bruce Willis (back when he had hair) and Cybill Shepherd starred as a pair of moonlighting detectives. They did their regular jobs during the day and scratched their problem-solving itch at night.

Regardless of the show's popularity, many companies still maintain a strict policy of not allowing their employees to do other work, even if it's "off the clock". It's distracting. It could create conflicts of interest. It puts the individual ahead of the company.

Sidebar: Remember when we used to have that novel concept of “life” and “work” being two distinct and separate activities, defined by hours of the day? Ah, the good old days!

But when I look around at the technology industry today, the list of moonlighting projects that people are creating is outstanding. Here’s a short list of projects that have caught my eye recently:

I've written about my giving-back or "free" projects (here, here) over the last couple of years. Most of these moonlighting projects by others seem to have the same motivations as mine. First, they allow the creators some freedom to explore an area of passion that they might not be able to within the day-to-day constraints of their day job. Second, they give them a chance to elevate their voice in the industry. Third, they often open up new channels of interaction with communities of people that share a similar passion or expertise. And finally, they (sometimes) open up new opportunities to make some money – because life isn't getting any cheaper.  Continued »


October 14, 2013  10:39 AM

Will Developer Preferences Hold PaaS Back?

Brian Gracely


[Photo Credit: @aneel; Piston Cloud Computing]

There's an interesting duality to developers, especially those that subscribe to the evolutionary model known as "DevOps", which is popular among those working on modern cloud computing applications. On one hand, they believe in a world of consistency and standardization below the applications (puppies vs. cows?). On the other hand, I've yet to meet two developers who can agree on how to build an application or on the tools used to design / code / monitor that application. When it comes to their tools, as opposed to the underlying "plumbing", customization and choice are king.

So this brings us to the question of Platform-as-a-Service (PaaS) – is it part of the infrastructure plumbing or part of the developer's toolkit? Depending on which PaaS platform is used, it can be either or both. Platforms like Heroku run on AWS; CloudFoundry can run on AWS, VMware or OpenStack; and OpenShift offers options to run on Docker, OpenStack, or SELinux.

Then you have services like Amazon AWS, which many would classify as IaaS, but which now delivers a wide range of add-on services offering similar database, queuing, data analytics and storage capabilities to the PaaS platforms. The difference is that they are optional services, not mandated in order to use the platform.

Some will argue that polyglot PaaS – the inclusion of multiple development frameworks (eg. Java, Ruby, Go, .NET, etc.) – will help eliminate the concerns about lack of flexibility for developers. But others have argued that doing everything means that a platform doesn’t do any of them particularly well.

And then there's the question of who (or whom?) should run the PaaS platform. Conventional wisdom says it's a DevOps team that brings together the Developers and Operators. But ultimately, choices will need to be made about what images, packages, tools and customizations will be supported for various classes of applications – similar to what exists in traditional IT environments today. Continued »


September 30, 2013  10:30 PM

Confused about SDN? You should be…

Brian Gracely

It's been a long time since networking generated this much buzz in the IT industry. You have to go back to the late 1990s, when Gigabit Ethernet companies combined hardware-based Layer-2 and Layer-3 capabilities into a single switching platform. At the time, this was considered fairly ground-breaking, because prior to this Layer-2 was done in hardware and Layer-3 in software, meaning that significant architectural issues were involved when companies needed to "switch" vs. "route" traffic around their networks. The start-up scene at the time was very, very crowded – names like Granite Systems, YAGO Systems, Extreme Networks, Alteon Networks, Rapid City Networks, Prominet, Berkeley Networks and several others that never survived.

Nobody thinks about performance of Layer-2 vs. Layer-3 these days, but back then it was a true technology-religion war. Fortunes were made and lost. Architectural visions were torn apart by competitors on a daily basis.

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Any of that sound familiar?

Today’s Networking world is going through a similar time. Tons of startups – this time it’s Nicira, Big Switch, PlumGrid, Contrail, Plexxi, Insieme, Cumulus, Arista, Embrane and many, many others. If you searched through LinkedIn or Dice or GitHub, you’d probably see that many of the lead engineers at these new companies have backgrounds from some of those previous generation Gigabit startups. In addition, many have DNA linkage to Cisco, Juniper or Stanford, which is sort of the litmus test I tend to use when starting my evaluations of networking companies and technologies.

And all of these companies want you to change your network. Adopt their technologies so you can save money, move faster, be more agile and make the application teams happier. On the flip side, some people are complaining about all the potential change:

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Haven’t we been here before?

“Software-Defined Networking” is the buzzword in this space. It’s the “Cloud” of the infrastructure world.

I’d suggest that people trying to understand SDN should think about it in three distinct areas:

  1. What is the core SDN architecture of this company or open-source project (e.g. OpenDaylight, OpenStack Neutron, etc.)? Does it meet the control, scaling and functional requirements your network needs? How much of it is available today vs. in the future?
  2. How does this SDN architecture interact with your legacy network, or integrate with legacy equipment / tools / process? Is it entirely an overlay architecture or are there ways to bring in legacy environments such that it’s not a completely separate silo? How do you do operations, monitoring and troubleshooting between the new and legacy environments?
  3. How are applications mapped to the new SDN environment? Can existing application-centric tools (and automation tools) be used, or does this environment have a proprietary way to describe applications to the network?

Every SDN architecture has trade-offs. Some are software-only, or "virtual overlays". Some are hardware+software. Some provide L2-L7 services, while others focus only on certain functional areas (e.g. L4-L7). Having a framework to decode each architecture and how it might solve specific challenges for you is very important.

If you're not sufficiently confused yet, I'll throw out one other area to take a look at – the intersection between "cloud stacks" (e.g. OpenStack) and "SDN controllers" (e.g. OpenDaylight). See this video or listen to the SDN podcast discussion we had recently.

What's interesting about this scenario is that both the cloud stacks and the SDN controllers are building various L2-L7 functionality into their systems. So where's the right place for it to live? How will the various services be orchestrated between the systems? And who will define policies when network services are needed (SDN), but they will run as a virtual appliance on a server/VM (cloud stack)?
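
One way to picture the overlap: both systems expose the same logical networking objects over REST APIs, so an operator quickly has to decide which one is the source of truth. The sketch below compares the networks known to a Neutron endpoint and to a controller's northbound API; the controller URL, port and credentials are assumptions for illustration, not any product's documented interface.

    # Rough sketch: ask both the cloud stack and the SDN controller which networks
    # they know about, to see where the overlapping L2 state lives. The controller
    # URL, port and credentials are illustrative assumptions, not documented values.
    import requests

    NEUTRON_URL = "http://cloud-controller:9696/v2.0/networks"   # OpenStack Neutron API
    SDN_URL = "http://sdn-controller:8080/controller/nb/v2/neutron/networks"  # assumed path
    TOKEN = {"X-Auth-Token": "<keystone-token>"}                  # placeholder auth token

    def network_names(url, **kwargs):
        resp = requests.get(url, timeout=10, **kwargs)
        resp.raise_for_status()
        return {net["name"] for net in resp.json().get("networks", [])}

    neutron_nets = network_names(NEUTRON_URL, headers=TOKEN)
    sdn_nets = network_names(SDN_URL, auth=("admin", "admin"))    # assumed basic auth

    print("Known to both:", neutron_nets & sdn_nets)
    print("Only in Neutron:", neutron_nets - sdn_nets)
    print("Only in the SDN controller:", sdn_nets - neutron_nets)

If the two lists diverge, you are living the orchestration question above: which system creates the object, and which one merely mirrors it?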


