From Silos to Services: Cloud Computing for the Enterprise


November 29, 2013  5:50 PM

The Rise of the SaaS Management Tools



Posted by: Brian Gracely
API, AWS, Cloud Computing, Cloud Management, DevOps, Performance, SaaS, Start-up

One of the biggest takeaways I had from the 2013 AWS re:Invent conference was the increasing number of companies delivering various forms of IT management (network monitoring, application monitoring, cost modeling, etc.) as SaaS applications. Back in March I wrote about how this is a challenging market for individual companies, especially with the rise of platforms. Regardless, the number of companies entering and succeeding continues to grow.

NOTE: It’s important to keep in mind that this isn’t a new phenomenon, as companies like Meraki and Aerohive have been doing this successfully for networking infrastructure for quite a while.

Where are the Focus Areas?

These companies tend to focus around a few core areas:

  1. Cloud Costs and Cost Management – (Cloudability, Cloudyn, CloudCheckr, RightScale) – Trying to make sense of Cloud Computing costs, even within a single cloud, can be complicated: On-Demand vs. Spot vs. Reserved Instances, inbound vs. outbound bandwidth, IOPS vs. Provisioned IOPS. It’s not as simple as buying a server or some storage capacity. The best tools let companies do per-group/project tracking, recommend when to use on-demand vs. reserved instances, highlight the cost of adding redundancy across regions, and offer other advanced capabilities to help manage current costs and better forecast future costs (a simple break-even sketch follows this list).
  2. Network Monitoring – (ThousandEyes, Boundary) – It can often be difficult to map paths through a cloud network or identify performance bottlenecks. These companies focus on correlating network traffic with topologies, for both proactive and reactive monitoring. They are able to track traffic across multiple clouds (public and private) as well as link to application deployment activities (e.g. new code deployed to web/database servers) to track how changes impact network traffic.
  3. Application Monitoring – (New Relic, Datadog) – Just as it can be complicated to monitor a cloud network, it can be equally complex to monitor applications in the cloud. How are resources (compute, storage) shared with other applications/customers? Do some code packages have known performance or security issues? How have things changed since new code was deployed, or a new level of redundancy was added?
  4. Application Migration – (ElasticBox, Ravello Systems) – These companies focus on helping customers take existing code or applications, package or encapsulate them, and migrate them (“as is”) to the cloud. These are great tools for companies looking to leverage public cloud without having to change existing applications. They are also great for ALM (Application Lifecycle Management), especially when it spans both public and private cloud.
  5. Security Monitoring – (CloudPassage, Adallom) – Whether that means adding security functionality to an IaaS cloud (e.g. firewall, IDS/IPS, authentication) or monitoring traffic across multiple SaaS platforms, these offerings allow security to be as flexible as compute in the cloud.
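
To make the reserved-vs.-on-demand recommendation concrete, here is a minimal sketch of the break-even math that cost tools like these automate. The prices are hypothetical placeholders, not actual AWS rates, and real tools factor in far more (usage history, instance families, regions).

```python
# Minimal sketch of an on-demand vs. reserved instance break-even calculation.
# All prices below are hypothetical placeholders, not actual AWS rates.

def breakeven_hours(on_demand_hourly, reserved_upfront, reserved_hourly):
    """Return the hours of usage at which a reservation becomes cheaper."""
    hourly_savings = on_demand_hourly - reserved_hourly
    if hourly_savings <= 0:
        return float("inf")  # the reservation never pays for itself
    return reserved_upfront / hourly_savings

# Hypothetical 1-year reservation for a single instance type
on_demand = 0.10    # $/hour on-demand (placeholder)
upfront   = 350.00  # $ one-time reservation fee (placeholder)
reserved  = 0.04    # $/hour with the reservation (placeholder)

hours = breakeven_hours(on_demand, upfront, reserved)
print(f"Reservation wins after ~{hours:.0f} hours "
      f"(~{hours / (24 * 365):.0%} utilization over the year)")
```

With these placeholder numbers, the reservation only pays off if the instance runs roughly two-thirds of the year, which is exactly the kind of per-project recommendation these tools surface automatically.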

In the past, vendors often tried to pull most or all of these areas together into a single, monolithic system. In talking to these SaaS companies, they say their early customers had often purchased those large systems, but never got them fully installed and operational due to cost or integration complexity.

Going Beyond Monitoring

Many of these vendors are taking monitoring a step further by integrating with critical tools in the common workflow of IT operations or developers. For example, many integrate directly with GitHub to track changes to applications or code, especially as those changes impact performance or introduce new security threats. Others integrate collaboration features to allow group-based troubleshooting and correlation between cloud performance and patches/changes. Still others simplify how data/trends/maps are shared (just send a URL) so that multiple teams can see either real-time or historical information (a rough sketch of the GitHub-style integration follows).
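
As a rough illustration of that GitHub-style integration (generic webhook handling, not any specific vendor's API), here is a minimal sketch that records push events as "deploy markers" which a monitoring dashboard could overlay on its performance graphs. The endpoint path and the record_deploy_marker() helper are made up for the example.

```python
# Sketch: receive a GitHub push webhook and store a "deploy marker" that a
# monitoring dashboard could overlay on performance graphs. Generic example,
# not any particular vendor's API.

from datetime import datetime, timezone
from flask import Flask, request

app = Flask(__name__)
deploy_markers = []  # stand-in for the monitoring tool's event store

def record_deploy_marker(repo, sha, author):
    """Store a timestamped marker so charts can show 'code changed here'."""
    deploy_markers.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "repo": repo,
        "commit": sha,
        "author": author,
    })

@app.route("/webhooks/github", methods=["POST"])
def github_push():
    payload = request.get_json(force=True)
    record_deploy_marker(
        repo=payload["repository"]["full_name"],
        sha=payload["after"],
        author=payload["pusher"]["name"],
    )
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```

The value isn't in the webhook itself, but in being able to line up "new code deployed at 2:14 PM" against a latency spike or a new security alert on the same timeline.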

November 28, 2013  10:29 AM

Thoughts from AWS re:Invent 2013



Posted by: Brian Gracely
AWS, Big Data, Cloud Computing, Costs, Data Center, DevOps, Enterprise, NoSQL, Open Source, Transformation

It’s been a couple of hectic weeks since the AWS re:Invent conference, enough time to process what was announced at what has become one of the major cloud computing events in our industry (some would say “THE” event). The event has grown to ~9000 attendees, and estimates have AWS now delivering $1B per quarter to Amazon. Considering that AWS delivers compute, storage, database, and analytics, that $4B annual run rate would make it a formidable competitor to many established IT vendors and Systems Integrators. Because Amazon does not actually break out AWS numbers, it’s difficult to know their actual profit margins, but estimates are anywhere from 40%-65% (gross margin).

I had a chance to speak about the show with the SpeakingInTech (@SpeakingInTech) podcast, hosted by The Register - http://www.theregister.co.uk/2013/11/27/speaking_in_tech_episode_86/ (AWS discussion starts at around the 24:00 mark)

AWS Wants to be the New CIO

Listening to both Andy Jassy’s and Werner Vogels’ keynote addresses, it was unusual not to hear them mention the term “CIO”. This is a mainstay of traditional IT tech conferences, but AWS made it clear that they want to disrupt the existing supply chain and focus on business groups and developers coming right to the AWS service. They took it a step further and bifurcated the world into “IT” and “Cloud”, where “Cloud” is the part focused on business growth.

They identified six common use-cases that bring customers to AWS:

  1. Dev/Test of Traditional Applications
  2. New Applications for the Cloud
  3. Supplement On-Prem with Off-Prem (Cloud) – typically analytics and batch processing – make daily adjustments, but not on production systems (overlapping utilization)
  4. Cloud Applications that reach back to On-Prem for services (e.g. payment handling on-prem)
  5. Migrating traditional applications to the Cloud (websites, research simulations) – faster setup, faster performance, lower costs
  6. All-In (e.g. Netflix)

Nobody is Safe from AWS

By my guesstimate, attendance at the show was split roughly evenly (about a third each) between developers, systems integrators and customers. If you deliver IT services (VARs, SIs, Service Providers), AWS is trying to change the supply chain and where you potentially fit (or no longer fit). If you build IT equipment, AWS is trying to change the pricing and consumption model of your customers by moving them from CAPEX to OPEX and from long budget cycles to on-demand. At the 2013 event, they announced “services” that overlap VDI, Flash Storage, Monitoring, Backup, Disaster Recovery and Real-Time Analytics.


October 31, 2013  1:53 PM

The New Rules of IT are New Business Models



Posted by: Brian Gracely
Cisco, Dell, HP, Hybrid Cloud, Lock-In, Microsoft

There’s really only one constant in IT (or technology) and that’s CHANGE. Technology changes, company strategies and partnerships change, and eventually best practices change. But we often get one concept wrong (or confused), because we tend to focus and obsess on the pace of change in the consumer space (see: AT&T Next). In the Enterprise, we also see perpetual change in technology, but it takes a long time before the “rules” of the technology industry change. Capital investments take time to depreciate. Technology skills and retraining can take many years to evolve. Sales channels are built over time, not to mention the maturity of the business models across various parts of the value-chain.

But we’re in the early innings of one of those significant rule changing shifts.

Technology is Changing

Whether directly or indirectly affected, cloud computing and open-source are having a significant impact on today’s IT technology. It may not be generating the direct multi-billion dollar revenues that Wall Street loves to see, but the open-source movement is having an impact in every area of technology. Whether it’s being used by companies like Google, Facebook or Amazon, or whether it’s driving projects like OpenStack, CloudStack, OpenDaylight, CloudFoundry, OpenShift, various NoSQL databases, etc., the shift in community involvement by individuals and companies is significant. It’s pushing the pace of innovation and it’s forcing companies to add developer resources towards these projects. But figuring out how or if open-source will disrupt your business or your competitors is still TBD.

But More Importantly, Business Models are Changing

Long-standing partners are rethinking the value of those relationships. We saw this start happening a few years ago as Cisco and HP parted ways over servers and networks, and Dell and EMC over storage. But today’s changes aren’t just about vendors moving into new technology categories. This is about them not only disrupting their technology partners, but also their go-to-market partners and sometimes even themselves.


October 28, 2013  1:05 PM

Is it really possible to compare Cloud offerings?



Posted by: Brian Gracely
AWS, Cloud Computing, Cloud Management, CloudSpectator, CloudStack, Data Center, Enterprise, Gartner, Mission-Critical Applications, NetworkComputing, Open Source, Performance, Rackspace, SLA

I’ve written before about how Cloud Computing can be confusing (here, here, here). New vendors, legacy vendors, cloudwashing, free software, automation skills to learn, etc. Whenever there is chaos and confusion, many people look for something familiar to give them a sense of direction and proximity to their existing world. And while many pundits like to talk about how Hardware and Software are becoming commoditized, or certain services (such as “Infrastructure as a Service”, or IaaS) are becoming commonplace and non-differentiated, we still have confusion about some of the most basic building-block elements. Let me illustrate this with a couple of examples of activities you might undertake soon.

Lesson 1 – Not all apples are created equal

This past week, a couple of different groups (NetworkComputing, CloudSpectator) attempted baseline testing of various IaaS cloud services, trying to compare them in an apples-to-apples format.

In 2013, if someone wanted to compare the cost, performance and features of a given IaaS service, you’d think that this would be a relatively simple task. Just pick a common unit of measure (CPU, RAM, Storage, maybe network bandwidth) and run some tests. Sounds simple enough, right? Think again.

The CloudSpectator report attempted to compare Performance and Price across 14 different IaaS providers. They used an entry-level “unit of measure” (1 VM, 2 vCPU, 4GB RAM, 50GB storage) and ran their benchmark tests. The results were shown both in terms of raw performance and as a performance/price metric. Across a set of 60+ tests, the results showed that some Cloud providers scored better than others. The results also showed that certain providers were optimized for certain types of tests much more than for others. Some of the results were hardware-centric, while cloud architecture or the associated cloud-management software influenced others. Big deal, you might say; that’s to be expected.

But what you might not expect is that not all of the Cloud providers even offered a 2+4 configuration. Some offered 1+4, 4+4 or slightly different variations, without the ability to customize. Still others only offered a higher-performance “unit of measure” on systems with much larger CPU/RAM footprints. So the arguments started about whether the results were skewed because the “correct” platform may not have been chosen for each Cloud provider to deliver optimal test results.

The arguments about whether Price/Performance is a relevant measurement for Cloud offerings are valid. Sometimes the available services matter more to an application than raw performance or infrastructure. Sometimes they don’t. It depends on the application; one size does not fit all. And as we saw, one size isn’t always available to all, so end-users may have to do some re-calculations to compare Cloud services (a rough sketch of that arithmetic follows).
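
Here is a minimal sketch of the performance-per-price arithmetic behind reports like CloudSpectator's, and of the normalization judgment call that fuels the "skewed results" arguments. The provider names, scores and prices are made-up placeholders.

```python
# Sketch of performance-per-price scoring across providers whose closest
# available instance sizes don't match the 2 vCPU / 4 GB target.
# All names, scores and prices below are made-up placeholders.

providers = [
    # name, benchmark score, hourly price ($), vCPU, RAM (GB)
    ("ProviderA", 820, 0.12, 2, 4),   # matches the 2 vCPU / 4 GB target
    ("ProviderB", 990, 0.18, 4, 4),   # closest available size is 4 vCPU
    ("ProviderC", 610, 0.08, 1, 4),   # closest available size is 1 vCPU
]

for name, score, price, vcpu, ram in providers:
    perf_per_dollar = score / price
    # Naive normalization back to the 2 vCPU target -- exactly the kind of
    # judgment call that sparked the arguments about "skewed" results.
    normalized = perf_per_dollar * (2 / vcpu)
    print(f"{name}: raw perf/$ = {perf_per_dollar:.0f}, "
          f"vCPU-normalized = {normalized:.0f}")
```

Whether that normalization step is fair is precisely the argument: the raw numbers and the normalized numbers can rank the same providers in a different order.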


October 28, 2013  12:35 PM

In 2013, It’s Time for a Moonlighting Revolution



Posted by: Brian Gracely
Blog, Cloud Computing, Consumerization of IT, Moonlighting, Motivation, Open Source, OpenStack, Podcast, Shadow IT, Start-up

Back in the late 1980s, when we only had a handful of TV channels, one of the most popular shows was “Moonlighting”. Bruce Willis (back when he had hair) and Cybill Shepherd starred as a pair of moonlighting detectives. They did their regular jobs during the day and scratched their problem-solving itch at night.

Regardless of the show’s popularity, many companies still maintain a strict policy of not allowing their employees to do other work, even “off the clock”. It’s distracting. It could create conflicts of interest. It puts the individual ahead of the company.

Sidebar: Remember when we used to have that novel concept of “life” and “work” being two distinct and separate activities, defined by hours of the day? Ah, the good old days!

But when I look around at the technology industry today, the list of moonlighting projects that people are creating is outstanding. Here’s a short list of projects that have caught my eye recently:

I’ve written about my giving-back or “free” projects (here, here) over the last couple of years. Most of these moonlighting projects by others seem to have the same motivations as mine. First, they allow the creator some freedom to explore an area of passion that they might not be able to within the day-to-day constraints of their day job. Second, they give the creator a chance to elevate their voice in the industry. Third, they often open up new channels of interaction with communities of people that share a similar passion or expertise. And finally, they (sometimes) open up new opportunities to make some money – because life isn’t getting any cheaper.


October 14, 2013  10:39 AM

Will Developer Preferences Hold PaaS Back?



Posted by: Brian Gracely
Application Tiering, AWS, Cloud Computing, Cloud Management, DBaaS, DevOps, Lock-In, Mission-Critical Applications, Open Source, OpenStack, Pivotal, Platform

[Image: Puppies vs. Cows. Photo Credit: @aneel; Piston Cloud Computing]

There’s an interesting duality to developers, especially those that subscribe to the evolutionary model known as “DevOps”, which is popular among those working on modern cloud computing applications. On one hand, they believe in a world of consistency and standardization down below the applications (puppies vs. cows?). On the other hand, I’ve yet to meet two developers that can agree on how to build an application or the tools used to design / code / monitor that application. When it comes to their tools, not the underlying “plumbing”, customization and choice are king.

So this brings us to the question of Platform-as-a-Service (PaaS) – is it part of the infrastructure plumbing or part of the developer’s toolkit? Depending on which PaaS platform is used, it can be either or both. Platforms like Heroku run on AWS; CloudFoundry can run on AWS, VMware or OpenStack; and OpenShift offers options to run on Docker, OpenStack, or SELinux.

Then you have services like Amazon AWS, which many would classify as IaaS, but which now delivers a wide range of add-on services offering database, queuing, data analytics and storage capabilities similar to the PaaS platforms. The difference is that they are optional services, not mandated in order to use the platform.

Some will argue that polyglot PaaS – the inclusion of multiple development frameworks (e.g. Java, Ruby, Go, .NET) – will help eliminate concerns about a lack of flexibility for developers. But others have argued that doing everything means that a platform doesn’t do any of them particularly well.

And then there’s the question of who (or whom?) should run the PaaS platform. Conventional wisdom says it’s a DevOps team that brings together the Developers and Operators. But ultimately, choices will need to be made about what images, packages, tools and customizations will be supported for various classes of applications – similar to what exists in traditional IT environments today.


September 30, 2013  10:30 PM

Confused about SDN? You should be…



Posted by: Brian Gracely
Cisco, Cloud Computing, CloudStack, Converged Infrastructure, Lock-In, Open Source, OpenDaylight, OpenFlow, OpenStack, Overlay Network, Private Cloud, SDN, Software-Defined Networking, Virtualization, VMware

It’s been a long time since networking generated this much buzz in the IT industry. You have to go back to the late 1990s, when Gigabit Ethernet companies combined hardware-based Layer-2 and Layer-3 capabilities into a single switching platform. At the time, this was considered fairly ground-breaking, because prior to this Layer-2 was done in hardware and Layer-3 in software, meaning significant architecture issues were involved when companies needed to “switch” vs. “route” traffic around their network. The start-up scene at the time was very, very crowded – names like Granite Systems, YAGO Systems, Extreme Networks, Alteon Networks, Rapid City Networks, Prominet, Berkeley Networks and several others that never survived.

Nobody thinks about performance of Layer-2 vs. Layer-3 these days, but back then it was a true technology-religion war. Fortunes were made and lost. Architectural visions were torn apart by competitors on a daily basis.

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Any of that sound familiar?

Today’s Networking world is going through a similar time. Tons of startups – this time it’s Nicira, Big Switch, PlumGrid, Contrail, Plexxi, Insieme, Cumulus, Arista, Embrane and many, many others. If you searched through LinkedIn or Dice or GitHub, you’d probably see that many of the lead engineers at these new companies have backgrounds from some of those previous generation Gigabit startups. In addition, many have DNA linkage to Cisco, Juniper or Stanford, which is sort of the litmus test I tend to use when starting my evaluations of networking companies and technologies.

And all of these companies want you to change your network. Adopt their technologies so you can save money, move faster, be more agile and make the application teams happier. On the flip side, some people are complaining about all the potential change:

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Haven’t we been here before?

“Software-Defined Networking” is the buzzword in this space. It’s the “Cloud” of the infrastructure world.

I’d suggest that people trying to understand SDN should think about it in three distinct areas:

  1. What is the core SDN architecture of this company or open-source project (e.g. OpenDaylight, OpenStack Neutron)? Does it have the control, scaling and functional capabilities your network requires? How much of it is available today vs. planned for the future?
  2. How does this SDN architecture interact with your legacy network, or integrate with legacy equipment / tools / process? Is it entirely an overlay architecture or are there ways to bring in legacy environments such that it’s not a completely separate silo? How do you do operations, monitoring and troubleshooting between the new and legacy environments?
  3. How are applications mapped to the new SDN environment? Can existing application-centric tools (and automation tools) be used, or does the environment have a proprietary way to describe applications to the network?

Every SDN architecture has trade-offs. Some are software-only, or “virtual overlays”. Some are hardware+software. Some provide L2-L7 services, while others focus just on certain functional areas (e.g. L4-L7). Having a framework to decode each architecture and how it might solve specific challenges for you is very important.

If you’re not sufficiently confused yet, I’ll throw out one other area to take a look at – the intersection between “cloud stacks” (e.g. OpenStack) and “SDN controllers” (e.g. OpenDaylight). See this video or listen to the SDN podcast discussion we had recently.

What’s interesting about this scenario is that both the cloud stacks and the SDN controllers are building various L2-L7 functionality into their systems. So where does that functionality belong? How will various services be orchestrated between the systems? And who will define policies when network services are needed (SDN), but they will run as a virtual appliance on a server/VM (cloud stack)?


September 30, 2013  9:39 PM

Cloud Means Change and Opportunity – If You Want It.



Posted by: Brian Gracely
Cloud Computing, Consumerization of IT, Data Scientist, DevOps, Hybrid Cloud, Multi-Cloud, Open Source, Outsourcing, SaaS, Transformation

When we were recording Eps. 100 of The Cloudcast during VMworld 2013, a discussion came up about why people may have hesitations about Cloud Computing. We jokingly called the segment “Why Does Cloud S**k?“, and a number of people from the roundtable panel gave their opinions. One side argued that people are scared or confused about the unknown – the technology, the competitive landscape, their jobs – while others felt that Cloud Computing is opening up a huge number of new opportunities for people willing to learn a little and stretch themselves.

I’ve written about the “change” aspect of Cloud Computing several times, including how people need to pay it forward and what new roles might be emerging. Change is the one constant within the IT industry, whether it’s major shifts like PC to Internet to Cloud, updates to significant systems like SAP, or just adjusting to working with remote co-workers who telecommute. So for the people who claim they don’t want to adjust to Cloud Computing, I wonder how they have survived in IT through any previous technology shift.


September 19, 2013  12:32 PM

Why did Software Defined * (Everything) happen?



Posted by: Brian Gracely
Cloud Computing, Converged Infrastructure, Data Center, Open Source, Software-Defined, Software-Defined Data Center, Virtualization, VMware

In the early 1900s, Henry Ford revolutionized the transportation industry by mass-producing the automobile. It was amazing. People could leave their homes to see the country. Then there was a significant need for major infrastructure to enable that “exploration application”. Highways and freeways were built. We marveled at the feats of engineering to build the roads and bridges that connect us from sea to shining sea. At some point, we stopped being fascinated with the road, and an amazing ecosystem of hotels, restaurants, amusement parks and other “entertainment applications” sprang up. New cities were born and the economy of the entire country grew as new possibilities became available to more people. The removal of friction from one application led to the growth of many other applications. The standardization of the road enabled incredible economic growth.

Our industry does not lack for hyperbole, and many have come to believe that “Software Defined” is quickly becoming the latest candidate for the buzzword bingo Hall of Fame. But in order for a term (or concept) to generate this much inertia, or “noise”, there has to be a reason; there’s too much money in the IT industry to chase unicorns.

So how’d we get to this point…?

A few basic things happened.

  1. Moore’s Law doesn’t sleep.
  2. The pace of hardware change and software change is mismatched and people no longer have any patience.
  3. Open Source software became more mainstream and visible.
  4. Public Cloud Computing became more mainstream and everyone became IT.

To begin with, almost everyone has access to the same, fast hardware. This might be x86-CPU server boxes, or merchant-silicon networking boxes. There are still companies that do unique things with hardware elements, or create tightly integrated packaging, but it’s no longer a pre-requisite for entry into the market. This significantly lowers the barrier to entry.

On the software side of the equation, we have broadly available software libraries and development tools that are accelerating the pace of development. Combine this with a shift to Agile methodologies and Continuous Integration. Throw in a few layers of abstraction (from OS or hardware) and application bits are getting created faster than ever.


September 19, 2013  12:30 PM

Google’s Potential Strategic Cloud Advantage



Posted by: Brian Gracely
AWS, Cloud Computing, Google

When Google launched (or beta’d, or preview’d) the Google Compute Engine (GCE), many people thought it was a response to Amazon Web Services and its significant market lead in IaaS. While this might be somewhat true, I tend to believe there are some nuances people are missing that could have a significant impact on different industries than you might expect.

Think about this –

Amazon builds marketplaces.

Google builds platforms.

Amazon thinks about end-user experiences.

Google thinks about platform-user experiences.

So while Amazon has built AWS to be a utility computing platform, with a number of very interesting services, it’s really much more of a utility computing marketplace. They provide the tools to create a new market for IT services and IT applications.

Google, on the other hand, is all about building platforms to interact with digital information, as an underpinning to drive advertising. They subsidize this amazing collection of information by providing a number of free services that end-users can enjoy (e.g. Maps, YouTube, Gmail, Android Mobile, etc.).

It’s possible to believe that Google is attempting to compete with AWS as a next-generation IT platform, but I think they may have very different intentions. I think their bigger ambitions aren’t the IT industry, but rather the broader media industry.

Google now owns the most ubiquitous web property for consuming media – YouTube.

Google controls the fastest-growing interaction point for billions of humans – the Android OS running on the majority of smartphones. This can obviously be extended to tablets or potentially other large displays (e.g. devices formerly called TVs).

Google is beginning to move even closer to users, with a foray into wearable computing, beginning with Google Glass.



