From Silos to Services: Cloud Computing for the Enterprise


October 28, 2013  12:35 PM

In 2013, It’s Time for a Moonlighting Revolution

Brian Gracely

Back in the late 1980s, when we only had a handful of TV channels, one of the most popular shows was “Moonlighting”. Bruce Willis (back when he had hair) and Cybill Shepherd starred as a pair of moonlighting detectives: they did their regular jobs during the day and scratched their problem-solving itch at night.

Regardless of the show’s popularity, many companies still maintain a strict policy of not allowing their employees to do other work, even if it’s “off the clock”. It’s distracting. It could create conflicts of interest. It puts the individual ahead of the company.

Sidebar: Remember when we used to have that novel concept of “life” and “work” being two distinct and separate activities, defined by hours of the day? Ah, the good old days!

But when I look around at the technology industry today, the list of moonlighting projects that people are creating is outstanding. Here’s a short list of projects that have caught my eye recently:

I’ve written about my giving-back or “free” projects (here, here) over the last couple of years. Most of these moonlighting projects by others seem to have the same motivations as mine. First, they allow the creator some freedom to explore an area of passion that they might not be able to pursue within the day-to-day constraints of their day job. Second, they give the creator a chance to elevate their voice in the industry. Third, they often open up new channels of interaction with communities of people who share a similar passion or expertise. And finally, they (sometimes) open up new opportunities to make some money – because life isn’t getting any cheaper. Continued »

October 14, 2013  10:39 AM

Will Developer Preferences Hold PaaS Back?

Brian Gracely

[Image: “Puppies vs. Cows” – Photo Credit: @aneel; Piston Cloud Computing]

There’s an interesting duality to developers, especially those who subscribe to the evolutionary model known as “DevOps”, which is popular among those working on modern cloud computing applications. On one hand, they believe in a world of consistency and standardization below the applications (puppies vs. cows?). On the other hand, I’ve yet to meet two developers who can agree on how to build an application or on the tools used to design, code and monitor that application. When it comes to their tools, as opposed to the underlying “plumbing”, customization and choice are king.

So this brings us to the question of Platform-as-a-Service (PaaS) – is it part of the infrastructure plumbing or part of the developer’s toolkit? Depending on which PaaS platform is used, it can be either or both. Platforms like Heroku run on AWS; Cloud Foundry can run on AWS, VMware or OpenStack; and OpenShift offers options to run on Docker, OpenStack, or SELinux.

Then you have services like Amazon AWS, which many would classify as IaaS, but which now delivers a wide range of add-on services – database, queuing, data analytics and storage – similar to those offered by the PaaS platforms. The difference is that they are optional services, not mandated in order to use the platform.
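To make the “optional add-on” point concrete, here’s a minimal sketch (Python, using the boto3 AWS SDK; the queue name is a hypothetical placeholder) of an application consuming just one AWS add-on service – a message queue – without being forced to adopt anything else about the platform:

    import boto3

    # Connect to the queuing add-on service only; nothing else about the
    # platform is mandated just because the application uses this one API.
    sqs = boto3.client("sqs", region_name="us-east-1")

    # Hypothetical queue name, used purely for illustration.
    queue = sqs.create_queue(QueueName="orders-demo")
    queue_url = queue["QueueUrl"]

    # Producer side: enqueue a unit of work.
    sqs.send_message(QueueUrl=queue_url, MessageBody="process-order-1234")

    # Consumer side: pull work off the queue when the app is ready for it.
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in messages.get("Messages", []):
        print("handling:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

The same application could just as easily swap in a PaaS-provided queuing service; the point is that on AWS the choice is left to the developer rather than baked into the platform.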

Some will argue that polyglot PaaS – the inclusion of multiple development frameworks (eg. Java, Ruby, Go, .NET, etc.) – will help eliminate concerns about a lack of flexibility for developers. But others argue that trying to do everything means a platform doesn’t do any one thing particularly well.

And then there’s the question of who (or whom?) should run the PaaS platform. Conventional wisdom says it’s a DevOps team that brings together the Developers and Operators. But ultimately, choices will need to be made about which images, packages, tools and customizations will be supported for various classes of applications – similar to what exists in traditional IT environments today. Continued »


September 30, 2013  10:30 PM

Confused about SDN? You should be…

Brian Gracely

It’s been a long time since Networking generated this much buzz in the IT industry. You have to go back to the late 1990s, when Gigabit Ethernet companies combined hardware-based Layer-2 and Layer-3 capabilities into a single switching platform. At the time this was considered fairly ground-breaking, because prior to that Layer-2 was done in hardware and Layer-3 in software, meaning that significant architectural issues were involved when companies needed to “switch” vs. “route” traffic around their network. The start-up scene at the time was very, very crowded – names like Granite Systems, YAGO Systems, Extreme Networks, Alteon Networks, Rapid City Networks, Prominet, Berkeley Networks and several others that never survived.

Nobody thinks about performance of Layer-2 vs. Layer-3 these days, but back then it was a true technology-religion war. Fortunes were made and lost. Architectural visions were torn apart by competitors on a daily basis.

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Any of that sound familiar?

Today’s Networking world is going through a similar time. Tons of startups – this time it’s Nicira, Big Switch, PlumGrid, Contrail, Plexxi, Insieme, Cumulus, Arista, Embrane and many, many others. If you searched through LinkedIn or Dice or GitHub, you’d probably see that many of the lead engineers at these new companies have backgrounds from some of those previous generation Gigabit startups. In addition, many have DNA linkage to Cisco, Juniper or Stanford, which is sort of the litmus test I tend to use when starting my evaluations of networking companies and technologies.

And all of these companies want you to change your network. Adopt their technologies so you can save money, move faster, be more agile and make the application teams happier. On the flip side, some people are complaining about all the potential change:

  • “It doesn’t scale…”
  • “This requires an entirely new architecture…”
  • “Which group is going to be in charge of operations, and how will we troubleshoot?”
  • “This will commoditize networking…”

Haven’t we been here before?

“Software-Defined Networking” is the buzzword in this space. It’s the “Cloud” of the infrastructure world.

I’d suggest that people trying to understand SDN should think about it in three distinct areas:

  1. What is the core SDN architecture of this company or open-source project (eg. OpenDaylight, OpenStack Neutron, etc.)? Does it meet the control, scaling and functional requirements your network needs? How much of it is available today vs. in the future?
  2. How does this SDN architecture interact with your legacy network, or integrate with legacy equipment / tools / process? Is it entirely an overlay architecture or are there ways to bring in legacy environments such that it’s not a completely separate silo? How do you do operations, monitoring and troubleshooting between the new and legacy environments?
  3. How are applications mapped to the new SDN environment? Can existing application-centric tools (and automation tools) be used, or does the environment have a proprietary way to describe applications to the network?

Every SDN architecture has trade-offs. Some are software-only, or “virtual overlays”. Some are hardware+software. Some provide L2-L7 services, while others focus on certain functional areas (eg. L4-L7). Having a framework to decode each architecture and how it might solve specific challenges for you is very important.
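As one way to ground point #3 above – how applications get described to the network – here is a hedged Python sketch. The controller URL, credentials, endpoint path and JSON schema are all hypothetical placeholders (real controllers such as OpenDaylight each define their own northbound APIs); what’s worth evaluating is the shape of the interaction: push application-level intent, and let the controller compile it into device-level configuration.

    import requests

    # Hypothetical northbound endpoint; every controller defines its own.
    CONTROLLER = "https://sdn-controller.example.com:8443"
    AUTH = ("admin", "admin")  # placeholder credentials

    # Describe the application intent, not individual switches or flows.
    policy = {
        "name": "web-tier-to-db-tier",
        "source_group": "web-servers",
        "destination_group": "database-servers",
        "allowed_ports": [3306],
        "qos_class": "gold",
    }

    # The controller is responsible for compiling this intent into
    # device-level configuration (overlay tunnels, flow entries, ACLs).
    resp = requests.post(
        f"{CONTROLLER}/policies",
        json=policy,
        auth=AUTH,
        verify=False,  # demo only; a real deployment would validate certs
        timeout=10,
    )
    resp.raise_for_status()
    print("policy accepted:", resp.json())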

If you’re not sufficiently confused yet, I’ll throw out one other area to take a look at – the intersection between “cloud stacks” (eg. OpenStack) and “SDN controllers” (eg. OpenDaylight). See this video or listen to the SDN podcast discussion we had recently.

What’s interesting about this scenario is that both the cloud stacks and the SDN controllers are building various L2-L7 functionality into their systems. So where does that functionality rightfully belong? How will the various services be orchestrated between the two systems? And who will define policies when network services are needed (SDN) but they run as a virtual appliance on a server/VM (cloud stack)?
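To illustrate the overlap, here is a deliberately simplified Python sketch. The classes and method names are hypothetical stand-ins, not any real OpenStack or OpenDaylight API; they just put the orchestration question into code form – the same network service could be delivered by the SDN controller or by a virtual appliance attached through the cloud stack, and somebody has to decide which.

    # Hypothetical clients – stand-ins for a cloud stack API (eg. OpenStack)
    # and an SDN controller's northbound API (eg. OpenDaylight).
    class CloudStackClient:
        def boot_vm(self, name, image):
            print(f"[cloud stack] booting VM '{name}' from image '{image}'")
            return {"vm": name, "ip": "10.0.0.15"}

        def attach_virtual_appliance(self, vm, service):
            # Network service delivered as a VM-based virtual appliance.
            print(f"[cloud stack] attaching {service} appliance to {vm['vm']}")

    class SdnControllerClient:
        def apply_policy(self, vm, policy):
            # Network service delivered by the SDN controller itself.
            print(f"[sdn] applying policy '{policy}' for {vm['ip']}")

    def provision_app(name):
        cloud = CloudStackClient()
        sdn = SdnControllerClient()

        vm = cloud.boot_vm(name, image="web-server")

        # The open question from the post: which system should own this step?
        sdn.apply_policy(vm, policy="allow-http-from-lb")       # option 1
        cloud.attach_virtual_appliance(vm, service="firewall")  # option 2

    provision_app("web-01")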


September 30, 2013  9:39 PM

Cloud Means Change and Opportunity – If You Want It.

Brian Gracely

When we were recording Eps. 100 of The Cloudcast, which was during VMworld 2013, a discussion came up about why people may have hesitations about Cloud Computing. We jokingly called the segment “Why Does Cloud S**k?”, and a number of people from the roundtable panel gave their opinions. One side felt that people are scared or confused about the unknown – the technology, the competitive landscape, their jobs – while others took the view that Cloud Computing is opening up a huge number of new opportunities for people willing to learn a little and stretch themselves.

I’ve written about the “change” aspect of Cloud Computing several times, including how people need to pay it forward and what new roles might be emerging. Change is the one constant within the IT industry, whether it’s major shifts like PC to Internet to Cloud, updates to significant systems like SAP, or just adjusting to working with remote co-workers who telecommute. So for the people who claim they don’t want to adjust to Cloud Computing, I have to wonder how they have survived any previous technology shift in IT.  Continued »


September 19, 2013  12:32 PM

Why did Software Defined * (Everything) happen?

Brian Gracely

In the early 1900s, Henry Ford revolutionized the transportation industry by mass-producing the automobile. It was amazing. People could leave their homes to see the country. Then there was a significant need for major infrastructure to enable that “exploration application”. Highways and freeways were built. We marveled at the feats of engineering to build the roads and bridges to connect us from sea to shining sea. At some point we stopped being fascinated with the road, and an amazing ecosystem of hotels, restaurants, amusement parks and other “entertainment applications” sprang up. New cities were born and the economy of the entire country grew as new possibilities became available to more people. The removal of friction from one application led to the growth of many other applications. The standardization of the road enabled incredible economic growth.

Our industry does not lack for hyperbole, and many have come to believe that “Software Defined” is quickly becoming the latest candidate for the buzzword-bingo Hall of Fame. But in order for a term (or concept) to generate this much momentum, or “noise”, there has to be a reason – there’s too much money in the IT industry to chase unicorns.

So how’d we get to this point…?

A few basic things happened.

  1. Moore’s Law doesn’t sleep.
  2. The pace of hardware change and software change is mismatched and people no longer have any patience.
  3. Open Source software became more mainstream and visible.
  4. Public Cloud Computing became more mainstream and everyone became IT.

To begin with, almost everyone has access to the same, fast hardware. This might be x86-CPU server boxes, or merchant-silicon networking boxes. There are still companies that do unique things with hardware elements, or create tightly integrated packaging, but it’s no longer a prerequisite for entry into the market. This significantly lowers the barrier to entry.

On the software side of the equation, we have broadly available software libraries and development tools that are accelerating the pace of development. Combine this with a shift to Agile methodologies and Continuous Integration, throw in a few layers of abstraction (from the OS or hardware), and the application bits are getting created faster than ever. Continued »


September 19, 2013  12:30 PM

Google’s Potential Strategic Cloud Advantage

Brian Gracely

When Google launched (or beta’d, or preview’d) the Google Compute Engine (GCE), many people thought it was a response to Amazon Web Services and its significant market lead in IaaS. While this might be somewhat true, I tend to believe there are some nuances people are missing that could have a significant impact on different industries than you might expect.

Think about this –

Amazon builds marketplaces.

Google builds platforms.

Amazon thinks about end-user experiences.

Google thinks about platform-user experiences.

So while Amazon has built AWS to be a utility computing platform, with a number of very interesting services, it’s really much more of a utility computing marketplace. They provide the tools to create a new market for IT services and IT applications.

Google on the other hand is all about building platforms to interact with digital information, as an underpinning to drive advertising. They subsidize this amazing collection of information by providing a number of free services that end-users can enjoy (eg. Maps, YouTube, Gmail, Android Mobile, etc.).

It’s possible to believe that Google is attempting to compete with AWS as a next-generation IT platform, but I think they may have very different intentions. I think their bigger ambitions aren’t the IT industry, but rather the broader media industry.

Google now owns the most ubiquitous web property for consuming media – YouTube.

Google controls the fastest-growing interaction point for billions of humans – the Android OS running on the majority of smartphones. This can obviously be extended to tablets, or potentially to other large displays (eg. the devices “formerly called TVs”).

Google is moving even closer to users with a foray into wearable computing, beginning with Google Glass.

Continued »


August 24, 2013  10:06 AM

Top 5 Challenges for Private Cloud Success

Brian Gracely

Several years ago I wrote a post about The 5 Ps of Cloud Computing, back in the early days of IT organizations thinking they could design and operate cloud computing environments internal to their own data centers. My friend Christian Reilly (Chief Cloud & Mobile dude at Bechtel) wrote a variation based on his experience building both an internal cloud and a mobile application store for his business.

Back in 2009 and 2010, the maturity of the technology, and of the skill-sets within IT, needed to take on a transformational project as large as “private cloud” was not really there. During that time we did see a large number of companies evolve their IT organizations to be more cost-efficient through technologies such as server virtualization or converged infrastructure, but the business’s demands for agility and speed continued to put pressure on these projects to move up through the application layers.

[Sidebar: I saw a great quote on Twitter from Mark Lucovsky (@marklucovsky) about the difference between actually delivering *aaS as a service vs. selling *aaS products for someone else to run – (translation) "the value is in the *aaS portion, not so much the underlying technology"]

Continued »


August 17, 2013  5:25 PM

Open Source for Cloud – Projects vs. Products

Brian Gracely

As more and more open-source projects get implemented within Enterprise IT organizations, one of the most frequently confused topics I hear discussed is open-source projects (FOSS – Free Open Source Software) vs. products. It’s not surprising, since the majority of IT organizations purchase the tools they use as “products” (on-premise or off-premise).

Let’s Start with the Basics

There are hundreds of open source projects (Apache, Linux, etc.) targeting large IT challenges today. They are created by individual developers and fostered by groups of developers who want to further the project. Some of the most popular include:

  • Linux OS
  • Apache Web Server
  • MySQL DB
  • PHP
  • OpenStack (multiple projects)
  • CloudStack
  • Various NoSQL databases and big-data platforms (Hadoop, Cassandra, MongoDB, Couchbase, Riak, etc.)
  • Open vSwitch
  • Various SDN projects (OpenFlow, FloodLight, OpenDaylight)

At this stage, these projects are just code. Anyone can download them, use them or contribute code back to them. There are guidelines to follow depending on the open source license that is used, which is especially important if someone decides to use some of the code to create a new product.

Projects Inside of Projects

One area that’s often confusing is when an open-source project is actually a series of loosely coupled projects. OpenStack is a great example of this. There is no “openstack.exe” or “openstack.rpm”. OpenStack is actually made up of multiple projects (Nova, Horizon, Glance, Swift, Cinder, Neutron, Heat, etc.), some of which are official (in a given release) and others experimental. When implementing OpenStack, a group can use some or all of the projects, depending on its specific needs.
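A rough way to picture this “pick the projects you need” model is as a simple selection exercise. The sketch below is illustrative Python only – the project names are real OpenStack components, but the selection logic is a toy, not actual deployment tooling:

    # Core OpenStack projects and the function each one provides.
    OPENSTACK_PROJECTS = {
        "nova":    "compute (virtual machines)",
        "glance":  "image catalog",
        "horizon": "web dashboard",
        "swift":   "object storage",
        "cinder":  "block storage",
        "neutron": "networking",
        "heat":    "orchestration templates",
    }

    def select_projects(needs):
        """Return the subset of projects a deployment would enable.

        `needs` is a set of required capabilities, eg. {"compute", "networking"}.
        """
        chosen = []
        for name, description in OPENSTACK_PROJECTS.items():
            if any(need in description for need in needs):
                chosen.append(name)
        return chosen

    # A small private cloud that only needs VMs, images and networking:
    print(select_projects({"compute", "image", "networking"}))
    # -> ['nova', 'glance', 'neutron']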

Beyond the Projects

In between the projects and commercial products sits a series of efforts, usually driven by commercial vendors, to take the next step in simplifying an open-source code base for use by IT organizations. These efforts typically take the form of a “free” version of a commercial product. The free versions tend to have one or more of the following characteristics:

  • Often a subset of the commercial product – compatible, but might not have all the “enhanced” features
  • Not as actively supported by the vendor; support comes instead from the community around the open-source project.
  • May be more aligned to the most recent fixes, pulls and enhancements in the “trunk” of the open-source project. This would be preferable for developers or customers that need the latest, bleeding-edge capabilities vs. stability.
  • Examples include: Red Hat Fedora, Basho Riak, 10gen MongoDB, etc.

Commercial Products 

Some IT organizations are overworked and understaffed. They might love to reduce their IT costs by only using FOSS, but the challenges of limited documentation and support create pressures that they aren’t prepared to manage. For these customers, vendor-offered products, based on open-source projects, might be a good fit. Not only does it offer them the ability to potentially reduce acquisition costs, but it also gives them the following benefits:

  • Access to professional support and documentation, as well as community support that may expand their ability to solve problems.
  • Risk management – the ability to fall back to the open-source “trunk” code if the vendor they are working with has financial problems or doesn’t deliver the required updates in a timely manner. While this isn’t seamless, it does offer some customers a way to manage the risk of vendor selection.
  • For those IT organizations that do have capable developers, they can access the open-source versions and either better understand how the code works, or submit a pull request for new capabilities that they have developed.
  • Leverage the experience from other IT organizations that are using open source software, either through meetups or via online communities.

I’d love to get your feedback on how (or if) your IT organization is engaging with open-source projects, or if you’re using open-source software internally today.


August 17, 2013  1:54 PM

Cloud Evolution – USB 2.0 is a Long Way Off

Brian Gracely

Back when we started The Cloudcast (.net) podcast in 2011, our first guest was Christian Reilly from Bechtel. At the time, he was a couple years into a multi-year process of evolving his internal IT architecture to a private cloud. One of the most interesting comments he made about their evolution was the lack of interoperability between technologies and platforms claiming to be “cloud”. The way he explained it, this new paradigm was at a crossroads. It could either emulate the early Internet walled-gardens of AOL and CompuServe, or it could embrace the open standards that allowed the Internet to expand into every aspect of our lives.

More than two years later, the debate about cloud interoperability is still raging.

These days, there seem to be three camps of thought about how to deal with interoperability:

Same Cloud Everywhere 

The simplest way to think about how to leverage multiple clouds is to have the same technology everywhere, in theory ensuring interoperability across multiple environments. This is the approach being taken by VMware vCloud Hybrid Service, vCloud Service Providers, Virtustream xStream and various implementations of OpenStack. Some of these offerings are also beginning to offer alternative API support (often AWS API compatibility).

Same API Everywhere

Other projects are attempting to take the path of having similar APIs available on multiple cloud instances. This is the approach being taken by The Amazon-Eucalyptus Partnership (Lydia Leong, @cloudpundit), as well as the OpenStack Foundation. It’s important to note that early implementations of OpenStack were not all the same, but the OpenStack Foundation is attempting to remedy some of this through the RefStack program. Even with these efforts, some people are still concerned that OpenStack will become too fragmented and needs a dominant leader (or vendor) to drive its success.
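To see what “same API everywhere” looks like in practice, consider the hedged sketch below (Python with the boto3 AWS SDK; the private-cloud endpoint URL is a hypothetical placeholder). The application code stays identical and only the endpoint changes – which is also exactly where the note later in this post applies: an emulated API only helps if the system behind it behaves the same way.

    import boto3

    def list_instances(endpoint_url=None):
        """List instance IDs via the EC2 API, against whichever cloud
        sits behind `endpoint_url` (None means the public AWS endpoint)."""
        ec2 = boto3.client(
            "ec2",
            region_name="us-east-1",
            endpoint_url=endpoint_url,  # the only thing that changes per cloud
        )
        reservations = ec2.describe_instances()["Reservations"]
        return [i["InstanceId"] for r in reservations for i in r["Instances"]]

    # Public AWS:
    print(list_instances())

    # Hypothetical private cloud exposing an EC2-compatible API
    # (eg. a Eucalyptus or OpenStack EC2-compatibility endpoint):
    print(list_instances("https://cloud.internal.example.com:8773/services/Cloud"))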

Cross-Cloud APIs

The third camp is interested in creating a more unified set of APIs that would work across multiple clouds. This is the area that has created the most contention and debate, for obvious reasons – complexity, differentiation, competitive markets, etc. Leading cloud user Netflix has been actively working to open-source many of the tools it uses today, in hopes that other cloud providers (Netflix is AWS’s largest customer) will be able to create competitive offerings that give it flexibility for its business. Other leading cloud visionaries are looking to drive cross-cloud interoperability – An Open Letter to the OpenStack Community: Our Future Depends on Embracing Amazon (Randy Bias, @randybias). Within the OpenStack community, not everyone believes this is a good idea. The debate seems to be heavily divided between the “innovation” crowd and those who argue that ecosystems can be overtaken through commoditization (Can OpenStack dominate IaaS? (Simon Wardley, @swardley)).

NOTE: It’s important to remember that just emulating or copying an API doesn’t ensure interoperability between clouds. Those APIs must also be built on top of similarly architected systems; otherwise the same API call won’t deliver the same result on each cloud. This can be especially challenging if the underlying infrastructure (compute, network, storage) has different capabilities between clouds.

As you can probably see, we’re still a long way away from the possibility of “USB 2.0”-like compatibility between clouds. Technology, politics, competition and money are getting in the way. It will be interesting to see whether the vendors and open-source projects find ways to work more closely together, or whether customers decide to use various 3rd-party tools (Enstratius, RightScale, Ravello Systems, CloudVelocity, etc.) to get around the lack of interoperability.

NOTE: While quite a bit of architectural-level work would need to be done to create interop between systems, it’s important to point out some excellent work being done to educate the market – Architecting OpenStack for VMware vSphere (Kenneth Hui, @hui_kenneth).


August 4, 2013  8:34 PM

What is “Enterprise Ready” in Cloud Computing?

Brian Gracely

Probably more than any other question, I get asked all the time whether I believe that Amazon AWS is “enterprise ready”. Sometimes the question comes from analysts trying to determine the extent to which the IT industry is shifting. Sometimes it comes from vendors trying to determine the pace of change/transformation/disruption. Other times it comes from IT organizations trying to determine what their future strategies look like for procurement, service offerings and future skills evolution.

“Enterprise Ready” is one of those loaded phrases that you really need to be careful about using, because the person you’re speaking with typically has a preconceived notion of what it means. For many people, it means that the service essentially emulates all the aspects of an existing Enterprise IT data center – including all the elements of performance, redundancy, security, compliance, etc. In essence, they expect the new environment to functionally be like the world they are used to. What they don’t want are the long delays to get things provisioned, the long meetings with security and compliance teams telling them everything is insecure, or the long budgeting process to procure the required technology. [Insert analogy about eating cake here]

What I try and explain to people when answering this is to think about “Enterprise IT” in two buckets:

  • Bucket 1 – Applications that you typically associate with IT – Email, ERP, HR, Unified Communications, SharePoint, etc.
  • Bucket 2 – Business requests for technology that typically get turned down by IT

Bucket 1 is all about applications that have long, relatively stable life-cycles and IT is usually trying to balance cost vs. performance of these applications. This is a technology bucket. Known equipment. Known capacity needs. Aligns to depreciation cycles. These applications might be a fit to migrate to a public cloud, if the business is facing some ‘change event’ (eg. M&A, equipment EoL, licensing upgrades pending, new CIO, budget challenges, IT skills challenges, etc.).

Bucket 2 is all about the pace of today’s business world – the world where winning and losing is often measured by how quickly we can move from a great idea to a great implementation of that idea, via technology + business models. These ideas are responsive to the market, to competition and to changes that weren’t planned for in the annual budgeting meeting. At the time of the ask, they are often the complete opposite of the Bucket 1 applications – unknown capacity needs, shorter usage duration, unknown scalability.

So what might fit into Bucket 2?

  • VP of Marketing would like a smartphone app for the sales-kickoff or annual tradeshow. They aren’t sure if it’ll get 1,000 or 15,000 downloads (unknown capacity, unknown scale). They would like it to be available 2 weeks before the event, and collect data for up to 1 month afterwards. Beyond that it’s not needed (short duration).
  • VP of Operations just got back from a conference discussing “Big Data” and would like to prototype ways to better analyze sales trends and how they are affected by weather, gas prices, seasonality and a few other sources of publicly available data. He needs the prototype completed in 60 days, as he needs to show an ROI (eg. better sales insight) to justify a more expansive project. If the ROI doesn’t materialize, the bigger project might be cancelled – quick timeline, potentially wasted capacity beyond 60 days.
  • CIO tells the lines of business that the existing annual IT budget has been exceeded by Q3, as a major project has gone over budget (it occasionally happens), but one of the lines of business has a major opportunity if a new system can be put in place in time. The opportunity is $5-10M in Q4, with a follow-up of $10-20M in Q1. Pace of implementation is of the essence, but where to do it? Sometimes this is called “Shadow IT”; either way, it’s a reality of doing business in the 21st century. Global resources exist, so why shouldn’t a business try and leverage them?

At this point you might be asking why I didn’t explicitly mention AWS and “Enterprise Ready”. Hopefully you’ve figured out that there is more to “Enterprise Ready” than just the underlying technology. In today’s world, there is a place in the public cloud (or remaining in existing data centers) for applications with those characteristics. But there is also unmet Enterprise demand for solving business challenges with technology, now. Those Enterprises, and those applications, are “Enterprise Ready” too – they are just focused on a different characteristic being the most important element of their success.

So how big could Bucket 2 be? It’s tough to tell (long-term), because we often don’t know how big a new technology segment could be until people and companies understand just what is possible without prior constraints. The Client-Server market was 10x the Mainframe market. It’s not unusual for people to have 3-4 connected devices (smartphones, multiple tablets, laptops, etc.).

Cloud Computing helps level out the shortfall between supply and demand for Enterprise IT. Whether or not that unmet demand is “Enterprise Ready” is now as much about pace of implementation as it is about SLAs and IOPS. The forward-looking CIOs are trying to figure out how to deliver both to their Enterprises.


