If you’re into DevOps, you’re probably already familiar with Docker, the container management technology that came from dotCloud – back when it was still a PaaS company. If you’re more of a traditional IT person, you might have just recently heard of Docker if you follow VMware and saw them discuss it at the recent VMworld 2014 event in San Francisco. If you’re curious about the technology, here are some places to get started:
NOTE: There is a difference between Docker (the company) and Docker (the technology). The company is a VC-backed entity that created the technology and is responsible for commercializing it in multiple ways. The technology is available via open-source, as well as through commercial offerings – both from Docker and other companies (see the list below). I know that’s a little bit confusing, but welcome to the open-source, open-core, new tech from the Cloud moving to the Enterprise world of modern IT.
Ready to dig in? Here are some good starting places:
- Docker Weekly Newsletter – https://www.docker.com/subscribe_newsletter/
- CenturyLink Labs Newsletter – tons of good stuff related to Docker (independent) – http://panamax.io/# (see bottom for sign-up)
- Read the Code – as pointed out before, here’s the source code: https://github.com/docker/docker
- Not into Reading the Code – here’s a book by the Docker creators – http://www.dockerbook.com/
- Want to Experiment (online) – https://www.docker.com/tryit/
- Want to Install it (laptop) – Mac, Windows, Linux
- Try it Live: Several Cloud Providers now offer Docker, including Google Cloud Platform, Digital Ocean, Tutum and several others
- Tutorial on Docker 101 – http://slides.com/kennycoleman/introdocker#
- But what about Security? – http://www.slideshare.net/jpetazzo/linux-containers-lxc-docker-and-security
Hanging with the Community
Would rather learn from others? Here’s where to go:
- Find Docker-ized Files & Applications on Docker Hub – centralized repository for Docker images
- Go to DockerCon – it was in June, so plan for 2015
- Watch the Videos from DockerCon or recent meetups – https://www.youtube.com/user/dockerrun
- Attend a Docker Meetup (just about everywhere) – https://www.docker.com/community/meetups/
What a strange, weird and oftentimes confusing week in San Francisco. As always, it’s great to be able to reconnect with such a broad community. But it definitely highlighted that our industry is at a crossroads in many dimensions.
Before the show there was a massive earthquake outside of San Francisco which destroyed a lot of good grapes (and some real estate in Napa). And we had a really interesting Twitter discussion about how geeks like to learn new stuff.
Heading into the week, the excitement level felt somewhat muted. Usually we know about the majority of planned announcements, but there’s always a level of anticipation about those few projects or announcements that have been well-kept secrets.
VMware – The Infrastructure Company
There is a part of the VMware Software-Defined Data Center (SDDC) story that leads to the conclusion that IT will evolve to deliver IT-as-a-Service, which has a bunch of benefits for driving cost-savings and agility for the business. But I’ve noticed an interesting nuance in how they tell their story which has me a little bit confused. In order to get to that IT-as-a-Service state, the critical elements are around Standardized Application Services, delivered via a Self-Service Catalog, with automated deployments going on behind the scenes. It’s about the consumption model for end-users and the management model for IT. But in the current SDDC story, that stuff all comes last. It comes after they talk about turning Compute, Storage and Networking into software. Software-Defined Plumbing. This is how infrastructure companies talk – infrastructure, infrastructure, infrastructure – and then sprinkle in some security and management at the end.
Everybody knew about “Project Marvin”, now called EVO:Rail. For a show focused on Software-Defined Enterprise, it was a small set of hardware that seemed to steal the buzz from the show. They also previewed the concept of EVO:Rack, but didn’t provide any timelines, so I’m just going to assume that it’s at least 12 months out (if not more). The technology to seamlessly manage multi-rack scale is very complex and not immediately visible within the VMware portfolio today.
Some people seemed to get hung up on the terminology between Converged Infrastructure and Hyper-Converged. I think it might be simpler to call this new trend 2nd-generation Converged Infrastructure (1st-gen being products like VCE Vblock, and reference architectures like FlexPod and VSPEX). At the end of the day, it’s still primarily technology that you could buy before, with some pre-built installation scripts. It’s slightly improved in terms of having an element manager that looks at the entire system, but the previous device-level tools are still available. And those tools aren’t cheap or simple, so there are still areas for improvement.
It dawned on me recently that I’ve been part of this crazy IT community for 20 years now. My first job was in sales for a small reseller. When they asked me if I knew the 7-layer OSI model, I confidently said “yes” and later realized that I had confused that with Maslow’s Hierarchy of Needs, which I learned about in some psychology course. Stack’s a stack, right? Since then I’ve had the opportunity to manage a consulting team, be a systems engineer, handle support calls, be a product manager, be a marketeer, work in corporate M&A, start a few small “companies” and a bunch of other fun stuff. I’ve been lucky that people and companies have let me bounce around and explore different interests.
And after 20 years, sometimes I feel like I’ve been doing this forever. People in our industry like to joke that an “Internet Year” is like 7 years in the real world, so I suppose it makes sense that I feel like I’ve been doing this forever. And then I look at how old my kids are, and my mortgage, and my driver’s license and realize that AARP doesn’t kick in for a while and “retirement” is at least ANOTHER 20 years away. Crap! Now what?
I’ve written before that I tend to have a bunch of conversations with colleagues that fall into the 35-45 yrs age-range. Even did a podcast about it. Everybody is watching the crazy pace of change in the IT industry and they are trying to figure out what to do next, what to learn next, where is the path forward.
I always try and stress two things to anyone that asks me for guidance/advice on those questions:
- Whatever you decide to do, make sure it’s something moving you towards next steps and bigger goals, as you have 10-15-25 more years left to work – unless you have a rich relative or are really awesome at winning the lottery.
- Expect that you’re probably going to need to gain the experience you need for that next role without anyone paying you to do it (e.g. learning for “free”)
The other advice I’d probably give someone today is that we’re nearing the end of a long run of how the IT industry has been modeled. The model of vendor > distribution > channel/SI > customer now has serious competition from public Cloud Computing and various forms of Open-Source. OPEX is replacing CAPEX and Software is replacing Hardware. Knowing how to write code will be important no matter where you end up. There is going to be quite a bit of chaos over the next 5 years, and then some new equilibrium will most likely shake out. Go read Simon Wardley’s blog if you want some more guidance on a model for mapping out the future – just beware, he’s super smart and your head will probably hurt after reading a few articles.
I saw the image for Pizza-as-a-Service floating around from many of my colleagues over the last few weeks. It came from an article written by Albert Barron (IBM). And while I would love to live in a world where Pizza was the only food available, I have a couple of issues with this article. Actually, I don’t have any issues with the main premise of the article, which appears to be educational, helping explain the difference between the concepts of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Where I have some issues is how our industry tends to take analogies like this and stop at the *-as-a-Service definitions.
My issues tend to fall into three categories:
- These definitions are quickly blurring. For example, how do you classify something like Relational Database Service (RDS) from Amazon Web Services (AWS)? It delivers a Database via IaaS infrastructure, but several Database Admin (DBA) functions are automated behind the scenes, like a PaaS.
- Since many companies sell a mix of on-premises and off-premises capabilities, it’s somewhat misleading to claim “vendor manages”, especially if this is for a Private Cloud environment. The vendor may be creating the automation to enable certain functions as-a-Service, but they aren’t actually operating them for the customer (unless this is an outsourcing or SaaS-delivered service). So while the vendor may have helped you automate the provisioning, backup and protection of a Database, this isn’t an apples-to-apples Database-as-a-Service with a public cloud offering. I’ve seen many customers misunderstand that language and set the wrong expectations for their IT operations or developers.
- Even if people grasp the differences in definitions, they often don’t grasp the differences in real-world use-cases. For example, I find many people that don’t understand what happens to an application once it starts to leverage “services” from a cloud. I’ll go back to the IaaS + Database-as-a-Service example. If a company leverages those services, which make a lot of sense if they don’t want to focus on DBA tasks, then they are going to have to figure out how to replace those DBA-as-a-Service capabilities if they ever choose to move that application to another Cloud provider. I suspect that this is because infrastructure people understand things up to the VM-level, and application people understand stuff above the VM (or container). Either way, it’s always interesting to see people want to make a claim about “lock-in” when they really don’t understand what locks a customer or a specific application into a specific cloud. It’s not always just the cloud provider.
Sometimes I’m a complete blowhard, and other times my soapboxes start to materialize in the marketplace. I’ve been on my soapbox about Cloud Management as-a-Service a couple times in the past year (here, here). My premise goes something like this:
- Cloud management software is either overly expensive to buy upfront, or often too complex to install and operate.
- The number of people that have the skills to setup and operate a Cloud environment is still fairly small.
- The companies that run Public Cloud environments for many customers have a learning curve that will grow exponentially faster than any set of individual on-premises cloud deployments.
- Unless you’re a Public Cloud provider and have built your own intellectual property (e.g. Cloud Management software), there really isn’t any differentiation that an IT organization gains from going through the effort of learning to install and operate their own Cloud Management infrastructure. If it works, it efficiently spins up resources for application teams. If it fails, then life is pretty miserable for everyone involved.
And when asked about use-cases, I typically point to three initial starting points:
- Customer that wants to allow their developers to access Public Cloud resources, but they would like to continue to have some amount of visibility (and potentially policy-based control) of that usage. Sometimes this is called “Cloud Brokering”.
- Customer that wants to own the underlying infrastructure (in their Data Center or at a CoLo facility), but doesn’t want to operate the underlying Cloud Management infrastructure. Customer would like Cloud Management to be an OPEX model, and pay for it as they evolve their IT operations to use the new system.
- Customer that wants to build a Hybrid Cloud environment, combining #1 and #2 into a single system.
The greatest trick the devil ever pulled was making mainstream media believe that everyone in IT was an experienced Linux SysAdmin.
Or so I thought as I traveled to Portland, Oregon for OSCON 2014 (all content and videos included). I wasn’t exactly sure what to expect, although I’ve been to several open-source events in the past (LinuxCon, OpenStack Summit, dozens of local meetups for Docker, DevOps, etc.). I went because I wanted to be uncomfortable and learn some new things. I went because I wanted to meet people in new segments of the community. I went because I wanted to learn how open-source communities act and interact, from the perspective of developers, community leaders, evangelists, operations, and all the groups somehow involved. I went because I wanted to get some hands-on experience and ask questions.
Stuff I confirmed:
- Even if you’re attending the event, it doesn’t automatically mean you’re a Linux guru. Lots of people still trying to figure it out.
- Everything is on GitHub. It’s there, or it doesn’t really exist to this community.
- DevOps skills are in high demand
- Docker is the new hotness, or at least the new “gotta see how much world hunger this solves”
- No two developers or admins use the same tools.
- There are lots of companies doing cool stuff; they just hide it behind how you interact with them on the web/mobile.
- Not all companies are comfortable using FOSS, but many of them keep exploring because of the possibilities.
I took a tutorial on Git and GitHub. Even with the small examples that we worked on, with approximately 200 people collaborating on the same project, I’m still trying to wrap my brain around how some sites and applications get updated so frequently without issues (then I saw this picture and I don’t feel so bad). Fork code bases, update, merge, open pull requests, review, accept, push code. The notes from the tutorial were shared on this Google Doc, as well as some Q&A. Most of the tutorial can be done in your own time, content here. LinkedIn might be what people consider their “formal” resume/CV, but GitHub is becoming the place where people go to see what you’ve actually accomplished.
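The fork/branch/merge mechanics from that tutorial can be practiced locally without touching GitHub at all. Here’s a minimal sketch in a throwaway repository (the file, branch and commit names are made up; the pull request step only exists on the hosted side, so a plain merge stands in for it):

```shell
# Create a scratch repo so nothing here touches your real projects.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# First commit on the default branch.
echo "hello" > README.md
git add README.md
git commit -qm "initial commit"
base=$(git rev-parse --abbrev-ref HEAD)   # "master" or "main", depending on git version

# A contributor does their work on a feature branch...
git checkout -q -b feature
echo "a new feature" >> README.md
git commit -qam "add feature"

# ...and the maintainer merges it back (the "accept the pull request" moment).
git checkout -q "$base"
git merge -q feature
git log --oneline
```

Run it a few times and the fork-update-merge loop starts to feel mechanical rather than magical, which is roughly the point of the tutorial.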
I took a tutorial on Docker (slides here; here’s another good set of tutorials), because you couldn’t walk 5 feet without running into someone doing something interesting with Docker (see: Kubernetes, Mesos, Cloud Foundry, Clocker, etc.). The layering, nesting and portability elements of Docker are very interesting, and community support and associated projects are very strong (Cloudcast podcasts on Docker – Eps.97, Eps.139, Eps.143, Eps.150). Some people associate Docker with virtualization, like VMware or KVM. Other people associate Docker with application virtualization. Others consider it to be an IaaS or PaaS replacement. Many of the Docker experts will tell you that each of those is partially right, and partially wrong.
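The layering mentioned above is easiest to see in a Dockerfile: each instruction produces a new read-only image layer, and unchanged layers are cached and shared between images. A minimal, hypothetical example (the base image tag and app.py are placeholders, not taken from any of the tutorials linked here):

```dockerfile
# Base layer, pulled from Docker Hub.
FROM ubuntu:14.04
# A new layer with Python installed; rebuilt only if this line changes.
RUN apt-get update && apt-get install -y python
# A new layer containing just the application file.
COPY app.py /opt/app.py
# Metadata only (no filesystem layer): the container's default command.
CMD ["python", "/opt/app.py"]
```

That caching is why rebuilding an image after changing only app.py takes seconds, and it’s a big part of the portability story people get excited about.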
I spent a morning in Simon Wardley’s strategy and value-chain mapping course, “Playing Chess with Companies”. It was an excellent opportunity to understand and practice the concept of value-chain mapping (eventually there will be an app for WardleyMaps) and how it can be used to not only analyze existing strategy, but also view competitors and attempt to predict how certain actions could impact the competitive landscape. 3-4 hrs isn’t enough time to grasp all the concepts, so I suggest spending some time on Simon’s blog to get more details and examples of how it applies in real life. In particular, spend some time trying to understand the evolutionary model of concepts > products > commodity > utility. Also spend some time understanding how the evolution not only creates disruption to existing actors, but also potentially opens new opportunities for other actors.
I spent time listening to how Facebook rolls out updates. Their design is modular, so updates only impact small areas of what customers see for any given change. But they measure the updates meticulously, and apply them to small sample sizes, to determine if the update accomplished its goal (faster response times, longer viewing times, usage metrics, reduced CPU usage, etc.).
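That measure-on-a-small-sample style of rollout can be sketched as a toy canary check. The latency numbers and the 5% threshold below are invented for illustration, not anything Facebook described:

```shell
# Hypothetical canary gate: compare mean response times (ms) from the
# control group against a small canary group before widening a rollout.
control="120 118 125 122 119"
canary="121 117 124 120 118"

mean() { echo "$1" | tr ' ' '\n' | awk '{s += $1; n++} END {printf "%.1f", s / n}'; }

c_mean=$(mean "$control")
k_mean=$(mean "$canary")
echo "control=${c_mean}ms canary=${k_mean}ms"

# Widen the rollout only if the canary is no more than 5% slower (made-up threshold).
limit=$(awk -v c="$c_mean" 'BEGIN {printf "%.1f", c * 1.05}')
if awk -v k="$k_mean" -v l="$limit" 'BEGIN {exit !(k <= l)}'; then
  echo "promote: widen the rollout"
else
  echo "rollback: canary regressed"
fi
```

A real system would compare many metrics with proper statistics, but the shape is the same: small exposure, measure, then decide.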
I sat through a session which described how Instagram builds their web and mobile apps, using a team of 5-6 people. I got exposed to the React development framework and the types of tradeoffs being made between Traditional Server-Side, Rendered AJAX and Single-Page applications (bandwidth, how much code executes, static content vs. dynamic content, etc.).
I was exposed to some new concepts – Kubernetes (from Google, based on how they run their internal infrastructure) and Mesos. These can be looked at as separate concepts and projects, or linked together to help scale and model micro-service based architectures.
I saw a mix of companies/groups/organizations highlighting on-premises offerings (Ceph, Mesosphere, Docker, etc.) vs. companies/groups/organizations highlighting offerings that ran as SaaS applications (Auth0, SauceLabs, Mashery, GitHub, etc.). This was different from the AWS Summit a few weeks ago.
I picked the brains of community and developer evangelists to learn how they run successful meetups and hackathons. Lots of little tips that I hadn’t expected, and that I plan to incorporate into future events.
The OSCON community is generally very friendly. Probably because it doesn’t view itself as overly competitive, but rather trying to solve complex problems and focused on building things. They let the business side worry about competition. While not always the case, the technology discussions are collaborative. The problems they focus on are modern web-centric problems (mobile, social, big data, etc.) and there isn’t talk of legacy integrations. There is demonstration vs. presentation. There’s a sense of shared ownership between builders and users. It’s an event that draws people focused on the next 5-10 yrs worth of potential change. And Portland is a great place to host the event. It embraces weirdness, which is sort of how new ideas are often viewed. Everything is within walking distance. Nobody is in a rush. There are dozens of brew pubs nearby that let you sit and have engaging conversations.
It was an uncomfortable week, in a good way. I learned a lot. I had to turn off some old ways of thinking to hear how new thinking was taking place. Problem solving in ways that didn’t worry about certain limitations or constraints. I was one of the “normals”, which meant I was an outsider. That was OK, I knew that going in. I’m trying to reset my future compass. There was a sense that what’s discussed at OSCON is the emerging north.
Over the past couple years, I’ve been spending more and more time digging into various open-source projects that are gaining visibility, and sometimes traction, within the Enterprise. Projects like Hadoop, OpenStack, CloudStack, Docker, Cloud Foundry, Git as well as several others. I tend to look at them from three perspectives:
- Functionality – What commercial or popular capability can the open-source project replace (e.g. OpenStack/CloudStack vs. VMware vCloud; Hadoop vs. Google MapReduce)?
- Community Model – Is the project primarily driven by a benevolent leader or by a group of many contributors? Is it a commercially independent project (e.g. Apache Foundation projects, Docker) or is it more commercially driven (e.g. OpenStack, Cloud Foundry)?
- Tools or Frameworks – Is the project generally focused on being a loose coupling of tools (e.g. OpenStack) or is it an extensible framework that could evolve into various sub-projects over time?
Open-Source projects are an interesting departure from the economic model that has dominated the IT industry for decades. In the past, a company invested massive amounts of capital for R&D, then distributed the output as product/license through various channels. Customers paid for the technology and the originator/vendor captured the majority of the revenues in the distribution channel. But Cloud Computing and Open-Source have flipped that model on its head. Now technology created within projects, funded by a mix of vendors and independent individuals, can be used to power many digital business models (e.g. Google Ads, SaaS software, online insurance quotes, home automation, etc.). In some cases the technology was created by the business owner and then open-sourced (e.g. Google MapReduce >> Hadoop) and in other cases the businesses are built using open-source technologies.
- This feels a lot like VMworld. Many of the same faces, and many of the same companies. Are they seeing a marketplace trend, or just hedging their bets?
- It has gotten very commercialized. Lots of booths on the show floor, lots of people throwing extravagant parties, lots of swag being given away. But the commercialization didn’t align to the customer attendance. It still seems to be very vendor / integrator-centric, with companies like Rackspace sending as many as 250 people.
- RedHat had just acquired Inktank, highlighting my prediction that we’d begin to see intense consolidation of OpenStack talent. RedHat followed this up soon after with the acquisition of eNovance.
- “Open-ness” will give way to capitalism. This started with the HP Helion announcement, which was expected, but then got interesting as RedHat announced new support guidelines for non-RedHat OpenStack environments.
Since then, we’ve seen a few more interesting moves that make me wonder if the Paris event will be the last time we see the broader OpenStack community come together.
- Mirantis expanded their efforts to deliver their own distribution, offer OpenStack-as-a-Service and partner with Oracle.
- Cisco continues to slowly drip details of their InterCloud products and services, with OpenStack being a central element.
- RedHat continues to expand its OpenStack “stack”, releasing the ManageIQ stack and continuing to acquihire talent. They appear to be keenly focused on Private Cloud environments.
- Smaller announcements from companies such as Piston Cloud and Bluebox are becoming more focused on Cloud Foundry / PaaS environments.
- MetaCloud continues to offer an interesting mix of business models and operational experience.
- Rackspace is working with Morgan Stanley to investigate its future options.
Attending an AWS event is a strange experience, especially if you’ve been attending technology events for many years and have gotten used to a certain pattern of announcements, demos, showfloors, etc. I’ve attended several AWS events in the past, primarily AWS re:Invent, but the AWS Summit in NYC seemed a little different this time.
First, the event is primarily attended by people that wouldn’t classify themselves as “IT Professionals”. It’s mostly people that would associate themselves with the “products” that a business makes or sells, and they leverage technology as a means of creating and delivering those products or services.
Second, the host (AWS) isn’t speaking to IT professionals and telling them about transformations and changes that are needed before they can get to the next stage in delivering IT. They focus on what is available now, to help their business do something now. And the context isn’t speeds and feeds, it’s business outcomes. How to get from ideas to execution.
Third, they have mastered the art of speaking in ways that ease the mind about the challenges and complexities of building applications and running technology environments. And make no mistake, making technology work for a business is difficult, often very difficult. They speak in terms of pennies-per-hour and massive long-term savings, and “undifferentiated heavy lifting”. Note to IT departments: they are talking about you, the purveyors of that “undifferentiated heavy lifting”. AWS is in the IT replacement business.
Finally, they don’t expose their org chart when discussing their offerings. It’s not the Storage division or the Compute division or Desktop division. They’ve figured out how to link together services, without mandating that they be bundled for pricing or renewals. And the services/products are all offered with public APIs, so it doesn’t initially limit the ecosystem from building adjacent capabilities around the core AWS services (more about this later).
There was quite a bit of buzz today with the announcement that Tesla Motors would release their patents to the world. The automaker, and its CEO (Elon Musk), has become the envy of the automotive industry – and is giving away its most prized possessions? They are playing a technology game with Ford, Mercedes, Toyota and others, so why let them have the chance to catch up? Having grown up in Detroit, but spending the bulk of my professional career in technology, this is a fascinating turn of events and crossing of the streams. [Note: Because of the wording of this announcement, some are questioning whether it will have any real applicable usage by outside companies.]
In today’s world, whether it’s automobiles or Cloud Computing, there are some massive platforms in the market. Competing against the largest platforms, or introducing new technology (or business models) can be extremely complex and costly. So how does a smaller company possibly compete against the industry giants?
One thing to do is to play by a different set of rules. In the case of Tesla Motors, they have developed the knowledge and experience to create an incredible set of products. But for them to scale, they would need to invest massive amounts of capital to build out the nationwide network of charging stations. And build service stations. And optimize their supply-chain. So instead of owning a vertical business model, they are choosing to let the market share in the opportunity of electric vehicles. They are sharing the costs and sharing the potential. In a game against competitors that are 10-100x as large, it may be the only way to remain as more than a small niche in the overall transportation market. Electric vehicles are a small market segment, at this time. It will be interesting to watch and see if the market helps it grow, establishing Tesla technology as the standard for any future vehicle.
But this approach doesn’t always work. For example, several years ago Rackspace decided that the future would be on-demand cloud computing, a shift from their traditional hosting business. But the industry leader, Amazon Web Services (AWS), was the 800 lb. gorilla and many years ahead of them. While Cloud Computing is still in its early stages, Rackspace decided that leveraging open-source was their best chance to compete. Instead of having to hire 100s of top-flight engineers to build their environments, they created and enlisted the OpenStack community. 100s of “free” engineering resources, creating Cloud Computing software that they could use within their own data centers. But several years into OpenStack, that strategy hasn’t closed the gap between them and AWS.