One of the most frequently discussed topics on the podcast, sometimes on the air and sometimes off, is the financial future of all these cool new technology companies that have open source software at the core of their business. Companies like Docker, CoreOS, Hashicorp, Mesosphere, Puppet Labs, Chef, Mirantis, Ansible and many others. We watch as they continue to get increasingly large VC funding rounds, and like the dot-com bubble of the late 1990s, we wonder how they will match their revenue models with their valuation models. [Note: Puppet Labs had a strong Q1 revenue announcement]. Are these companies that are built only for acquisition (short-term), or do they have the staying power to last 5, 6 or 7 years before they move to IPO stage?
Keep in mind that these companies, with open source at their core, are different from other start-ups that are enduring longer periods of existence/growth (5, 6, 7yrs) without building an open source community – eg. Nutanix, SolidFire, Virtustream, Box.net, etc.
As I think about all the funding, I remember back to some of the most important (and pragmatic) guidance I’ve gotten from entrepreneurs like Rodrigo Flores and Rodney Rogers – “no matter how cool you think your technology is, it can only create a sustainable business if you can actually collect a Purchase Order (P.O.)”. And it’s important to remember that there is a big difference between building a community around a technology and building a business around your technology. It’s the difference between creating value and capturing value. The former requires technologists and vision. The latter requires that you understand not only who will fall in love with the technology, but who will approve and fund the acquisition of the technology (directly or as-a-Service). It means contracts, licenses, terms of payment and all the other things that aren’t embedded in a GitHub update. Continued »
For just a two-day event, the 2015 Cloud Foundry Summit produced quite a few interesting elements for a community that is starting to mature and hit its stride. I’ve written about it a couple of times (here, and indirectly from discussions here) immediately following the show. But now that I’ve had a few days to reflect on it, some other thoughts came to mind:
The Foundries are Forming
When we spoke with Sam Ramji, he spoke about the development and contribution model for Cloud Foundry projects. While “outside” contributions are considered, to become a core contributor one must work within the “Foundry” model of pair programming. And in 2015, several new “Dojos” have been stood up. Dojos are the physical locations where the Cloud Foundry model of pair programming is preached and practiced. While a dojo existed at the Pivotal offices in San Francisco, one has now been added at EMC’s Boston/Cambridge offices, and new dojos were announced for RTP, NC (by IBM) and an independent location in San Francisco (by the Cloud Foundry Foundation). In speaking with leaders of the Cloud Foundry Foundation, it’s expected that more dojos will open in 2015 and 2016, including locations outside of the United States.
New Business Models are Forming
IBM offers Cloud Foundry as a public cloud service, under the BlueMix brand. Pivotal offers it under the Pivotal Web Services brand. CenturyLinkCloud now offers Cloud Foundry via their 13 data centers. Huawei will offer Cloud Foundry to their global customer base. And many others are offering Cloud Foundry both on-prem and off-prem, delivering interoperability across multiple clouds and in multiple geographic regions. Interoperability and competitive markets create great value and opportunities for customers, and the potential for new business models to emerge.
Public Cloud is the Starting Point, But Private Clouds still Exist
The premier customers that were highlighted throughout the week were well-known Enterprise names, across multiple industries. For many of them, their starting point in using Cloud Foundry was a public cloud offering or instance. It not only allows the developers to get new applications up and running quickly, but it also begins the organizational transformation that’s needed to make PaaS a successful IT model. It’s the model called out in books like “Switch“, where small but noticeable change is more important than boiling the ocean. Then, for many of these companies, they would deploy an on-prem instance and continue to build cloud-native apps and transform their culture or business. This is a reversal of the model we frequently heard about with IaaS, which started as an on-prem IT transformation, with the promise of extending to a public cloud for added capacity – “cloudbursting”. Continued »
In my case, I recently found myself in Kitty Hawk, North Carolina, ascending the sandy Kill Devil Hills. It was on a school field trip with my older daughter to learn about the history of the state. On this site, back in 1903, Orville and Wilbur Wright made the first powered, manned flight. Soaring amongst the birds, high above the sandy ground, finding balance between wind and gravity.
In parallel, I’ve been preparing a talk about “Why Business Should Care about DevOps” for EMCworld. To draw an analogy, I go back and look at how Henry Ford revolutionized the creation of not only a new industry, but also the surrounding industries of hotels, suburbia, drive-thru restaurants, and interstate highways.
Even though I benefit from their genius on a daily basis, I hadn’t really thought about any of these men in decades. Sometimes we take their amazing accomplishments for granted once they become a normal part of our lives.
Three amazing individuals. Three amazing innovators that spent their entire lives focused on a singular purpose – airplanes and automobiles.
But we no longer celebrate the ones that dedicated their lives to a singular purpose. Those that continued to iterate and improve on their enormous accomplishments. Too often we celebrate the “Renaissance Men (and Women)” that dabble in many pursuits.
I’m not sure what happened:
- Is it that the Microsoft “monopoly” case has people afraid of having too much market share?
- Do we morally fear that the prospect of “Don’t Be Evil” isn’t possible if you achieve the successes of Google?
- Is it that market dominance is a fading concept, with the barrier to entry for new competition being so low, that people believe they need a portfolio of ideas to iterate upon in parallel?
- Is it that we’ve become such a distracted society that boredom and ego need to be fed at all times?
It’s not as if the world lacks for big problems to be solved or re-invented. We have global problems of food, water, energy, and literacy. Continued »
Last week, Amazon finally broke out the revenues (and expenses) of their Amazon Web Services (AWS) division. As many people expected, its 2014 revenues were in the $5-6B range, showing solid profitability. So what does this all mean?
From purely a contribution perspective, AWS is obviously a critical aspect of the overall Amazon business. While it may be less than 5% of overall revenue, it’s much more profitable than other business segments. And it can be the foundation for many other aspects of Amazon’s business – video, phone, tablets, FireTV, etc.
But to me, there are two aspects of their revenue announcements that are most interesting. Continued »
This past week, VMware launched a new area of focus called “Cloud-Native Apps“. For the most part, it was focused on how VMware plans to directly address the use of containers (in multiple formats) within their broader stack and strategy. The announcement highlighted two new projects (Photon and Lightwave), which are being open sourced. It was an important launch for VMware because there are growing levels of interest around containers from both the market and VCs. Meetups with container-centric topics (or DevOps) are growing in popularity. Companies such as Docker, CoreOS, Mesosphere, Hashicorp and Kismatic are all expanding their offerings in ways that could one day compete with VMware in the data center. This is in addition to all of the PaaS platforms (eg. Cloud Foundry, OpenShift, Cloud 66, etc.) expanding their underlying support for containers and container management frameworks (eg. Lattice). Even Microsoft is getting into the container game in a serious way.
While it’s still very early days in this space for VMware, I had a few questions following the launch:
1 – How far does the Vision/Strategy extend?
It’s understandable that VMware has an “embrace and extend” strategy, as incumbents always need to find the balance between extending existing products/revenue and expanding into new areas. This approach aligns more to the “Gartner Bi-Modal IT” view of the future of IT, where the older and newer technologies run in parallel but are managed by a centralized group. This is different from the “Pioneers, Settlers and Town Planners” approach, which highlights how forward-looking groups will break free from the existing model and blaze new paths.
So how far will VMware extend this strategy to develop more elements of a container-centric architecture vs. integrating with commonly used elements in the marketplace today? Will they eventually decouple some of these new elements from the legacy elements (eg. ESX VMs), or does that remain a constant long-term?
2 – How far does the Open Source support extend?
It’s great to see that VMware is setting up repos on GitHub and actively seeking contributions from the community (or will be soon). But how far will the open source projects extend? If Project Photon integrates into ESX, will VMware ever offer an open source version of ESX? This was something Adrian Cockcroft mentioned to us on the podcast, and VMware had Adrian speaking at their launch (starts at 11m45s). Continued »
Words and labels and tags in our industry mean something – at least for a while – and then marketing organizations tend to get involved and use words and labels and tags to best align to their specific agenda. For example, things like “webscale” or “cloud native apps” were strongly associated with the largest web companies (Google, Amazon, Twitter, Facebook, etc.). But over time, those terms got usurped by other groups in an effort to link their technologies to hot trends in the broader markets.
Another one that seems to be shifting is PaaS, or Platform as a Service. It’s sort of a funny acronym to say out loud, and people are starting to wonder about its future. But we’re not an industry that likes to stand still, so let’s move things around a little bit. Maybe PaaS is the wrong term, and it really should be “Platform”, since everything in IT should eventually be consumed as a service.
But not everyone believes that a Platform (or PaaS) should be an entirely structured model. There is lots of VC money being pumped into less structured models for delivering a platform, such as Mesosphere, CoreOS, Docker, Hashicorp, Kismatic, Cloud66, Apache Brooklyn (project) and Engine Yard acquiring OpDemand. Continued »
A few weeks back, a friend of mine asked me to help him create some live demos for a big event that his company had coming up. As with any demo, we spent some time focused on the target audience, what we expected of their knowledge background, and then the actual story that we wanted to tell. Then making the technology actually work was the last piece of the puzzle.
In this case, he wanted to showcase some next-generation technology, which was targeted at a “DevOps-centric” audience. As we built out the storyboard, we walked through the tasks he wanted to show and how they would be displayed. What struck me as odd was how often he wanted to show each layer within his demo (infrastructure, management, apps) and how many times he wanted to showcase a GUI. The audience he was going after would typically never use those tools, instead focusing on either CLIs or, more likely, APIs to integrate between the layers. It was becoming a demo of functionality that a customer would never use in real life, but it looked pretty on stage and on a big display. I tried to get him thinking the way his target DevOps audience would think:
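To make that mindset concrete, here’s a hypothetical sketch of my own (the endpoint names and payloads are invented, not from his actual demo): the workflow that audience expects is scripted against APIs, not clicked through in a GUI.

```python
import json
import urllib.request

# Hypothetical platform API for illustration only -- a real demo's
# endpoints and payloads would differ.
BASE = "https://platform.example.com/api/v1"

def build_request(path, payload):
    """Construct (but don't send) a JSON POST, the way a script would."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{BASE}/{path}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The scripted flow a DevOps audience would expect: provision the
# infrastructure layer, then deploy the app layer -- no GUI clicks.
steps = [
    build_request("infrastructure/clusters", {"nodes": 3}),
    build_request("apps/deployments", {"image": "demo-app:1.0"}),
]

for req in steps:
    print(req.method, req.full_url)
```

The point isn’t this particular API; it’s that each layer of the demo is exercised the way the audience would actually consume it — as a call another tool can make.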
Every year, the landscape of open source in the Enterprise seems to make subtle changes as IT organizations struggle to find a balance between becoming more agile and having the skills to engage with open source software and communities. Even the traditional vendors are getting into the game (EMC, Cisco, HP, Juniper, VMware, etc.).
Why does open source appeal to Enterprise IT organizations?
- Acquisition Cost – In theory, the acquisition cost of open source software should be either $0, or much lower than commercial software. Of course this can vary wildly across models: “commercial support”, “open core”, or commercial software “built on open source” (eg. OpenStack) rather than on open standards (eg. IETF, IEEE).
- Licensing – It goes hand-in-hand with acquisition costs, but has some nuanced differences. More and more, business leaders and developers understand the power of accelerating the “idea-to-execution” paradigm, which means that they need to be willing to experiment. Flexible, open licensing means that more projects can be started. When the license costs are $0 (or close to it), this better aligns the costs to the value for the IT organization or business. It flips “vendors get paid up-front” to “vendors get paid with usage/consumption, or when the business realizes value”.
- Community Roadmaps & Timelines – The pace of software projects coming out of the open source communities (Apache, Linux, etc.) is typically much faster than commercial vendors – every 3 to 6 months vs. once a year. The ability to leverage all the creative resources that are passionate about a project is an excellent way to get leverage and speed for new projects.
- Open Interfaces – To succeed in open source, a project needs to be flexible to the components around it. It must support open APIs and be pluggable for various architectures. In more and more cases, this provides the “solution” vs. “components” trade-off (and here) and the lock-in avoidance that many companies desire.
Got Skills? – There are many skills that typical Enterprise IT organizations may not have readily available:
- Linux – Most open source projects highly leverage aspects of Linux. Free courses are available online.
- GitHub – Being able to interact with the source control system that houses most of today’s open source software. Lots of free resources and tutorials are available online.
- Open source licensing – In most cases, Enterprise IT won’t attempt to sell the software they create, but they should be knowledgeable about the different types of licensing and when they may (or must) contribute back the changes they make to a given project. Educating yourself on the options is important.
- Writing to APIs – Does your IT organization primarily rely on GUIs, CLIs and some scripts? Evolving this to interact with APIs requires some more advanced development skills, or a willingness to work with new tools to interact with those APIs. This is a good introductory tutorial on REST APIs.
- Funny project names – Don’t expect to see “Enterprise Edition vX.X”, instead you need to get used to things like “Pig”, “BOSH”, “Hive”, “Swift” or “Clocker”.
- Need Documentation? If there’s anything developers dislike more than meetings, it’s writing external documentation. “Read the code” replaces “RTFM”, but either way this can lead to frustration for Enterprise IT groups that may have come to expect more complete examples in documentation. This is an area where paid support might add value to IT shops.
- Learn the New Application Models – a great place to start is The Twelve-Factor App model. This is the basis of the microservices trend that is so frequently discussed for modern application development.
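As one small taste of the Twelve-Factor model, its config factor says to keep configuration in environment variables rather than baked into code — a minimal sketch (the variable names here are illustrative, not prescribed by the model):

```python
import os

# Twelve-factor config: read settings from the environment so the same
# code runs unchanged in dev, staging, and production.
def load_config(env=os.environ):
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "port": int(env.get("PORT", "8080")),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# Passing a dict stands in for a deployment's environment.
config = load_config({"PORT": "9000", "DEBUG": "true"})
print(config["port"], config["debug"])
```

Only the environment differs between deployments, which is what makes an app portable across the public and private clouds discussed above.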
What are you doing within your company regarding open source projects? Is it a company-level priority yet, or just something that you’re exploring outside of work?
For the last 9-12 months, my day job has kept me pretty focused on next-gen technologies and open source. While this is fun and exciting and involves a lot of new learning, it also creates an interesting dynamic when out talking to customers and communities that aren’t based in Silicon Valley (or maybe Seattle, Austin and a couple of other tech hotspots). Whether we’re talking about open source or automation or micro-services or containers, the inevitable question always comes up – “This doesn’t seem to be aligned to our current apps, so what will we do with them?” In many cases, these are packaged applications (Microsoft, SAP, Oracle) that may or may not have any customizations.
Like all the great mysteries of IT, the answer to this question is “it depends”. And, like anyone giving advice in a situation with limited context or details (since we just met), it’s important to provide a framework of possibilities.
Many existing applications will not be re-written to take advantage of modern frameworks, nor should they be. Maybe there are ways to “wrap” services-centric architectures around them (good discussion on Eps.6 of The Goat Farm podcast), but often these just need basic care and feeding to continue providing the service to the business. If anything, I suspect that we’ll begin to see many UNIX/SPARC environments migrated to x86/VMware environments as that older hardware goes EoL. Get those apps onto lower-cost hardware as part of a cost-reduction project. It’ll be a big, but boring, business for VMware. All the elements to handle those large apps (lots of CPU cores, lots of RAM, dedicated I/O, node-level HA) are now embedded in VMware vSphere 6.
Depending on how old the applications are, there is some chance this happens, if some of the previous development team still exists and can explain the legacy code. If the application is part of a business transition, such as a move to mobile-centric devices, then this becomes more likely. Maybe the entire application isn’t rewritten, but enough is modified so it functions properly on the new devices – touch screens expect different interactions than those that use a mouse/keyboard for input. If these re-writes happen, it’s also an opportunity for IT and Business groups to look at the corresponding culture shifts that align to DevOps, creating a more agile environment to operate those applications.
Other than a few specialty, vertical applications (clinical trials?), there is basically a SaaS application for everything you do in-house today. Other than a few giants (Salesforce, WebEx/GoToMeeting, Workday, Concur, etc.), it’s an extremely fragmented market segment. Some areas will grow extremely large over time (eg. Office 365, Adobe Suite, Box, Dropbox) as installed bases are migrated. Others will offer unique value-add on top of other applications (eg. Twilio). And almost every major packaged vendor is looking to make major offerings more attractive as SaaS applications (Oracle, SAP, Microsoft, Adobe, etc.). Given that the UI and UX are almost always better for SaaS than for on-prem applications (both web and mobile), you’ll rarely find an end-user that doesn’t prefer a SaaS application.
[NOTE: We didn’t talk about SaaS applications for adding-value around more modern environments. We’ll save that for another post]
If I had to bet on where the majority of effort will be with legacy apps over the next 3-5 years, I’d put the odds at something like this:
- Ignore: 60-70%, with a focus on UNIX/RISC migrations to x86/VMware
- Rewrite: 15-20%, with a focus on integration with mobile apps
- SaaS-ify: 30-50%, with a focus on applications that drove “productivity” in the 90s and 00s (email, collaboration, etc.) being realized as non-differentiated commodities that nobody really wants to manage.
Ever since Docker took their most recent funding round ($40M Series B at ~ $300M valuation), many people have speculated about the future of the company. Do they evolve to become the next VMware? Do they have a monetization model that would lead them to an eventual IPO? Do they get acquired by a larger company – and if so, by whom?
Given that Docker is usually discussed in the context of building/deploying/managing Linux containers, it was surprising to see Docker announce a partnership with Microsoft to enable Docker on Windows (and not just Boot2Docker for Windows). Interesting. Strange bedfellows? This isn’t the first time that Microsoft has appeared to announce support for the new hotness, many months or years after the initial buzz was created in the community (Zune, Phone, Tablet, Azure IaaS, etc.). But given Microsoft’s resources and reach, being a fast-follower is essentially their business model these days – and “v3” is their new beta or GA.
But the more I thought about it, this makes a lot of sense for Microsoft. As we see more computing activities move to either mobile devices (tablets, phones) or public clouds, the underlying OS is becoming less Microsoft-centric. But under the new Microsoft leadership, the willingness to embrace things like iOS or Linux (in Azure) is becoming more commonplace. It appears that they are embracing (or re-embracing) the value in monetizing the applications and frameworks above the OS.
So why does Docker make sense for Microsoft? In a nutshell, because Docker is becoming the element nearest to the OS where developers care about the technology. And because Docker provides a truly portable format (unlike VMs) across many environments – laptops, clouds – it has the potential to help future developers make a more seamless transition of their applications to Microsoft’s platforms. If future applications can be written to use native Linux or Windows Docker containers, it not only removes friction for the developers, but it also doesn’t create revenue friction (loss) for Microsoft. This was always a burden for them in adopting the new hotness in the past – it had to be adapted and locked into Windows.
Whether Microsoft becomes an active community contributor, a proprietary extender (MS Containers 2015?) or an acquirer is yet to be seen. Any of those outcomes is possible. There’s new leadership in place that is doing things very differently than in the past. Or maybe we’ll see (generic) Docker support in Windows along with a Microsoft-specific version, allowing developers choice with a potential for licensing uplift.
The nice thing about open source is that we can all follow along to see how things are progressing (Docker + Azure; ASP.NET + Docker). This doesn’t mean we’ll see everything, as some features/functions could be held back from upstream contributions (eg. “kept private”), but it’s a big step forward for Microsoft.