There seems to be an interesting trend developing in the PaaS (Platform as a Service) community that I want to write about a bit this morning. What if we didn’t need a “heavy” operating system in our cloud computing environments? What if we took a page out of the old switches and routers operations manual for the management of operating systems? Let me explain what I mean.
As I mentioned in a previous post, at OSCON the hot products were Salt, Ansible, and Docker. Take a step back for a moment and notice they are all devops tools. They are all ways to make an organization go faster. What I also noticed at OSCON was that while the public PaaS services were getting notice, the tools to build a private PaaS were a bigger focus this year. I would also argue that everything I write here could be applied to cloud computing in general, not just PaaS.
I followed the buzz after the show on Twitter and noticed some folks taking this conversation a step further: while some products (Ansible, Salt, Puppet, Chef) provide automation and others provide containerization of applications (Docker), what about the operating system? The only way to optimize the operating system is to minimize it as much as possible. We need a small, fast operating system that provisions quickly with a minimal footprint. Once we have that in place, the automation and provisioning process is highly optimized. What would something like that look like?
To get an idea, go check out CoreOS. CoreOS is a “just enough” Linux OS, but it is designed to be highly scalable and deployable rather than chasing a bare-minimum footprint like Tiny Core Linux and others. Think of CoreOS as a product that is designed to do one thing and do it very well. I’ve signed up for the CoreOS alpha; I’ll let everyone know on Twitter if I get in. (As a side note, they ask for your favorite type of beer when you sign up, which is a bit like trying to pick your favorite child. In the end I went with Oatmeal Stout.)
For those that are skimming along, what makes CoreOS different? From a distribution standpoint it is minimal, Docker compatible, and has built-in clustering capabilities. But where I think it stands out is from an operations perspective. First of all, the operating system is READ ONLY. Yes, you read that right. You can’t write to the OS. From an operations standpoint, this is genius. I equate this operations model to my old days of working on Cisco switches and routers. In that model you had two partitions on the boot flash: one was active and the other was standby. The partitions were also read only by default. If you had a problem you could always revert to the standby to see if that corrected the issue. You could also perform very quick upgrades by writing the new image to the standby flash, making it active, and then simply rebooting. CoreOS uses this model.
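The active/standby flow above is simple enough to sketch in a few lines of code. This is just a toy model to make the idea concrete; the class and method names are all hypothetical and this is not CoreOS’s (or Cisco’s) actual update tooling:

```python
# Toy model of the dual-partition (A/B) upgrade scheme: two read-only
# OS images, one active and one standby. Upgrades only ever touch the
# standby partition, and a rollback is just flipping back.

class DualPartitionDevice:
    """Simulates a device with two OS partitions, "A" and "B"."""

    def __init__(self, initial_image):
        # Both partitions start out holding the same known-good image.
        self.partitions = {"A": initial_image, "B": initial_image}
        self.active = "A"

    @property
    def standby(self):
        # Whichever partition is not active is the standby.
        return "B" if self.active == "A" else "A"

    def upgrade(self, new_image):
        # Write the new OS image to the standby partition only;
        # the running (active) partition is never modified in place.
        self.partitions[self.standby] = new_image
        # Mark the freshly written partition as the one to boot next.
        self.active = self.standby

    def rollback(self):
        # If the new image misbehaves, flip back to the old partition,
        # which is still sitting there untouched.
        self.active = self.standby

    def boot(self):
        # "Rebooting" just loads whichever partition is marked active.
        return self.partitions[self.active]
```

The key property is that an upgrade never overwrites the image you are currently running on, so the worst-case recovery is one reboot back to the previous partition.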
As I see it (I still need to play with it), operations would look very similar in a CoreOS/Docker setup. You would boot to the active CoreOS partition, and from there Docker containers would be layered on top. What we have achieved here is a loose coupling between the operating system and the applications. We have also removed the operating system as this huge monolithic object that is hard to replace, upgrade, or maintain operations control over. I really like this model and plan to explore it more in the future.
What are your thoughts? Is borrowing an operations concept from the networking world and applying it to operating systems a good idea?