Clouds in the Open: The Operations Evolution of Open Source & Public Clouds


September 2, 2013  12:16 PM

VMworld Wrap Up



Posted by: Aaron Delp
cloud computing, Cloud Management, Hybrid Cloud, IaaS, SDN, VMware


Another VMworld is in the books. The show was very interesting this year for a number of reasons. First, take a look back at what I said I was focused on this year, and let's recap from there.

Mobility, VMware Horizon Suite, etc.

*crickets*… *crickets* There wasn't much in the way of mobility at the show this year. This could be a new strategy of focusing on the data center in odd years and on mobility in even years; I'm not sure. There were no new products in the mobility space, so there was really no presence in the keynote speeches. There were plenty of sessions about the Horizon product line, plenty of Hands-on Labs, show floor space, etc. Just no buzz… Moving on.

Cloud Computing

As expected, VMware had a number of announcements in this area: changes to the vCloud Suites, the official introduction of Nicira as a product, vCloud Hybrid Service, and the eventual dissolution of vCloud Director as a product. What I found most fascinating this year was the complete lack of emphasis on vCloud Director. The announcement that vCloud Director is going away over time didn't come until the last day of the show. If you take a look at the Day 2 keynote, there is no mention of vCloud Director or even the vCloud Suites at all. Everything was vCloud Automation Center, vCenter Operations Manager (vCOps), and vCloud Application Director (formerly vFabric).

Why the sudden change? In my opinion, VMware is acknowledging what the industry has known for some time: vCloud Director was slow to gain adoption because no one really wanted to use it. I think the announcement was the right decision for the company.

While I liked the Nicira announcements, the general availability of vCloud Hybrid Service was very cool. I had a chance to play with it in the Hands-on Labs and was decently impressed with the product. I think it will fill a niche for Enterprise customers who are looking for application portability without rearchitecting to a "cloud era" workload. I believe they nailed their target market with this product.

Hypervisor

No surprise here either: vSphere was bumped up to expand the reach and scalability of VMware's hypervisor. As this space becomes a commodity, VMware is doing everything it can to add features at the hypervisor layer. vSAN was added (is it vSAN or VSAN? I've seen both), as well as vFlash to let the hypervisor embrace high-IOPS local flash in servers. It will be interesting to see which vCloud features make their way into the hypervisor over time. Will we see the concepts of multi-tenancy and vApps in vSphere?

Event Logistics

After ten years, VMworld has the logistics down to an art. The sheer size of the show (reported to be 22,000 people!) is amazing, and at the same time I admire its efficiency. I was still able to get into the sessions I wanted thanks to pre-registration, walk the show floor and talk to vendors, and see the keynotes. The only slightly disappointing moment was when the Hands-on Labs crashed one day and there was a long line to get in once they were running again. All in all it was a great show. No real surprises this year, nothing that I think changes the future of VMware as a company, but still a great event.

August 26, 2013  1:18 PM

What I’m Looking Forward to at VMworld



Posted by: Aaron Delp
cloud computing, Cloud Management, Cloud Operations, Hybrid Cloud, IaaS, VMware


This week I'll be heading to VMworld, and I have a number of long-term questions I'll be looking to answer. Consider the following:

What is the end goal of the VMware Horizon Suite and Mobility in general?

As VMware matures, it is looking to bundle products together as a natural progression in the company's evolution. The mobile industry is moving much faster than most these days, and I admit I don't follow what VMware does in this space as closely as I used to. In the past I have been unimpressed with VMware's "hypervisor on a device" partitioning method. In my opinion it just doesn't seem ideally suited to the devices it runs on. Hypervisor on a server? Yes please. Hypervisor on my phone? No thanks.

Mobility to me is about accessing my applications and my data in a seamless way while still providing security, auditing, and compliance back to the Enterprise. Mobile devices in the Enterprise will be THE wave of the future, and Windows-only applications are on the way out. Tomorrow's Enterprise will be Windows applications plus native mobile applications (iOS & Android) accessed through an API for universal access. The long-term answer is the right UI for the right device. See past Cloudcasts on this topic here, here, and here.

Lastly, what is the future of iOS 7 and Android in the Enterprise? I am very interested to see where this is headed. I'm still an iOS fan today (currently writing this in a car on an iPad with a Bluetooth keyboard), but more and more I'm flipping over to the Android operating system. Most organizations, though, have been slow to adopt Android. Will they be forced to adopt it based on customer demand, the way iOS was forced upon the Enterprise as a business device demanding first-class service?

What is VMware’s plan for cloud computing?

As I write this, I know VMware's vCloud Hybrid Service, which has been in limited availability, will go to general availability this week. I am guessing they plan to announce other products as well. How will all the pieces of this puzzle fit together?


As I see it today, VMware appears to be continuing down the path of supporting very traditional workloads. I don't see the pieces and products falling into place to really support evolving cloud-era workloads and the new architectures typically required to support them. For more information on what I'm talking about, see this video.

What is VMware’s plan to keep the hypervisor away from a commodity status?

It's no secret the hypervisor is quickly becoming a commodity as it matures, just as Simon Wardley predicts any product will over time. Will VMware introduce new products or features to keep vSphere ahead in the hypervisor game and justify continuing to charge customers for the hypervisor layer? Or will they just bundle vSphere into the vCloud Suite and call it a day?

Time will tell.  What do you think?

Disclaimer: This blog is not associated with my day job. I used to work for a VMware-owned company (VCE) and I currently work for a VMware competitor (Citrix). None of this factored into this post or the questions presented here. Do with that information what you will, but I have disclosed it.


August 21, 2013  2:15 PM

What is the vMotion Moment for Cloud Computing?



Posted by: Aaron Delp
cloud computing, Cloud Management, Cloud Operations, IaaS, VMware

As we lead up to VMworld next week and all the announcements that will surround the show, a thought occurred to me. What is the “vMotion Moment” for Cloud Computing? What is the one feature that universally appealed to everyone and made the product take off?

If you aren't familiar with VMware vSphere and vMotion specifically, take a look at this link. Back in the day, when I was selling and supporting VMware ESX 2.x and GSX, there was no vSphere or vCenter yet. Virtualization at the time was a double-edged sword. On one hand it was good: it delivered consolidation and allowed for less server hardware. On the other it was bad: you had an "all eggs in one basket" scenario, because all virtual machines typically resided on local storage, so if you lost a physical server, you lost ALL of the virtual machines on that host.

Then vMotion came out, and it changed everything overnight. We now had the ability to consolidate resources and "magically" move virtual machines from one host to another, provided you had SAN storage. vMotion not only increased virtualization sales, it increased storage back-end sales, because it became the new standard data center architecture. Ask most folks who were around back then if they remember their first vMotion demo and most will say yes. It had an immediate impact on sales and adoption in our industry.

Why was this one architecture change so impactful?

  • Business Operations were on board – It was the first big step toward highly available server virtualization.
  • Business Leaders were on board – A vMotion demo was short and easy for them to consume (it could be as simple as this video: set up a ping to prove the VM stays up, then do a live vMotion; a rough sketch of that classic demo follows this list). They could see how it would benefit their business in a minute or less. No more downtime! (Not really, but hey, they were on board.)
  • Industry Vendors were on board – This is subtle but important. The SAN vendors jumped all over this, and the server vendors embraced it (I was at IBM at the time, specializing in virtualization) even though it meant selling less hardware. The server folks took solace in OEM agreements to resell VMware products to help bolster declining hardware sales.
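
For anyone who wants the demo flavor, below is a rough, hypothetical Python stand-in for the "keep a ping running during the vMotion" trick. The VM address is made up, the flags are Linux `ping` flags, and the classic demo is really just a continuous ping from a laptop while the migration runs.

```python
# Toy monitor for the classic vMotion demo: ping the VM once a second and
# print whether it answered, so the audience can watch it stay up while the
# live migration happens. Illustrative only; address and flags are examples.
import subprocess
import time

VM_ADDRESS = "10.0.0.42"   # hypothetical test VM being vMotioned

for _ in range(30):        # watch for ~30 seconds while the migration runs
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", VM_ADDRESS],  # Linux ping flags
        capture_output=True,
    )
    status = "reply" if result.returncode == 0 else "DROPPED"
    print(time.strftime("%H:%M:%S"), VM_ADDRESS, status)
    time.sleep(1)
```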

I would compare where we are today in cloud computing to the ESX/GSX days of VMware virtualization. We are waiting on that "vMotion Moment" that will deliver value to all parties in an easy-to-consume message and drive widespread adoption. So, you might be asking, what is the feature or architecture shift that will make this happen? Honestly, I'm not sure.

Cloud needs a quick, easy-to-articulate business value proposition to continue to grow. Many ideas have come and some have gone (cost savings, cloudbursting, Jevons paradox, agility, etc.), but I keep looking for a universal architectural advantage that may or may not be coming.

What are your thoughts?


August 20, 2013  11:56 AM

Is the Operating System Dead Weight?



Posted by: Aaron Delp
Cloud Applications, cloud computing, Cloud Management, Cloud Operations, Consolidation, DevOps, Open Clouds, PaaS

There seems to be an interesting trend developing in the PaaS (Platform as a Service) community that I want to write about a bit this morning. What if we didn't need a "heavy" operating system in our cloud computing environments? What if we took a page out of the old switch-and-router operations manual for the management of operating systems? Let me explain what I mean.

As I mentioned in a previous post, the hot products at OSCON were Salt, Ansible, and Docker. Take a step back for a moment and notice they are all DevOps tools. They are all ways to make an organization go faster. What I also noticed at OSCON was that while the public PaaS services were getting notice, the tools to build a private PaaS were a bigger focus this year. I would also argue that everything I write here could be applied to cloud computing in general, not just PaaS.

I followed the buzz after the show on Twitter and noticed some folks taking the conversation a step further: some products (Ansible, Salt, Puppet, Chef) provide automation and some provide containerization of applications (Docker), but what about the operating system? The only way to optimize the operating system is to minimize it as much as possible. We need a small, fast operating system that provisions quickly with a minimal footprint. Once we have that in place, the automation and provisioning process is highly optimized. What would something like that look like?

To get an idea, go check out CoreOS. CoreOS is a "just enough" Linux OS, but it is designed to be highly scalable and deployable rather than a bare-minimum footprint like Tiny Core Linux and others. Think of CoreOS as a product designed to do one thing and do it very well. I've signed up for the CoreOS alpha; I'll let everyone know on Twitter if I get in. (As a side note, they ask for your favorite type of beer when you sign up, which is a bit like trying to pick your favorite child. In the end I went with Oatmeal Stout.)

For those skimming along, what makes CoreOS different? From a distribution standpoint it is minimal, Docker compatible, and has built-in clustering capabilities. But where I think it stands out is from an operations perspective. First of all, the operating system is READ ONLY. Yes, you read that right. You can't write to the OS. From an operations standpoint, this is genius. I equate this operations model to my old days of working on Cisco switches and routers. In that model you had two partitions on the boot flash: one active and one standby. The partitions were also read only by default. If you had a problem, you could always revert to the standby to see if that corrected the issue. You could also perform very quick upgrades by replacing the standby image, making it active, and then simply rebooting. CoreOS uses this model.
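
To make the dual-partition idea concrete, here is a toy Python model of the active/standby scheme described above. It is purely an illustration of the concept, not CoreOS's (or Cisco's) actual update mechanism, and every name in it is invented.

```python
# Toy model of A/B ("active/standby") OS partitions: the running image is
# read-only, upgrades are staged to the standby slot, and activating or
# rolling back an upgrade is just booting the other slot.
class DualPartitionHost:
    def __init__(self):
        self.partitions = {"A": "os-v1", "B": "os-v1"}  # two read-only images
        self.active = "A"

    @property
    def standby(self):
        return "B" if self.active == "A" else "A"

    def stage_upgrade(self, new_image):
        # Never touch the running partition; write the new image to standby.
        self.partitions[self.standby] = new_image

    def reboot_to_standby(self):
        # Both upgrade activation and rollback are a reboot into the other slot.
        self.active = self.standby


host = DualPartitionHost()
host.stage_upgrade("os-v2")
host.reboot_to_standby()
print(host.active, host.partitions[host.active])  # B os-v2
host.reboot_to_standby()                          # rollback: boot the old image
print(host.active, host.partitions[host.active])  # A os-v1
```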

As I see it (I still need to play with it), operations would look very similar in a CoreOS/Docker setup. You would boot to the active CoreOS partition, and from there Docker containers would be placed on top. What we have achieved here is a loose coupling between the operating system and the applications. We have also removed the operating system as a huge monolithic object that is hard to replace, upgrade, or maintain operational control over. I really like this model and plan to explore it more in the future.

What are your thoughts? Is borrowing an operations concept from the networking world and applying it to operating systems a good idea?


July 31, 2013  7:18 PM

Why OpenStack Isn’t Like the Linux Kernel



Posted by: Aaron Delp
cloud computing, IaaS, OpenStack

While attending OSCON and Gartner Catalyst last week, some discussions ultimately led to my thoughts on OpenStack and where it will go in the market. While the project has had great momentum leading up to its third birthday, as related on the Cloudcast last week, a common analogy keeps popping up over and over that I wanted to dig into a little bit.


Is OpenStack the next Linux Kernel?  Will it follow the same path to success?

On the surface the analogy makes perfect sense.  Both are Open Source and have seen great success.  Both look to be projects that aren’t going away soon and are in a state of constant evolution.  It only makes sense to compare them at some levels.

Where are the projects the same?

Right now the state of OpenStack very much mirrors the beginnings of Linux. In the early days of Linux, many brave souls compiled the kernel themselves, and that was the established way to get up and running (and earn LOTS of geek cred in the process). But this was painful and a barrier to entry. So we entered the second phase of the lifecycle: distributions of Linux. Everyone and their brother (or sister) had a distribution, and before you knew it you had a list like this! Go ahead and click on that list, but please come back…

This made Linux amazingly popular but also amazingly fractured; the barrier to entry was still high, and no one was making any money. From here we moved on to the third phase of the Linux lifecycle: commercialization (the dreaded word to all the open source folks). This is where all the evil companies stepped in and (gasp) started to make products and charge for the product, consulting expertise, or support. In the end this is where most things end up, because we all have bills to pay and need to put food on the table. Commercialization has a vested interest in removing as many barriers to entry as possible to improve the profitability of the project.

Where are the projects different?

Where OpenStack and Linux diverge is in the acceleration of this model and how long it took to evolve through all three phases. Linux didn't happen overnight; it was a natural progression, and I would argue Linux forged the path for other projects to follow. Because it had no precedent, it evolved in an elegant progression. By doing so, it ruined any chance of OpenStack following the same path.

Wait? What?  What do you mean by that statement?

Now that Linux has created a blueprint for moving from a community project to a commercialized ecosystem, everyone in the OpenStack community has compressed this lifecycle (even if they didn't know it). OpenStack as a mature product is somewhere in the second phase (making distributions), but because it is so popular, everyone is rushing to make money before all of the commercial barriers to entry have been removed. That is why we are seeing so many OpenStack-based products and companies popping up that ask the user to consume OpenStack PLUS "something" to meet customer demands.

Is that a bad thing?  No, it isn’t.  It is different.  Nothing more, nothing less.


July 31, 2013  6:19 PM

Comparing Open Source Projects to Products



Posted by: Aaron Delp
Cloud Applications, Cloud Operations, DevOps, Open Clouds

There has been a lot of "noise" in the cloud computing community recently that I wanted to write about a bit today. In order to get to my point I'll have to provide a little background.

How do most Open Source projects work today?

The main workflow today is built around the idea of an "upstream" or "core" code base that is open to all. Depending on the project, anyone can download the code and submit changes (think read-only access to the repository), and those changes will then be reviewed and committed in some fashion back to the code branch by the project's committers (think read/write access). In this model, ANYBODY can pull the code and run it in their environment using the "core" or "master" version. For most projects (and users) this is considered great for trying and evaluating the project, but it requires a good bit of in-house expertise, because the only support you will get is from the project and the community around it. In this example, the ownership for support falls to YOU, Mr./Mrs. Customer.

To increase adoption (and for companies to make a profit), a few alternative profit models have developed to support this. Here are a few of them:

  • Consulting Services & Support – In this model a company takes the core code from the project, "packages" it for the customer, installs it, and possibly supports it. They become the "throat to choke" on one side but not the other. This shifts the responsibility and ownership from the customer to the consulting organization, BUT if there is a problem with the code, it is up to the project and community to fix it. There is less product to maintain in this scenario, and the "secret sauce" of the company is its ability to remove operational headaches.
  • Create a Product – If the project license supports it, a company has the option to take the code and turn it into a product it can then sell, possibly taking on all the services as well. In this model the company is completely the "one throat to choke": it has taken on responsibility for the product as well as all of the support and configuration. In order to differentiate itself, the company will often "fork," or create a divergent version of, the core project to add value to its product. The company may or may not give this code back to the community. The general consensus is that if the code makes the foundation (core) project better it is often donated back, but sometimes this is not the case.

What does all this mean?

It means there are at least three different models out there today, and comparisons between the categories are basically useless. As you evaluate new products and projects based on Open Source, take a moment to step back and think about which category you fall into and whether it would be a better fit to pursue the same project/product in another category.

Maybe you need some help getting going, or you want a lifeline to call for support when the going gets really bad (or maybe your boss insists on it), or maybe you just have a staff that is very active in the community. There is no universal right or wrong answer here. The only answer is what is right for your organization and your unique project.


July 31, 2013  5:34 PM

OSCON Wrap Up



Posted by: Aaron Delp
Cloud Applications, cloud computing, Cloud Management, Cloud Operations, IaaS, Open Clouds, PaaS

This is a few days late, but I wanted to provide everyone with an OSCON wrap-up. OSCON was a great show, and Portland is always a fun city to see and explore.


I’m going to break my impressions up into a few different categories.

Community: What can I say? This was my first "real" Open Source convention and it did not disappoint. I met some great folks and everyone was amazing. I was part of the Open Source projects area in the Citrix booth, and it was really nice to talk XenServer and Apache CloudStack with so many different folks and better understand what they want to know and where the products are headed.

Keynote Impressions: The keynotes were great and insightful. As I learn more and more, I further appreciate where the thought leaders in this space are headed. Here are some links to the keynote slides and videos:

Link to all keynote videos

Link to all presentation slides

Session Impressions: For me the highlight was the sessions. I learned about a lot of new projects. Here are a few highlights:

  • Docker.io – Solomon Hykes over at dotCloud had a fantastic session on Docker.io. If you aren't familiar with it, it is an "application container" product utilizing a combination of LXC containers and aufs (another union file system) to abstract an application away from the operating system and make it portable and stackable (a toy sketch of the layering idea follows this list). I also sat down with Solomon and Ben (the new CEO) for an episode of the Cloudcast that will be published in the next few days.
  • Ansible – I believe Michael DeHaan (CTO at AnsibleWorks) has something really cool going on. He started a project (Ansible) with the goal of an agentless tool for configuration management, application deployment, and continuous delivery. The project has taken off, and I believe it is worth a look if you are interested. I also recorded an episode of the Cloudcast with them that was released this morning.
  • Simon Wardley – As always, Simon gave a very fast (over 200 slides in 45 minutes) but very educational look at the cloud industry, and at the race toward commoditization of the components of any industry over time. He also went over the mapping of features and efficiency in organizations. Very fascinating. One key takeaway was the idea that all products must continue to innovate just to "stay still" in the market. By this he meant all of your competition will be doing the same thing, so if everyone moves at the same speed, you never get ahead; if you go slower, you fall behind.
  • Apache CloudStack Introduction – Kevin Kluge (VP of development at Elasticsearch) gave a great introduction to Apache CloudStack. Kevin was one of the original cloud.com folks, so he knows the entire history and went through it at great length. I learned many new things from him and it was great to catch up.
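
The aufs layering idea from the Docker session is easier to see in code than in prose, so here is a toy Python model of union-mount semantics: a stack of read-only layers with a single writable layer on top, and lookups resolved top-down. This is a conceptual sketch only, not how aufs or Docker is actually implemented.

```python
# Toy union filesystem: base image layers are read-only, the container gets
# one writable layer, and reads check the writable layer first before walking
# the read-only layers from top to bottom (copy-on-write in spirit).
class UnionFS:
    def __init__(self, *readonly_layers):
        self.layers = list(readonly_layers)  # ordered bottom ... top
        self.writable = {}                   # the container's own layer

    def read(self, path):
        for layer in [self.writable] + self.layers[::-1]:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        # Writes never modify the shared image layers.
        self.writable[path] = data


base_os = {"/etc/hosts": "127.0.0.1 localhost"}
app_layer = {"/app/run.sh": "echo hi"}
fs = UnionFS(base_os, app_layer)
fs.write("/app/config", "debug=true")
print(fs.read("/etc/hosts"))    # served from the read-only base image
print(fs.read("/app/config"))   # served from the writable container layer
```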

If you are interested in further details on any of the sessions, check out the slides!

Buzz in the Halls: As always, the talks in the halls are just as important as the sessions. I noticed a lot of talk around Docker, Ansible, Salt, and Loggly. It was also cool to see companies like Riot Games and Twitter there openly recruiting people in the industry.

Overall, a great show and I can’t wait to go back!


July 15, 2013  12:26 PM

What I’m Looking Forward To At OSCON



Posted by: Aaron Delp
cloud computing, Cloud Management, Cloud Operations, CloudStack, DevOps, IaaS, Open Clouds, OpenStack, SDN


Next week is OSCON! I can't wait. This will be the biggest Open Source conference I have attended to date, and I'm really looking forward to it for a bunch of different reasons. I'll list them out here, and hopefully I'll be able to post a recap after the show is complete. Here they are in no particular order:

The Exhibit Hall Floor – I love show floors. I'm strange that way. Lots of people to talk to and a bunch of projects and topics in one place. I'll be in the Open@Citrix booth talking about all things open source at Citrix. We will have the Xen Project, XenServer, Apache CloudStack, and OpenDaylight projects featured and folks ready to talk to everyone. Stop by and say hey! Here are the show floor hours:

  1. Tuesday, July 23rd – 5:00pm – 6:00pm
  2. Wednesday, July 24th – 10:00am – 4:30pm & Booth Crawl Reception – 5:45pm – 7:00pm
  3. Thursday, July 25th – 10:00am – 5:00pm

What is going on with Mobility and Open Source – As noted by our spinoff of the MobileCast inside our feed over at the Cloudcast, mobile is seeing amazing growth. Having grown up in the "Apple world" and having recently dipped my toes into the open source side when I installed CyanogenMod (Android) on my HP TouchPad, I'm soaking up this world as fast as I can. It's very interesting how powerful all of this COULD be (mine is still a little rough around the edges) and whether an open model will even fit into the massive cell phone providers of the world when they control the transport layer for the data. Here are a few sessions that interested me:

  1. Ubuntu Phone & Tablet Development - This session will be an introduction to the design of Ubuntu, a guide to designing your own applications for Ubuntu on phone or tablet form factors, and an overview of the processes that are important for you to understand if you want to contribute to the core OS.
  2. Synchronization is the Future of Mobile Data - Mobile devices are the preferred means of data access today, but databases are stuck in the mainframe era. Learn how the NoSQL document model can be leveraged for off-line synchronization.
  3. Using Android Outside of the Mobile Device Space - The increasing demand for and usage of smartphones globally has not just changed the definition of user experience for embedded equipment but has made emerging technologies like touch and display panels, connectivity solutions, and infrastructure affordable to non-phone product segments.

How has the Open Cloud space moved along and how has it matured – At this point I feel like I know the CloudStack world relatively well, but I still want to learn about what is going on with other products in this space. OpenStack will be celebrating its 3rd birthday with a big party at OSCON. I never stop learning, and this space is no different. Some sessions of note:

  1. OpenStack Tour de Force - Call us crazy, but here is where you stand up an OpenStack cloud, from scratch, in three and a half hours. Running full throttle through the basics of OpenStack, this fast-paced tutorial will whirl through authentication, image storage, networking, and compute at breakneck speed. Not for the faint at heart.
  2. Connecting the Client to the Cloud - Project Sputnik was born of the idea to create an Ubuntu based laptop targeted specifically at developers. The project, which was made possible by an internal incubation fund, was announced last OSCON as an upcoming offering and now is a reality.
  3. The Hitchhiker’s Guide to the Cloud - And while the Hitchhiker’s Guide to the Galaxy (HHGTTG) is a wholly remarkable book, it doesn’t cover the nuances of cloud computing. Whether you want to build a public, private, or hybrid cloud, there are free and open source tools that can help provide you a complete solution or help augment your existing Amazon or other hosted cloud solution. That’s why you need the Hitchhiker’s Guide to (Open Source) Cloud Computing (HHGTCC), or at least to attend this talk to understand the current state of open source cloud computing.
  4. Introducing Apache CloudStack - Apache CloudStack helps an administrator or devops engineer build an Infrastructure as a Service cloud. This talk provides a technical introduction to Apache CloudStack.

What is the latest with automation tools to help with cloud operations – I'm always interested in how to make things go faster and more consistently. Here are a few sessions that looked interesting. Also, look for Ansible on an upcoming podcast.

  1. Orchestration and Configuration Management with Ansible – Ansible is a newcomer in the open source automation space. Its major features are a minimal learning curve, no server-side software, and no software to run on remote machines. Machines can be managed simply over SSH, using the Python stacks that are already included in every major Linux distribution by default. (A minimal sketch of this agentless model follows the list.)
  2. Systems Management with Chef – This is a hands-on tutorial that will cover the basics everyone needs to know about how to use Chef for system and infrastructure management. We will discuss the server API, the code primitives, and the tools required to successfully use Chef.
  3. How to Use Puppet Like an Adult – This session will explain the guiding principles of responsible Puppet design and architecture, walking you through real-world examples to illustrate solid methodological approaches and illuminate Puppet administrators of all skill levels. As an added bonus, it will also show how Puppet can be integrated into automated deployment and continuous integration platforms – an increasingly important component of today's development and operational landscape.
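
To illustrate the agentless model from the Ansible session, here is a minimal Python sketch using the paramiko SSH library (`pip install paramiko`): no daemon on the managed machine, just an SSH session per host. The hostnames and user are hypothetical, key-based authentication is assumed, and this shows only the general idea, not Ansible's implementation.

```python
# Minimal "agentless" management sketch: SSH to each host in a tiny
# inventory and run a command, collecting the output. Nothing has to be
# installed on the remote machines beyond an SSH server.
import paramiko

def run_remote(host, user, command):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=user)   # key-based auth assumed
    _, stdout, _ = client.exec_command(command)
    output = stdout.read().decode().strip()
    client.close()
    return output

# Hypothetical inventory: just a list of machines to touch.
for host in ["web1.example.com", "web2.example.com"]:
    print(host, "->", run_remote(host, "deploy", "uptime"))
```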

Well, that's about it. It's a pretty long list, but the same trends keep emerging time and time again: Mobility, Open Source Clouds, and DevOps. Are you headed to OSCON? What are you looking forward to? Leave a comment, I'd love to hear from you!


June 24, 2013  10:19 PM

CCC13 Live Blog: Putting the PaaS in CloudStack



Posted by: Aaron Delp
Cloud Applications, cloud computing, Cloud Management, Cloud Operations, CloudStack, DevOps, IaaS, Open Clouds, OpenShift, PaaS

Disclaimer – This is a live blog from the CloudStack Collab Conference. It might have a bunch of errors in formatting, etc.; I'm just typing as fast as I can. Also, I work for Citrix and I focus on CloudPlatform, the commercial version of CloudStack. Just want to be up front with everyone.

Title: Putting the PaaS in CloudStack by Steven Citron-Pousty (@TheSteve0) from Red Hat

  • talks.thesteve0.com (hosted from here)
  • This is an OpenShift focus
  • OpenShift doesn’t really care about the underlying infrastructure (makes it CloudStack compatible)
  • Talking about different PaaS and vendors in the market
  • Predicts all development will be PaaS based in 2-3 years
  • OpenShift has three versions: Origin (the open source upstream repo), Online (the public offering hosted by Red Hat), and Enterprise (private)
  • Online is hosted on AWS, Origin and Enterprise can be on others
  • SELinux containers are used for partitioning and isolation
  • cartridges are pre-canned instances (or libraries) to add building block pieces and create environments quickly
  • This allows one click products (i.e. WordPress) to be rolled out and everything will be consistent and then development can start
  • version 2 of the cartridges format was just released
  • The goal – create a "peaceful" environment for Devs and Admins (Ops want stability and performance; devs want the new shiny environment quickly)
  • Neither one really wants to talk to the other more than they have to :)
  • Online over-provisions resources by orders of magnitude because this way reclamation isn't needed as much. How many developers give their environments back when done? Almost none!
  • Now at the command line – shows one command to spin up an entire environment
  • This is more than giving a vm to a dev, this is about splitting a vm into further slices using SELinux into partitions (reminds me of AS/400 LPARs back in the day)
  • Terminology – broker -> management host, orchestrates the nodes
  • node – compute host for gears
  • gear – allocation of resources (slice) on a host
  • cartridge - framework to build applications
  • Each OpenShift Origin server is either a broker host or a node host
  • A broker can host many nodes (e.g., the Online version, running thousands of hosts, uses four brokers)
  • Broker does state, DNS, and authentication over REST
  • Broker then passes an allocation request to a node in a district (a district is a grouping of nodes with like properties)
  • SELinux then securely subdivides the node into instances and creates a secure virtual container called a gear
  • If there is no resource contention, a gear can take the entire CPU; when there is contention, each gear gets 20%, and memory is allocated at 512MB. This prevents noisy neighbors: if you peg a CPU, you will only peg 20% of a CPU. Think of this as network QoS but for CPUs. The 20% value is configurable, as are the other variables. (A toy sketch of this behavior follows these notes.)
  • Gears spin up and spin down automatically; nodes need to be added/idled by the operator
  • (There were a BUNCH of questions here; I didn't capture them all, can't type that fast)
  • Hardware-pluggable load balancers are a constant request and are coming in a near-future version; software HA is built in today
  • Flow of a request -> REST API request to broker -> Message bus (ActiveMQ) -> Node spins up a cartridge and gear as requested
  • Once an application is allocated, the broker is out of the flow path and you talk directly to the application node/gear
  • To make a change, developers use git to manage all changes to the environment
  • Multi-tenant networking is built into the product using a reverse proxy server internal to the environment
  • Steve discussed the whole flow of data further as questions came in. Great information, but too much to type here
  • Want to play and learn more?
  • Installing OpenShift Origin using Puppet and Vagrant here
  • Instructions to run OpenShift on CloudStack here
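
As a footnote to the CPU-sharing point above, here is a toy Python illustration of the burst-when-idle, capped-under-contention behavior. The real mechanism is kernel-level resource control on the node, not this arithmetic; the 20% cap is simply the configurable value quoted in the talk.

```python
# Toy model of per-gear CPU allocation: a lone busy gear can take the whole
# CPU, but once gears contend, each is held to its configured cap.
def cpu_share(busy_gears, cap=0.20):
    if busy_gears <= 1:
        return 1.0          # no contention: burst to the full CPU
    return cap              # contention: every busy gear is capped

for n in [1, 2, 5]:
    print(f"{n} busy gear(s): {cpu_share(n):.0%} of a CPU each")
```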


June 24, 2013  9:09 PM

CCC13 Live Blog: OpenStack Swift with CloudStack



Posted by: Aaron Delp
Cloud Applications, cloud computing, Cloud Operations, CloudStack, IaaS, OpenStack

Disclaimer – This is a live blog from the CloudStack Collab Conference. It might have a bunch of errors in formatting, etc.; I'm just typing as fast as I can. Also, I work for Citrix and I focus on CloudPlatform, the commercial version of CloudStack. Just want to be up front with everyone.

Title: OpenStack Swift Introduction – Technical Overview & Use with CloudStack by John Dickinson from SwiftStack

  • What is Swift? – It is an object storage system – not block storage, not a filesystem – designed to be:
  • Highly concurrent, open source, running on cheap commodity hardware, and very developer friendly
  • Swift has a large production user base in both service providers as well as private clouds
  • Swift uses a hash ring for data placement, which allows you to add and remove capacity on the fly (no downtime)
  • John is now going into the hashing algorithm used for placement within the ring – this hash is used both to place data in and to recall data from the "ring"
  • The idea is to spread the data as widely as possible: when data comes in, it is placed throughout the cluster, with each replica on a different server, rack, etc. The goal is to handle any failure and still have a copy of the data (a toy placement sketch follows these notes)
  • The hierarchy of this is regions -> zones -> replicas
  • Swift Failure Handling -> How does this work?
  • What if a disk fails? – Swift will automatically replicate the data elsewhere without operator interaction; data is rebuilt (think RAID arrays) and the operator can remove/replace the drive later
  • What if a server fails? – This is more an availability problem (you can't get to the data) than a durability problem (the data is GONE). In this case Swift will route requests around the failure until the server is brought back online or taken out of the cluster manually by the operator
  • What about bit rot (file system corruption)? – Swift scrubs data in the background to make sure it is consistent and will rewrite data bits if needed
  • Swift’s Design – Proxy Server – Deals with client integration (interface to outside world)
  • Swift Account, Container, Object servers are in the background
  • Objects are stored in containers, a container has a list of objects and associated metadata
  • Accounts are how "who has access to what" is handled
  • Last piece – consistency processes – runs in the background and performs care and feeding of the environment
  • There is no shared state between the services, which allows for fast scaling by adding more nodes at any time
  • This shared nothing design allows it to support a massive amount of concurrency
  • This makes cloud-era applications a perfect use case
  • Recent Features: Global Clusters, Static Large Object manifests, Bulk Requests, Quotas, CORS, Multi-range requests
  • 120 total contributors, 32% growth so far this year
  • What does this mean for CloudStack?
  • Swift has been supported with CloudStack since 3.0
  • For Secondary storage (CloudStack VM snapshots and templates in Swift) – To steal a term from my SAN days, this is the equivalent of SATA “cheap and deep” storage
  • What about Application Storage? – What about Availability Zones? Affinity? etc…
  • John talked a bit about this but no big details at this time
  • Example: User (not disclosed) that uses Swift and CloudStack
  • Application storage for user data – a massive compute cluster supporting a large user base, with concurrent access across the data sets
  • Rapid provisioning of new VMs – spin up hundreds of VMs in minutes
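
To make the hash-ring discussion from John's talk concrete, here is a heavily simplified Python sketch: hash the object name to pick a starting point on a ring of devices, then walk the ring choosing replica locations in distinct zones so one failure domain can't hold every copy. The device/zone layout is invented, and Swift's real ring (partitions, weights, rebalancing) is far more sophisticated; this shows only the core idea.

```python
# Toy hash-ring placement: md5 of the object name picks a starting device,
# and replicas are spread across distinct zones (assumes at least as many
# zones as replicas, or the walk below would never finish).
import hashlib

DEVICES = [  # hypothetical cluster: (device, zone)
    ("d1", "z1"), ("d2", "z1"), ("d3", "z2"),
    ("d4", "z2"), ("d5", "z3"), ("d6", "z3"),
]

def place(obj_name, replicas=3):
    start = int(hashlib.md5(obj_name.encode()).hexdigest(), 16) % len(DEVICES)
    chosen, zones_used = [], set()
    i = start
    while len(chosen) < replicas:
        dev, zone = DEVICES[i % len(DEVICES)]
        if zone not in zones_used:   # each replica lands in a different zone
            chosen.append(dev)
            zones_used.add(zone)
        i += 1
    return chosen

print(place("photos/cat.jpg"))   # e.g. ['d3', 'd5', 'd1']
```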


