Disclaimer – This is a live blog from the CloudStack Collab Conference. It might have a bunch of errors in formatting, etc. I’m just typing as fast as I can. Also, I work for Citrix and I focus on CloudPlatform, the commercial version of CloudStack. Just want to be up front with everyone.
Title: What’s the Use Case? by Paul Angus @ ShapeBlue
- Use Cases: Test & Dev (of course), highly scalable public-facing applications, high-speed server resource deployment, and anywhere reduced reliance on corporate infrastructure teams is desired
- First One: create replicas of production environment, create replicas on the fly
- This environment is cloned, used for development, then QA, then it becomes production
- The code isn’t moved, the environment is moved
- This is a media site
- 10 million unique visitors per month, average of 833 million page views, CloudStack environment is in development
- This use case is very time sensitive: when are people looking at the site? Traffic peaks at predictable times of the day
- Scaling happens as the load goes up and down
- Two physical sites, here is the logical architecture:
- RightScale is also used to govern between an AWS environment and an internal environment; time based today, following predictable patterns
- Second One: overseas gambling site
- The key here is environment templates: roll out Tomcat, NoSQL, a new app to develop – creation and destruction are very easy
- They use Chaos Monkey & Simian Army to test the environment
- Very fast transitions from development to production
- Everything is automated to flip from one to the other
- Third Customer: Satellite Broadcaster in UK
- VERY LARGE environment
- They had a problem: they wanted multi-tier applications, but the VPC routers wouldn’t stand up to the traffic
- Solution: created tiers with policies and broke traffic down into a better flow, more than just in/out
- Another issue – they want everything to be as fast as possible, with no network hops if possible; looking into affinity and anti-affinity rules in an upcoming release to solve this problem
- Bursting to Amazon – Requires VPN/direct link to maintain database consistency, change rules in the load balancers
- Feature Requests – What are customers asking for? More storage options, Kerberos, RBAC, better affinity / anti-affinity / vApps concepts, public/private cloud integration, security groups in Advanced Zones, multiple VLANs with SG separation, post-deployment actions without virtual router DHCP
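The time-based scaling described in the media-site use case above (capacity following predictable daily peaks) can be sketched as a simple schedule lookup. This is an illustration only: the hours and VM counts below are invented for the example, not the customer’s actual figures, and a real deployment would drive this through RightScale or the cloud API rather than a standalone script.

```python
# Illustrative only: map predictable daily traffic peaks to a target VM count.
# The hours and instance counts are made up for this sketch.
SCHEDULE = [
    (0, 6, 4),    # overnight: minimal capacity
    (6, 9, 10),   # morning ramp
    (9, 18, 16),  # daytime peak
    (18, 24, 8),  # evening taper
]

def target_capacity(hour):
    """Return the desired number of web VMs for a given hour of day (0-23)."""
    for start, end, count in SCHEDULE:
        if start <= hour < end:
            return count
    raise ValueError("hour must be in 0-23")

print(target_capacity(12))  # 16 (daytime peak)
```

In practice this time-based baseline would be combined with reactive scaling on live metrics, but the schedule alone captures the “peaks at predictable times of the day” pattern.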
Chip Childers (Vice President, Apache CloudStack) is up talking about the State of CloudStack:
- CloudStack is an Apache Top Level Project! – This was achieved in less than 1 year from entering incubator, amazing for a project of this size
- Over 18,000 code commits by hundreds of developers, 2.5 million lines of code!
- 4.1.0 released – architecture improvements, 20 new features, 24 improvements, 155 bug fixes, helped 2.x users with upgrade path to 4.1
- Working on educating others: CloudStack University Initiative, Google Summer of Code, Apache CloudStack Training Courses (by ShapeBlue)
- Two CloudStack books have been published
- Users Groups have formed around the world (London, Bangalore, Japan, New York City, San Francisco)
- Chip is now talking Japan CloudStack User Group (JCSUG) – hundreds of members across multiple cities, divide and conquer strategy for releases, translation of documents, working on an OSS cloud certification
- In Summary – How is the community doing? We’re stronger than ever, and growing rapidly
- Over 270 CloudStack operators in production
- Both the users and developers mailing lists are growing in parallel, which means operations and development go hand in hand
- Where is CloudStack Focus – support for traditional and cloud-era workloads, flexible deployment options
- Infrastructure is a means to an end. That end is the applications! Our job is to make everything easy to consume!
- Gene Kim is up next with an intro to Dev and Ops and their histories in IT organizations
- What happened with the downward spiral of IT? How did we get here?
- Talking through the roles of Project Managers, Developers, and Operations
- We got here by over-promising, then promising bigger in the next release, generating technical debt
- This debt creates a war between Dev and Ops over time
- What about Security in all of this?? No one ever brings them in until the end. It’s a bit like showing up at the end of the buffet when there is nothing left. The rates of burnout in security are up there with first responders, medical professions, and other high-burnout professions where you are always “downstream” from the failures with no hand in the solutions
- 95% of all capital projects have an IT component, 50% of all capital spending is technology related! (sorry for the bad picture)
- Dev and Ops think like this – Classic slide!
- And this is what happens to the security folks!
- Take Amazon for example – deployment every 11 seconds on average (bunch more stats here, amazing!)
- What do high performing dev-ops teams do? 30x more frequent deployments, 8000x faster cycle time at 2x the success rate, 12x faster MTTR
- The Phoenix Project was inspired by The Goal. 80’s manufacturing is today’s technology flow
- The First Way: Flow (going from Dev to Ops)
- Ask yourself (or a friend) what is the lead time for changes? Is that measured in minutes, hours, days, weeks, months, quarters??
- We have development environments where deployments hurt. Because it is painful, no one looks forward to it.
- To solve this, make development environments available early in the process – the goal: at the end of each sprint, code must be working in the environment it runs in
- How to achieve high performance: 89% are using infrastructure version control & 82% are using automated code deployments
- Outcomes from the First Way:
- The Second Way: Amplify Feedback Loops
- Feedback loops are the key to improvement – the more efficient the loop, the more likely it will make a difference
- Fix problems right there to prevent the generation of technical debt
- Creates quality at the source
- The concept of “hygiene” in development and operations: how well it works with others. Puppies are cute but poop on the floor.
- Integration into Continuous Delivery – The days of change management meetings are gone!
- What does this lead to:
- The Third Way: Culture of Continual Experimentation and Learning
- Just because we have done something one way for 25 years doesn’t mean we have to keep doing it
- We need to fail fast and learn from our mistakes
- Athlete analogy: it is always better to practice 15 minutes a day than to practice once a week for 3 hours
- Adrian Cockcroft quote – Do painful things more frequently!
- Testing of code needs to happen in-line, it can’t wait until the end anymore!
- Need to focus on technical debt over time – if not, it will eventually be ALL you do!
- Intuit for example – They ran 165 experiments during the three months of tax season – why? It increased conversion on the website by 50%; they generated more business when they needed it the most
- Repetition matters: You have to practice all the time to get better, if not a downward spiral develops
In summary – Go get the book, Gene is always awesome and it is an exciting time in our industry!
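Gene’s question about the lead time for changes (minutes vs. months) is easy to start measuring. Here is a minimal sketch, assuming you can pull commit and production-deploy timestamps for each change from your own tooling; the sample data is made up for the illustration:

```python
from datetime import datetime, timedelta
from statistics import median

def lead_times(changes):
    """Lead time per change: time from code commit to production deploy."""
    return [deployed - committed for committed, deployed in changes]

# Made-up sample data: (commit timestamp, production deploy timestamp)
changes = [
    (datetime(2013, 6, 1, 9, 0), datetime(2013, 6, 1, 11, 30)),   # 2.5 hours
    (datetime(2013, 6, 2, 14, 0), datetime(2013, 6, 5, 14, 0)),   # 3 days
    (datetime(2013, 6, 3, 10, 0), datetime(2013, 6, 3, 10, 45)),  # 45 minutes
]

# The median is a more honest summary than the mean when a few
# changes take wildly longer than the rest.
print(median(lead_times(changes)))  # 2:30:00
```

Tracking this number per release makes the First Way’s flow visible: it should trend from weeks toward minutes.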
I had a very interesting conversation with a partner last week about the concept of solutions as surface area. Think about an organization as a flat surface, a table for instance. On the surface of this table are all the characteristics and potential issues that need a solution. This could be serving the users, providing chargeback/showback, security policy, you name it. The table is an all-inclusive representation of the IT services needed to support the business. The goal of your organization is to cover as much of the surface area of the table with as few products and solutions as possible. How do you build a “tablecloth” of solutions? As I’ve stated before, there probably isn’t one solution to rule them all, but what about a tablecloth made up of a few patches, representing solutions, interconnected to cover the table?
- Too many solutions (too many patches) and the tablecloth is complex, increasingly difficult to support and operate, and probably isn’t operating at peak efficiency. This is the classic organization that probably has more applications than employees and has trouble keeping the systems stable and running, much less having any time to add new solutions and services into the organization. If the patchwork tablecloth has too many small pieces (and maybe a few holes) and isn’t meeting the simple need of being a tablecloth, your organization is in trouble. Plus, it is butt ugly (can I say that here?) to look at. No one wants it.
- What if the tablecloth hasn’t covered the entire table and there is table sticking out? It means your IT organization is exposed and you aren’t meeting the needs of the business. There are gaps you need to cover. If you don’t cover them, somebody else will. This is “Shadow IT”: the business going around IT to “cover” a need that IT isn’t serving today. I’ve often been asked why anyone in a line of business would go out on their own to purchase outside services (more tablecloth coverage). The answer is simple: Operations isn’t meeting the day-to-day responsibilities of the business.
- Here is the hard part: this tablecloth and the table are always changing. Pieces are added in or replaced; maybe four or five pieces are replaced with one large piece to increase operational efficiency. Using this analogy, you can see if your organization is getting better by covering the maximum area with the minimum number of solutions, or starting to head the wrong way because the table isn’t covered – or is maybe even getting bigger. No one ever said the table wouldn’t grow, and it almost certainly never shrinks.
I noticed a pretty interesting trend while I was at Citrix Synergy last week meeting with customers, specifically Enterprise customers. This trend continues a pattern I have been seeing for some time in the Enterprise: Sometimes you can’t call Cloud a Cloud. Many customers want (I would even say NEED) all the operational advantages of cloud computing but they don’t want “Cloud Computing”.
Most Enterprise customers are looking for the advantages of cloud computing but don’t want a product labeled cloud computing. Think of it as “cloud washing backlash”. Most customers will adopt cloud computing over time based on what is most painful to the organization, addressing one or more cloud computing characteristics they would like to adopt.
Let me elaborate with a simplified theoretical customer conversation that followed the pattern of many I had last week.
Me: “Are you interested in cloud computing for your organization or your customer?”
Them: “No. I have a cloud project my CIO is forcing me to look at but I don’t have time for it.”
Me: “What are some of the challenges you are facing in your organization today?”
Them: “You name it, tight budgets, overworked staff, under staffed, too many projects, too much overhead on existing operations. I have to do more with the same or less. I don’t have time or resources to take on a cloud computing project.”
Me: “Do you have a challenge building/provisioning new resources today? Do you have challenges around your daily operations and support? Are your customers (internal or external) happy today?”
Them: “Yes. Yes. No.”
Me: “These are all basic characteristics of cloud computing. Let’s take the word cloud out of the conversation for a moment and just talk about your operational issues. Want to hear a little more about how we could help your existing operations?”
Them: “Absolutely. I thought cloud was this new “thing” that would require me to replace everything. Let’s talk…”
I personally turned more Enterprise customers around this week than I could count. I did it by not calling it cloud computing but by calling out the operational benefits of cloud computing. There is an episode of Engineers Unplugged that will see the light of day in the near future, which I recorded with Giles over at ShapeBlue, that expands on this concept further. To me, cloud computing isn’t about a rip and replace of infrastructure and products as much as it is a long-term operations goal that must be managed over time. It is about increasing IT operations efficiency by offering services instead of products to your users/customers.
There will be those clouderati-type folks that will pooh-pooh what I’m saying. They will say you need to rip and replace everything, change your entire infrastructure (commodity hardware, SDN, object storage) and your entire operations (fire all the operations folks, they stink), and replace everything with go-fast DevOps – and that every day you don’t do it is a day your competitors are gaining ground. That simply isn’t a realistic scenario for 99% of customers today. The legacy infrastructure has to continue to operate but should be replaced over time at a pace that fits the needs of the business. This isn’t a rip and replace (build big walls around the old) as much as it is a “starve the old gradually, build the new over time as needed” mentality.
Enterprises in most cases aren’t looking for a shocking, dramatic shift that happens overnight, even if most of the vendors and service providers out there want this to happen. Customers are looking to ease into cloud computing over time, and the best way to do this is to treat cloud computing as a long-term goal and address what is most important to the Enterprise instead of telling them they are doing it wrong.
It’s technical sales 101: find a problem, fix a problem, and don’t get in the customer’s face (they are the customer after all, and they are writing the check). Don’t call it a cloud.
A few weeks ago we had the privilege of talking with Gene Kim on The Cloudcast (.net) about his new DevOps book, The Phoenix Project. During the show I had a bit of a “lightbulb moment” and immediately went out and downloaded the book. I’m almost finished (page 206 as of today), but I wanted to share a few points from a Cloud Operations standpoint based on my past experience, the podcast, the content of the book so far, as well as a recent post by Bart Copeland over at ActiveState.
First, a little bit of background. My initial job out of college was outsourced IT Operations for IBM Global Services supporting one account, Kodak. This was the late 90’s and for those of you that remember, cameras were not always digital. In the pre-digital days cameras used 35mm film (look it up, kids) and each roll held 24 or 36 exposures. When you wanted to see your pictures, you dropped the roll of film off at your local drugstore or grocery store. Every day around 4:00 the film for the day was collected by trucks and sent to about 70 large processing factories around the United States, Mexico, Puerto Rico and Canada to be developed. All of the film would be transported, unpacked, developed, billed, packaged, and sent back out to the store by 8:00 the next morning. At the height of 35mm photo finishing, Kodak held about 85%-90% of the market, and overnight photofinishing processed somewhere in the neighborhood of 3 MILLION photos per night, all in a 14-hour window every night. My organization supported the IT (applications, servers, network, everything) for the factories.
While reading both Gene’s book and Bart’s blog I had a sense of deja vu, because while they compared IT Operations to factory work, I actually lived a combination of both: supporting IT Operations IN a factory. I didn’t realize it until recently, but this unique perspective has really shaped my approach to IT Operations over the years, and many DevOps concepts just “made sense” to me. A few examples from IT Operations in a factory:
- The process is just as important as the work - While there are many courses today (and certifications to go along with them) for technical products, there usually isn’t a large focus in Operations on the HOW and WHY to get work completed. You can take a Microsoft, VMware, etc. class that will teach you to perform administration, but what about the questions you should be asking before you start the work? In any factory environment the concepts of flow and efficiency are paramount and this naturally bled over into our technical operations.
- Unplanned Work and WIP (Work in Process) can be the death of Operations – Gene discusses the concepts of Unplanned Work (work that isn’t scheduled but ends up taking priority) and Work in Process (work that has been started but isn’t completed) as killers of an organization. We supported a system spread across 70 locations that processed over 100,000 orders per night. In our environment, things needed to run smoothly at all times because our time window was so short. Because of this, we mastered the concepts of flow very early on to make sure there were minimal constraints in our systems.
- Everything was Cookie Cutter – It took a massive team to support an infrastructure this large spread across 4 countries and 70 locations. Remember, this is pre-virtualization and pre-remote control/screen sharing days, all support was over the phone with a remote operator (often unskilled technically) typing in commands and reading back the screen output. When something broke, it was PAINFUL. Because of this, everything was tested and documented before it went out the door. We did this to remove the support system as a constraint to the Operations. Anybody in the factory could pick up the process and anybody on our end could remotely support the systems. We attempted to remove unplanned work (something is broken) as a constraint as much as possible.
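The Unplanned Work / WIP point above can be made concrete with a toy kanban board that refuses to start work beyond a WIP limit. This is my own illustration of the concept, not code from the book:

```python
from collections import deque

class WipBoard:
    """Toy kanban board: work beyond the WIP limit waits in a queue
    instead of becoming half-finished work in process."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit
        self.in_progress = []
        self.backlog = deque()

    def add(self, task):
        if len(self.in_progress) < self.wip_limit:
            self.in_progress.append(task)  # capacity available: start now
        else:
            self.backlog.append(task)      # at the limit: queue it, don't start it

    def finish(self, task):
        self.in_progress.remove(task)
        if self.backlog:                   # pull the next task only when capacity frees up
            self.in_progress.append(self.backlog.popleft())

board = WipBoard(wip_limit=2)
for t in ["patch firewall", "build server", "restore backup"]:
    board.add(t)
print(board.in_progress)    # ['patch firewall', 'build server']
print(list(board.backlog))  # ['restore backup']
```

The limit is what protects the time window: finishing something is the only way to start something new, so work flows instead of piling up half-done.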
As I read both Bart’s post and Gene’s book it dawned on me that most of the Operations world doesn’t see things this way. The concepts of flow, work in process, constraints, resource allocation and performance metrics were just how I was “raised” in Operations. I was taught the process is just as important as the support methods.
With Factories out of the way, let’s move on to Marathons. Both Gene Kim and Nick Weaver expressed on the podcast that DevOps as a model is often more of a journey than a destination for most organizations. The only way to get better is to practice over and over and to strive for constant improvement. This is very similar to training for a marathon. You don’t go from sitting on the couch to running a marathon overnight. It takes breaking the activity down into smaller steps and mastering smaller goals over time to accomplish your larger goal.
Lastly we have Snowmen. How do you build a snowman? You start with a small snowball and roll it in the snow until it gets bigger and you have the size you need. Building a snowman is all about momentum and building on past progress. Building a DevOps practice is also all about momentum. Success breeds success. As I trained for a half marathon (no full marathon for me yet) I found that the more I ran, the more I wanted to run. It is painful at first but once you get past the initial resistance the snowball starts to build.
In summary: read Gene’s book, learn from factory operations, and put a plan in place that starts small and builds over time. Change (especially in most IT Operations) doesn’t happen overnight, but it can happen. I was fortunate enough to work in a highly efficient Operations environment over fifteen years ago and it has stuck with me ever since.
Earlier this week I wrote an article about what I was looking forward to at the AWS Summit in San Francisco. Now that the conference is behind me I wanted to provide some feedback on the event and break down each of my expectations and how Amazon scored in my eyes.
General Impressions – (My Grade: A) Wow! A ton of people, and a great mix of customers and vendors. There were far more customers than vendors, and the emphasis was really on how to operate and maximize the AWS products. I learned a lot about the AWS ecosystem and architecture. I did a live blog of the Opening Keynote as well as the sessions I attended (except one). Here is a list of those blogs:
- AWS Summit Keynote Live Blog
- Introducing AWS OpsWorks
- AWS Cloud DR & Backup
- RightScale Hybrid IT Design
Andy Jassy’s Keynote – (My Grade: B) See my live blog link above for the full stream of impressions as they happened. AWS has improved their message since the AWS re:Invent conference last November. Two big takeaways I got from the keynote: AWS wants to be the Walmart of public cloud (the low-price leader), and they have embraced the fact that the Enterprise will continue to operate some critical workloads “on-premise”. Andy was fast and furious with the stats to impress the audience (almost too fast, it was hard to keep up at times), but it really is amazing to see how far they have come, the amount of momentum they have generated in the market, and how fast they are introducing new features. It is very obvious that the dirty words “public cloud” have been banned from all keynotes and sessions; they will only refer to everything as “legacy on-premise resources”. Hey, when you’re a hammer, everything is a nail…
Technical Boot Camp – (My Grade: A+) I attended the technical getting started boot camp and it was easily the highlight of the entire event for me. The boot camp was an all day session and was a great overview of the major services (EC2, S3, ELB, etc.) and everything was presented in a very straightforward and concise fashion. The labs were the best labs of any technical training I’ve ever attended (and that is A LOT of training over the years) and served to really drive home the lectures. The time was spent about 50/50 lab to lecture. Great stuff, I recommend it to anyone.
Tools and Ecosystem Partners – (My Grade: A) The solutions floor was very active, a ton of booths and sponsors and everyone was pretty excited about the products and combination of partners and AWS solutions.
No Mention of Netflix – (My Grade: A) AWS actually restrained themselves and only brought Netflix up once, as a destination and model for the Enterprise cloud journey, instead of the usual “you need to be like them or you suck” message. AWS is learning…
Operations Architecture and Best Practices – (My Grade: A) The afternoon sessions were exactly what I was looking for. I attended a Hybrid Cloud session sponsored by RightScale; it was probably the most interesting session of the day for me. In addition, I attended two DevOps sessions and a Cloud Backup and DR session. Everyone did a really good job of walking the fine line of delivering technical content while keeping the sessions open to newcomers.
And now the not so good…
Hands-on Labs – (My Grade: F) There were two hands-on labs that I wanted to do, but I never had a chance. AWS clearly underestimated the demand for the labs: by 10:30 there was a line down the hall (see pic) and the wait was expected to be 1-2 hours just for a lab seat. They did offer wristbands and time slots for later in the day, but due to the tight schedule of the Summit I really only had time mid-morning, during the long lunch break, or in the evening when the show floor was open – and the labs closed at the end of the day when the sessions were over.
Minor glitches aside it was a great event, I learned a ton, talked to some great customers, and I look forward to the next one!
As I’m writing this I’m on a plane bound for this week’s AWS Summit in San Francisco. Even though AWS is not what I would consider an “Open Cloud” by any stretch of the imagination, I do find the operations aspects of AWS very intriguing. Here’s a bit of what I’m looking forward to this week:
Andy Jassy’s Message: As I mentioned on my other blog, I attended the AWS re:Invent conference back in November. I found out quickly while trying to do podcasts from the event that the corporate message is VERY tightly controlled over at AWS. No one will say a word either on or off the record, and everyone just points you to Jeff Barr. Fair enough, it worked. The conference went very well and was a huge success. The only complaint I heard was criticism of Andy’s keynote, where he proclaimed that if it isn’t a public cloud, it isn’t a cloud. As we say here in the South, that went over like a fart in church. Large Enterprises don’t like to be told what to do, and it came across that AWS only has a hammer and you are a nail. I had conversations with more than one AWS customer that came away from that keynote unimpressed with Jassy’s attitude and concerned that AWS truly doesn’t understand the Enterprise. It will be interesting to see if AWS learned from this minor black eye and how they might have modified their message in the last five months.
Technical Boot Camps: I’m signed up for the Technical Boot Camp today and really looking forward to getting more hands on with AWS. I’ll report back on my progress and the topics covered in the near future. I do wonder how up to date the training material will be. AWS has been killing it in introducing new features lately so I have my doubts if the official material has kept up.
Tools and Ecosystem Partners – The Dark Side: My one big fear of Amazon from an operations standpoint is that because they are such a closed system, they have the ability to eat their ecosystem and partners at any time. If they want to close the API or develop a product to replace a small startup partner, they really could do it at any moment – and who is going to stop them? Lock-in to AWS is an increasing concern.
Tools and Ecosystem Partners – The Positive Side: I’m very interested in the Solutions Exchange and Partner Expo at the end of the day on Tuesday. While I have spoken to a few companies trying to make a go of it in the AWS ecosystem, it will be great to see exactly how they are innovating, staying ahead, and providing additional value on top of AWS.
No Mention of Netflix: Yes, we all love them as an example, but we are also sick of hearing about them. I want to hear about new customers and new use cases. Luckily the keynotes appear to have a different flavor and some different customers, and I don’t see Netflix mentioned in the title of a single breakout session.
Operations and Architecture Best Practices: As I listen to everything over the next two days I will try to take it all in and process what it means for Cloud Operations, and how a typical customer’s architecture might benefit from the message. It is all too easy to get caught up in the Cloud Hype, but most people aren’t there yet. I’ll be sure to compare what Amazon is selling vs. what most users today are buying.
I had some great conversations with the Red Hat folks at the OpenStack Summit last week. Even though the focus of the event was around IaaS (Infrastructure-as-a-Service) and OpenStack specifically, I sat down with the Red Hat OpenShift PaaS (Platform-as-a-Service) folks and recorded a great podcast.
This article isn’t a pitch for Red Hat, but my conversations with them got me thinking. For those that have been following along I’ve written a few articles recently on common workloads and cloud adoption (articles here and here).
I have seen Enterprise customers begin to poke around the edges of cloud computing, but the most common sticking point I see is that most workloads today don’t fit. The “cloudy” folks will jump up and down and tell the Enterprise they just need to rewrite everything and evolve or die. That is easier said than done, especially if those applications are the “crown jewels” considered critical to the business. There needs to be an easier way for the Enterprise to update their applications than starting from scratch. The only way this will happen is if the Enterprise is given the proper development tools and the barrier to entry is lowered to allow this evolution to happen more quickly.
This is where PaaS potentially comes into play. Even if an Enterprise company wants to be like the Netflixes of the world, many organizations simply don’t have the resources (budget, developers, time) to make it happen, and they don’t want to reinvent the wheel. Of course, I have to ask the question, “Why should they?” If there is a PaaS platform that allows them to develop their applications and grow over time, isn’t this a better solution? One of the main use cases for cloud workloads is an elastic application that grows/shrinks over time. For some cloud applications, IaaS may not be enough. The workload may require more than spinning up virtual machines as needed; the application (or the platform layer) would need awareness of the infrastructure layer for this to be meaningful. Many of the early successful clouds are custom-written applications built straight on an IaaS layer; they are in effect their own PaaS.
We don’t expect the Enterprise today to build and operate their own IaaS services, because products have come along (CloudStack, OpenStack, Eucalyptus, etc.) to ease this requirement and to serve a certain set of use cases (test/dev, scalability testing, etc.). As the PaaS market matures I believe we will continue to see increased adoption and new use cases develop. I really see moving up the stack to a Platform-as-a-Service model in the Enterprise as the next logical step, one that will open up a new set of use cases and allow the Enterprise to further embrace cloud computing in general. PaaS is often associated with developers, allowing them to serve themselves and become more agile. At a low level that is correct, but at a high level PaaS is about opening new doors for the Enterprise to react more quickly to an ever-evolving market. I see PaaS as the next tool in the Open Cloud tool box.
Yes, I know this is a little late, it’s been a long week…
Last week I was able to attend the OpenStack Summit in Portland. At first I raised a lot of eyebrows at the conference (disclosure: my day job is working for Citrix on the CloudPlatform product team, and today we both compete with OpenStack (via Citrix CloudPlatform) and embrace it (via the Xen hypervisor)), but I went into the week with a fresh pair of eyes. Here are some impressions from sessions and conversations in the halls and at the various events.
Overall Impression: I was impressed. OpenStack as a project appears to have reached critical mass, and congratulations to them on the accomplishment! The number of attendees was impressive, but the breakdown of actual attendees was a little disappointing (more on that later). The sessions were a mixed bag, some were incredible, some… not so much. The OpenStack Foundation has done a great job of moving the project forward and gathering momentum. As always, the best part of these types of events is the conversations in the hall, and this show was no different.
User Case Studies Were a Focus: All of the major vendors of OpenStack distributions/releases had customer case studies to bring forward. Go check out the videos from the keynotes as well as the press releases and you will see the conference had a larger-than-normal number of customer references to show off. My favorite customer keynote was the NSA session by Nathanael Burton. If you peel back the onion, the customer stories were a little telling as well: most were on older versions of OpenStack (upgrades are still an issue) and typical cloud-scale use cases (NBC Comcast being a cool exception with the set-top box demonstration). All in all, the project has made very good progress in the last year and it appears OpenStack is running in production.
- Vendors Still Dominate: I think they have taken it down now, but there was a roster on the OpenStack Summit site with the number of attendees and the companies they worked for. I looked, but I can’t find it anymore. Because of that, I have no numbers to offer as proof, but vendors dominated the show, and the sessions were often vendors presenting to vendors. I would rather have seen more vendor-neutral sessions based on the OpenStack Foundation core products.
The Design Summit Needs To Be Broken Off: It was very apparent that critical mass has been reached and the time has come to separate the users from the developers. The design folks wanted nothing to do with the user sessions and made that very clear, with signs sending the message that unless you were a developer, you needed to stay out. Looking at the schedule, there were a few sessions I would have loved to sit in on as a fly on the wall, but I didn't dare cross that line.
Upgrades Are Still an Issue: You can tell a project is making progress when people worry less about getting the product running and more about how to go from one version to another. From an operations standpoint, this was one of my main interests this week. With new versions coming every six months, most enterprises and providers will fall behind quickly if they can't easily upgrade from one version to the next, or, more likely, skip a few versions and then jump back to the newest one. This was a common topic of discussion in the halls.
Performance/Scalability/Security Testing Needs to Be a Focus: As any product matures, from an operations standpoint we move beyond the basic binary decision of whether it works or not. Think of this as a red light/green light moment: you are either working or you aren't. Grizzly is a huge improvement in this area, but the focus of most sessions was still "does it work or not". As the product matures, attention will move to the more common operations questions of how to extend it. How do you secure the product? How far does each component scale? Where are the bottlenecks in the architecture today? These are the questions I'm most interested in, and I would love to see some progress on them in the next 12 months.
Good Members / Bad Members (The community is watching): I won’t give out names to protect the innocent but the community is watching very closely which vendors are being good and bad citizens in the community. This was a common topic over beers in the evenings. Some vendors are not contributing code back to the project; others are not offering interoperability in their offerings, etc. It is to be expected as each company tries to maintain an upper hand but it also appears the OpenStack Foundation will be cracking down on this in the near future. If they don’t, the community and codebase will fracture and it will be game over.
I look forward to watching the community, foundation, and product continue to grow and evolve over time.
I’ve been having some great discussions with customers recently that led to this interesting title. Too many customers are looking for “THE Cloud” (as in the one and only technical solution) and want to know how to compare one cloud against another. Because they are looking for a technical solution and not a business solution, many are looking for something that is just out of reach.
Why is “THE Cloud” so elusive? One simple word: workload. Let’s take a few examples. The first two examples come straight from my previous post about DevOps and software generations. The third example comes from a discussion of public vs. private I had with a customer today.
The first example I’ll present is your traditional “enterprise legacy” workload. When you think virtualization, you are probably thinking of this workload architecture. Take some servers, networking, storage, and virtualization from your favorite vendors and run the “old stuff” on it. This could be Exchange, SAP, Oracle, etc. This production workload tends to be critical to your business, not public facing, and runs in a more or less steady state. The workload is pretty constant, and even when it isn’t, it tends to be predictable. Because of this, you may not need to “go faster”; you just need it to work. Reliability at the infrastructure layer is more important than agility & scalability. This workload may not need to be “moved to the cloud”, but it can benefit from management in a cloud-like way if architected properly. This is the old reliable. As Rodney put it once upon a time, it may not be sexy but it drives most business today.
The second example is Netflix. Just kidding. What I mean by this example is a workload that follows the Netflix model but can be either private or public. This is a production workload that tends to be very elastic, follows a DevOps process, and runs on a VERY different underlying architecture (object storage, software-defined networking, commodity servers, etc.). Because agility & scalability at the infrastructure layer are more important than reliability, the management style also tends to be very different from the previous example. This is a persistent application built for one purpose (typically to generate revenue in some way) that is always changing to go faster. This is the thoroughbred racehorse.
The third common example I see is a test & development environment in a public cloud. This is a non-persistent application that only needs to serve a purpose for a limited time and will be destroyed or recycled after use. This environment is non-production and benefits from fast provisioning and decommissioning over reliability. The application doesn’t need agility or scalability as much as it needs to be set up and torn down quickly. I mentioned a great use case on the Cloudcast (.net) podcast back in November after the Amazon AWS conference: I spoke to a user that would spin up hundreds to thousands of AWS instances for scale testing and then destroy them when complete. This type of environment was too large to purchase and host on premises and was only used a few times a year. By moving test and development to the cloud, the company was able to achieve something that was impossible before. This is The Flash.
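The provision/test/destroy lifecycle described above can be sketched in a few lines of Python. The `ScaleTestFleet` class below is a hypothetical stand-in for a real cloud API (such as EC2 via boto3); the class name, methods, and instance counts are invented for illustration of the pattern, not a real SDK:

```python
import uuid


class ScaleTestFleet:
    """Hypothetical stand-in for a cloud provider API.

    Models the ephemeral test environment described above:
    provision a large fleet, run the scale test, then
    destroy everything when the run is complete."""

    def __init__(self):
        self.instances = {}  # instance id -> state

    def provision(self, count):
        """Spin up `count` instances and return their ids."""
        ids = [str(uuid.uuid4()) for _ in range(count)]
        for instance_id in ids:
            self.instances[instance_id] = "running"
        return ids

    def terminate_all(self):
        """Tear the whole fleet down; return how many were destroyed."""
        terminated = len(self.instances)
        self.instances.clear()
        return terminated


# Usage: a scale test run a few times a year, far too large
# to justify buying and hosting the hardware on premises.
fleet = ScaleTestFleet()
ids = fleet.provision(1000)      # hundreds to thousands of instances
# ... run the scale test against the fleet here ...
destroyed = fleet.terminate_all()  # pay nothing once the test is done
```

The point of the sketch is the shape of the workload: capacity exists only for the duration of the test, which is exactly the economics that make this use case impossible on premises.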
As you can see, different workloads have different characteristics, different operations and management needs, and different infrastructures to support them. What if you combined some or all of the examples? This is how the Hybrid Cloud model became popular. A hybrid cloud is two or more unique clouds stood up under the same management point.
When evaluating cloud solutions, always start with your workload, determine the requirements to support your operations and business processes, THEN evaluate products against your unique needs. Your goal is to cover as much “solution area” as possible with the fewest products, to keep operations as simple as possible.