From Silos to Services: Cloud Computing for the Enterprise

January 24, 2016  4:34 PM

A Look at 2016 – “Hey You Kids, Get Off My Lawn!!” Edition

Brian Gracely
AWS, Cisco, Dell, DevOps, Docker, Hardware, HP, IaaS, PaaS, Public Cloud, VCE, VMware

2016 is going to be an interesting year in technology. I’ve predicted that it’s the year where the Public Cloud markets begin to make the rules of the IT industry and everyone will need to figure out how they survive or fail under those new rules.

  • There are Presidential Elections happening in the US, which causes leaders to make projections on how a new administration might impact the economy.
  • Interest rates have recently risen (albeit slightly) in the US, which impacts investments and overall risk-tolerance for companies.
  • It was about 8 years between the tech-bubble burst of 2000/2001 and the housing market crash of 2008, and it's now been 8 years since that event. VCs are already beginning to back off new funding rounds and people are calling for the end of the Unicorn Era.

So I thought that I'd put on my “Hey You Kids, Get Off My Lawn!!” hat and take a look at the technology landscape of 2016. NOTE: I'm not endorsing any/all of these perspectives, but it's a useful exercise to occasionally view the world from a contrarian perspective, 180 degrees from the usual one.

Hardware: Yes, it's a commodity. Yes, the leading companies that supply it are slowing their growth and beginning to pay dividends in a model that seems more like a public utility and less like a tech rocketship. But all that software needs to run on something, and the consolidation within this segment of the industry is already happening. And customers have tons of inertia (buying patterns, technical skills, existing data-center facilities, compliance models, etc.). All of the major companies now sell almost all the hardware elements, and many systems are consolidating around common x86 or ODM elements. We'll probably see a few more companies fall off the playing field, but 5-6 big ones should remain for quite a while.

Public Cloud: AWS might be bigger than the next 14 competitors combined (see: Gartner IaaS MQ), but it's still only expected to have done $7-8B in revenues in 2015, and it's an 8-year-old organization. It's trying to displace IT leaders such as Microsoft, Oracle, EMC, HP, Dell, Cisco and others, who have massive cash reserves to fight long battles. Competitors like Google still haven't gotten fully engaged, and large potential threats like Facebook and Apple haven't really entered the game. Then throw in the inertia of trillions of dollars of legacy applications, systems and people-skills, and Public Cloud becomes a long game that is nowhere near being decided. Has the industry ever seen a single company command such a dominant position from a sub-$10B revenue base? Which set of rules will everyone play from in 2016, or do we continue playing a game with multiple sets of rules?

Cloud Native Applications: While some experts are calling Pivotal (and Cloud Foundry) the early leader, we still don't see many revenue announcements from the leading PaaS players. Most announcements are still focused on vendor investments, community memberships, code contributions and early customer logos. And the market seems to have moved away from the polyglot, cool-new-languages focus of 2013/2014 and is now re-focused on Java and .NET for on-premises Enterprise applications. It's middleware replacement, and it needs operators who learn how to manage the underlying system. And the container management argument seems to dominate the discussion (structured vs. unstructured; DIY vs. pluggable vs. containers-as-a-service). Does this create too much distraction from the earlier goals of “software is eating the world,” digitizing the economy, and building applications faster? Does the Enterprise spend more on Public IaaS vs. Private PaaS, or does it follow the Public trend up the stack, or is Public too risky for large Enterprise spending?

Containers: The leading company, Docker, has a (reported) $2-3B VC valuation, but hasn't made any earnings announcements or given earnings guidance – and they just bought a company focused on “the next thing”: unikernels. The market is getting extremely crowded with companies that do somewhat similar things – various forms of infrastructure for containers or microservices-based applications – such as Cisco Mantl, CoreOS, Hashicorp, Rancher, Red Hat OpenShift, and many, many others. And some early data (here, here) suggests that adoption in production environments is still not at levels that will disrupt VM usage. Does it disrupt VMs, or Infrastructure, or Config-Management, or all of the above, or just the PaaS ecosystem… or none of the above?

DevOps: Does it come in a box, and what size do I need to order to get my technical organizations onto a single sheet of paper org-chart? If a SKU for DevOps doesn't exist, can I get a SKU for NoOps, or OpsDev, or SecOpsDev? Where is the macro on my spreadsheet to summarize the ROI for empathy, or the HR policy that's needed to remedy the need for counseling if burnout from pager-duty exceeds the unlimited vacation policy of my SREs?

What other areas need the “Old Man Shakes Fist at Cloud” treatment?

A Dose of Reality?

More than anything, I’d just like to see some revenue numbers out of companies chasing the buffet line of software that is eating the world. We got that in 2015 from AWS and a few others, which made many people rethink how they thought about the shifts in the marketplace. Will we see that in 2016 for other segments of the market?

January 24, 2016  2:52 PM

How Many Engineers Does it Take to Build a Cloud?

Brian Gracely
AWS, Azure, OpenStack, Operations, Private Cloud, Public Cloud, Red Hat

I came across this old picture the other day, which showed a group of people (circa 2010-2011) that were assembled in a room with the task of building a VCE Vblock. This was a team of EMC vSpecialists doing a training session on this “newer” technology. I did some sloppy editing to obscure their faces – but the names and faces aren’t important here.

Building a Vblock (from 2010-2011)

By my count, there are 12 people in the room, and that doesn’t include anyone that’s outside the frame of the picture. This was a collection of highly specialized engineers, with background in servers, networking, virtualization and storage. With this training structure, it typically took them a week to build a Vblock system. All of this coordination was needed just to get the system to the point where a company could begin to deploy applications on this rack of equipment. Essentially Day 0.

Fast-forward to 2016 and now most of that configuration and complexity gets done at the factory. In essence, 12 people replaced by a set of scripts. And there are now many other offerings in the marketplace today that will deliver a similar Day 0 experience, without all the people that need to come on-site to build it.

This isn’t a commentary on the technology or the company behind it.  It’s an evolution of our marketplace, and the evolution of business expectations. The days of business patience to wait for infrastructure to get built or prepared in order to run an application are gone. SaaS applications and public IaaS services (e.g. AWS, Azure, etc.) are defining the expectations for business users, not IT departments.

IT Inefficiencies?

Maybe you look at that example and think, “uh oh, that's going to mean a lot fewer jobs for IT in the future.” While that's possible, it's unlikely due to things like the Jevons Paradox. Instead, let's look at it through another lens: the inefficiency of costs. In the example above, there were inefficiencies of cost in building data center systems. The cost of having all 12 of those people in a room for a week would be $33,600 (@ $100k/person – calculator), and for more complex skills it could easily push past $50,000. That's before any applications were running.

But what about the costs of day-to-day operations? This past week, Red Hat released an interesting study of the operational costs of running a Private Cloud (in this case, based on Red Hat technology). The core of the study is a set of metrics showing the operational cost per VM: $13,609 in Year 1, $8,043 in Year 2, $6,264 in Year 3, and $5,200 by Year 6.

Three years to gain the operational expertise needed to cut the cost by more than half. Another three years to trim roughly 17% more from those operational costs.

To put that in perspective, the Year 1 cost is equivalent to $1.55/hour. For $0.662/hour, someone could get an m3.2xlarge AWS EC2 RHEL instance in US-East using On-Demand pricing. Reserved Instance pricing for that instance would be $3,979 per year. And that pricing is for a large virtual server that could then be subdivided into many VMs.
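A quick back-of-the-envelope sketch shows how those figures line up (the per-VM and EC2 prices are taken straight from the study and the AWS pricing cited above, not computed independently):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

# Red Hat study: Year 1 private-cloud operational cost per VM
private_year1 = 13609
private_hourly = private_year1 / HOURS_PER_YEAR

# AWS m3.2xlarge RHEL On-Demand in US-East, per the pricing cited above
ec2_hourly = 0.662
ec2_yearly = ec2_hourly * HOURS_PER_YEAR

print(f"Private cloud (Year 1): ${private_hourly:.2f}/hour per VM")
print(f"EC2 On-Demand: ${ec2_hourly}/hour, or about ${ec2_yearly:,.0f}/year")
```

Even before subdividing that large instance into multiple VMs, the On-Demand rate works out to less than half of the Year 1 private-cloud figure.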

Will businesses put up with those cost levels, when external options to manage VMs are readily available in the marketplace today?

It’s been happening for a while, but expect to see a much greater push by the marketplace to attack those levels of operational costs, and the learning curves of so many individual companies trying to gain those capabilities themselves.

Do these cost levels represent an inefficiency that we'll talk about in five years the way we now talk about the room full of engineers it took to build a converged Vblock system in 2010-2011? Curious to hear your feedback…

January 9, 2016  3:14 PM

The 5 Cs of Solutions

Brian Gracely

Looking back at my career, I've had the opportunity to work for a number of groups (within companies) that have decided to expand their focus from product-centric to include more solutions-based offerings. In some cases, this combined more of their own technologies together. In other cases, it combined their technology with industry leaders.

We continue to see the IT industry attempt to bring more solution-centric offerings to market, both for existing application environments, as well as more modern Cloud-Native environments.

A move to solutions is usually done for a couple of reasons:

  • Competitive pressures within the market, where customers are looking to reduce the complexity of what they buy and implement.
  • Sales/Revenue pressure to sell more products, or get better leverage from a broad portfolio.
  • Aligning your products with a large industry trend (e.g. virtualization, cloud computing, containers, etc.).

While it may seem logical to try and integrate (or just bundle together) more products within a portfolio, a move to solutions can often be difficult for companies. Here are my 5 C’s to getting solutions-based selling right at any technology company.

Coverage – By their nature, solutions are often more complex to sell than individual products. They often cross over between buying centers and budgets. Some companies ask their core sales team to also sell solutions, while others will create a specialized “overlay” team to focus on solutions-selling. It's important to note that most existing sales teams won't have the depth of knowledge to immediately sell a solution, so having an overlay or specialized team to augment their field coverage is essential. But it's equally important to plan for that overlay/specialized team to eventually either be disbanded, or folded into the core sales team over time. Their role is as much about training the core team as it is about selling the solution.

Compensation – As solutions grow larger, more complex or more costly, they often take longer to sell. When solutions extend the traditional sales cycle, sales reps are put in the complicated situation of putting their quotas at risk by trading off short-term deals vs. longer-term strategic deals. If companies want solutions-based selling to be successful, they need to adjust their compensation structure to encourage the sales teams to find a better balance of short- and long-term goals.

C-Level – This area is both an internal and external focus. Internally, companies need buy-in from their C-Suite that solution-focused development and selling is a priority, because it will disrupt the current way of building and selling. Externally, one common mistake for companies that attempt to sell solutions vs. products is to continue to target the same buying-level within an organization. For example, hoping that the storage or networking team (alone) will own the budget for a converged stack is a mistake. Solutions-selling is done at a higher level in the organization, to groups that own large chunks of budget and a broader architectural responsibility. If that model doesn’t exist at customers, then selling solutions will be very difficult or impossible. Solution offerings not only need to align to customer problems, but the ability for customers to buy solutions to those problems. Don’t underestimate the complexities of engaging new purchasing departments at customers.

Cracking the (internal) Culture – This sounds terrible, but most technology companies are organized according to a Business Unit or Technology Unit structure, and those groups are rarely incentivized to work closely together. And solutions typically try to pick “the best” from internal technologies, which can often create animosity amongst the groups that are not included in a given solution. This can lead to all sorts of unexpected internal politics and competition. This is an area that is often overlooked or ignored (not on purpose) until the problem becomes a big one. Formal models should be put in place to ensure that the solutions teams and product teams have some sort of common goals that are measurable, incentivized and tracked, to drive collaboration between those teams.

(Ac)Counting – If products A, B, C and D are all pulled into a single SKU and sold as a solution, who should get to account for that revenue within a company? Not sure? Welcome to the challenge of solutions accounting. Since revenues drive many upstream decisions (Sales coverage, Marketing budgets, R&D budgets, etc.), it’s important to consider which groups will take credit for sales. Trying to assign a metric to “influenced a deal” is very difficult and typically leads to more disputes than collaboration.

As the Public Cloud continues to grow, expect to see more (non-Public-Cloud) companies expand their solutions offerings. Getting solutions right is a complicated model, but the good news is there are plenty of success and failure stories to learn from.

December 23, 2015  9:32 PM

2016 Guarantees for the Tech Industry

Brian Gracely


  1. Every vendor will have a “Digital Transformation” story, regardless of whether they help customers build applications or not.
  2. Every vendor will have an IoT story, regardless of whether they sell anything IoT-related.
  3. Every vendor will package their software in a Docker container for easier distribution and early demos.
  4. Every (other) vendor will attempt to downplay the growth or profitability of AWS.
  5. Lots of discussions about Cloud-Native Apps that integrate with legacy systems instead of just being greenfield.
  6. While new examples will (hopefully) emerge that aren't greenfield, expect to see Uber, Airbnb and Tesla mentioned in the majority of presentations as examples of how to run your IT organization or avoid being disrupted.
  7. While interest rates are expected to rise, we’ll see just as much or more M&A activity in 2016 as we did in 2015.
  8. Some vendors will incorporate US political campaign issues or ISIS fears into their marketing campaigns, most likely around security or disaster-recovery.
  9. Microsoft will announce something that makes you say, “That’s not the Microsoft we’re used to under the old CEOs.”

So what’s #10? Let me know your guarantees in the comment section.

EDIT: I thought about #10 for a while last night. Let me go ahead and add: “Company executives will publish blogs at this time next year about how all of their predictions are exactly aligned to their company's portfolios, and how they are 75-95% right, even though VCs only get technology transitions right 10% of the time.”

December 23, 2015  9:17 PM

Applying 2015 Life Lessons to IT Planning

Brian Gracely
Compliance, DevOps

For me personally, 2015 was a VERY INTERESTING year in lots of ways. As I take a little time off, I started wondering if there was anything I learned from my personal life that could be applied to the technology world.


We lost some family members in late 2014, and as a result I was asked to take over some legal and financial responsibility for the remaining family members. I use the word “family” loosely, because this was family that was several levels removed and we had different last names. When I agreed to this responsibility, I thought I had reviewed the associated paperwork (wills, trusts, policies) properly. And then I got the opportunity to put my thoroughness into action by having to go to battle with multiple law firms, insurance companies and government agencies. While I was fully capable of executing the necessary actions, we quickly learned that the execution path was much more complicated than the original paper trail would have suggested. LESSON LEARNED – Lots of people will tell you that the “Recovery” part of Backup and Recovery is the most important aspect. I'd also throw in that simply testing recovery might not be enough. Test things like “the person in charge of encryption keys has left the company” or “our company went through a merger recently and wants to change the naming scheme for internal systems”. Did your recovery model hold up to those semi-technical issues as well as the heavily-technical aspects?


We decided to renovate portions of our house. In today's world of Houzz, 24×7 DIY shows on TV and Angie's List, it could be very easy to believe that anyone with basic organizational skills or the ability to click on pictures could produce a beautiful new kitchen or sunroom. Just use the on-demand services that are available on the Internet. What could possibly go wrong? LESSON LEARNED – The world is becoming enamoured with DevOps and Agile development and things like “infrastructure is a commodity”, but many DevOps teams are still small (see Slide: 37) and co-located in one main location. Once many groups get involved in building new technologies and services, communication and documentation of standards become even more critical. CI/CD and Cloud-Native App platforms help with this, but don't underestimate the need for great people both in planning and in building the foundational infrastructure.


Throughout 2013 and 2014, I found myself frequently needing to borrow large vehicles to be able to move or haul “stuff” (firewood, furniture, building materials, etc.). While I had friends that would allow me to borrow their vehicles, it became a time-consuming and often complicated engagement. So in 2015 I bought myself a pickup truck. It's an older model (a 1994 F-150), but it runs consistently and does the “heavy lifting” and “dirty jobs”. No bells or whistles needed. LESSON LEARNED – While the industry is often caught up in the latest trends and fads, there will always be a need for equipment that does the ugly work. Maybe it's batch processing or security monitoring or just structured cabling. Whatever it is, it's OK to make basic investments that are specific to those types of needs. They don't get any headlines, but they cover the necessities of the business.

December 12, 2015  12:51 PM

10 Important Container Areas to Watch

Brian Gracely
AppDynamics, AWS, Azure, CoreOS, EMC, Google, IBM, Kubernetes


[1] Docker's “Batteries Included But Removable” Strategy – Before there was Docker, there was dotCloud. dotCloud was a PaaS company. Eventually they decided to separate out the technology that made setting up containers easy, and Docker was born. But that team knows how to integrate all the other elements needed to build a platform (networking, storage, scheduling, security, etc.) and has been adding those elements piece-by-piece into the Docker, Inc. portfolio. But instead of making them into a monolithic piece of software, they are making them modular and removable (or interchangeable) with 3rd-party extensions that integrate with Docker's APIs. This is a similar approach to what VMware did in the past with vCenter plugins and APIs like VAAI. It will be interesting to watch how the market adopts the native Docker elements (Docker Networking, Swarm, etc.) vs. 3rd-party extensions.

[2] VMware’s Container Strategy – As Docker grew in popularity, many “Docker is a VMware killer” headlines were written. While VMs and Containers serve different functions and are mostly used by different groups (Ops vs. Devs), the narrative was out there. But VMware came back strong in 2015 with their VMware Integrated Containers strategy and products (some commercial, some open-source). VMware is quickly evolving to understand containers, open-source and the needs of developers.

[3] Microsoft’s Container Strategy – Microsoft and Docker have had an evolving relationship throughout 2015, and Microsoft has continued to add container-centric functionality to both Windows Server and Azure throughout the year. As they become more OS agnostic, Microsoft has the ability to rekindle their relationships with new and previous developer and ISV groups.

[4] Container Networking – While Docker solidified their networking stack in 2015 with the acquisition of Socketplane, 3rd-party companies such as WeaveWorks have built excellent native-container networking stacks that are being used by many Enterprises and Service Providers. And with the libnetwork functionality, Project Calico and Docker Networking APIs, additional 3rd-party companies can integrate networking.

[5] Container Storage – Initially, the thinking around container storage was that either a file-system was sufficient (e.g. NFS, BTRFS, etc.) or it would be stateless and the data would be kept in non-container locations (e.g. bare-metal or a VM). But as 2015 evolved, companies and projects like ClusterHQ (Flocker), Portworx, Rancher Persistent Storage Services and EMC REX-Ray emerged to offer persistent storage that is deeply integrated with container environments. Docker also extended their Storage API. Continued »

December 10, 2015  10:36 PM

10 Biggest Cloud Stories of 2015

Brian Gracely
AWS, Azure, Cisco, Cloud Computing, Dell, Docker, EMC, GE, HP, HPE, IBM, iot, Microsoft, Open source, Oracle, Pivotal, Public Cloud, VMware

It was a busy year in Cloud Computing. It was the year that Public Cloud really showed the market how fast it could grow, and how badly it could fail. It was also the year that existing Enterprise companies had to make big bets to determine if or how they will compete moving forward.

So in no particular order, here are my Top 10 Cloud Computing stories of 2015:

  1. Amazon announces Earnings and Growth Rates – For the first time, Amazon broke out their AWS earnings: nearly $8B in revenues, 80% YoY growth and 20-25% operating margins. This sent shockwaves through the industry, especially for competitors that called AWS a “commodity cloud” and expected it to have margins similar to Amazon's retail business.
  2. Dell acquires EMC/VMware – The deal isn't officially finalized yet (awaiting the 60-day go-shop clause to expire), but the $65B price-tag and taking both companies private had many people scratching their heads. There are still financial questions to answer for investors, but the $80B combined company will be a powerhouse in Enterprise IT, unless this deal is a sign that the public cloud really is destructive to legacy IT.
  3. Microsoft supports Docker, Mesosphere – Microsoft made a ton of announcements in 2015 that reflected the new thinking in Redmond, but supporting Docker and Mesosphere (both in Windows and Azure) showed the industry that they were serious about attracting developers and new applications again.
  4. VMware announces Container strategy – For many people, Docker was supposed to be a VMware killer. Instead, VMware embraced the container technology and laid out a strategy (VMware Integrated Containers; Project Photon) to show Enterprise customers a path for using both VMs and Containers.
  5. Rackspace supports other Public Clouds – When OpenStack was being created, Rackspace was a major contributor and hoped to compete with AWS in the public cloud. Rackspace has now altered their previous strategy and is now offering “Fanatical Support” for both AWS and Azure.
  6. HP cancels HP Helion Public Cloud – HP was in the Public Cloud business, then they were out, then back in again, and finally they shut down operations of their HP Helion Public Cloud offering. Once a core part of their Hybrid Cloud strategy, the Public Cloud portion is now being shifted to AWS or Azure, or to HPE-powered Service Providers.
  7. EMC acquires Virtustream for $1.2B – Even with VMware vCloud Air in the EMC Federation, EMC decided that it needed a more Enterprise-focused offering, so they acquired Virtustream, one of the leaders in that area. EMC has since announced that Virtustream will merge together with vCloud Air and be run in a single organization reporting to Rodney Rogers.
  8. IoT Strategies begin to form – AWS, Microsoft, Google, Cisco, GE, Oracle, Pivotal, IBM, SAP and many others laid out their IoT strategies and began to roll out commercial services. It’s still early days for IoT, but the frameworks for global platforms are starting to get created.
  9. Oracle announces Cloud strategy – At Oracle OpenWorld 2015, Oracle laid out their SaaS, PaaS and IaaS strategy, as well as a number of available services. Details are still emerging in many areas, but like Microsoft, Oracle has a massive installed base of customers and applications to help transition to their cloud.
  10. Open Source Software – All of the major Cloud providers not only build their services on open source software, but in many cases are now contributing back to large projects. Microsoft made major contributions. Apple made a late-year push, as did legacy vendors Cisco, HP, EMC and VMware.

What major stories did we miss for 2015 that were game changers for you?

November 30, 2015  10:24 PM

Things to Watch in 2016

Brian Gracely
AppDynamics, AWS, Azure, Dell, Docker, EMC, Google, OpenStack, PaaS

It's too early to write a predictions blog. Besides, after AWS announced revenues of $6-7B/yr and 80% growth and Dell acquiring EMC for $50-60B, I'm not sure we need a bunch of predictions right now. There are plenty of groups that need to execute against some business plans, and lots of people are trying to figure out the new economics in technology.

But there are some things that I will be watching closely in 2016:

All Roads Lead to Austin

Far too many technical conferences are in San Francisco or Las Vegas. They are cities that rob your soul and don't give you any energy back. But 2016 offers a glimmer of hope. Both OpenStack Summit and OSCON are going to be in Austin, TX in 2016. I was hoping that Dell might convince EMC to move EMCWorld there too, but apparently that dream never got off the ground. Nevertheless, there will be sunshine. There will be temperatures that won't melt you. And there will be BBQ. Glorious, glorious BBQ for as far as the eyes can see!!

Everybody Wants to Monitor Containers or Micro-Services

There is a growing number of companies in this space. Some of them want to manage the infrastructure (e.g. Appformix, Datadog, Sysdig, Weave, etc.) and others that want to monitor the applications (e.g. AppDynamics, NewRelic), while others monitor aspects of both (e.g. SignalFX).  Continued »

November 29, 2015  2:06 PM

Lessons from an Intra-preneur

Brian Gracely

In my career, I've had four opportunities to either lead or be part of projects that would be considered “internal startups” within larger companies. Sometimes the people involved with these projects are called “Intra-preneurs”, because there are some similarities to the experiences of people that create new businesses (e.g. entrepreneurs). The projects were:

  • In 2006, the “Linksys One” group that was spun out of Cisco and into Linksys, to focus on building early SMB solutions.
  • In 2008, the Virtualization and Grid Infrastructure group at NetApp that was focused on integrating virtualization technologies (VMware, Red Hat, Microsoft) with NetApp technologies.
  • In 2010, the pre-VCE group (from VMware, Cisco, and EMC) that was focused on the Vblock technologies before they were formalized into the VCE company.
  • In 2014, the EMC {code} group which focused on open source technologies and communities.

I contrast these experiences with my time at Virtustream, as they transitioned from a Cloud Provider to also being a technology/software provider.

The technology industry has an unusual fascination with startups, entrepreneurs and how it measures success and failure. I suspect that it goes back to the original HP garage and the excitement around new ideas. But the reality in the technology industry is that new ideas are, and will always be, plentiful, while the execution of those ideas into businesses is incredibly difficult. The odds against making something new come to fruition are astronomical, to the point where most logical people would discourage anyone from trying. Big companies have more money, more engineers, existing sales and distribution channels, and overall brand awareness. And yet people keep trying to build better mousetraps, and they keep climbing mountains because they are there to be climbed.

Lessons Learned

Over the last couple months, I was drawn back into thinking about this based on a couple things I read and listened to:

  1. Chad Sakac on The Geek Whisperers podcast, talking about how he created, grew and scaled the “Chad’s Army” vSpecialist group into the much bigger role he has today.
  2. Lucas Carlson’s newsletter series about entrepreneurship and failure.

Between the two, it was a nice mix of internally and externally-focused activities, small-company and big-company challenges, and successes and failures… and lessons learned. Continued »

November 22, 2015  1:32 PM

Some Cloud Native Terminology you should Learn

Brian Gracely

If you've been following the discussion about how applications are now being written to be more “cloud native”, you might have seen some terminology that looks strange or unusual. This is because the operational model around these newer types of applications makes a few assumptions that are very different from those of applications and infrastructure in the past. Let's look at a few of the concepts:

  1. Infrastructure as Software: While there will still be underlying hardware for servers (some acting as storage) and networks, the core idea is to interact with these systems as if they were built primarily from software, decoupled from the underlying hardware.
  2. Infrastructure as Code: Using software development techniques to define, test and manage the infrastructure software. This means that teams use structured automation frameworks (e.g. Chef, Puppet, Ansible, etc.) instead of random scripts, and the code is kept in versioned, software repositories.
  3. Avoid Patching: Instead of patching existing systems, the core belief is that new “versions” of a system should be deployed instead. This avoids feature-patch-dependency creep.
  4. Abstract where Possible: Being able to abstract specific components within an architecture helps avoid lock-in to a specific element, and eventually leads to more programmatic ways to interact with more parts of the system.
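Point 2 can be made concrete with a small, hypothetical Ansible task file. The host group and package here are placeholders, not from any real environment; the point is that the desired configuration is expressed as reviewable, version-controlled code rather than ad-hoc scripts:

```yaml
# webserver.yml - kept in a versioned git repository and reviewed like any code
- hosts: webservers
  become: yes
  tasks:
    - name: Ensure nginx is installed
      yum:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

Because the file describes the end state rather than the steps, running it repeatedly is safe: the framework only makes changes when the actual state drifts from the declared one.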

Terminology to Learn

Durable – “survives failure of its parts” – This aligns to Abstract where Possible. Elements will fail and this is expected, but the overall system should be durable enough to survive either HW or SW failures.

Declarative – “user of the abstraction declares the desired state, and the service providing that abstraction attempts to maintain the abstraction in that desired state.” – This aligns to Infrastructure as Software and as Code. The owner of a system defines (in programmatic software) what it wants the system to look like and how it should act. The system should then be managed not to vary from this state or “promise”.
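As a sketch of that “desired state” idea (the names and structures below are purely illustrative, not from any real platform), a declarative system boils down to a reconciliation loop that compares what the owner declared against what actually exists:

```python
# Desired state the owner has declared: service name -> instance count
desired_state = {"web": 3, "db": 1}

def reconcile(desired, actual):
    """Return the actions needed to converge the actual state on the desired state."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

# One web instance died and the db never started; the loop repairs the drift.
actual_state = {"web": 1}
print(reconcile(desired_state, actual_state))
# -> [('start', 'web', 2), ('start', 'db', 1)]
```

A real platform would run this loop continuously; the owner never issues imperative commands, only updates the declaration.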

Immutable – “unchanging over time or unable to be changed” – This aligns to the idea of Avoiding the Patching elements of an applications, or the popular “Pets vs. Cattle” notion of managed resources.

Ephemeral – “lasting for a very short time.” – This aligns to both Infrastructure as Software and Abstract where Possible. Some elements are only needed for short periods of time, or will only remain as long as a dependent element is in use. This is partly why people are beginning to like containers: they are designed like processes, so bringing them up or down is simple and fast.

Remotable – an interface that allows IPC (Inter-Process Communication), among other things. As you may know, most apps run in their own process and cannot directly interact with apps running in other processes. One method you can use to create an interaction between them is an IBinder, which allows communication between those “remote” objects.

We discussed many of these topics on a recent episode of The Cloudcast podcast.
