From Silos to Services: Cloud Computing for the Enterprise


August 10, 2017  4:23 PM

Bringing DevOps to Highly Effective People

Brian Gracely
Automation, Deming, DevOps

What aspect of DevOps poses the biggest challenge for IT leaders? As I discussed DevOps with hundreds of leaders during a recent event series with Gene Kim, an author of The DevOps Handbook, I heard one answer loud and clear: it's the culture change. The concepts of blended or shared responsibilities, blameless postmortems, and speed vs. stability often run counter to the principles you've been taught about leading IT. You realize that the future of the business will be heavily shaped by your ability to deliver software faster (via new features, new products, and new routes to market), but you struggle to find a language or framework for communicating to your teams (and peers) about how to make DevOps and these results happen.

Far too often, we ask IT leaders to learn the language of lean manufacturing or the principles taught by W. Edwards Deming. While these approaches are valuable, and can draw a parallel between factories of physical goods and 21st-century bits factories, they require yet another learning curve before change can occur and take hold in an organization.

So with that in mind, I thought it would be useful to adapt a popular business framework, Stephen Covey's "The 7 Habits of Highly Effective People", into a model that business leaders can use as they bring DevOps culture into their organizations. Let's see if this can help solve your language struggle.

1. Be proactive

Let’s start with the one constant in IT: Change happens, constantly. If you started 20 years ago, client-server was just beginning to gain momentum. If you started 10 years ago, the iPhone had barely been introduced and AWS had 2 services. If you started 5 years ago, Linux containers were still overly complicated to use and web-scale companies weren’t open sourcing important frameworks.

We've seen massive changes over the last 50 years in which companies lead their respective industries, and which market segments are the most valuable (hint: today it's technology). Business leaders must recognize that technology is driving these changes at a more rapid pace than ever before, and be proactive in preparing for the next round(s) of change. Be the change agent that the business requires. Be responsible for behavior, results, and growth.

2. Begin with the end in mind

No business executive wakes up and says, “We have a DevOps problem!” Instead, you lose sleep over reducing time-to-market for new capabilities, reducing security risks, and other metrics that can be directly tied to the top or bottom line. This is why I believe that “Confidence in Shipping Software into Production” is the most important DevOps metric.

At its core, this metric begins with the end in mind: Can we get software into production safely and reliably? From there, you work backwards to determine how frequently this can happen, which leads to an examination of existing skills (people), the ability to manage deployment frequency (culture), and whether the right tools and platforms are in place (technology). Focus your time and energy on things that can be controlled.
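To make "begin with the end in mind" a little more concrete, here is a rough Python sketch of how a team might derive two supporting measurements, deployment frequency and change failure rate, from a simple log of production deployments. The record format and sample values are invented for illustration; substitute whatever your deployment tooling actually captures.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class Deployment:
        # Hypothetical record of one production deployment.
        deployed_at: datetime
        succeeded: bool  # shipped without a rollback or emergency fix?

    def deployment_frequency(deploys: List[Deployment], days: int) -> float:
        """Average number of production deployments per day over the window."""
        return len(deploys) / days

    def change_failure_rate(deploys: List[Deployment]) -> float:
        """Share of deployments that required remediation."""
        if not deploys:
            return 0.0
        failures = sum(1 for d in deploys if not d.succeeded)
        return failures / len(deploys)

    if __name__ == "__main__":
        history = [
            Deployment(datetime(2017, 8, 1, 10, 0), succeeded=True),
            Deployment(datetime(2017, 8, 3, 15, 30), succeeded=False),
            Deployment(datetime(2017, 8, 8, 9, 45), succeeded=True),
        ]
        print(f"Deploys per day: {deployment_frequency(history, days=30):.2f}")
        print(f"Change failure rate: {change_failure_rate(history):.0%}")

Numbers like these are only conversation starters with the business; the goal is the confidence behind them, not the decimals.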


3. Put first things first

You need to execute on the most important business priorities. While it's easy to imagine how a greenfield organization would make "DevOps" the default technology culture, that is not an immediate option for most organizations. Their org charts are optimized for yesterday's business model and distribution strategy. They have many application platforms, often siloed for different lines of business. And they need to adapt their applications to become mobile-native in order to address emerging customer expectations.

Putting first things first, these core elements need to be in place before the business can expect to be successful at rapidly deploying software into production.

  • Automation – It is critical to build core competency in the skills and tools needed to automate repetitive tasks, both for applications and infrastructure. For many companies, it's valuable to begin with a focus on existing applications (both Linux and Windows) and infrastructure (e.g. networking, storage, DHCP/DNS), and then evolve to automating new applications and services. (A small sketch of this kind of repetitive check follows this list.)
  • CI/CD Pipelines – Just as factories have been built around assembly lines since the early 1900s, modern software delivery is driven by automated pipelines that integrate source code repositories, automated testing, code analysis, and security analysis. Building the skills to manage those pipelines, and the process around frequent software updates, is critical to handling frequently updated applications.
  • Application Platform – Once applications have been built, they need to be deployed into production. In today's world, customers expect to get updates to their software on a frequent basis (e.g. mobile app updates each week), so it's important to have a repeatable way to deploy application updates and scale to meet the business demands on the application. Managing the day-to-day activities of applications is the role of an application platform. For many years, companies tried to build and maintain their own application platforms, but that approach is rapidly changing as companies realize that their value-add is in the applications, not the platform.
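As referenced in the Automation item above, here is a minimal sketch of the kind of repetitive infrastructure task (DNS resolution and port reachability checks across an inventory of hosts) that makes a good first automation target. The hostnames and ports are hypothetical; a real environment would pull this inventory from a CMDB or an automation tool's inventory file.

    import socket

    # Hypothetical inventory; in practice this would come from a CMDB,
    # an Ansible inventory, or a service catalog.
    HOSTS = {
        "app01.example.internal": [22, 443],
        "db01.example.internal": [22, 5432],
    }

    def check_dns(hostname: str) -> bool:
        """Repetitive task #1: confirm the hostname still resolves."""
        try:
            socket.gethostbyname(hostname)
            return True
        except socket.gaierror:
            return False

    def check_port(hostname: str, port: int, timeout: float = 2.0) -> bool:
        """Repetitive task #2: confirm a service port is reachable."""
        try:
            with socket.create_connection((hostname, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        for host, ports in HOSTS.items():
            dns_ok = check_dns(host)
            print(f"{host}: dns={'ok' if dns_ok else 'FAIL'}")
            for port in ports if dns_ok else []:
                state = "open" if check_port(host, port) else "CLOSED"
                print(f"  port {port}: {state}")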

Once these elements are in place, many IT teams are ready to start containerizing their existing and modern applications.


4. Think win-win

Far too often, the DevOps discussion is framed as the tension and disconnect between Development and Operations teams. I often call it an "impedance mismatch" between the speed at which developers can push new code and the speed at which operators can accept the updates and make sure that production environments are ready.

Before we blame all the problems on operations being too slow, it's important to look at why developers are believed to be so fast. From the 2017 State of DevOps Report, we see that Gene Kim (and team) measure speed at the point when developers push code into source control (e.g. Git, GitHub).

They aren’t measuring the speed of design and development. Even in a microservices environment, it can take several weeks or months to actually develop the software features.

So how do teams potentially get to a win-win scenario? Here are a few suggestions:

  • For Operations teams, adopt automation tools and Infrastructure-as-Code principles (e.g. using source control for automation playbooks), so that both development and operations begin to use common practices and processes.
  • For Development teams, insist that security people are embedded within the development process and code review. Security should not be an end-of-process step, but instead embedded in day-to-day development and testing.
  • For both teams, require that automated testing becomes part of normal updates. While many groups preach a "cloud-first" or "mobile-first" policy for new applications, they should also be embracing an "automated-always" policy. (A small example follows this list.)
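As one example of "automated-always" in practice, here is a hypothetical smoke test that could gate every update in a pipeline. The health-check URL is an assumption; the point is that a failing check stops the pipeline instead of relying on someone remembering to verify the deployment by hand.

    import sys
    import urllib.request

    # Hypothetical endpoint; substitute the health-check URL your service exposes.
    HEALTH_URL = "http://app.example.internal:8080/healthz"

    def smoke_test(url: str, timeout: float = 5.0) -> bool:
        """Return True if the service answers with HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    if __name__ == "__main__":
        # A non-zero exit code fails the pipeline stage, so a broken update
        # never reaches production unnoticed.
        sys.exit(0 if smoke_test(HEALTH_URL) else 1)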


5. Seek first to understand, then be understood

Six or seven years ago, nearly every CIO said that they wanted to emulate the output of web-scale giants like Google in terms of operational efficiency (1,000 servers per engineer) and be more responsive to developers and the business. Unfortunately, at the time, it was difficult to find examples of companies outside of Silicon Valley that could actually execute at a similar level. And the technology Google used was not publicly available. But times have changed significantly over the last few years. Not only is Google's technology (e.g. Kubernetes) readily available via open source projects, but the examples of enterprise companies achieving similar success are plentiful.

So before you send a few architects out to Silicon Valley to study with the masters, it might be more valuable to study companies similar to your own. This will surface experience that is industry-specific, region-specific, and close to your use cases. It will also help answer the question, "But how can we do that without hiring 100+ PhD-level engineers, or $1M+ salaried employees?" Sometimes the right answer is to leverage the broad set of engineers working on popular open source projects.


6. Synergize

I’ve often said that everything someone needs to be successful in DevOps they learned in first grade. For example, “play nicely with others,” “share,” and “use your manners.” The challenge is that org charts and financial incentives (such as salaries, bonuses, and promotions) are often not aligned between Dev and Ops teams in order to accomplish these basic goals.

Some knowledge of Conway’s Law comes in handy here. If the goal is a specific output (e.g. faster deployment of software into production), make sure that the organizational model is not the top barrier to accomplishing the goal. Cross-pollination of ideas becomes critical. Teams need to share their goals, challenges, and resource availability with other teams. You want to innovate and problem solve with those who have a different point of view.


7. Sharpen the saw

It would be easy to say that IT organizations need to make sure that their teams are keeping up to date on training and new skills. But all too often, this becomes a budget line item that gets ignored. The proper way to address the need for "skills improvement" is not to think about it as "training" (perhaps attend a course, get a certification), but rather to incorporate it into actual work activities.

We’re moving into an era in IT where all the rules and best practices that have been stable for the last 15 to 20 years are being re-written. This means that it’s critical to leverage modern ways to learn new skills. Encourage employees to seek the best way for them to learn (such as side projects, meetups, and online learning) and then have them bring those new skills back to the rest of the team. Make it a KPI to improve the skill levels and skill diversity of the entire team, with incentives for individuals and the overall team to get better. Bottom line: Seek continuous improvement and renewal professionally and personally.


The importance of storytelling

The 7 Habits framework has proven to be successful in helping individuals and groups improve interpersonal skills. Those skills are at the core of any cultural transformation.


Beyond the 7 habits, one more skill should be on every leader’s mind. One of the most important skills that IT leaders can leverage as they drive transformation is the ability to tell stories of success and change. The storytelling skill can inspire emotions and can help spread successes from group to group. Storytelling also helps a group to personalize its challenges, and adapt solutions to suit the group’s particular cultural nuances.

July 23, 2017  8:00 PM

The Cloud continues to shift Microsoft’s strategy

Brian Gracely
Azure, cloud, Microsoft, Office365, Open source software, Satya Nadella, Steve Ballmer

Microsoft's journey over the past 3-4 years, since the appointment of CEO Satya Nadella, has been a fascinating example of how large companies transform. Like many established IT vendors (e.g. Oracle, Cisco, HPE, EMC, NetApp), Microsoft has found it difficult to adapt to a world that is more software-centric, more cloud-centric, and includes heavy doses of open source software. It requires rethinking almost every aspect of the business, and making some difficult short-term decisions.

One of the first things that Nadella did was abandon former CEO Steve Ballmer's three-pronged plan to move Microsoft into devices and services. This meant walking away from the massive Nokia acquisition and from the mobile phone business, which is dominated by Apple and the Android ecosystem. It's unusual for a new CEO to make such a significant break from the prior CEO's strategy, especially before building up credibility with the Board of Directors and Wall Street.

During the '80s and '90s, Microsoft was almost entirely defined as an Operating System company, defining and dominating the growth of the PC market. But since 2000, with the massive expansion of the Internet, the desktop PC has become less and less relevant. I've written before that Microsoft would be smart to think about how to be less focused on the Operating System and more focused on application developers. The rise of the smartphone and the decline of the PC has defined the past decade of computing, and unfortunately, Microsoft missed the smartphone OS opportunity. Windows is no longer a viable OS for smartphones.

While it took Microsoft a little while to move their largest revenue base (the Microsoft Office suite) to the cloud as a SaaS offering, the growth of Office365 now appears to be strong. This allows Microsoft to keep the cash-cow business that funds the rest of their investments.

And that brings us to Microsoft's public cloud strategy – both Azure and AzureStack. When it was originally designed, Microsoft Azure was focused on PaaS (Platform-as-a-Service), while AWS focused on core IaaS (Infrastructure-as-a-Service). Microsoft expected to leverage their large base of .NET developers, but the offering arrived at the wrong time with the wrong services, and they fell behind AWS. Over time, though, the Azure cloud has refocused on a broader set of services (IaaS, PaaS, Big Data, non-Windows workloads) and has begun to grow quite large. By most accounts, it's now the #2 public cloud provider. The next big step will be to break out Azure revenues independently.

Recently, Microsoft took yet another step in their cloud journey, a step that many existing IT vendors have struggled with. They laid off a number of their existing sales teams, refocusing on how to sell specifically to customers that want to buy from the cloud. They also signaled at their Inspire conference that they plan to start paying their sales teams, and partner sales teams, based on consumption instead of just purchase volume. This will be a very interesting move to watch, as it will require a completely new approach by sales teams. Instead of just selling large ELAs (Enterprise License Agreements), which could result in shelf-ware, they will be forced to be more in tune with actual customer usage. It will also create some interesting scenarios where they may encourage customers to use more resources than they actually need, in order to drive their own compensation. In addition, it will force customers to begin learning how to budget for IT consumption in an on-demand world, something they've never had to do before. For many companies, this has proven to be extremely difficult.

History has shown us that the leaders from one era of technology rarely remain the leaders into the next era. Microsoft is trying to make that transition, so it will be interesting to watch how well some of the significant changes are accepted, and where their competition is able to maneuver around them without their legacy baggage.


July 16, 2017  8:40 PM

Confidently measuring DevOps success

Brian Gracely
confidence, containers, Continuous deployment, culture, DevOps, Metrics, Software

Over the past few weeks, I've had the great privilege to partner on a series of roadshows with Gene Kim (@realgenekim), author of "The Phoenix Project", "The DevOps Handbook" and the annual "State of DevOps Report". The events are called "Culture, Containers and accelerating DevOps, the path to Digital Transformation", and they provide us with an opportunity to speak with developers, enterprise architects, IT operations engineers and business executives about how they are implementing technology and culture changes to help them deliver software faster into their markets.

During Gene’s presentation, he highlights a series of lessons that he’s learned since writing The Phoenix Project. Some of these include:

  1. The Business Value of DevOps is higher than expected.
  2. DevOps is as Good for Ops, as it is for Devs.
  3. The Importance of Measuring Code Deployment Lead Times.
  4. The Surprising Implications of Conway’s Law. (organizational structure)
  5. DevOps is for the Unicorns, and the Horses too.

The lessons are supported by a series of stories, examples and data from businesses that Gene has interacted with over the past 4-5 years, as they navigate their DevOps journey.

What’s the Most Important DevOps Metric?

At some point in every event, someone from the audience asks, "If you had to boil it down to a single thing, what is the most important DevOps metric for us to track?" Gene's answer is often topic #3 (above), focused on measuring code lead times. It comes from his experience studying the Toyota Production System and its approach to flows of work, managing and correcting defects, and empowering employees to make the ongoing changes needed to improve production of the end product. In essence, he highlights that today's data centers have become 21st-century bits factories, with the goal of producing high-quality, agile software applications.
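To illustrate what measuring code deployment lead time can look like, here is a small, hypothetical Python sketch that computes it from (commit time, deploy time) pairs. The sample data is invented; in practice these timestamps would be exported from source control and deployment logs.

    from datetime import datetime, timedelta
    from statistics import median
    from typing import List, Tuple

    # Hypothetical (commit_time, deploy_time) pairs for recent changes.
    CHANGES: List[Tuple[datetime, datetime]] = [
        (datetime(2017, 7, 1, 9, 0), datetime(2017, 7, 1, 16, 30)),
        (datetime(2017, 7, 3, 11, 0), datetime(2017, 7, 6, 10, 0)),
        (datetime(2017, 7, 5, 14, 0), datetime(2017, 7, 7, 9, 15)),
    ]

    def lead_times(changes: List[Tuple[datetime, datetime]]) -> List[timedelta]:
        """Lead time = time from code committed to code running in production."""
        return [deployed - committed for committed, deployed in changes]

    if __name__ == "__main__":
        hours = [t.total_seconds() / 3600 for t in lead_times(CHANGES)]
        print(f"Median code deployment lead time: {median(hours):.1f} hours")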

For the most part, I’d agree with Gene that this is an important metric to track. But with all due respect to his expertise in this area, I actually believe that there is a better metric to track. And Gene actually calls out this concept in his talk: An Organization’s Confidence Level of Shipping Software into Production. 

 


In my opinion, this concept is more appropriate than any specific metric, because it forces the technology team to think about their actions in business terms. It also allows them to have a conversation with the business leaders in a way that is focused on impact to customers. It directly aligns to every company becoming a software company, and the importance of making software “go to market” becoming a core competency of the business.

It allows the technology team to begin with the end in mind, and work backwards from the goal of safely shipping software into production. Their level of confidence in the end goal will also force them to consider why their current confidence level may not be the highest it could possibly be.

  • Are we investing (people, technology, partnerships) for success towards the goal?
  • Are we designing our software, our systems and our platforms to handle the emerging needs of the business?
  • Are we enabling a culture and set of systems that allow us to learn from our mistakes, and make improvements when needed?

When I think about this concept, I'm encouraged by the level of confidence from John Rzeszotarski – SVP, Director of Continuous Delivery and Feedback – at KeyBank.

John talked about the DevOps journey at KeyBank and how they focused on culture, continuous integration pipelines, automation, containers and their container-application deployment platform. This was a 12-18 month journey, but where they are today is pretty remarkable. He summed up his talk by telling a story about how they recently re-launched services from one of the banks they had acquired. The highlight was that they were able to deploy 10 new updates to the production application, with 0 defects, during the middle of the day. That is a very high level of confidence in shipping software into production, and in the elements that make up their DevOps culture.

The KeyBank story is a great example of making a significant impact to the business, and measuring the technology in terms of business success and agility.

NOTE: Below are my slides from the Culture, Containers and Accelerating DevOps roadshows.


July 6, 2017  1:37 PM

Why Developers and Operators are using Containers

Brian Gracely
Cloud Foundry, containers, DevOps, Docker, Kubernetes, Open source, OpenShift, Red Hat

This week, I was listening to an episode of the "Speaking in Tech" podcast, and the guest was arguing that container usage may be overhyped and that containers are not necessarily a good thing to expose to developers or operators.

When having these types of discussions, I believe it’s important to look at not just the evolution of container technology, but the evolution of container platforms. From this, we have learned a few valuable lessons:

Give Developers Flexibility

Whether we’re talking about PaaS (Platform as a Service) or CaaS (Containers as a Service) technologies, the end goals are fairly similar – make it simpler for developers to get their software into production in a faster, more stable, more secure way. And as much as possible, hide/abstract away much of the complexity to make that happen.

Early PaaS platforms got part of this equation correct by delivering Heroku-like “push” functionality to developers. Have code, push code, run code. But they also got part of the equation wrong, or at least limited the developer experience too much. In other words, the experience was too opinionated. The early platforms limited which languages could be used by developers. They also forced developers down a path that limited the versions or variants of a language that they could use.

Letting developers use standards-based containers as a packaging mechanism gave them more flexibility than the original PaaS platforms offered. It allowed for more experimentation, as well as letting developers validate functionality using local resources (e.g. their laptop).
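For example, a developer can validate a change entirely on a laptop before it touches a shared environment. Here is a rough sketch that simply shells out to the Docker CLI; the image tag, port, and Dockerfile location are assumptions for illustration.

    import subprocess

    # Hypothetical image tag; the Dockerfile is assumed to be in the current directory.
    IMAGE = "myteam/myapp:dev"

    def run(cmd: list) -> None:
        print("$ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # Build the image locally from the project's Dockerfile...
        run(["docker", "build", "-t", IMAGE, "."])
        # ...then run it, mapping the app's port so it can be exercised
        # from the laptop before the code reaches any shared environment.
        run(["docker", "run", "--rm", "-d", "-p", "8080:8080",
             "--name", "myapp-dev", IMAGE])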

Align Technology and Culture

The guest on Speaking in Tech was correct in saying that no single technology, nor a single culture shift, will give a company a technology advantage in the market. It requires a mix of technology evolution and cultural evolution, geared towards delivering better software for the business. Containers play a role in this. Because the container can be both the unit of packaging and the unit of operations, it begins to create a common language and set of processes for both Developers and Operators. It's not everything, but it's a starting point. It needs to be augmented with expertise around automated testing, automated deployments and the CI/CD pipeline tools (e.g. Jenkins) that allow for the consistent movement of software from developer to QA to production, and the ongoing operation of that software. Hiding containers from either developers or operators only makes it harder to find that common ground between the evolving technology and culture.

Container Platforms Have Quickly Evolved

As container usage grows, we’re learning quite a bit about making them work successfully in production. At the core of that learning is the maturity of the underlying container platforms and how to manage them. The reality is that successful container platforms are a combination of the simplicity of PaaS and the flexibility of CaaS. They allow developers to push code, binary images and containers into the platform (directly or via CI/CD integrations) and they give operators a stable, multi-cloud platform to run those applications. We’ve seen the number of developers working on platforms like Kubernetes grow significantly, and businesses are adopting it around the world. And with the evolution of Kubernetes Federation, we’ll begin to see even greater adoption of truly hybrid and multi-cloud environments for businesses.

Containers have experienced a meteoric rise in interest from developers over the last few years. They're enabling greater flexibility for developers, bringing development and operations teams together around common technology, and enabling multi-cloud deployments that are expanding the interactions between companies and their marketplaces.


June 30, 2017  7:01 PM

Open Source Software and the Grateful Dead

Brian Gracely
Open source software

For the last few weeks, I've been traveling quite a bit, so I've spent a decent amount of time on airplanes. When airplane WiFi is poor (quite frequently), I pass the time watching movies. Lately, I've been watching the excellent "Long Strange Trip" documentary on Amazon Prime, about the history of the Grateful Dead. If you like music, or history, or just enjoy good storytelling, I highly recommend the series.

Coming up on my 1-year anniversary of working at Red Hat, it struck me how many parallels there are between the evolution of the Dead and how open source software communities evolve [LSD trips and being under the constant influence of drugs excluded]. The Grateful Dead have often been characterized as "a tribe of contrarians who made art out of open-ended chaos". That phrase could easily apply to many open source communities.

[Episode 1] Committed to Constant Change

The Grateful Dead are known as a touring band, not one that spent its time chasing commercial success via studio albums. Like open source software, their music was constantly evolving, and it was interpreted differently by nearly everyone who saw them perform live. As their output began to slow, their model was "forked" and replicated by touring bands like Phish and Widespread Panic. Similarly, open source software is less about a single project than a style of development and collaboration that is constantly evolving, with its principles copied (and evolved) by many other projects.

[Episode 2] Finding Success on Their Own Terms 

While the record labels wanted them to conform to their recording and sales models that were used by most other bands, the Grateful Dead decided to adopt alternative business models. At the time, selling albums would have been more profitable, but they were actually ahead of their time in focusing on live events and allowing their music to be fragmented and easily copied (bootleg tapes). Similarly, many analysts would like to see open source companies deriving revenues in similar ways to proprietary companies, but that model hasn’t been fruitful. Successful open source companies have adopted support models and SaaS models to drive revenues and success.

[Episode 3] Let’s Join the Band

While the Grateful Dead had 5 or 6 original members, the documentary highlights how Donna and Keith Godchaux “just decided to learn the music and join the band” in 1971. Random fans of the Dead actually joined the band and stayed with them for many years. This is not unlike how anyone can join an open source project just by showing interest and making a meaningful contribution.

[Episode 4] Who’s In Charge Here? 

For many people, the connection between Linus Torvalds and the Linux project is the model that they expect all open source projects to have. They expect a BDFL (Benevolent Dictator for Life). In most projects, the BDFL role doesn't really exist. There might be strong leaders, but they realize that broader success needs many leaders and tribes to emerge. This same dichotomy emerged for the Grateful Dead, where Jerry Garcia was the visible leader, but he didn't want to set all the rules for how the band (or their audience) needed to behave.

[Episode 5] and [Episode 6] – I've yet to see these episodes (they're saved for the next airplane flights), but judging by the previews, they appear to have similar open source parallels. They focus on the growing success of the band and how people set higher expectations than the band wanted to take on themselves. This can often happen with successful projects, where commercial expectations begin to drift from core community expectations. This is where strong leadership is needed just as much as in the early days of the project.

If you’re interested in open source software, or some insight into how communities ebb and flow, I highly recommend this documentary. And the music is obviously great too.


June 26, 2017  10:24 PM

Walmart vs. Amazon – Battling Outside the Box

Brian Gracely
Amazon, AWS, Azure, Google Cloud, Jeff Bezos, Public Cloud, Walmart

This past week, Walmart issued a statement to their retail partners, suggesting that they should not run their technology stack on the AWS cloud. This is not an unprecedented move for Walmart, which has for many years required that partners have a physical presence in Bentonville, AR (Walmart HQ) in order to simplify meetings and reduce travel costs for Walmart.

It's understandable that Walmart wants to keep valuable information about their business trends, and details about their partners, away from AWS (and indirectly, Amazon). This is not to imply (in any way) that customer data is collected by AWS, but there is no way to determine how much meta-information AWS can gather about usage patterns that could influence the services it offers.

What’s interesting about this statement from Walmart is that they don’t offer a Walmart-branded hosted cloud alternative to AWS. This brings up an interesting dilemma – [1] Does this create a unique opportunity for the Azure cloud or Google cloud?, [2] Does Walmart have concerns about Google’s alternative businesses (e.g. Alphabet) collecting data patterns about their partners?, [3] Will Walmart partners be swayed by this edict, especially given Amazon’s growing market share in retail?  [4] Will this force Walmart to get into the hosted cloud business? Do they keep enough cash on their balance sheet to compete in that market?

Back in December, I predicted that the Trump administration would pick a fight with Amazon, as a proxy for Jeff Bezos' ownership of the Washington Post. That hasn't materialized yet, although the year is only halfway complete.

This action by Walmart ultimately brings up the question: can non-traditional tech companies begin to impact AWS in ways that traditional tech companies have been unable to do – e.g. slow down AWS growth? Companies such as HPE haven't been able to slow it down, but maybe Walmart's massive reach can have a different impact on the market. It will be interesting to see if Walmart reports this in their quarterly reports, or begins to make it a public issue via their Office of the CTO.

Beyond Amazon vs. Walmart, this brings up yet another interesting question: will we see existing companies with large ecosystems or supply chains (e.g. automotive, healthcare) apply cloud guidance to their partners (e.g. must use XYZ cloud), or has the world of APIs completely changed what a modern supply chain looks like? The concept of "community clouds" has never really taken off in practice.


June 13, 2017  11:15 PM

OpenStack was a pivotal time in IT

Brian Gracely
AWS, Open source, OpenStack, Private Cloud, Public Cloud, VMware

This past week, we did some reflection on The Cloudcast about the evolution of technology over the last 6+ years. One of the topics we discussed was the impact that OpenStack had on the industry. People have various (strong) opinions about the level of success that OpenStack has achieved, but we discussed how OpenStack changed the IT landscape in a number of significant ways.

Announced and launched in 2010, OpenStack was designed to deliver an API-driven cloud infrastructure, similar to AWS EC2 compute and S3 storage. At the time, there was a split about whether the project(s) would focus on being a VMware replacement, or an open version of AWS services. This was heavily debated by groups focused on both agendas.
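To give a sense of what "API-driven cloud infrastructure" means in practice, here is a rough sketch using the openstacksdk Python library. The cloud name is an assumption that would map to an entry in a local clouds.yaml file; the calls simply list compute instances and object storage containers, rough analogues of describing EC2 instances and listing S3 buckets.

    # Assumes: pip install openstacksdk, and a clouds.yaml entry named "mycloud".
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # List running instances (the OpenStack analogue of describing EC2 instances).
    for server in conn.compute.servers():
        print(server.name, server.status)

    # List object storage containers (the analogue of listing S3 buckets).
    for container in conn.object_store.containers():
        print(container.name)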

Software Defined Infrastructure

While OpenStack was by no means the first implementation of software-defined infrastructure services (networking, storage, firewall, proxy, etc.), it was the first significant time that this approach to technology was embraced by Enterprise-centric vendors. Until then, Enterprise vendors had continued to provide hardware-centric offerings that complemented offerings like VMware virtualization. Since then, API-centric infrastructure has become more commonplace in the Enterprise, especially with the emergence of containers and container platforms.

Open Source in the Enterprise

While companies like Red Hat, SUSE and Canonical had been selling commercial open source to the Enterprise for many years, OpenStack was the first time that companies like Cisco, HPE, NetApp, EMC and many others attempted to combine proprietary and open source software in their go-to-market offerings. Since then, more IT vendors have been building open source offerings, or partnering with open-source-centric companies, to bring offerings to market for customers that are demanding open-first software.

Who’s in Charge of OpenStack?

While Rackspace may have wanted to leverage all the engineering talent to take on AWS, it wasn’t able to maintain ownership of the project. The OpenStack foundation was an early attempt at trying to bring together many competing vendor interests under a single governance model. Critics would argue that it may have tried to take on too many use-cases (e.g. PaaS, Big Data, DBaaS) and projects in the early days, but the project has continued to evolve and many large cloud environments (Enterprise, Telco) are running on OpenStack.

Since the creation of the OpenStack Foundation, several other highly visible open source projects have created independent foundations to manage their governance (e.g. CNCF, Cloud Foundry).

Founders Don’t Always Make the Big Bucks

While OpenStack was viewed as a disruptive threat to the $1T Enterprise infrastructure industry, and heavily funded by venture capital, most of the founding individuals didn’t make out in a big way financially. Piston Cloud and Cloudscaling were sold to Cisco and EMC, respectively, with relatively small exits. SwiftStack has pivoted from just supporting OpenStack to also supporting multiple public cloud storage APIs and software-defined storage use-cases. Nebula went bankrupt. Even Mirantis has moved their focus over to Kubernetes and containers. Ironically, Red Hat has become the Red Hat of OpenStack.


May 28, 2017  5:32 PM

Do IT admins fear for their future?

Brian Gracely
Automation, Cloud Computing, DevOps, IT admin, learning, Planning, software-defined

Most tech events that I attend are fairly positive, with people talking about new technologies and how these might "change the world". The pushback on most talks is about the viability of the technology, or who would actually attempt to use that technology in production.

But a couple of weeks ago at Interop, I experienced a much different vibe. At several of the cloud computing talks I attended, people in the audience were asking how this technology would replace their jobs and what they could do to prevent it.

We’ve Seen this Before

Now, this isn't really a new sentiment. We heard it from mainframe and mini admins when open systems and client-server computing were introduced. We heard it from telecom admins when voice-over-IP was introduced. And we heard it from various infrastructure teams when virtualization and software-defined infrastructure were introduced.

What seemed different about the concerns at this event was that most of the people asking questions didn't believe they'd ever get the opportunity to expand their current skills at their current employer. In essence, they were saying: "I don't doubt that DevOps or Public Cloud or Cloud-native apps will happen; I just don't see how they'll happen via the IT organization at my company."

I've written before about how learning new technologies has never been more accessible (here, here, here). But I also realize that many people aren't going to take the time to learn something new if it can't be immediately applied to their current job. It's a bit like taking classes in a foreign language but having no one to practice the new language with.

Do we need more IT Admins?

During one of the sessions, Joe Emison (@joeemison) made the point that while developers are driving more changes within IT today, they aren't very good at many of the tasks that IT admins typically perform. This is leading them to leverage more and more public cloud services.


It was a sobering slide for those in attendance, especially those who had spent many years building up those skills. There was also a realization that they were part of IT organizations that had never really been measured or incentivized to optimize for speed, but rather for cost reduction and application uptime.

Double down on developers?

There really weren't many answers for people asking about their future in a world of DevOps, Public Cloud, Automation and more focus on developing and deploying software quickly. Most answers focused on learning the software skills necessary to program something, whether an application or the automation tools needed to stand up infrastructure, security, or CI pipelines quickly. Those might not have been the answers that IT admins wanted to hear, but they are the answers that provide some path forward. Answers that tell people to do nothing, or to just wait for the future to change, probably aren't going to create the future that people in the audience had hoped for.


May 28, 2017  1:16 PM

3 Lessons Learned: Containers vs. Container Platforms

Brian Gracely
containers, Docker, Kubernetes, Linux, malware, OpenShift, Security, swarm

This past week I had the opportunity to present a session entitled "Managing Containers in Production: What you need to know" at the Interop conference in Las Vegas. In addition to giving the talk, I had the opportunity to watch several other presentations about containers and cloud-native applications. One session, "The Case for Containers: What, When, and Why?", was primarily a Containers 101 overview with some examples of how you might run containers on your local machine. It highlighted for me three distinct differences between running containers locally and running them in production.

Local Containers vs. Container Platforms

One of the discussion points was getting from running a single container to running the several containers that make up an application, or several interconnected services. The suggestion was that people can just use the built-in "Swarm Mode" to interconnect these containers. While this is true, the session failed to mention the more popular way to do this, using Kubernetes. A member of the audience also asked if this could create a multi-tenant environment for their business, and they were told that there were no multi-tenant technologies for containers. It's true that Swarm Mode does not natively support multi-tenancy. But it is incorrect that multi-tenancy isn't supported for containers. Red Hat OpenShift delivers a multi-tenant environment for containers (via projects, etc.), built on top of Kubernetes.
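For anyone curious what container multi-tenancy looks like at the Kubernetes layer (which OpenShift projects build on), here is a rough sketch using the official Kubernetes Python client. The tenant name and quota values are invented for illustration, and it assumes a working kubeconfig with permission to create namespaces and quotas.

    # Assumes: pip install kubernetes, plus a valid kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    TENANT = "team-payments"  # hypothetical tenant name

    # One namespace per tenant is the basic isolation unit that platforms
    # like OpenShift (via projects) build on top of.
    ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=TENANT))
    v1.create_namespace(ns)

    # A resource quota keeps one tenant from starving the others.
    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{TENANT}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"pods": "50", "requests.cpu": "10", "requests.memory": "32Gi"}
        ),
    )
    v1.create_namespaced_resource_quota(namespace=TENANT, body=quota)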

Docker Hub vs. Managed Container Registries

Throughout the talk, the speaker used Docker Hub as the source for all container images. While Docker Hub has done a great job of bringing together the containerized applications of ISVs and independent engineers, it does have its challenges. First, several independent studies have shown that many images on Docker Hub have known security vulnerabilities or viruses. This means that it's important to know the source of container images, as well as to have a mechanism to scan (and re-scan) any images you use in your environment. Second, Docker Hub is a registry located across the Internet from your environment. What will you do if Docker Hub isn't reachable in your application pipeline? This leads many companies to look at local container registries, not only to improve availability, but also to manage the bandwidth requirements that can be high for large container images. It also allows companies to better manage image sources (e.g. a corporate standard for trusted images) and scanning capabilities.
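A common first step is to mirror vetted upstream images into an internal registry, so builds stop depending on Docker Hub being reachable. Here is a minimal sketch that shells out to the Docker CLI; the internal registry hostname and the image choice are assumptions for illustration.

    import subprocess

    # Hypothetical internal registry; replace with your own registry hostname.
    INTERNAL_REGISTRY = "registry.example.internal:5000"
    SOURCE_IMAGE = "ubuntu:16.04"  # an upstream image your team has vetted

    def run(cmd: list) -> None:
        print("$ " + " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        mirrored = f"{INTERNAL_REGISTRY}/base/{SOURCE_IMAGE}"
        run(["docker", "pull", SOURCE_IMAGE])    # pull once from Docker Hub
        run(["docker", "tag", SOURCE_IMAGE, mirrored])
        run(["docker", "push", mirrored])        # builds now pull from the LAN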

Aligning Container OS vs. Host OS

A final topic that came up as a result of an audience question was whether or not you should align the base Linux image in the container with the OS in the host where the container is running. This is an important topic to discuss because containers are a core element of the Linux operating system. In essence, they divide the Linux running on the host into two sections: container image and container host.

For an individual's machine, it may not matter whether there is alignment between the container base image and the host OS. This can often happen if you're using the defaults in a tool like Docker for Windows/Mac (e.g. LinuxKit or Alpine Linux) and the popular images from Docker Hub (e.g. Ubuntu Linux). But as this moves into a production environment, the alignment becomes more critical. There are many elements to Linux containers and Linux hosts. There can be differences between versions of an OS, versions of the Linux kernel, and the libraries included with each one. This can introduce security vulnerabilities or a lack of functionality.
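One simple way to check this alignment is to compare the /etc/os-release data on the container host with the same file inside the base image. Here is a rough sketch that does so by shelling out to the Docker CLI; the image name is an assumption, and the script assumes it runs on a Linux container host.

    import subprocess

    def parse_os_release(text: str) -> dict:
        """Parse the KEY=value pairs in an /etc/os-release file."""
        info = {}
        for line in text.splitlines():
            if "=" in line:
                key, _, value = line.partition("=")
                info[key] = value.strip().strip('"')
        return info

    def host_os() -> dict:
        with open("/etc/os-release") as f:
            return parse_os_release(f.read())

    def image_os(image: str) -> dict:
        out = subprocess.run(
            ["docker", "run", "--rm", image, "cat", "/etc/os-release"],
            capture_output=True, text=True, check=True,
        )
        return parse_os_release(out.stdout)

    if __name__ == "__main__":
        image = "ubuntu:16.04"  # hypothetical base image to compare against
        h, i = host_os(), image_os(image)
        for key in ("ID", "VERSION_ID"):
            match = "match" if h.get(key) == i.get(key) else "MISMATCH"
            print(f"{key}: host={h.get(key)} image={i.get(key)} -> {match}")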

Overall, it's great to see container topics being widely discussed not only at DevOps and Developer-centric events, but also at Infrastructure-centric events like Interop. But it's important that we discuss not only the basics, but also how the emerging best practices get put into production in a way that not only benefits developers and the applications, but also gives operators and infrastructure teams a model to keep those applications running and secure.


May 13, 2017  2:13 PM

Managing Containers in Production

Brian Gracely
CaaS, containers, DevOps, Docker, Kubernetes, Linux, OpenShift, PaaS, pipeline, SDN, Storage

Next week at Interop 2017 in Las Vegas, I'm giving a talk about managing containers. The focus of the talk is the expanded set of interactions required as engineers move from a single container running on a laptop to containers running in production. It looks at how much developers need to know about containers to get their applications working, and what operations teams need to plan for in terms of container scheduling, networking, storage and security.

Breaking down the talk, there are three critical messages to take away.

The Need for Container Platforms

Platforms that manage containers have been around for quite a while (the artist formerly known as "PaaS"), just as Linux containers have been around for much longer than Docker. But as containers become more popular with developers, as the native packaging mechanism for applications, it becomes increasingly important that operations teams have the right tools in place to manage those containers. Hence the need for container platforms, and the emergence of technologies like Kubernetes.

The Developer Experience Matters

As platforms transition from PaaS to CaaS, or some combination of the two, it's important to remember that the container is just a packaging mechanism for applications. It's critical to make sure that developers are able to use the platform to rapidly build and deploy applications. This could mean that they package the application on their laptop using a container, or push their code directly into a CI/CD pipeline. In either case, the container platform must be able to take that application and run it in a production environment. The platform shouldn't restrict one development pattern or another.

Operational Visibility is Critical

While containers bring some interesting properties around packaging and portability, it’s important for operational teams to realize that they have different characteristics from virtual machines. Containers may run for a few seconds or for long periods of time. This means that the management, monitoring and logging tools have to be re-thought in order to be valuable in a container-platform environment.


