From Silos to Services: Cloud Computing for the Enterprise


October 6, 2018  6:27 PM

New Cloud-Native Learning Models

Brian Gracely
AWS, Azure, learning, Online training, Public Cloud

Back in the day, when technologies like server virtualization were starting to change the landscape of IT (around 2007), it was not unusual for someone to build out a home lab to test new technologies. This meant buying a few servers, some SAN or NAS storage, a network switch, and likely a bunch of software licenses to make it all work. It wasn’t unusual for people to spend $5,000 to $10,000 on these home labs, on top of the ongoing electrical and maintenance costs of the system.

But as more cloud-native technologies have emerged, both in open source communities and via public cloud services, a new trend has emerged in how people learn and test. As you would expect, testing environments are moving to the public cloud, along with a set of online services that require nothing more than a web browser.

Public Clouds

All of the major public clouds (AWS, Azure and Google Cloud Platform) have a “free tier” that allows users to try any service up to a certain capacity. These free tiers are a great way to test out new services, or potentially run some lightweight applications. The free tiers have all the same features as the paid tiers, just with fewer available resources.
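
As a rough sketch of how little is needed to start experimenting, the snippet below uses Python and boto3 to launch a free-tier-eligible EC2 instance. It assumes AWS credentials are already configured, and the AMI ID is a placeholder you would replace with a real image in your region.

    # Minimal sketch: launch a free-tier-eligible EC2 instance with boto3.
    # Assumes AWS credentials are configured; AMI_ID is a hypothetical placeholder.
    import boto3

    AMI_ID = "ami-0123456789abcdef0"  # replace with a valid AMI for your region

    ec2 = boto3.resource("ec2")
    instances = ec2.create_instances(
        ImageId=AMI_ID,
        InstanceType="t2.micro",   # t2.micro is free-tier eligible
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)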

In addition to free cloud resources, the public cloud providers also offer various levels of training resources – AWS, Azure, GCP. Some of these courses are tutorials, while others are quick-starts to get basic functionality working in a live environment.

Public Cloud Certifications

Another popular service targeting certifications for the public cloud services is A Cloud Guru. We learned how they built their service on serverless technologies on The Cloudcast. Initially focused on training for basic AWS services and AWS certifications, it has expanded its offerings to include other cloud services (Azure and Google Cloud), as well as starter courses on topics like Alexa Skills or serverless application programming.

Learning Open Source Cloud-Native Skills

Yet another platform that is gaining popularity is Katacoda. We recently spoke with its creator, Ben Hall, on The Cloudcast. Katacoda provides interactive tutorials for emerging technologies such as Kubernetes, Docker, Prometheus, Istio, OpenShift, Jenkins and GitHub. The platform lets users work from their browser as if they were directly on a machine, with direct CLI access. One of the highlights of Katacoda is that users can follow the step-by-step tutorials exactly, or be flexible in how they use the platform. This makes it easy to learn, and to make mistakes, without having to completely start a module over.

All of these new platforms are making it much easier, and less expensive, for both beginners and experts to learn and trial these emerging technologies.

October 6, 2018  11:10 AM

The Need for Internal Change Advocates

Brian Gracely
Agile, DevOps, Digital transformation

For the last 7 or 8 years, the list of companies that have attempted to transform their business through technology has grown very long: companies in every industry and every geography. Early on, it was called “Cloud First” and attempted to emulate how Silicon Valley companies ran their IT departments. Over time, it has evolved into things like “Agile Development,” “DevOps” or “Digital Transformation”. At the core of all of these changing dynamics is the intersection of new technology that enables faster software development, and the cultural/organizational challenge of aligning to more frequent deployments. These topics are discussed at many DevOps Days events around the world. Vendors such as IBM, Red Hat and Pivotal (and others) have programs to help companies reshape their internal culture and software development processes. Consulting companies such as Thoughtworks (and many other large SIs) have also specialized in these broad transformational projects.

In researching many of the success stories, there are lots of examples of companies that were able to get critical pieces of technology to work for them. These newer technologies (e.g. CI/CD pipelines, automated testing, infrastructure-as-code, Kubernetes, serverless, etc.) are all centered around automating a previously manual process. They allow companies to more tightly couple steps in a complex software build or release process. And they allow companies to do these tasks in a repeatable way that creates predictable outcomes. The implementations of these technologies, which can often take 6 to 18 months to get fully operational depending on existing skills or urgency of need, often create stunning results, like those we see in the State of DevOps Report.

But one critical element that is often overlooked, or rarely stated explicitly as a cause of success, is the role of internal marketing and evangelism of the successes along the way: the storytelling behind the small steps of progress in the transformation.

For many engineering teams (ITOps or Developers), the idea of “storytelling” about changes often seems awkward or unusual. From their perspective, that’s “fluff” or a “worthless” activity that isn’t really addressing the difficult challenges of engineering. And yet so many transformations stall because not enough people within the organization know about the changes that are happening. These IT organizations are not that different from a technology company that’s trying to sell products in a crowded marketplace. IT organizations already have a lot on their plate, lots of existing goals to achieve, and sometimes they are just not that interested in change if it doesn’t impact them individually.

The way that technology vendors offset this noise in the market is through marketing and evangelists/advocates. People that are trained to listen to the market about challenges, and showcase how specific products/technologies can solve problems. These evangelists/advocates are often not just talking about the technology, but sharing stories about how their customers were able to achieve success or overcome challenges. This is a function that many internal IT organizations would be smart to emulate.

A few resources that I’ve found useful in learning how to construct ways to convince people that change is beneficial are:

Any of the books by the Heath Brothers.

  • “Made to Stick” talks about why some new ideas have staying power and others fail.
  • “Switch” talks about how to convince people to make changes when it seems like it’s nearly impossible to get people to change.
  • “The Power of Moments” does a great job of explaining why it’s so important to elevate the importance of certain moments and activities to help inspire people to achieve big things.

Another good, recently released book to read is “The Messy Middle” by Scott Belsky. The book looks at how to navigate through the peaks and valleys of new projects.

Both of these sets of resources will seem unusual to many in IT, but they fundamentally look at how to manage through change, and establish communication models that help get other people to want to participate and achieve common goals.

So if your IT transformation project is stalling, it’s worth taking a look at whether you’re spending enough time getting others involved and excited about the project.


August 31, 2018  4:14 PM

Virtualization Admin vs. Container Admin

Brian Gracely
containers, Docker, Google, Kubernetes, Red Hat, Virtualization, VMware

Source: @Wendy_Cartee (via Twitter)

This past week, while scrolling through Twitter, I saw an image (right) with the caption “Get a crash course on Containers and Kubernetes 101”. The image was from VMworld 2018 and the room was pretty full. It seemed like lots of virtualization admins were now interested in containers and Kubernetes, very new concepts at a VMworld event. Having been heavily involved in this space for the last 3+ years, and seeing thousands of container enthusiasts attend events like DockerCon, KubeCon, Google NEXT or Red Hat Summit, I had to remind myself that the technology is still in its early days. And during these early days, it’s important to provide 101-level content so people can learn and quickly get up to speed on new technologies. The great thing about today’s world vs. when many of these VM admins were learning about virtualization is that we’re no longer bound by the need to buy a bunch of expensive physical hardware or maintain a home lab. There are great learning tools like Minikube that run on your laptop, and online tutorials for basic and advanced Kubernetes scenarios.
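
To give a feel for how low the barrier is today, here is a minimal sketch (assuming Minikube is already running locally and the official Kubernetes Python client is installed) that connects to the local cluster and lists the pods in the default namespace:

    # Minimal sketch: talk to a local Minikube cluster with the official
    # Kubernetes Python client (pip install kubernetes). Assumes `minikube start`
    # has been run and ~/.kube/config points at that cluster.
    from kubernetes import client, config

    config.load_kube_config()          # reads ~/.kube/config (Minikube context)
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace="default").items:
        print(pod.metadata.name, pod.status.phase)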

So with the goal of helping VM admins learn more about containers and Kubernetes, we decided to focus this week’s PodCTL podcast on how their worlds are different and similar. This wasn’t intended to be a “which one is better?” comparison, but rather a look at how much similarity there is, and how many new concepts a VM admin would need to learn or adjust to in order to succeed with containers.

We discussed a number of areas:

  • Control Plane
  • Content Repository
  • Data Plane (Hosts, OS, Apps)
  • Networking, Storage, Management, Logging, Monitoring
  • Network
  • Storage
  • Security
  • Backups
  • What is automated (by default vs. tooling)
  • Availability (models)
  • Stateful vs. Stateless apps
  • Automated (integrated) vs. Manual tasks
  • OS and Software Patching

Like any new technology, there is definitely a learning curve, but the tools and resources available to learn in 2018 are far better than they were for learning virtualization in 2008-2011. In terms of priorities, understanding both containers and Kubernetes is probably something that virtualization admins should place high on their lists for 2018-2019, as more and more developers and even packaged applications will be using containers.

Take a listen and let us know what areas we missed, or areas you think we may have gotten wrong. Are you trying to learn more about containers and Kubernetes? Share with us in the comments how your journey is going.


August 19, 2018  10:02 AM

The Introvert vs Extrovert struggle

Brian Gracely
"public speaking", dinner, Meetup, Podcast

Over the last 7 years, I’ve recorded over 400 podcasts (here, here), visited 100s of companies, spoken publicly at dozens of events and have been a spokesperson at my job. After many years in the industry, talking about technology in public is not a fear. Sometimes there is a perception that doing those types of activities would be a sign of an extroverted personality. But in my experience, I’ve found that many of us that do these types of things, at least in the tech industry, tend to skew more towards introverted personalities.

The thing about speaking on technical topics is that they are typically well-defined and bounded. “Please come speak to us about Topic X, Y or Z, and take some questions.” It may also involve helping people solve complex challenges, but again, that is somewhat of a well-defined process – challenges, constraints, options. Being successful in these environments can make people appear to skew towards extroverted personalities.

But on the other hand, our industry is full of unstructured events and activities within those events: meetups and happy hours and dinners. For introverts, these events can be crippling. The ability to make small talk is an acquired skill, and one that can seem as complicated as learning a new technology. For some people, the small talk needed to fit into these environments may seem like a waste of time. For others, it’s easy to get intimidated by crowds of more extroverted people. And it’s not unusual for people to expect you to carry the dinner conversation after you carried the work-meeting conversation all day.

So if you’re extroverted in how you talk about technology, but introverted in the surrounding activities, how do you survive? It’s not a simple question with a simple answer. To some extent, the ability to develop your extroverted skills can be both a survival technique and a way to open up career opportunities. On the other hand, introverted tendencies can limit your ability to be trusted by others to be involved in important activities. Can they involve you or trust you in an important business dinner if you clam up when making small talk?

For myself, I’ve found that the interview process in podcasts has helped overcome some of the introverted limitations. It forces me to not only listen to the conversation, but also be constantly thinking of the next question or a follow-up. But it’s by no means a perfect exercise, because those conversations always start with something structured: a known topic. Small talk at a dinner or meetup may not always have that element. This is where you need to take a risk. Throw out a topic, maybe about a local event, or a movie, or just ask other people about themselves. People usually love to talk about themselves. From there, be a good listener and ask follow-up questions. Or offer some related experience. Over time, the fears of the introverted can improve. But it takes time, and it takes practice and repetition. And being uncomfortable quite often.

And after all those repetitions of public speaking, it’s still not easy to get over introverted tendencies. It’s a skill that I wish I could master, and I’m envious of those to whom it comes naturally. But given how connected tech communities are, it is eventually a skill that needs to be learned, no matter how painful and uncomfortable it can be. It’s OK to walk into a large room of strangers and have an immediate instinct to want to walk out. It’s better to learn how to overcome that fear in smaller groups, and hopefully that experience can translate into the larger room or more unknown environment.


August 15, 2018  9:55 PM

Knative emerges in the Serverless world

Brian Gracely
containers, FaaS, Google, IBM, Kubernetes, Microservices, PaaS, Pivotal, Red Hat

During Google Cloud NEXT 2018, a new open source project called “Knative” was announced. The Google Cloud website describes the project as a “Kubernetes-based platform to build, deploy, and manage modern serverless workloads.”

Beyond Google, several other vendors that were involved in the initial release put out statements about Knative – including IBM, Pivotal, and Red Hat. Other companies such as Heptio gave it a test-drive to explore the features and capabilities.

Is Knative Serverless for Kubernetes?

Before diving into what Knative does, it might be good to look at some of the serverless landscape for Kubernetes. We dug into it on PodCTL podcast back in February. In essence, the serverless on Kubernetes landscape looks like this:

  • Many companies are interested in running Kubernetes platforms, because they want to containerize applications, and because it can be a multi-cloud / hybrid-cloud platform.
  • Some companies are interested in having the capability of building their applications as functions, instead of using models such as monoliths or microservices.
  • There really aren’t any serverless standards, so a few groups (including the CNCF) have started focusing on this.

Is Knative Serverless for Kubernetes (asking again)?

Knative is actually made up of three parts:

  • Serve – The “compute” part, where the functions are run. This is typically going to be the underlying containers and Kubernetes. This is also where we’ll see several of the existing Kubernetes serverless frameworks (e.g. Fission, Kubeless, OpenWhisk, Riff, etc.) plug into Knative.
  • Build – This takes code and packages it up into containers, sort of like OpenShift S2I or Heroku Buildpacks do today.
  • Events – These are the triggers for the functions. This could be a file put in a storage bucket, or a database entry updated, or a stream of data, or lots of other things.

So looking at the bigger picture, Knative will be part of the broader ecosystem of projects that will deliver services and capabilities for applications. Kubernetes will be the core orchestration engine, and can run on top of any cloud environment. Istio delivers granular routing and proxying functions, as well as application-level capabilities for microservices. And Knative can deliver serverless functions that integrate with events.
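
To make the Serve/Build/Events split a bit more concrete, here is a hypothetical sketch of the kind of function code a Build step could package into a container and the Serve component could run, with an event (say, a file landing in a storage bucket) arriving as an HTTP POST. The route and payload shape below are illustrative assumptions, not Knative’s actual eventing contract.

    # Hypothetical sketch of a "function" that a Knative-style platform could
    # build into a container and invoke when an event arrives as an HTTP POST.
    # The /event route and JSON payload shape are illustrative assumptions.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/event", methods=["POST"])
    def handle_event():
        event = request.get_json(force=True)   # e.g. "object added to bucket"
        print("Received event:", event.get("type"), event.get("object"))
        return jsonify({"status": "processed"}), 200

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)     # containers typically listen on 8080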

 

It’s important to remember that Knative is a brand new project and will take time to mature. And it’s primarily designed to be used with new applications: serverless, Functions-as-a-Service and microservices. So you don’t need to go running off to your CIO and tell him/her that you need to move all of your existing applications to Knative just because there were a few cool demos at Google NEXT. It’s yet another tool/framework in the toolbox, and can be helpful for a specific set of applications and problems.


August 1, 2018  3:13 PM

Evolving Approaches to Hybrid Cloud

Brian Gracely
AWS, Azure, Data Center, FlexPod, Google, Hybrid cloud, Kubernetes, Microsoft, Public Cloud, Red Hat, Vblock, VMware

For many years, some pundits have argued that it was only a matter of time before all applications ran in the public cloud. Some said it would be by 2020 or 2025. For some of the public clouds, there was “the cloud” (public cloud) and “legacy data centers.” But perspectives on that have started to change, starting with some of the strongest critics of concepts like hybrid cloud: the public cloud providers themselves.

For the sake of argument, I’m not going to spend any time debating the differences between “hybrid cloud” and “multi-cloud.” Use whatever definition suits your needs. It’s some combination of (two or more) clouds being used in conjunction to deliver business applications. Most companies use many clouds today to deliver services for their business, internally and externally, through a combination of data centers, co-location, and IaaS/PaaS/SaaS services.

The Hardware + Virtualization Approach

Going back several years, to around the 2010 timeframe, you started seeing some vendors bring to market a stack of hardware and virtualization software (e.g. VCE Vblock, NetApp FlexPod, etc.) that could be sold for on-premises data centers or into a non-webscale service provider / cloud provider. The idea was that virtualization was already widely used in Enterprise IT, and it could be delivered as-a-Service (e.g. IaaS-type services) consistently from either environment. This model gained some adoption as “converged infrastructure” in Enterprise IT data centers, but it never gained traction with cloud providers and was not adopted by webscale cloud providers (e.g. AWS, Azure, Google).

The Virtualization Approach

As the virtualization companies realized that being tied to Enterprise hardware would not work in a hybrid world, the approach evolved to be virtualization-centric and virtualization-only. This is the approach VMware has taken, evolving from vCloud Air to VMware on AWS. It has the advantage of retaining technology and tools that many Enterprise customers know and understand. It has the disadvantage of only working in specific environments (e.g. AWS, but not Azure or GCP or Alibaba).

The Container/Kubernetes Approach

Since containers are not tied to a specific virtualization technology, but rather are an element of the Linux OS (and eventually Windows OS), they are a more portable technology than virtual machines (VMs). This allows containers to be run consistently in any cloud environment that supports the Linux OS, where the container OS and host OS are aligned. Combine this with the growing adoption of Kubernetes, both as software (e.g. Red Hat OpenShift) and as cloud services (e.g. Azure Kubernetes Service, Google Kubernetes Engine), and you have a layer of interoperability that is more wide-reaching than previous solutions. This approach may be managed entirely by Enterprise customers, or it could begin to integrate with managed (or cloud-native) services from the cloud providers.
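
As a small illustration of that interoperability layer, the sketch below (using the Kubernetes Python client, with made-up context names) points the same code at an on-premises cluster and a managed cloud cluster just by switching kubeconfig contexts:

    # Sketch: the same client code runs against different clusters (on-prem
    # OpenShift, GKE, AKS, etc.) just by selecting a different kubeconfig
    # context. The context names below are made up for illustration.
    from kubernetes import client, config

    for context_name in ["onprem-openshift", "gke-prod-cluster"]:
        config.load_kube_config(context=context_name)
        version = client.VersionApi().get_code()
        print(context_name, "is running Kubernetes", version.git_version)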

The “Extend the Public Cloud” Approach

While the Container/Kubernetes approach has been around since roughly 2014-2015 and is gaining traction, another approach is beginning to emerge, this time from the public cloud providers, which are attempting to reach down into Enterprise data centers. The first to attempt this was Azure, with AzureStack, Microsoft’s approach to bringing some Azure services down to on-premises AzureStack resources.

The newer approaches have been AWS’ Snowball with EC2 support, and Google Cloud’s GKE On-Prem.

AWS Snowball delivers a hardware form factor with compute and storage, able to run both AWS Lambda functions and now EC2 computing workloads. The (current) capabilities are limited by the size of the Snowball and the bandwidth connecting the Snowball to AWS cloud services.

GKE On-Prem (which I discussed earlier) will bring a managed Kubernetes offering to customers’ data centers, helping to accelerate adoption of containerized applications.

What’s been interesting to watch with these newer offerings is how they are learning from previous approaches, making some of the same mistakes, and trying new twists on creating consistency between multiple environments.

  • How are the offerings managed?
  • Where does the cloud provider’s responsibility start/stop and the customer’s responsibility start/stop?
  • Are these offerings only targeted at new applications, or will they attempt to integrate with existing and legacy systems and applications?
  • Do the offerings have any dependencies on specific hardware?
  • How does a cloud provider decide which features to bring down from the cloud to the data center? When is it too many? When is it not enough?
  • How are the hybrid cloud services priced?
  • Do these services have a role for SIs or VARs or ISVs, or are they working directly with Enterprise/SMB customers?

These are all difficult questions to answer, and the different offerings will have different answers based on their technology strengths, relationships with customers, delivery capabilities and many more variables.

As of July 2018, hybrid cloud (or multi-cloud) is becoming a real thing. And it’s not just being attempted by traditional Enterprise vendors. Will the public cloud providers be as successful when they don’t control the entire environment? It’s going to be a very interesting space to watch evolve.


July 30, 2018  5:46 PM

Thoughts from Google Cloud NEXT 2018

Brian Gracely
ai, Alphabet, AWS, Azure, containers, Google Cloud, Kubernetes, ml

[Disclaimer: I was invited to the Google Community event on Monday before GoogleNEXT2018]

This past week, Google Cloud held their annual Google Cloud NEXT event in San Francisco. This is an event that I’ve attended each of the past 4 years, and I’ve seen it grow from a small event to one that hosted more than 20,000 people.

The event is always one that leaves myself, and many people that follow the industry (here, here, here), sort of feeling schizophrenic. This happens for a number of reasons:

  • [Good] Everyone knows and respects that Google (Alphabet) has tons of crazy smart people that build crazy impressive technologies. You’d be hard pressed to find anyone on the planet (that regularly uses technology) who couldn’t identify some part of their day-to-day life that depends on some Google technology.
  • [Good] Google continues to push the envelope on cutting-edge technologies like Machine Learning and Artificial Intelligence, including ways to simplify them through technologies like TensorFlow or AutoML. These are incredible technologies that could be applicable to any Enterprise or startup.
  • [Bad] Google/Alphabet has also had its share of dropped services, and Google Cloud (GCP) still aligns itself with Google/Alphabet, so it still has to explain to Enterprise IT customers that it is in this cloud game for the long haul.
  • [Unknown] GCP loves to showcase their customers that have solved really complex problems. The types of problems that Google engineers would love to solve themselves, or help customers work on. But they downplay IT customers that solve common, day-to-day problems that reduce costs or make existing applications faster. Those are boring problems to solve. But they are also boring problems that IT is willing to spend more to solve or improve.
  • [Bad/Unknown] GCP has this habit of telling the market how cool the internal technologies of Google are, many of which aren’t available to customers. Or how incredibly smart their engineers are, but leaving customers to believe that they can’t achieve significant improvement without Google-level talent.
  • [Unknown] Google/Alphabet just announced massive revenue numbers for the most recent quarter, but Google Cloud still does not break out any revenue numbers, so it’s difficult to tell how large or fast it’s growing.

So for 3-4 years, we’ve watched GCP try and figure out the balance between Google engineering, Google tech, GCP vs. Google branding, legacy IT and just dealing with the fact that Enterprise IT is about dealing directly with people vs. interacting with automated systems or algorithms.

For this year’s show, I thought they improved in several areas:

  • The keynotes were less about the behind-the-scenes parts of Google (e.g. how big their internal data centers are) and more about how the technology can be used to solve problems.
  • They continue to showcase more people that work to create GCP, without making it about those people. They are bringing a set of voices that are getting better at speaking a language that some Enterprise IT groups can understand.
  • They continue to make AI/ML technologies more accessible and easier to use, and they are applying them to more and more “common” use-cases.
  • They do an excellent job of highlighting women and people of color in leadership roles within GCP, and as experts within a wide variety of fields. They do this as well or better than any company in the cloud computing space.
  • The messaging, positioning and alignment between GSuite and GCP still isn’t completely cohesive (different buyers, different audience), but some of the technologies being embedded within GSuite look very interesting.
  • They took a chance and began to show a hybrid cloud approach to the market (with GKE On-Prem, which we discussed on PodCTL #43).

Those were all promising areas as GCP attempts to grow the business and connect with new types of customers. But it wasn’t all steps forward:

  • The keynote lacked an overall perspective on how the business is doing:  #customers, growth rates, revenues, the breadth of the GCP portfolio.
  • The keynote was short on customer references, and more importantly, customers speaking in their own voice (not just an interview). This was strange because customer references were all over the show floor.
  • GCP does some very innovative things with pricing, performance, networking and security. These are cornerstone elements of Enterprise IT. They need to be reinforced as differentiators for GCP.

I often come back to my write-up from an earlier Google Cloud NEXT, and ask myself if GCP primarily wants to be an evolution of IT (move anything to the cloud), or mostly wants to be part of the new things customers do with technology. They tend to be positioned towards the latter, which is fine in the long run, especially if your parent company essentially prints cash. But they are leaving so much revenue behind to AWS and Azure by downplaying the “boring” that I often wonder how far behind they will fall, and whether they will be able to play catch-up to make the super-cool things like AI/ML the center of their business.


July 7, 2018  2:33 AM

Cloud computing definitions are no longer relevant

Brian Gracely
AWS, Azure, CaaS, Cloud Foundry, containers, FaaS, Google, IaaS, OpenShift, PaaS, SaaS, Serverless computing

Back in 2011, the National Institute of Standards and Technology (NIST) created the Definition of Cloud Computing. At the time, the definitions of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) seemed reasonable, especially given the existing offerings in the market. At the time, AWS mostly offered compute and storage resources, so AWS was an IaaS. Heroku, Cloud Foundry and OpenShift had a platform for developers to push code, so they were PaaS. And things like WebEx or Salesforce.com or Gmail were just on-demand versions of software that were used within a company, so they were SaaS.

Fast-forward 7 years and those definitions no longer seem very relevant. For example, neither the concept of “containers” (or Containers-as-a-Service, CaaS) nor “serverless” (or Functions-as-a-Service, FaaS) is defined or mentioned in that NIST definition.

On one hand, someone might argue that the NIST definition could just be updated to add CaaS and FaaS as new -aaS definitions. On the other hand, someone could argue that CaaS is sort of like IaaS (containers instead of VMs), or that PaaS platforms often used containers under the covers, or that FaaS is essentially the same definition as PaaS, just more granular about what is “pushing code” vs. “pushing a function”: they are both just chunks of software. Either of those perspectives could be considered valid. They could also make things confusing for people.
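
To make that “chunks of software” point concrete, here is a hypothetical side-by-side sketch: the same bit of logic written as a FaaS-style handler (the shape AWS Lambda expects for Python) and as a tiny web app you might push to a PaaS. The granularity differs; the software doesn’t change much.

    # The same logic as a "function" and as an "app" -- both are just chunks
    # of software; only the packaging and granularity differ.

    # FaaS style: a handler invoked per event (AWS Lambda's Python signature).
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}

    # PaaS style: a small web app you push as code and let the platform run.
    from flask import Flask, request
    app = Flask(__name__)

    @app.route("/hello")
    def hello():
        name = request.args.get("name", "world")
        return f"Hello, {name}!"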

Which brings us back to my original claim – the definitions are no longer really all that useful. AWS is no longer just IaaS. OpenShift is no longer a PaaS. And many things could be considered SaaS, from Salesforce to Slack to Atlassian to a Machine-Learning service on Azure or Google Cloud.

We already see the lines getting very blurry between the CaaS and PaaS market. Should a Kubernetes product only be a CaaS platform (if it allows containers to be deployed), or can it also be a PaaS platform if it enables developers to also push code?

The last point about SaaS was recently raised after several people listened to our Mid-Year 2018 show, when we discussed which companies are leading in SaaS. It sparked a conversation on Twitter, where a number of people threw out “AWS”. These comments surprised me, as I had previously not thought of AWS as a SaaS provider, but mostly as an IaaS and PaaS provider with a bunch of “tools” or “services” that could be connected together to build applications. But the more I thought about it, many of the services of an AWS (or Azure or GCP) are just software, being delivered (and run/managed) as a service. Everything from a VM to storage to an authentication service to an AI/ML service is just software being delivered as a service. It’s just a bunch of exposed APIs. And developers, or non-developers, can use them any way they need to create a simple service or a complex service. They are no different than a service like Netlify or Auth0 or Okta or GitHub (or thousands of others). When you start thinking about things from that perspective, it sort of explains why Gartner’s IaaS MQ now only has 6 web-scale clouds listed, but it also makes the “IaaS” part sort of irrelevant (from a classification perspective).
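
A quick sketch of that “it’s just exposed APIs” idea: with a few lines of Python and boto3 (assuming credentials are configured), storage and a managed AI/ML service are consumed in exactly the same way, as API calls.

    # Sketch: storage and an AI/ML service consumed the same way -- as API calls.
    # Assumes boto3 is installed and AWS credentials are configured.
    import boto3

    s3 = boto3.client("s3")
    for bucket in s3.list_buckets().get("Buckets", []):
        print("bucket:", bucket["Name"])

    comprehend = boto3.client("comprehend")
    result = comprehend.detect_sentiment(
        Text="Cloud definitions are getting blurry.", LanguageCode="en"
    )
    print("sentiment:", result["Sentiment"])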

So the relevant questions are now:

  • Where do you want to run an application? How critical is it to control the environment?
  • What abstractions do you either want to provide to developers or does the operations team want to maintain?
  • If using multiple clouds, or services, how important is it to have consistency of abstraction and operations?

As with every technology, we learn as the technologies evolve, and it’s worth questioning whether we should still use the same taxonomies to describe what is trying to be accomplished. In the case of the cloud *-aaS definitions, the old definitions no longer apply.


June 28, 2018  12:59 PM

Digital Transformation Requires Good Storytelling

Brian Gracely
Digital transformation, Marketing, Meetup

Photo Credit: http://bit.ly/2twDSSP

There’s an interesting dynamic happening in technology circles these days. On one hand, there is the long-standing animosity of technologists towards “marketing”. On the other hand, there is an accelerating desire by business leaders to transform and potentially disrupt how they compete in their given markets. The technologists live in a world of constant technology change, but often bemoan change being forced upon them. The business leaders are using terms like “digital transformation” to articulate the changes they hope to enact through technology, organizational and process change. Both sides are facing change, but far too often they aren’t on the same page.

In the past, getting funding for a new IT project was often based on the ability to build a business case that demonstrated it would reduce costs, or improve the speed of an existing process. In some cases, it was just a regularly scheduled upgrade because the equipment had been fully depreciated. But the new projects are different; they require a business case built on a new type of calculus. The new model requires that the company measure its ability to do something very new, often without the required skills in place to accomplish the goals. To a certain extent, this requires a leap of faith, which means that many people are putting their reputation (and maybe their jobs) on the line with the success or failure of the project.

So where does this “storytelling” come into play? On one level, it’s the normal activity of getting “buy-in” from different levels of the organization. This means going to the various stakeholders and making them believe that you’ll help them achieve their individual goals with the project. But on a different level, this storytelling requires someone (or many people) to create a vision, and get that vision to permeate across the company. It’s the type of storytelling that gets groups to want to help make it possible, instead of just being a recipient of the project.

We have recently begun seeing more executives exercising their storytelling skills at events, standing on stage to tell their story at a tradeshow or meetup. This didn’t happen 5+ years ago, but as more companies are recruiting top-level engineers (developers) to staff those new projects, it is now happening more frequently. These executives are evolving their ability to talk about their vision, their successes, their challenges/failures, and how they drove change within their company. But for successful companies, it’s not just about an executive giving a good presentation. It’s about them creating a culture that accepts that change is not just necessary, but the new normal.

For the technologist that views this as marketing, consider thinking about it this way. These new projects are (typically) creating the new face of the company: the new way the company will interact with customers, partners and the marketplace. These projects are no longer just IT projects; they are the product of the company. Like any successful product, they need effective marketing to succeed. This marketing, or storytelling, has to be done across many groups in the company in order to get the breadth of buy-in needed to help evolve the company. This marketing needs to not only show measurable results, but inspire people that the things once thought too difficult are now possible. In essence, this marketing (or storytelling) is trying to capture the time, attention and resources of people that have many other available choices, some of which might be in direct contrast to this specific project. And just like the marketing of other products, this storytelling needs to not only explain the value of success, but also defend itself against “competitive” claims of alternative approaches or expectations of failure.

So as we get closer to the next decade, it’s become clear that the core of business differentiation is technology. Starting a business from scratch is difficult, but it has the advantage of limited technical debt. Transforming an existing business means that the debt will either remain, evolve, or be eliminated. Making that happen, one way or another, will begin with someone having a vision and telling a story. Having the skills to craft and tell that story will become ever more critical as people attempt to move existing businesses forward.


June 26, 2018  3:08 PM

Kubernetes is the Platform. What’s next?

Brian Gracely
APIs, CaaS, Google, Kubernetes, OpenShift, PaaS, Public Cloud, Service Broker

This past week, I gave a webinar titled “Kubernetes is the Platform. Now what?”, based on this presentation. I thought it might be useful to provide some additional context beyond what could be explained in 30 minutes. The purpose of the presentation was to explain how Kubernetes has evolved over the past couple of years, what it is capable of doing today, and where new innovation is happening around the Kubernetes platform.

A Brief History Lesson on the evolution of the Platform market

Ever since venture capitalist Marc Andreessen uttered the phrase “software is eating the world” to the Wall Street Journal in 2011, companies of all sizes and maturity levels have been in a race to acquire software development talent. That talent enables startup companies to disrupt existing industries and business models, and that talent can also be used by existing companies to reshape how they digitally interact with customers, partners and their markets.

In order to succeed in the race to become a software-centric business, one of the most critical pieces to have in place is an application development and deployment “platform”. In today’s world, this platform is the digital equivalent of the supply chain and factory that enabled successful businesses of the 20th century. The goal of this platform is not only to simplify the ability for developers to rapidly build and deploy new applications and updates, but also to securely scale those deployments as demand grows and changes.

At the time of Andreessen’s original comments, many companies and communities were trying to solve this problem with Platform-as-a-Service (PaaS) platforms. This included Heroku, OpenShift, dotCloud, Google App Engine, Cloud Foundry, AWS Elastic Beanstalk and several others. While PaaS platforms gained some traction, they suffered from several significant challenges:

  • Tied to one specific cloud platform
  • Limited developer applications to specific languages or frameworks
  • Used proprietary or platform-specific application packaging models
  • Provided limited visibility for troubleshooting applications
  • Provided limited visibility to the operators of the platform
  • In some cases, not open source or extensible

Many of these limitations were resolved with two core technologies – Linux containers (docker) and open source container orchestration, specifically Kubernetes. The combination of these two building blocks set in motion where the industry is today, with a unified architecture that allows a broad set of applications to run, and the foundation for continued innovation.   

As Kubernetes has evolved since 2015, it has been able to support a wide variety of application types, from new cloud-native applications to existing applications to big data analytics and IoT. The embedded deployment models within Kubernetes allow it to be intelligent and properly manage the deployment and availability of this variety of application types. This ability to support so many applications on a single platform results in better ROI for the platform, but also simplifies overall operations. And as Kubernetes has evolved, matured and stabilized, it has allowed new innovation to happen around Kubernetes to improve the developer experience, support even more application types, and provide better operations for applications running on Kubernetes.
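
As a small illustration of those embedded deployment models, the sketch below (using the Kubernetes Python client, with a hypothetical image and names) declares a Deployment with three replicas; Kubernetes then takes care of scheduling, restarts and rolling updates rather than the platform team doing it by hand.

    # Sketch: declaring a Deployment and letting Kubernetes manage replicas,
    # restarts and rolling updates. Image and names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,                                   # desired state
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(
                        name="web",
                        image="registry.example.com/web:1.0",  # hypothetical image
                    )]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)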

 

Adding Services to the Kubernetes Platform

Beyond the core capabilities of Kubernetes, the community has seen opportunities to innovate around important areas for application workflows, security, developer tools, service brokers and many other areas. This has led to new projects within the Cloud Native Computing Foundation (CNCF) that augment Kubernetes.

 

Enabling Services off the Kubernetes Platform

While Kubernetes has done an excellent job of enabling many applications to run in containers on the platform, the world still doesn’t run entirely on Kubernetes. This means that there needs to be a common way to reach services that run off the platform. This is where the Kubernetes community has innovated around the Open Service Broker, allowing integration of 3rd-party services through a broker model. This allows applications to integrate with off-platform services, while Kubernetes operators still have visibility into usage patterns. Brokers for services from AWS, Azure and Google Cloud already exist, as well as brokers for Ansible Playbooks. In the future, we expect that the number of brokers will continue to grow, both from cloud providers and built independently to serve specific business needs.

 

Extending the Kubernetes API via Custom Resources

At some point in its evolution, every project must decide how broad its scope will be. Every project wants to be able to add new functionality, but this must always be balanced against future stability. While the Kubernetes community is still innovating around the core, it made a conscious decision to make the Kubernetes API extensible, allowing new innovations to be Kubernetes-compatible without expanding the Kubernetes core. This extensibility comes via Custom Resource Definitions (CRDs), and it is already allowing significant extensions to Kubernetes. For example, most of the “Serverless” or Functions-as-a-Service (FaaS) projects, such as Kubeless, Fission, OpenFaaS, Riff, etc., integrate with Kubernetes through CRDs.
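
For a rough sense of what that looks like from the client side, the sketch below uses the Kubernetes Python client to create an instance of a hypothetical “Function” custom resource. The group, version, kind and spec fields are invented for illustration (each FaaS framework defines its own CRDs), but this is the general pattern once a framework’s CRDs are installed.

    # Sketch: creating an instance of a hypothetical "Function" custom resource.
    # The group/version/kind and spec fields are invented for illustration.
    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    function = {
        "apiVersion": "faas.example.com/v1alpha1",   # hypothetical API group/version
        "kind": "Function",
        "metadata": {"name": "hello"},
        "spec": {"runtime": "python3.6", "handler": "hello.handler"},
    }

    custom.create_namespaced_custom_object(
        group="faas.example.com",
        version="v1alpha1",
        namespace="default",
        plural="functions",
        body=function,
    )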

 

Simplifying Operations with Operators

While Kubernetes does include powerful and granular “deployment” models, those models don’t include all the things that complex applications might need for Day 2 operations. To help fill this gap, the Operator Framework was created to enable applications not only to be deployed (directly or in conjunction with other tools, such as Helm charts), but also to codify the best practices for operating and managing those applications: in essence, building automated operations around those applications. The Operator Framework can be used for core elements of the Kubernetes platform (e.g. etcd, Prometheus, Vault), or for applications that run on the Kubernetes platform (e.g. many examples here). ISVs are already beginning to adopt the Operator Framework, as they realize it allows them to write their operational best practices once for Kubernetes, which then allows their application operator to run on any cloud that has Kubernetes.
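
Real operators are usually built with the Operator SDK (in Go, or from Ansible playbooks or Helm charts), but the underlying idea is a control loop: watch the resources you care about and reconcile actual state toward desired state. A minimal, illustrative sketch of that loop in Python, watching a hypothetical “Database” custom resource, might look like this:

    # Illustrative control loop in the spirit of an operator: watch a
    # hypothetical "Database" custom resource and reconcile toward its spec.
    from kubernetes import client, config, watch

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    def reconcile(db):
        # Placeholder: a real operator would create/scale workloads, run
        # backups, handle upgrades, etc., based on db["spec"].
        print("reconciling", db["metadata"]["name"], "->", db.get("spec"))

    w = watch.Watch()
    for event in w.stream(
        custom.list_namespaced_custom_object,
        group="db.example.com",          # hypothetical CRD group
        version="v1alpha1",
        namespace="default",
        plural="databases",
    ):
        print("event:", event["type"])
        reconcile(event["object"])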

 

Kubernetes – A Unified Platform for Innovation

When all of these elements are put together, it becomes clear that Kubernetes has not only established itself as the leading container orchestration standard, but also as the foundation of a unified platform for innovation. The core Kubernetes services are able to run a broad set of business applications, and the extensibility is enabling innovation to happen both on the platform and off the platform. This unified approach means that operations teams will be able to establish a common set of best practices. It also means that Kubernetes-based platforms, such as Red Hat OpenShift, have created the application platform that Andreessen discussed nearly a decade ago as critical for any business that wants to be a disruptor rather than one of the disrupted.


