One of the most frequent questions asked by IT organizations is, “Which applications should run in the cloud?” While there are no definitive answers, and various people will try to sort applications into neat categories, it remains a question that many IT organizations keep asking.
To some extent, the answer often follows how the person answering makes money in the industry. Are they aligned more with public cloud (and software developers), or more with private cloud (and often hardware upgrades)?
Today, as I scrolled through my Twitter timeline, I saw two opinions that got me thinking that maybe there will be two schools of thought going forward.
One school of thought was that most applications need to live on-premises, because that’s where the majority of “users” reside, and that users need to be close to the applications. I can understand this mindset if someone sells technology that primarily lives on-premises, or they are primarily focused on IT-centric applications (Microsoft, SAP, Oracle) or things like VDI. While these applications might not live on a mainframe, they tend to adopt the mainframe mindset of focusing on local applications, local responsiveness and very strict response-times and availability levels.
Another school of thought is that most applications will eventually live in the public cloud. Recently, I’ve been listening to the Engineers.Coffee podcast, which is run by Donnie Flood and Larry Ogrodnek, who built the Bizo business (sold to LinkedIn) entirely on AWS – go download it now! In a recent show, they talked about how easy it was to work with AWS’ Machine Learning (ML) system.
This example builds on the recent news and momentum from Microsoft Azure and Google Cloud Platform around ML and AI.
If the focus of digital business applications is moving to mobile platforms, and ML services can significantly simplify the ability to get “data science” services into an application, there is an excellent chance that we’ll see this area begin to rapidly expand in usage over the next 2-3 years. Even Google’s CEO called out Artificial Intelligence as the future of his company.
What is the Priority of the Business?
By no means is this an exhaustive set of options for customers – the choice isn’t simply between a Mainframe mindset and Machine Learning – but it does highlight a customer’s priorities. Are they looking to optimize existing environments through things like Virtualization and Converged Infrastructure, or are they focusing budgets on market-facing mobile applications that can be augmented by ML?
One of my kids turned 10 years old recently and it got me thinking about some of the things that have happened in their lifetime. My goal in this exercise wasn’t to remember lost teeth or their 1st bike ride, because those events are captured in 1000s of pictures – as parents seem to do these days. Instead, my goal was to put in perspective the amount of their life that was affected by the things that happened since 2006.
Here’s a short list:
- President Obama is elected
- AWS is created
- iPhone is created
- Tesla ships their first car
- Netflix begins streaming content
- Twitter/Facebook/Instagram/WhatsApp/Pinterest are created.
In just ten years, the IT industry, Automotive industry, Telecommunications industry and Media industry have been significantly disrupted. Multiple trillions (yes, that’s “T” trillions) of dollars in economic value have been radically changed in just a decade. In many ways, several of these accomplishments have built upon each other, especially because of AWS and the iPhone. And those technologies were the foundation for funding a US presidential campaign, ultimately leading to two election wins.
The 1990s and 2000s were all about building out the foundation of the Internet. This created an environment where network connectivity was everywhere (HQ offices, remote offices, coffee shops, your couch and eventually airplanes). This removed the barrier of geography for businesses and communities around the world.
The last ten years have built on that ubiquitous foundation, creating a world where all of that information is now in everybody’s pocket, and the barriers to technology resources are never more than a credit card swipe away.
In 2001, we couldn’t imagine a world where 100Mb to your desktop (or house) wouldn’t be bandwidth overkill. Now 1Gb/s fiber to your home is becoming commonplace in many cities. In 2008, we couldn’t imagine a set of applications that would be more compelling than some of the web “mashups” that had emerged out of the Internet bubble. Now $1B+ companies are being built with a storefront that is just an “app” on a 6″ piece of smartphone real-estate.
The access barriers are gone. The infrastructure barriers are gone. And with the spread of open source software, the experimentation barriers are gone. So we’re now at a stage where the ability to be creative far outweighs the ability to fund a great idea.
While I still worry about the future where jobs may be dominated by drones and robots and artificial intelligence, the possibilities to build on those new frameworks are very exciting as well.
The pace of change has never been faster. It’s hard to know what the future will bring, but there’s never been a time when individuals can control their destiny as much as now. The transition from thinking about technology for technology’s sake, to thinking about technology for new business ideas is going to be fun for the next 5-10 years.
In 2014, IT analyst firm Gartner introduced the concept of Bi-Modal IT as a way to explain that legacy applications and next-generation applications are extremely different in how they are developed and operated. The framework suggests that IT should be separated into two groups, each focused on either the existing or future applications.
The advocates seem to align to Gartner’s “Mode 1” group, those invested in existing applications. They argue that existing IT (and business) organizational dynamics will create too much friction to sustain rapid change.
The opponents tend to have experience with the new “Mode 2” applications. They argue that learning to be agile is a critical survival technique for any company hoping to survive the digitization of the 21st century.
How should CIOs think about the digital transition?
While there is no one-size-fits-all approach to the Bi-Modal IT debate, there are three areas that should be considered in any strategy focused on balancing the old and the new:
- Focus on Inward-Facing Context – Internal applications tend to focus on driving productivity for the business. Once deployed, IT organizations focus on making them highly available, measuring their success by uptime metrics. These applications (e.g. Email, CRM, ERP, HCM, Collaboration) rarely provide competitive advantage for the business in today’s world, but they can still be optimized to reduce overall IT costs in the business. This is an area where CIOs should be looking at virtualized, converged infrastructure systems and flash storage to reduce on-going operational costs, as well as a focus on automating repetitive tasks.
- Focus on Outward-Facing Context – As the market evolves and customers’ buying habits change, CIOs need to find ways to manage new and unexpected opportunities. For example, who could have predicted that Pinterest would have such an impact on consumer purchasing habits? As IoT projects emerge, how large will they scale? These opportunities bring unexpected challenges to IT that are outside of their normal comfort zone. They demand that IT understand APIs, Cloud Computing and DevOps principles of agility and frequent updates. These applications offer the opportunity to significantly change the business and bring new levels of differentiation to the market.
- Focus on Bridging the Internal and External Applications – While Inward-Facing and Outward-Facing applications often have very different characteristics, the reality is that many companies need to build a bridge between them. For example, how to bring credit card transactions to a new mobile app? Or how to integrate 3rd-party data from a partner into a new analytics application for the sales team?
The best CIOs will recognize that all three areas will need focus and execution to be successful over the next 3-5 years of technology transitions. Each area brings unique characteristics, and they each need skills that are able to work closely with the other two areas. Keeping these focus areas (or teams) isolated will certainly lead to more complicated integration in the future.
The Bi-Modal IT debate will continue over the next few years, as established companies attempt to compete with new startups in every industry. Successful companies will realize that their legacy can be their advantage, but bridging it with the technology of the future will give them a way to move the business forward.
DevOps is a difficult concept to explain to people. It’s part technology, part operational process and part culture. Take a frequently updating application, sprinkle some Chef or Puppet or Docker on it, and wrap it in a big hug of collaboration … and you’re doing the DevOps. All the cool web-scale companies, like Netflix, Uber and Airbnb, are doing it – but what does it look like in real life?
This got me thinking. Is there a way to explain DevOps that everyone can comprehend? It should include the following things:
- Frequent updates to the application
- Automated test and approval process
- Automated deployment of the application
- Limited friction between the Dev and Ops teams
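The four ingredients above can be sketched as a toy pipeline. This is purely illustrative – the functions below are hypothetical stand-ins, not any specific CI/CD tool:

```python
# Toy continuous-delivery pipeline. Each function is a hypothetical
# stand-in for a real tool (CI server, test runner, deployment system).

def run_tests(build):
    # Automated test gate: every change must pass before it can ship.
    return "version" in build

def deploy(build, environment):
    # Automated deployment: no manual hand-off between Dev and Ops.
    environment["live"] = build
    return environment

def pipeline(change, environment):
    """Take a code change all the way to production, or reject it."""
    build = {"version": change}
    if not run_tests(build):
        return environment  # failed the gate; nothing ships
    return deploy(build, environment)

# Frequent, small updates flow through the same automated path.
env = {"live": None}
env = pipeline("v1.0", env)
env = pipeline("v1.1", env)
```

The point isn’t the code itself – it’s that every release, big or small, travels the same automated test-and-deploy path with no human gatekeeping in the middle.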
The nice thing about mobile applications is that everybody has experience interacting with them (business or consumer versions), unlike ERP or HCM applications. The other thing is that most people don’t really know how mobile applications work (on the back side), so you don’t have to overcome a bunch of objections about “how high availability works”.
Frequent Updates to the Application
In general, people have experienced the frequent updates to mobile applications. They have come to expect that “new updates” means new features or new security updates. They also understand that updates can happen at any time of day, not just during the 2am maintenance windows.
Automated Test and Approval Process
This area doesn’t exactly fit the analogy, as the App Store approval process isn’t completely automated. But it can be used to show how the application is developed and updated before being sent to “a system that manages deployment for users”.
Automated Deployment of the Application – Interaction with a “Platform”
Depending on how the mobile application is built, there is a good chance that it was built using a Mobile Backend as-a-Service (MBaaS) such as Parse or Firebase. In those scenarios, the application is deployed and nobody really needs to think about how the infrastructure and surrounding services are managed. They just sort of work and handle scalability.
Limited Friction between the Dev and Ops Teams
We dug into this a little bit on The Cloudcast – DevOps is a re-org (at 27:30 of Eps.176). Mobile application development often has to break some of the existing rules to happen, but the framework is in place to build and deploy quickly.
I understand that this isn’t a perfect analogy for how DevOps works, but it can be a useful way to showcase the end result. Show somebody the “after” results. For the technical audience, it clearly hides a lot of complexities involved in moving to a more agile model of application development and operations. It’s not perfect. I’m sure you could “wow” someone by showing them a blue-green deployment update with CI/CD integration without any downtime – but that might take 45 minutes and show lots of CLI commands that are difficult to read and interpret.
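For what it’s worth, the blue-green idea itself can be modeled in a few lines – a sketch of the concept only, not a real deployment tool:

```python
# Blue-green deployment in miniature: two identical environments, with a
# router pointer that flips between them. Releases go to the idle side,
# so cutover and rollback are both just a pointer flip.

environments = {"blue": "v1.0", "green": "v1.0"}
router = {"active": "blue"}

def idle():
    # Whichever environment is not currently taking traffic.
    return "green" if router["active"] == "blue" else "blue"

def release(new_version):
    # Deploy to the idle environment, then flip traffic to it (zero downtime).
    target = idle()
    environments[target] = new_version
    router["active"] = target

def rollback():
    # The previous version is still running on the other side; flip back.
    router["active"] = idle()

release("v2.0")  # green now runs v2.0 and takes the traffic
```

Because the old environment keeps running untouched, a bad release is undone by calling `rollback()` – no redeploy, no downtime. That’s the “after” picture worth showing.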
These days, there are lots of articles being written about how the “legacy” or “traditional” IT vendors are not long for this world (e.g. here, here). The core premise of these types of articles is typically built around two ideas:
- Business is now being conducted in very different ways, and things like (public) Cloud Computing and Open Source software (contributions, communities, economics) are disrupting the business models of the existing vendors.
- Since we’re now in the Cloud/Mobile era of computing (after Mainframe/Mini, PC/Client-Server), the companies that win the new era are almost never the companies that won the previous era.
If we look at the current Traditional vs. New vendors, it looks like this:
While they are not in exactly the same market segments, there is an overlap, especially as buyers are using new applications, services and devices. Their collective revenues are about the same, while the “New” vendors have higher market caps and better Cash/Debt ratios.
Now if we move Microsoft to the “New” category, as they are often mentioned with the transition to Satya Nadella as CEO, the numbers shift significantly:
- Nearly 3x the Market Cap
- 157% more Revenues
- 41% more Cash
- 18% less Debt
- Facebook, Google and Microsoft are reshaping what hardware designs look like through the Open Compute Project (OCP)
- Google and Microsoft are open-sourcing elements of networking software
- Apple is open-sourcing application development frameworks for mobile devices
- Microsoft is open-sourcing many aspects of their development frameworks for .NET
- Salesforce has a massive ecosystem of developers building on their SaaS/PaaS platform
- Collectively those five companies deliver the leading public cloud services in the world.
Some of the numbers in the “New IT” figures are skewed in the sense of being completely “modern”, as Google has massive revenues from Search/Advertising and Microsoft has the Windows/Office revenues. But they are still massive revenue generators that give them the flexibility to move more boldly into the $335B of revenue held by the traditional IT vendors.
Will we see a new round of winners in the Cloud/Mobile era, or will we see the existing incumbents be able to find new routes to revenue growth? Legacy is a difficult thing to replace, but the new combatants have very large war-chests and much newer technology to do battle for those places on the big game board.
It’ll be interesting to see how the next 5-10 years play out…
When I first started using the thing called “the Internet”, it was via something called America Online (AOL). At the time, it was mostly about sending email or reading message boards. There weren’t very many websites at the time, and there were almost no business transactions happening.
Several years later, when I was working at Cisco during the Internet-growth-then-bubble days, we watched the Internet transform into a platform where commerce was a natural extension to this global platform. We couldn’t completely understand it at the time, but we had a sense that John Chambers’ (Cisco CEO) famous line, “the Internet will change the way we live, work, learn and play” would come true in some way.
When I was about 10 years old, I got my first real job as a paperboy for a local newspaper outside of Detroit, Michigan. Delivering those papers in rain or snow or sunshine, and collecting money from customers each month, I learned a lot about discipline and responsibility. I kept that job for three years, before moving onto other teenager jobs like being a stock-clerk at a grocery store or mowing lawns or folding shirts at a mall.
I bring this up for a couple reasons. I remember the first time I heard about a newspaper shutting down their daily circulation because readership had dropped. All of their daily content was now on the Internet and people no longer wanted that delivery service. It was a strange moment, because it was the first time I connected the dots that my new profession was putting my old profession out of existence. Sort of a cars vs. horse and carriage moment for me. And even understanding the evolution of work, it hit close to home about how technology will displace jobs.
Now that I have children, we sometimes discuss the types of jobs that they might have someday. We talk about studying hard in school and things like college, etc. But we also talk about things like how to manage money, and jobs that they might have to make enough money to go out with friends or buy some new clothes.
And then I read about things like the grocery store without any employees. Or Uber for Lawn Care. And then I’m torn about how much of a good thing all this technology is. Sure, convenience is great for consumers, but there is a broader ecosystem of activities in play here. Even the most basic jobs teach kids responsibility, accountability, and how to have basic human interactions. And if they don’t think they make enough money, they may become motivated to work harder, or become the owner of the shop themselves.
It’s great that kids are learning to code at an early age, but I don’t know that I really want to live in a world where the goal of every kid is to become a data scientist. Or that the goal of every entrepreneur is to replace a bunch of human interactions with a mobile app.
I understand that technology evolves. But sometimes I wonder if the evolution is really a good thing…
This past week, I had the opportunity to host a CrowdChat discussion about Cloud Computing as a preview of the Cloud Connect track at Interop. One of the questions I asked the audience was:
Obviously, this is a hypothetical question and somewhat extreme, as it would be extremely complicated and expensive for any company to move 100% from on-premises to a public cloud. But my goal was to see how IT organizations view their role as more of their companies’ applications move to public cloud services (IaaS, PaaS or SaaS). Far too often, I hear people getting more concerned about how their role will be eliminated, instead of being focused on how it could evolve.
So let’s look at some roles in the IT industry and how they could evolve as more applications move to public cloud:
Application Developer: If we look at the results of the 2016 Developer Survey from StackOverflow, it’s difficult to see how those roles will change that much. Many are trying to evolve from Waterfall processes to more Agile processes, but the demand for application developers keeps growing.
Enterprise Architect: Regardless of where applications are deployed, there is still a need for Architects to connect the business challenges to the technical possibilities. If anything, the breadth of services offered in the public cloud could make their evolution more interesting.
IT Managers: Regardless of how much an IT organization evolves to more integrated DevOps collaboration, there is still a need to manage teams, manage budgets, manage projects and work closely with vendors (or open communities). IT managers may also pick up more work as companies migrate to using more SaaS applications.
Security Teams: The borders for security have been breaking down for at least a decade, as people work remotely from central offices, use smartphones, and connect over WiFi from everywhere. So the need for security teams in the cloud continues to be a high priority and those skills are in high demand.
Networking Teams: Networking people tend to worry about who will manage the deployment and operations of the network if it’s running in the cloud. While the rack & stack pieces go away, most other functions will remain in place. Plus, many applications will be deployed in a hybrid model (public and private), so they will need to manage remote interconnects and security across new boundaries. In the interim, networking professionals should get a better understanding of software-defined networking, as that is essentially what is being used in the public cloud.
Storage Teams: While the provisioning of storage is significantly easier in the public cloud, data still needs to be managed over its lifecycle – this means backups, snapshots, and synchronization across geographic regions. Many of these functions are beginning to get automated within Public Cloud services, as well as becoming integrated features within other services (e.g. Database-as-a-Service). Of all the teams, storage is among the most impacted by the public cloud.
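To make the lifecycle point concrete, here’s a toy snapshot-retention policy – illustrative only, since real cloud providers expose this as configurable lifecycle rules rather than hand-rolled code:

```python
from datetime import datetime, timedelta

# Toy snapshot-retention policy: keep every snapshot from the last 7 days,
# keep one snapshot per day beyond that, and delete anything older than
# 30 days. This is the kind of data-lifecycle decision a storage team
# still owns, even when the cloud does the provisioning.

def snapshots_to_delete(snapshots, now, keep_all_days=7, max_age_days=30):
    """snapshots: list of (snapshot_id, taken_at) tuples."""
    to_delete, seen_days = [], set()
    # Walk newest-first so the newest snapshot of each day is the keeper.
    for snap_id, taken_at in sorted(snapshots, key=lambda s: s[1], reverse=True):
        age = now - taken_at
        if age <= timedelta(days=keep_all_days):
            continue                   # recent: always keep
        day = taken_at.date()
        if age > timedelta(days=max_age_days) or day in seen_days:
            to_delete.append(snap_id)  # too old, or day already covered
        else:
            seen_days.add(day)         # first (newest) snapshot for that day
    return to_delete
```

The mechanics are trivial; the value is in choosing the policy (retention windows, regions, recovery objectives) – which is exactly where the storage role moves “up the stack”.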
Virtualization Teams: Even more so than storage, virtualization is heavily impacted by public cloud. Virtualization is essentially invisible in the public cloud. Things like “vMotion” or “Live Migration” just happen if they are supported on a specific cloud. This is an interesting change of events, because virtualization was considered “moving up the stack” within the data center just a few years ago.
I’ve discussed before that some other functions, such as managing APIs, managing cloud costs, understanding the law about data sovereignty, managing compliance, and many other new areas, will be in high demand. As people have been saying for 10-15 years, being able to evolve skills “up the stack” will be even more valuable as more applications move to the public cloud.
How has your IT organization changed as applications have moved to the public cloud?
As someone who covers Cloud Computing, I’m always looking slightly ahead to see what’s on the horizon in terms of new features or trends. Over the last 6-9 months, all the major Cloud Computing providers (AWS, Azure, Salesforce, Google, Oracle, IBM, etc.) have either announced or implemented the early stages of their IoT (Internet of Things) offerings. In some cases the focus is on streaming data or data processing services, in other cases it’s new security services, and for others it’s unique new capabilities like serverless computing or device virtualization.
We’ve all heard the predictions about the size of IoT, from 50B devices to $4T in new global value creation. We’re even beginning to see some predictions about the growth in the infrastructure needed to support it on the backend.
Personally, I’ve been “doing Internet stuff” for over 20 years now, and it’s been incredible to see how it’s grown and changed the way people live, work, play and learn (credit to my old boss John Chambers for that quote). Thinking about how IoT will impact the next 20 years is an exciting prospect.
But even with all the progress that’s been made so far, I still have some basic questions that I haven’t quite resolved in my head:
Will standards exist, or will IoT go through a walled-garden phase? We saw the early stages of the Internet move from University/Research networks to the walled gardens of AOL and CompuServe and MSN. Today we have the Apple AppStore and Google Play for mobile apps. Will there be open standards for IoT, or will we go through phases of proprietary protocols and marketplaces?
 How to Power the Devices? If you’ve ever been anywhere with a dying smartphone, you begin to realize that power ports aren’t always accessible. You’ve seen the huddled masses at airports, or cord-sharing in the backseat of a car. Now put yourself on a farm, or a two-lane road outside of town, or the middle of the ocean. The ability to easily get power to these locations becomes much more complicated. This will either become a massive bottleneck to IoT progress, or we’re going to see some incredible innovation in battery technologies over the coming years. Hopefully it’s the latter.
How to Network the Devices? Going hand-in-hand with the power challenge is the networking challenge. Unless WiFi ranges get significantly better, this communication will need to be carried over cellular signals. Not only is this a power draw, but the bandwidth is often limited in high-density areas (e.g. been in a packed stadium before?) or remote locations. Is cached data useful to IoT applications, or will it need to be real-time to provide value? This will need to be considered by IoT application architects and the associated network architects.
Where does the Data go? At its core, IoT is about collecting data and making decisions. But where does the data go? Does it stay local, at the edge or nearby cluster? Or does it get centralized in a cloud data center? This is where bandwidth challenges come into play, as well as data management. Wikibon’s David Floyer recently looked at the cost of edge computing vs. cloud computing for a video surveillance application. Would love to see some insight about where data goes for various types of applications.
 How to Secure all those devices? Every day now, there’s a story or two about a security issue with a device that could be considered IoT. Whether it’s voice recognition with your new HDTV, or a security bug in the Linux kernel, the fear of a massive security threat is balancing the hype of IoT progress. Internet 1.0 is sort of a hodge-podge of security, so how will Internet 2.0 do?
How to Manage all those devices? It wouldn’t be a proper IT discussion without putting management last. Beyond network bandwidth, the management and operations is where all the cost resides, but our industry has a tendency to talk about it last. Managing 1000s of servers is difficult. Managing 10,000s of mobile devices is difficult. Now multiply that by several orders of magnitude. The existing tools aren’t designed for that scale. So how will companies manage all those devices?
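The “where does the data go” question above is ultimately an economics question. A back-of-envelope comparison shows why – every number below is hypothetical, chosen only to illustrate the shape of the trade-off, and is not taken from the Wikibon analysis:

```python
# Hypothetical video-surveillance example: ship raw footage to the cloud,
# or analyze it at the edge and send only the interesting events upstream.

cameras = 100
mbps_per_camera = 4                    # assumed raw stream per camera
seconds_per_month = 30 * 24 * 3600

# Total data generated per month, in GB (Mb -> MB -> GB).
gb_per_month = cameras * mbps_per_camera * seconds_per_month / 8 / 1000

# Cloud option: send everything upstream at an assumed $0.05/GB.
cloud_transfer_cost = gb_per_month * 0.05

# Edge option: local analysis keeps 99% of the data on-site;
# only ~1% (detected events) travels to the cloud.
edge_transfer_cost = gb_per_month * 0.01 * 0.05
```

With these assumed numbers, roughly 130 TB a month is generated, and moving the analysis to the edge cuts the monthly transfer bill by two orders of magnitude – which is why the bandwidth and data-placement questions are so tightly coupled.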
The hype of IoT is fun to think about. It will create lots of new industries and new businesses. It’ll take a while for some of these challenges to get solved, so I’ll be watching to see how quickly the industry makes progress.
Every week, when I go out to my mailbox, there is a blue card from Bed Bath & Beyond that offers a 20% coupon on a single item. Most of the time, it goes in the trash because I don’t have an immediate need for new towels or a waffle maker.
I mention this because that coupon has sort of become my barometer for evaluating technology claims. If something is 20-50% cheaper, as is often advertised, I tend to ignore it because it doesn’t really make a material dent in the economics of technology. Those levels of savings are typically only available as a Day 1 cost, or are measured against an old technology (or business process). If I don’t have that savings in my current technology, I’ll usually get it by default in the next buying cycle, across many technology choices. This is the beauty of commoditizing hardware and the evolution of open-source software. Vendors no longer chase each other’s R&D, but rather they chase the open communities or as-a-Service offerings from the cloud. The differentiators are moving to people skills, improved process and operational efficiency.
Last week, I wrote that AWS is changing the rules of the IT industry. I didn’t say they were winning, since plenty of other IT companies make more revenues, but they are definitively driving the changes on the large chess board.
When I saw that the AWS Certified Solutions Architect certification was the #1 top-paying certification, it got me thinking. This title always used to be held by Cisco (CCIE, CCNA) or Microsoft (MCSE, MCSA, etc.). Those were large companies that had dominant offerings in their respective markets. Smart people went where the money and jobs were. Now we’re seeing that shift towards AWS. But why? Here are a couple thoughts:
- For a larger-sized company ($5B+ in revenues), AWS is growing faster than anyone else in the IT industry.
- Companies are trying to determine if AWS could be a potential alternative to their IT department, which has given them high-levels of frustration for many years.
- Companies are trying to figure out a “digital business” strategy, and they are seeing that the popular examples are currently running on AWS (Netflix, AirBnB, Uber, etc.). Maybe that’s the place to get started instead of within their own data center, or using their existing IT team?
AWS offers a great set of free training on a per-product basis. When this is married to their Free Tier of service for most AWS services, it’s an excellent starting point for learning about AWS. But it doesn’t align itself to structured training or the specific topics needed to pass certifications. This is where a new company, A Cloud Guru, comes into play.
Started by some AWS experts, A Cloud Guru is focused not only on helping students understand AWS technologies, but specifically on helping them pass the various levels of exams. They are like the Kaplan of AWS certifications. But there are lots of places to learn, so what makes A Cloud Guru so interesting?
- The training is extremely cost effective. Courses start at $29 (USD) – roughly 1/50th the price of most instructor-led courses. [See my note on evaluating cost-savings above.]
- The UI is user-friendly. It’s instructor-led, but allows me to go at any pace I want: 1x, 1.5x, 2x. It also allows me to easily skip ahead or go back in 15-second increments. It’s like the iTunes player for training.
- The AWS experience isn’t simulated. Every student gets an AWS account; all learning is done on the real AWS systems, not on a simulator or on equipment dedicated to a lab environment.
- Since it’s live AWS, the student can take a snapshot and come back to the resources at any time. Everything is done on the student’s schedule and can be interrupted as needed. This is important because if you’re doing this outside your normal job, life is full of interruptions.
- Once you purchase a course, you own the rights to it for life. This means that $29 will get you through today’s certification, and the renewal in 2 years (and beyond that).
- Courses get updated as AWS adds or changes their services.
This service is a great way to learn about the most popular and fastest-growing technology in the industry. It’s simple, inexpensive, and very professional in how it’s delivered. It’s an investment in your future, and cost effective enough to be worth your time.
Nothing gets the IT industry more riled up than a perspective that puts Amazon AWS at the forefront of anything. Even though most people will admit that Cloud Computing is a legitimate trend in our industry, there is a strange binary reaction to any implications of changes in the status quo. What do I mean by “binary reactions”? Even though there are typically dozens of companies (or open-source projects) that compete in any given segment of the IT market, people tend to think that everything is a binary, zero-sum game. Must be an engineering 1s and 0s thing. Meaning that the new kills the old, and that EVERYTHING will move to the new immediately.
While nobody is actually implying that AWS will be the only major player going forward, there are some interesting trends that seem to imply that the balance of power is beginning to shift more in AWS’ direction. Does this mean they will be the big winner? Who knows. But it’s (IMHO) beginning to feel more and more like the IT game is now being played by AWS’ rules instead of the incumbents.
- For many years now, Venture Capitalists (VCs) no longer provide funding to startups if it’s going to be used for CAPEX spending on IT resources, rather they expect them to use public cloud resources to get started. The game has changed from giving them $50M in funding, of which the first $5M went to Intel, EMC, Cisco and Sun, to giving them $5M and expecting them to focus on hiring and AWS resources. That’s a 10x change in how funding gets allocated.
- During their Q4’15 earnings call, EMC CEO David Goulden said, “As we look at the external environment, customers continue to be in either transactional or transformational spending mode and in some cases both at the same time. Customers in a transactional spending mode are buying just enough and just-in-time for their traditional environments and we saw this in our stronger maintenance renewal bookings throughout last year. Customers in transformational mode are either transforming their existing IT systems towards a hybrid cloud or building and deploying new digital applications to transform their business.”
- Cloud Providers of all sizes, Rackspace / HP / Verizon, are exiting the market and choosing to no longer compete with AWS. Great customer support, world-class branding and massive network pipes were just not enough to overcome AWS’s years of web-scale cloud engineering. It’s not hard to predict who will be added to this list in 2016-2017.
- The Wall Street Journal is questioning if AWS is having an impact on the global economy, as IT spending slows for hardware/infrastructure.
- Engineers from top IT vendors are wondering how AWS is able to keep making money by offering a portfolio that seems to not make money for existing vendors.
- AWS Certified Solutions Architect is now the #1 Top-Paying certification in the industry. Engineers and customers are voting with their career paths and wallets.
So let’s see:
- The source of startup funding is different and leaning towards AWS. Check.
- The traditional vendors are not only consolidating because profit margins are falling and it’s difficult to transition an existing business model, but their customers are starting to buy traditional equipment in ways that more closely align to AWS buying patterns. Check.
- Major Cloud Provider competition is leaving the market because they couldn’t keep up with the pace of growth and capital funding needed to compete. Check.
- The global press is now starting to understand the broader impact that AWS will have on the IT industry, which is a major indicator of economic trajectory. Check.
- IT vendors can’t figure out how their competition is making money in areas and ways that they can’t. Check.
- IT engineers and customers are betting their careers and business projects on AWS. Check.
My colleague, Dave Vellante, was talking about this a couple years ago. The marginal economics of web-scale computing, at least in AWS’ case, is nearing the economics of software in the 1990s-2000s. We saw what this did to shape the client-server world for companies like Microsoft and Oracle. And maybe those same economics will apply to AWS longer-term as well.
Or maybe they won’t.
Maybe someone else will figure out how to better compete with AWS. But I’m guessing that they will be playing a game that tends to align to the new rules that AWS is re-writing for the IT industry.