30,000+ people descended on Las Vegas last week for the annual AWS re:Invent conference. As I’ve written before, it has become one of the top conferences for the IT industry. With so many people in attendance, it was a given that many new services would be announced, and AWS did not lack for quantity or creativity.
Between CEO Andy Jassy’s keynote on Day 2 and CTO Werner Vogels’ keynote on Day 3, the list of new capabilities that are now available or in early preview was impressive.
So with all these new things to digest and understand, what are the most important to really dig into? IMHO, these are the key areas. BTW – if you really want to dig into the new services, go check out the AWS Blog and the dozens of new posts from Jeff Barr. Or go check out the keynote and session videos on AWS’ YouTube channel.
We already know about AWS in their Cloud data centers, but now AWS wants to be everywhere. They added enhancements to their Snowball boxes (now Snowball Edge) to include more storage and some amount of computing capacity. This is the beginning of a branch-office type of strategy. They drove semi-trucks on stage for their Snowmobile offering, which is the beginning of AWS in your data center. They added Greengrass to IoT devices, which gives them an expanded edge-device story. And Echo/Alexa continues to gain traction for home automation.
The Serverless movement, primarily driven by AWS Lambda but also new entrants like OpenWhisk, Iron.io, Azure Functions, the Serverless Framework, etc., has been growing rapidly. ServerlessConf occurs every 4 months and continues to grow. And AWS announced Lambda would be everywhere – on the edge of their CDN network, in the small Snowball Edge devices and the big Snowmobiles, and in Greengrass IoT devices. Not to mention tons of expanded management and monitoring capabilities like X-Ray and Step Functions. AWS has some container functionality available via ECS and Blox (open source), but it’s clear that they’d like to see their customers jump from EC2-centric VMs to Serverless functions in the future.
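To make the Lambda programming model concrete, here’s a minimal handler sketch in Python. The event shape and field names are invented for illustration; in a real deployment, Lambda wires up the invocation and you never run a server:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: receives an event dict,
    returns a response; no servers to provision or manage."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration (in production, Lambda calls handler() for us)
if __name__ == "__main__":
    print(handler({"name": "re:Invent"}, None))
```

The same handler shape is what Lambda@Edge and Greengrass push out to the CDN and to devices – the code stays a function, only the place it runs changes.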
The Future of the Ecosystem?
On Day 1, Andy Jassy told partners that he expected them to be much more committed to AWS than other cloud offerings.
On Day 3, Werner Vogels made an interesting statement during his keynote (paraphrased), “In the future, everyone will have access to the exact same software, so data is the only true differentiator.”
It was an interesting statement, because it came around the time when AWS was picking off ecosystem partners that do things like Application Performance Monitoring. And then new AWS VP Adrian Cockcroft (@adrianco) sent out this tweet:
When a former Venture Capitalist moves over to the leading cloud provider and tells people to stop building startups the old-fashioned way (via VC funding) and start becoming a service on AWS, it signals the scope of AWS’ intentions for their current and future ecosystem. I’m not sure Silicon Valley is ready to concede the future of startups to becoming AWS teams, but it will definitely be a hot topic of discussion in 2017.
The Advancement of Machine Learning
If you want to read a great analysis of AWS vs. Google vs. Apple as product vs. services companies for the 21st century, go become a daily reader of Ben Thompson’s Stratechery blog, or listen to his weekly Exponent podcast. Within those discussions is a deep focus on why Alexa is a perfect offering from AWS for multiple reasons:
- It’s centered around cloud-based services (AWS, Lambda)
- It’s centered around a “new” digital interface (voice) instead of an existing digital device (phones, laptops, etc.).
- It’s priced low enough to capture mass markets.
- It’s cross-functional, so it can interact with both AWS and Amazon, and both companies can learn from the user interactions.
Those learnings about usability and technology are now being delivered as services via AWS – Lex, Polly and Rekognition. While some people will look at these and say that they are late to market compared to similar Google Cloud AI/ML services, the way AWS markets them to existing Enterprise customers will help them win in those markets. For example, Google demonstrates their offerings as a “fun consumer service” (e.g. a complement to Google Search). AWS demonstrates them as a next-generation call-center or personal business assistant. Their understanding of the target audience is much different.
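As a sketch of what “delivered as services” means in practice, here’s how an application might post-process the kind of labeled output an image-analysis API like Rekognition returns. The response payload below is a simplified, hand-written stand-in, not output from a live API call:

```python
# Sketch: filtering labels from a Rekognition-style DetectLabels response.
# The response shape below is a simplified assumption of what the API returns;
# in practice you would get it from a real API call via boto3.

def confident_labels(response, min_confidence=80.0):
    """Return label names at or above a confidence threshold."""
    return [
        label["Name"]
        for label in response.get("Labels", [])
        if label["Confidence"] >= min_confidence
    ]

sample_response = {  # illustrative payload, not from a live API call
    "Labels": [
        {"Name": "Person", "Confidence": 98.2},
        {"Name": "Helmet", "Confidence": 91.5},
        {"Name": "Bicycle", "Confidence": 62.0},
    ]
}

print(confident_labels(sample_response))
```

The point of the service model is that all the hard ML work sits behind that response payload; the Enterprise customer only writes the easy part.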
2016 was a Tipping Point
While it’s taken many people a long time to grasp the scope of AWS and its growth trajectory, they have mostly begun to understand its role in the new world of IT. I think we’re going to see people taking many, many months to come to terms with the breadth of the foundation that AWS laid out at re:Invent 2016. There are some very foundational elements that will take many years to mature and evolve, but the pieces are there to have a significantly bigger impact on business and the IT industry than their first 10 years in service did.
For the last 20 years, the world of IT has been defined by fairly well-known swim lanes of technology, technology vendors, and technology supply chains. Developers build applications. Operations manages the applications and the underlying infrastructure. Companies bought technology from established vendors, or from companies within a vendor’s partner ecosystem (VARs, SIs, ISVs, etc.). Networking companies sold networking gear. Storage companies sold storage gear. Database companies sold databases and applications. Etc., etc. But the last few years have made those swim lanes very blurry, with the IT landscape dramatically changing via public cloud services (such as AWS), open source software, and a number of traditional technology companies evolving to become more solution providers than technology innovators.
So with AWS re:Invent this week, it’s always good to stop and look at the IT landscape from several different perspectives. The show is expecting 30k+ attendees, which would put it in the Top 5 major events for the industry, along with Salesforce Dreamforce, Oracle OpenWorld, Microsoft Ignite and VMware VMworld.
Who is the AWS re:Invent audience?
To a certain extent, the IT industry is only so large, so it’s not unusual to have overlap (or migration) between large events – for example, from the infrastructure audience to the VMware audience to the OpenStack audience, and now some to the AWS audience. We can see that in the companies that sponsor events like VMworld. But the sponsors for re:Invent are different – they focus (for the most part) on things around the infrastructure: companies that manage cloud resources, manage cloud costs, manage cloud projects and integrations (SIs), monitor cloud applications, and secure cloud applications. They also include advanced application frameworks (Salesforce, OpenShift) that can interact with native AWS services, as well as a whole range of companies that sell SaaS applications, either directly or through the AWS Marketplace. AWS grew up with start-ups that evolved into web-scale companies. But over time, they legitimized “shadow IT” and have been attracting more and more enterprise line-of-business groups that have immediate needs, or needs that exceed the day-to-day capacity (and organizational model) of their in-house IT organization. The evolution of “digital transformations” at many companies will ultimately decide which groups are making the key IT decisions going forward, and where those decisions will be executed.
Where does AWS fit into the IT Landscape?
Trying to identify where AWS fits in the IT landscape is complicated, if viewed from traditional lenses. It’s a Co-Lo Cloud Provider, a Managed Services Provider, a technology leasing company (which you’ll lease in increments from milliseconds to years), a SaaS provider (for some applications), and some combination of an IaaS and PaaS platform. But it’s also your new server vendor, and new storage vendor, and new security vendor. It’s been suggested that they are taking significant business from traditional vendors.
While no company seems to be immune from competition with AWS (yet), the more successful companies seem to be software vendors that are able to add value on top of the core AWS offerings. The less successful companies seem to be the ones that want to compete directly with them, especially cloud providers. As we’ve seen with many managed-service providers (Rackspace, VMware, etc.), they are moving towards a model of managing services on top of AWS instead of continuing to invest in their own infrastructure and data centers.
Is Your Company Prepared to Transform to an all-in AWS Model?
We’ve all heard the stories about the two-pizza team culture at Amazon, but is your IT organization (or shadow IT team) ready to fully adopt all of the changes needed to go all-in on an AWS model (or even a hybrid AWS model)? While some well-documented examples exist, there are still many companies making these transitions in-house. And given the intense demand for talent, it’s not unusual to hear people say that they don’t want to share details of their cloud successes (in-house or public cloud) because they fear the talent poachers.
What are the “New Rules” of the IT landscape?
This is the real question that people should be asking themselves. It’s fairly clear that open source and public cloud are going to re-write the rules for the next 10-20 years (with changes coming every 3-5 years), but what will those new rules be? I’d be curious what your thoughts are. I’ll be digging into this in future posts.
When I was a senior in high school, I was a fairly good baseball player with ambitions to one-day play in the Major Leagues. I wrote a senior essay about getting injured while playing for the Chicago Cubs and missing an opportunity to play in the World Series.
This past week, I watched with millions of others as the Cubs finally broke their 108-year drought and finally won the World Series. It was a thrilling series, with the final game going into extra innings and late into the night. I stayed up to watch the end of the game on TV, also following along with the emotions of fans on Twitter.
And the next day I saw this image on my Twitter timeline. The picture shows the dramatic 8th-inning home run by Cleveland’s Rajai Davis to tie the game at 6-6. It was an important moment in the game, but it wasn’t “the moment”. The readers of The New York Times were denied that moment because the paper had to go to press before the game ended in the 10th inning, about an hour later. The printed NYT missed the biggest sports news story because it is an irrelevant medium.
This picture caught my attention not because of the Cubs, but because it closed the loop for me. My very first job was delivering newspapers door-to-door. This was in the early 1980s, when major cities often had multiple thriving newspapers.
By the time I got in the technology industry in the mid-1990s, printed newspapers were still a viable business, but the Internet was beginning to allow people to self-publish content for a fraction of the cost. This publishing model knocked down the local barriers of information, as it became simple to find information from around the world (thanks AOL and Yahoo and Google). But the Internet information was still not nearly real-time, as we didn’t have the information in our pocket as we do today.
But 20 years later, the cycle is complete for me. Newspapers have consolidated, gone digital, and transformed into something very different from what I delivered door-to-door as a child. And one pillar of communications in our country was disrupted, or destroyed, by the new pillar of communications in our world. I’ve had a front-row seat to both models. It was a 30+ year cycle. It makes me wonder: how fast will the cycles evolve for other major pillars of our world?
When you attend quite a few technology conferences, you tend to hear the same messages and narratives over and over again. Stop me if you’ve heard this one – to drive a digital transformation within your business, you’re going to have to become a software company that uses DevOps to build cloud-native applications using microservices and continuous integration on immutable cloud infrastructure. Your company needs to become like Uber, Netflix or Airbnb in order to avoid getting Uber’d in your industry.
At a recent show, an Enterprise Architect came up to me and asked a straightforward question – “Assuming I could figure out how to make all that technology work, how would I explain it to our business leaders in a way that they understand it… in business terms, not technical terms?” I took a stab at trying to explain that here, along with an ROI model and a real transition example. [Disclosure: I work for Red Hat and had all those examples handy – this blog isn’t supposed to be a Red Hat advertisement.]
I’m reasonably versed in how to talk to technical and business audiences because I’m a weird mutt who’s been a solution architect and also has an MBA. All that means is that I know that sometimes you need to talk about Agility vs. ROI vs. Cost of Capital vs. Internal Rate of Return. They all relate to similar things – are we getting measurable value out of the money and effort we spend, in the context of the other areas where we could spend that time and money?
So this brings me back to that question. If, as a technology industry, we’re asking engineers to be interested in digital transformations, then we need to give them the basic tools and language to explain to a business leader how some piece of new technology (or process) will improve the business. Not just how to save technology or process costs, but how to truly impact the way the company will grow revenue and profitability. These are essentially “Technical MBA” skills, for lack of a better term. And I’m not aware of any place that currently offers these frameworks or skills. I’m sure there are 50-page documents or spreadsheets that a high-priced consultant would provide for a large fee, but shouldn’t this be something that is freely available to help our industry succeed and expand?
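As a taste of what those “Technical MBA” skills look like in practice, here’s a small Python sketch of two of the calculations mentioned earlier – Net Present Value against a cost of capital, and Internal Rate of Return. The cash-flow numbers are invented for illustration:

```python
# Sketch: the kind of "Technical MBA" arithmetic the post describes --
# comparing a project's cash flows against the cost of capital.

def npv(rate, cash_flows):
    """Net Present Value: discount each year's cash flow back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal Rate of Return via bisection: the rate where NPV hits zero."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Year 0: invest $500k in automation; years 1-4: projected savings/revenue gains.
flows = [-500_000, 150_000, 200_000, 220_000, 240_000]
print(f"NPV at 10% cost of capital: ${npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```

If the NPV is positive at your cost of capital (or the IRR beats it), the project creates value – that’s the one-sentence translation an engineer can hand a CFO.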
I’m willing to start building some of this content and putting it on GitHub, but I’m curious whether other people think this is a needed set of knowledge. If so, what do you think should be included? I have some ideas, but I’d love to get your feedback or suggestions. And if you’re interested in participating, please reach out to me and we’ll figure out a way to collaborate.
This isn’t a deep-thoughts piece or a “hot take” on the election. There will be no shortage of those to fill your time, if you choose. This is simply an observation based on a few things I’ve seen and heard over the last few months – and then connected some dots while watching election coverage the other night.
For all the reasons* that people will claim for why the results of the election happened, one thing that appears to be “true” is that a large portion of the population is tied to an economically struggling model (especially around manufacturing) and they have felt neglected or marginalized by another group of people. This is sometimes tough for the technology crowd to understand, especially if you’ve never driven through a Rust Belt town that has been decimated by a factory closing. “Just get re-trained” isn’t really a viable option for many of these people, for a wide variety of reasons. These people were told that the business world moves fast and that they should get on board with models that seem to replace the need for them.
* NOTE: I’m fully aware that there are many other issues/causes that impacted the results of the election. I’m not trying to minimize them or debate them here.
As I was attending several tech conferences recently, the topic of “DevOps” came up frequently. The discussions were about customers that wanted to “do some DevOps” or “add some DevOps”, usually because their management wanted them to understand that the business world moves fast (and they need faster software, or better quality software). Now if you’re a fairly skilled SysAdmin, focused on the Infra/Ops for compute, then the DevOps push to automate all the things isn’t that big of a leap for you. You most likely have some of the basic skills needed to make this transition – an understanding of Linux, basic scripting skills, etc. But if you’re from the rest of the Infra/Ops team, responsible for things like Networking, Storage, Virtualization, Security, etc., then you might be feeling like one of those Rust Belt workers. Your vendors haven’t really given you the tools to do all the necessary automation, and in some cases, they are also struggling to stay viable as these new DevOps approaches impact their existing customers (e.g. “Software Defined Everything”). Those people keep hearing that the skills and tools they have worked with for 10+ years are now “commodities” and should be marginalized or ignored.
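For the SysAdmins described above, the heart of “automate all the things” is the idempotent check-then-change step that tools like Ansible are built on. Here’s a minimal Python sketch; the file name and setting are invented for illustration:

```python
# Sketch: the kind of "basic scripting" building block behind config
# automation -- check current state, change only if needed (idempotent).

import os
import tempfile

def ensure_line(path, line):
    """Append `line` to the file only if it isn't already present."""
    try:
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already configured; change nothing
    except FileNotFoundError:
        pass  # file doesn't exist yet; we'll create it below
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # a change was made

# Demo against a temp file: first call changes state, second is a no-op.
cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
print(ensure_line(cfg, "PermitRootLogin no"))   # True
print(ensure_line(cfg, "PermitRootLogin no"))   # False
```

Running it twice produces the same end state – which is exactly the property that makes automation safe to re-run against a fleet of machines, whether you’re touching compute, network, or storage configs.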
I’m not sure if DevOps is the way forward for many IT organizations, mostly because I can rarely find two people that have the same definition or model of what DevOps is. There’s the Gene Kim “Phoenix Project” model, but I don’t see that out in the wild as much as I see the book on people’s desk. There are probably lots of reasons for that, but it seems like one of them might be that the DevOps world tends to treat the non-experts as a marginalized class of IT. The “they just don’t want to learn” set of people.
It’s not a perfect parallel or analogy, but since DevOps likes to draw Lean Manufacturing parallels to software development, I believe that we need to also be cognizant of the people that are part of the factory too – not just the processes. They are being told that they should get on board with models that seem to replace the need for them.
Last week, VMware and AWS announced that they are working on a new service to deliver VMware technology from AWS’ cloud – called “VMware on AWS”.
We talked about this with Greg Knieriemen and Keith Townsend on The Cloudcast.
The strategies for VMware and AWS are becoming clearer:
VMware has been looking for more ways to control how their SDDC stack is deployed, as well as ways to downplay the role of the underlying hardware. They are focused on displacing the functionality of hardware-centric compute, networking and storage, and downplaying the focus on cloud management (e.g. vCloud Realize Suite). They have been getting pressure from customers to better define an IaaS cloud strategy, and they now have solid partnerships in place with IBM and AWS.
AWS has attracted developers and startups, but struggled to attract the “traditional IT” that is aligned to VMware, Oracle and Microsoft workloads. This partnership now provides a way for customers to potentially migrate entire sets of data center resources to AWS, as well as getting VMware to endorse AWS as a viable destination for Enterprise workloads.
What is Hybrid Cloud between now and 2018?
The VMware on AWS offering is still in beta/preview, with GA scheduled for some time in 2017. Since this is being targeted at Enterprise customers, you can expect any uptake to happen later in 2017 or into 2018. Many details still need to be filled in by VMware, especially in the areas of pricing, licensing transitions (for migrated workloads), and integrations with AWS services.
For many companies, this offering will be compared to the Microsoft Azure Stack, which is also supposed to GA in mid-2017 (after originally being scheduled for late-2016). Azure Stack will require updates to customers’ on-premises Windows Server environments, which traditionally lag behind the GA dates.
This means that both offerings realistically have 2018 timelines before we hear about mainstream adoption. And both of these offerings are primarily based on simplified IaaS services (compute, storage, networking), but we’re seeing more and more C-level executives that are focused on Digital Transformations and evolution of how they develop software applications. Will we see greater adoption of PaaS and CaaS (e.g. CloudFoundry, Docker DataCenter, Kubernetes, Red Hat OpenShift, etc.) platforms before these offerings become viable in 2018?
Will more MSPs move to AWS?
If you follow the writings of Ben Thompson (Stratechery.com; Exponent podcast), you know that Amazon will often experiment with a new platform idea before expanding its reach at greater scale. The VMware on AWS offering is much closer to a Co-Lo or Managed Services offering than a Public Cloud offering. Is AWS using the VMware on AWS model as an experiment to attract more and more existing Co-Lo and MSP customers to their platform? The MSP market is highly fragmented, and many MSPs don’t have the resources to continue investing in non-differentiated data center facilities. Is this deal just the precursor to AWS becoming the de facto server provider to the MSP ecosystem?
What is the Dell/EMC stance on AWS?
Even before the Dell-EMC merger, it was often difficult to figure out the strategic focus of the EMC Federation of companies. EMC wanted to sell hardware on-premises. VMware wanted to commoditize hardware and create a homogeneous “cloud” ecosystem of all-VMware SDDC. Pivotal wanted to abstract away any infrastructure or cloud and focus on a platform for developers. In general, their one commonality was a competitive disdain for AWS, either directly or indirectly. And Dell generally shared that competitive posture, choosing to be more closely aligned to Microsoft. But now that has changed. One of the most valuable brands within Dell Technologies is now aligned with AWS and IBM cloud offerings. Both Pivotal and Dell-EMC are getting more aligned to Azure or Azure Stack, but VMware has no current alignment to their one-time foes in Redmond. So where does this leave a customer that has an interest in potentially using Azure in a Hybrid Cloud environment?
Back in 2007-2009, as the awareness of cloud computing was growing, you couldn’t go a couple days without hearing about the killer use-case for cloud – “cloud bursting”. This magical ability of the cloud to make sure that your website could manage the rush of Black Friday shoppers.
Many years later and we’re (mostly) past the talk of cloud bursting. But now the buzzword universe is obsessed with the Internet of Things and the trillions of dollars of value and insight it will unlock for future generations. And with this promise of technology nirvana comes the new use-cases that will help you understand why it’s needed for your business.
Let’s take a look at a couple of examples that I’ve recently seen that have left me scratching my head.
The Internet Connected Appliance
At first glance, this is a very interesting approach to leveraging Artificial Intelligence (AI), Serverless Computing, and IoT to create a maintenance program for the filters within refrigerators. Using sensors to do predictive maintenance on remote devices is potentially a “killer application” of IoT and AI. And the serverless angle is very appealing as well. We’re actually spinning off a new podcast (The Serverlesscast, @serverlesscast) soon to explore this area in more depth.
But the thing about this example that had me questioning it was the actual value to the end-customer. We’ve all heard about connected homes for at least a decade, but actually making that work has proven to be extremely complicated – and left many of us having to play tech-support for our friends and parents. In this case, the following things are needed:
- Networking on a device, which also needs a UI to program it to join the local WiFi (and hopefully use secure passwords and protocols to connect). Plus an extended tech support model to answer questions from non-techies that just want a new water filter.
- All of the serverless elements to be programmed and integrated together.
- All of the AI logic to be programmed to “be trained” on the behaviors of the refrigerator over time.
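Putting those pieces together, the decision logic at the center of the connected-fridge example might look something like this sketch. The telemetry fields and thresholds are hypothetical stand-ins for what a trained model would actually learn over time:

```python
# Sketch of the predictive-maintenance decision the connected-fridge demo
# implies. Field names and thresholds are invented for illustration.

def filter_needs_replacing(telemetry):
    """Decide whether to order a new water filter from device telemetry."""
    litres = telemetry.get("litres_dispensed", 0)
    months = telemetry.get("months_installed", 0)
    flow_drop = telemetry.get("flow_rate_drop_pct", 0)
    # Replace if the filter is old, heavily used, or visibly clogging.
    return months >= 6 or litres >= 750 or flow_drop >= 25

# A Lambda-style handler that an IoT rule could invoke with device telemetry.
def handler(event, context=None):
    if filter_needs_replacing(event):
        return {"action": "order_filter", "device": event.get("device_id")}
    return {"action": "none", "device": event.get("device_id")}

print(handler({"device_id": "fridge-42", "months_installed": 7}))
```

Notice how little of the hard work is in this function – the WiFi onboarding, the serverless plumbing, and the model training listed above all exist just to feed it a telemetry dict.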
Within my own home, I recently bought a new refrigerator. For the water filter replacement, GE gave me the option to have a replacement sent every 6 months for a fixed fee. The model works great – I don’t have to worry about the filter AND I don’t have to worry about any of the networking or applications that could break when it’s time to get a new filter. Might I only need it every 7 months instead of 6? Sure, that’s a possibility. But it’s a frictionless model for the consumer, hence there is value for me.
The Roads will Brake the Cars
I saw this one in The Register this week and I just don’t know what to think about it. It’s one thing for Tesla to build a nationwide network of Superchargers for electric cars. It’s another thing to think that our highway system, which is massively underfunded as it is (and constantly under repair), is going to get “embedded braking systems”. This, to me, is the new cloud-bursting example for IoT.
All of this might sound a little bit cynical about IoT. Fair enough. And just so you don’t think I’m leaving you with nothing but bad demo ideas, here’s one that seems pretty powerful and useful – http://devpost.com/software/hazel
There are lots of good things happening with IoT these days – just be careful which types of stories you believe.
Earlier today I received an email from a friend that contained this simple, and complicated question. The person doesn’t work for any vendor, so the focus of this question wasn’t specific. The question was more of a response to the breadth of announcements from Oracle, Microsoft and Google over the last couple of weeks.
Oracle announced that they were (soon) going to launch a brand new IaaS cloud and attempt to compete directly with AWS. As expected, the Twitterati were very skeptical of their ability to deliver, as was Ben Thompson of Stratechery. If you don’t already listen to Ben’s “Exponent” podcast, be sure to add it to your favorite pod-catcher. Oracle had been making steady progress in SaaS and PaaS revenues, but IaaS isn’t really their core area of focus. Are they getting distracted?
Microsoft announced that Azure Stack will eventually ship, but before that, they are strengthening their partnership with Docker by embedding Windows containers in Server 2016. This got some people excited, but Richard Seroter made a good point on the Pivotal Conversations podcast by highlighting that Windows Server 2003, 2008 and 2012 currently hold 87% market share – meaning that it might be 10 years before Windows Server 2016 becomes mainstream in the Enterprise. Given that 33%+ of Microsoft Azure workloads already run Linux, and products like PowerShell and SQL Server are moving to Linux, will Windows Server 2016 ever gain major traction in the Enterprise?
Google announced a bunch of new technology enhancements to the Google Cloud Platform, including major database and AI capabilities. They also announced that Google Container Engine (GKE), based on Kubernetes, was running the popular Pokémon Go platform – requiring massive scalability. But then Google proceeded to do Google things by renaming “Google for Work” to “G Suite”. Not only were Nate Dogg and Warren G not consulted, but Google also has to try to convince Enterprise IT that their massive scale is needed for Enterprise-scale problems.
And we’re still about 2 months from AWS re:Invent.
While all of these large cloud providers have massive cash assets and are making announcements, none of them have really delivered a home run recently. Azure seems to be gaining Enterprise mindshare, but they still haven’t fully realized how to leverage their massive installed base and sales force. Oracle also has a massive installed base, but getting them to migrate to the cloud will not be an easy process. Years of customization will be difficult to move to standardized cloud environments. And Google has awesome technology, but the market continues to ask them if they are still serious about delivering cloud services.
So to answer my friend: the market is evolving, with offerings that will compete with AWS. Not all of them will be effective, but we’re moving into a new stage of public cloud usage where more Enterprises view it as a viable option. It’s still unclear if those Enterprises will have the same needs or affinity for AWS as the customers of the last 5-7 years have.
As everyone’s favorite genius Sir Isaac Newton once said, “for every action, there is an equal and opposite reaction.” Back then, software was not eating the world, but obviously he had the foresight to realize that Marc Andreessen’s famous proclamation would have ramifications on the hardware side of the technology industry.
It would be easy to look at the recent rash of Private Equity transactions and assume that they were driven by the general commoditization of hardware. But IMHO, there are more things at work here.
The software-eating-the-world trend is driven by (and in turn drives) three key factors:
- The availability of relatively frictionless public cloud resources.
- The growth of open source software projects, which enable powerful access to the technology that drives Big Data, Mobile and Web-scale applications and architectures.
- Startup companies “disrupting” existing industries by putting the Internet between customers and their service, removing the friction of many layers of sales channels and distribution.
The guys over at the Software Defined Talk podcast did an excellent job of reviewing several of the recent Private Equity transactions (Dell/EMC, HPE Software, Rackspace, etc.). As you can see, they aren’t all hardware-centric businesses, but the cause of their disruption is tightly coupled to the three elements I mentioned above.
In essence, these companies that get acquired or are doing deals with Private Equity (directly or indirectly) are struggling with the transition where those three factors are impacting their business, or struggling to manage the breadth of their portfolios. Over time, as more companies attempted to build (often via acquisition) a “complete stack” set of solutions, many have struggled to also create a sales and marketing model that targeted the expanded list of buyers at large customers. Their models attempted to mix hardware, software, professional services and various consumption models (CAPEX, OPEX, Subscription).
Now all of these Private Equity deals are attempting to provide cash back to the vendor companies. How the vendors will deal with the new capital is still TBD. Will they use it to make new acquisitions that are more closely aligned to their core business? Or will they use the money to do financial engineering for shareholders or debt holders? If nothing else, it will make the transparency of these companies much different for the market and customers.
Not only does this leave the market with many questions about the future of these operating models, but it also creates several new ones:
- What happens to the technologies that were sold to the Private Equity companies?
- Will we see the Private Equity trend expand to some of the larger, traditional companies who have seen top line growth rates near “0” or negative for the last few years?
- Are there any great opportunities for young leaders who want to revitalize a business that was sold to Private Equity?
These large shifts in ownership will make it very interesting to watch the levels of investment and innovation over the next 3-5 years. At a time when many end-user customers are trying to drive their technology agenda faster, many vendors are taking a step back to try and figure out how to adjust to this faster paced market.
It wasn’t long ago that if you wanted to get some perspective on the size of the revenues of the cloud computing market, you had the following options:
- AWS didn’t report their numbers, so you could make some educated guesses about their “Other” number in SEC filings.
- Microsoft didn’t break out the individual products, but rather they lumped them into broader categories.
- Gartner provided detailed capabilities reports as part of their IaaS MQ, as well as some trajectory concepts, but didn’t offer a breakdown of revenues.
- Many publicly traded technology vendors talked about “cloud” solutions and leadership in cloud, but almost never broke out cloud revenues as a specific number.
- Many mid-tier cloud providers are privately owned, or part of a larger conglomerate, so they tend to not break out cloud-specific revenues.
So if you wanted to gauge the size of public cloud or private cloud market, the options were somewhat fragmented and the results tended to be foggy. Some analyst firms attempted to size portions of the market, but that often brought criticism from technology vendors that didn’t believe they had counted enough of their technology portfolio revenues – without giving any additional guidance to the analyst community.
Some of this is understandable, as many companies do not break out revenues below a certain level (e.g. $100M, $500M, or $1B), depending on the size of their company. Still others may not be disclosing this information because their products aren’t doing as well in the market as they would like you to believe.
But some things are beginning to change – some for the good and some for the bad – at least from the transparency perspective.
- Amazon now breaks out AWS revenues each quarter.
- Microsoft now breaks out Office 365 revenues, as well as putting their Azure revenues into the Intelligent Cloud bucket – which also includes SQL Server and Windows Server (on-premises) revenues.
- Oracle breaks out their SaaS and PaaS revenues for Oracle Cloud. They sort of break out their IaaS revenues, but these will likely start including revenues from the Oracle Cloud Machine, which lives on-premises – which leads us to the challenge of what counts as “Oracle Cloud” (on-prem, public cloud, some hybrid combination??)
- Google/Alphabet does not break out Google Cloud Platform revenues.
- Dell Technologies, owners of VMware (vCloud Air), Pivotal and Virtustream, no longer have to disclose their revenues to the public after their merger with EMC closed on September 7th.
- Rackspace was recently acquired by Private Equity firm Apollo, so they no longer have to disclose their revenues to the public.
- HPE recently sold their software portfolio to Micro Focus, but is apparently keeping their Helion Cloud business under HPE. HPE does not break out the revenues for the Helion business.
So now we have AWS as the most transparent guidepost of cloud revenues, and the hardware vendors moving more towards private ownership and limited (if any) revenue disclosure. I talked about this (and many other industry topics) with Keith Townsend (@CTOAdvisor) on a recent Cloudcast podcast.
Given the changing landscape in financial transparency, it will be interesting to see how customers adapt to working with vendors. Do they continue to believe vendor claims about market-share, or do they begin to shift more focus towards open-source projects and track community participation as a more transparent metric of growth trajectory?