Earlier this month, I wrote a piece about the Architectural Considerations for Open Source PaaS and Container platforms. It was a follow-up to a series I wrote about 12 months earlier, looking at various aspects of these types of Cloud-native application platforms.
Changes are happening quickly in the PaaS market. These platforms were previously known as PaaS (Platform-as-a-Service), but many of the offerings are shifting their focus toward Containers-as-a-Service (CaaS).
I tried to put this in the perspective of a “hierarchy of needs”, which evolved from basic stability, to basic developer and application needs, to the scalability and flexibility needs as usage of the platform grows within a company.
- Platform Stability and Security – Before any applications can be onboarded onto the platform, how does the platform itself provide a level of operational stability and security, including the underlying OS that runs the platform?
- Application Portfolio – Does the platform support a broad range of customer applications, including both stateless and stateful applications? The latter part is critical because most customers will need to either re-purpose existing applications, or interconnect new and existing applications.
- Developer Adaptability – How much flexibility does the platform provide for developers to get their code / applications onto the platform? Does it mandate that they must move to a single tool for onboarding, or is it flexible in terms of how applications get onboarded? How much re-training is needed for developers to effectively use the platform?
- Scalability – As more applications are added to the platform, how well will it scale? This scalability looks at areas such as # of containers under management, # of projects, # of applications, # of groups on a multi-tenant platform. It also looks at the scalability of any associated open source community (e.g. Cloud Foundry, Docker, Kubernetes, Mesos, etc.) that is contributing to the projects associated with a platform.
- Flexibility – In the spectrum between DIY, Composable and Structured platforms, there are trade-offs in how flexible the system is today vs. in the future. Given the rapid pace that platforms and the associated technology are evolving, IT organizations and developers need to consider where they expect their usage of a platform to evolve over time. Will the POC experience extend into the future, as usage expands? Will the needs of the “pioneer” team extend to “settlers” and “town planners”?
NOTE: There will probably be people that will wonder where the cultural aspects of cloud-native fit into these hierarchical needs. They actually fit into each layer and probably could be represented as a vertical bar that sits to the edge of the diagram.
As the pace of change in the platform market continues to accelerate, it is important to have a framework to evaluate how the changes impact the needs of both the developer and operator groups within a company. With so many changes happening so quickly, it’s easy to be confused about what is important and what is just technology noise. Being able to prioritize how something new impacts platform considerations will be a critical consideration for IT organizations and developers looking to build cloud-native applications, as well as evolving aspects of their existing application portfolio.
Over 5 years ago, we started The Cloudcast podcast. That show is focused on the trends in Cloud Computing. Several months back, we decided to have a couple shows (here, here, here and here) focused on this emerging trend called “Serverless Computing”. Those shows turned out to be some of the most popular we’d ever had. It got us thinking….maybe there’s something here.
There were a couple of comments in those months that piqued my interest. The first came from Joe Emison (@JoeEmison), who was using a number of Serverless services, and he said they had reduced his AWS bill by 80%. Then a couple of weeks later, during an AWS Summit keynote, AWS talked about a customer that redesigned a MapReduce application and saved 80% on their AWS bill. That set off a lightbulb in my head. Real customers…and AWS wasn’t afraid to talk about it.
People have a tendency to make everything binary for a new technology – does it kill the old thing or not? In reality, what we’ve seen is that newer stuff typically tries to solve a specific problem, and then the use-cases expand because people get comfortable with it and are tolerant of its drawbacks. Server virtualization is a great example of this. Nobody really needed virtualization, but it became valuable because it could save costs (improve efficiency) for companies that overbought server capacity. It wasn’t perfect then (and still isn’t), but it solved a measurable problem at the time (server and licensing costs). Since then, it’s created many other problems, but entire segments of the industry have sprung up around it.
Serverless is just an extension of that philosophy – some people have a specific application need to just execute functions, and they’d really prefer not to have to deal with all the operational planning that goes with it. It’s definitely not for every application (as currently written), but it can serve a specific purpose for certain types of applications.
- Single functions that run to completion, rather than continuously (e.g. in a loop)
- Auto-scalable (up and down)
- Charged on a per-usage basis (don’t pay for idle time)
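The characteristics above can be sketched as a minimal function. This is a hypothetical handler, loosely following AWS Lambda's Python conventions; the names and event shape here are illustrative, not any vendor's actual API:

```python
# A minimal sketch of a serverless-style function: a single stateless
# function that takes an event, does one unit of work, and returns.
# There is no loop and no server to manage; the platform invokes it
# per-request and bills per invocation.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The platform handles scaling by simply running more (or fewer) copies of this function as request volume changes.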
Serverless could be its own type of system, or it could just be a feature of a “PaaS” platform, depending on how it’s implemented. The various implementations are all a little bit different, with a number of vendors creating their own.
- [Tech] http://martinfowler.com/articles/serverless.html#drawbacks
- [Ops View] https://charity.wtf/2016/05/31/operational-best-practices-serverless/
- [More Ops View] http://devopscafe.org/show/2016/7/7/devops-cafe-episode-68-patrick-debois.html
- [Content] https://serverless.zone/
- [Podcast] @serverlesscast (Twitter), http://serverlesscast.io (podcast)
- [Conference] ServerlessConf
- [Meetups] Search “serverless” – about a dozen around the world so far
There are a couple types of apps where people are starting to use Serverless:
- Single page web applications
- Mobile applications (e.g. iOS and Android)
- Thick clients and a variety of services on the backend.
But Serverless creates concerns from the Ops world, because some vendors have started throwing out phrases like “No Ops”. But we’ve seen this before, in things like PaaS or CaaS, where stuff is supposed to be easier for Devs and the Ops functions are hidden. In that world, somebody still has to think about stuff like:
- Having programmable infrastructure under the covers
- Managing the authentication system
- Managing the logging and monitoring systems
- Managing data
- Managing security
But Serverless isn’t just for Devs. It can also be very helpful to Ops teams, especially for the types of tasks that Ops will often do – check the status of things, repeatedly poll something, take an action based on an input, etc. Serverless doesn’t have to be just about developer tasks.
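As a sketch of that Ops use-case (all names here are hypothetical, for illustration only): a small stateless function that takes a health-check result as input and decides on an action. A platform scheduler could invoke this periodically instead of Ops running a long-lived polling server:

```python
# Hypothetical Ops-style serverless task: take an input event (a
# health-check result), decide on an action, and exit. No server to
# babysit; a scheduler invokes it on whatever interval Ops chooses.
def triage(status: dict) -> str:
    service = status.get("service", "unknown")
    if status.get("ok"):
        return "no-op"
    # In a real setup, this branch might call a paging or restart API.
    return f"alert: restart {service}"
```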
Not all Serverless is the same. There are architectural differences between the various services. For example:
- AWS Lambda mostly requires a front-end to get to Lambda (IoT Gateway, API Gateway)
- Microsoft Azure Functions allows you to use both programming languages and scripting languages (e.g. PowerShell)
- IBM OpenWhisk allows the functions to be Docker containers, so almost anything could run
- Some of the MBaaS (Mobile Backend as-a-Service) things have really, really simple integrations – Google’s Firebase, Auth0, Netlify, etc.
- Some things can run on-premises (e.g. Iron.io), so Ops will have to be involved – sometimes they run standalone, sometimes they are integrated with an IaaS or PaaS
All of this seemed like interesting stuff to us, so we’ve decided to start a new podcast focused on Serverless. Hopefully people will find the content interesting….
Bots are the latest buzzword, with all the mandatory “2016 is the year of bots” articles popping up all over the place. In some cases, people expect to see bots replace jobs like call-center attendants and tech support. Still others foresee a future where they might replace many knowledge-worker jobs. So I thought it might be interesting to go through my list of past (and present) jobs and see where bots or automation could have an impact.
- Newspaper Delivery – well, newspaper circulations are way down, thanks to everybody getting their news via the Internet. But bots are having an impact on writers of common stories.
- Lawnmower – for just a couple thousand dollars, you too can replace that kid that you pay a few bucks to sweat in the hot sun so that you can have a weekend hobby or drive your kids to games on the weekend.
- Grocery store clerk – I still see humans stocking the shelves when I go to the grocery store, but the checkout aisle has already been taken over by automated self-checkout. And drone delivery is right around the corner for all you Amazon Prime members.
- Janitor – yep, that was me, purveyor of the custodial arts. Yes, there are those motion-sensing towel dispensers in the bathrooms, but automation hasn’t taken over this world yet.
- Deli counter – The grab-n-go sandwich is popular, but people still like to have some say in what they order.
- Golf Caddy – The glory days of Caddyshack are over, with these novelties overtaking courses everywhere (here, here). And these things will tell you everything you need to know on the course.
- Camp Counselor (sports) – Apparently eSports are a thing, and getting bigger. Will kids ever play outside again?
- House painter – This one might not catch on as much since it sounds like machine guns outside your house
- Clothing store at mall – people still go to malls instead of Amazon?
- Home builder – replaced by a 3D printer.
- Truck driver – Those jobs will be gone soon.
- Delivery service – Adios, jobs!
- Inside Sales – Got product or pricing questions, we’ve got BOTS!!
- Technical Support – This is a prime Bots use-case
- Product Manager – As distribution channels get shorter with increased usage of public cloud, we’ll see more direct customer usage data becoming an input to roadmaps that go directly to the engineering teams. Bad news for the people with people skills…..
- Blogger – (see “Newspaper” above) if they can do it for real journalists, they can definitely do it for bloggers.
- BBQ Pitmaster – Yes, unfortunately stuff like this exists…..yuck!!
The good news….somebody is going to have to develop the software for all those bots.
The Platform-as-a-Service (PaaS) market has been very interesting over the last 9-12 months. Let’s recap some of the highlights:
You get funding, and you get funding, and you get funding!!
- Pivotal: $253M – Round C
- Docker: $113M – Round D
- DataDog: $94.5M – Round D
- Apprenda: $24M – Round D
- Mesosphere: $73.5M – Round C
- CoreOS: $28M – Round B
- Rancher Labs: $20M – Round B
- Weave: $15.5M – Round B
- Sysdig: $15M – Round B
I’m sure I’ve missed a few other deals, but that’s $600M+ in VC funding into a space that is essentially going through a v2.0 evolution (v1.0 being the earlier versions of Heroku and Google AppEngine). Throw on top of that the $1B/qtr that AWS, Google and Microsoft put into their clouds, and the IBM “$1B bets” and the market is moving in the right direction.
Is the funding turning into revenues?
This is where things get more complicated to evaluate, since Pivotal (via EMC) is the only one of those companies that publicly reports their numbers – sort of. None of Google (AppEngine), Salesforce (Heroku), IBM, Microsoft or AWS disclose any details around their PaaS/Platform revenues.
Looking at Pivotal’s numbers, we can determine a few things:
- Pivotal does not break out Pivotal Cloud Foundry (PCF) revenues. Their reporting includes all aspects of Pivotal’s business, including PCF, Pivotal Data and Pivotal Labs. Pivotal’s CEO said that PCF is on a $200M annualized bookings run-rate. NOTE: Annualized Recurring Revenue (ARR) and Bookings are two different accounting metrics.
- From the current 10-Q, the business is still about 2/3 services and 1/3 software sales. This doesn’t seem that unusual as they (and most PaaS companies) are targeting enterprises that will need quite a bit of help getting up to speed on using these new cloud-native technologies.
- Gross profit margins are 41.2%, which is low for a typical software company (typically in the 80% range), but Pivotal is still a young company and the cloud-native and big-data transitions are very people-intensive.
- The overall business operates at a loss (-$58M) as R&D and SG&A costs are still higher than revenues. While Pivotal does operate Pivotal Web Services, they seem to be primarily targeting on-premises deployments with large Enterprise and Gov’t customers, which have higher sales costs and longer sales cycles. And because the revenues and expenses are not broken out by product, we could infer but can’t assume that it might be a similar percentage to the bookings/revenues that were highlighted by the Pivotal CEO.
But the go-to-market approaches for PaaS/Platform offerings are still quite diverse (on-premises software, managed on-premises services, public cloud services, etc.), so making any assumptions about the overall market based on one company’s financial reporting would be a mistake. What is needed is much more financial disclosure about the various public cloud services (e.g. IBM Bluemix, AWS services, Google Cloud Platform / AppEngine, Microsoft Azure services) to give us a better understanding of the state of the PaaS/Platform market.
Architectural approaches are varied, but beginning to consolidate
While some people want to claim that their architecture is the de-facto choice, or declare themselves the winner in this market, IMHO it’s still way too early for those claims. Nobody is even close to $1B in revenues yet, and technology is a tres commas world.
Just 9 months ago, I wrote that the market was Structured vs. Unstructured. At the time, it was a decent attempt at segmenting the market. But in that short period of time, that framework has gone through significant changes. Now, the major PaaS religions seem to be:
- Structured (Highly Opinionated): Cloud Foundry
- Semi-Structured or Composable: The platforms that are migrating towards Kubernetes (Apcera, Apprenda, CoreOS, Google Cloud Platform, Red Hat OpenShift, etc.)
- Container Services: AWS Elastic Container Service, Azure Container Service, Docker Data Center, Rancher Labs
There are still some powerful (technology) outliers, such as Mesosphere/Mesos/Marathon and Hashicorp/Nomad that will be interesting to watch.
Building cloud-native, microservices applications is still complicated
This is a topic that definitely needs its own series of posts, but the TL;DR is that it’s still very, very early days for tools that will help a broad number of developers build these cloud-native, microservices applications. While things like Spring Boot / Spring Cloud, NetflixOSS, Micro and a few others exist, there still seem to be more books about microservices than tools that simplify things for developers.
Oh yeah, and now the Serverless movement is beginning to gain traction… (stay tuned!)
Networking is still complicated
Most developers don’t care about networking. To them, it either works or it’s a convenient thing to blame. Unfortunately, it needs to work, and someone needs to figure out how to make it work. And ever since workloads moved to virtual machines and now containers, networking has become much more complicated.
The good news is that some people (e.g. Weave, Romana, Docker/Socketplane, Project Calico…plus all the big networking companies) are focused on making it easier to network all these containers and microservices. The less good news is that it’s still evolving and new architectures still have to be created.
A long way to go…
The PaaS/Platform market is still in very early days and is still rapidly evolving. The good news is that we’re still seeing VC funding flowing into the space (even if funding markets might be getting tighter) and we’re seeing the technologies mature and evolve. The other good news is that we’re seeing more end-user companies (e.g. “customers”) taking a more involved role in what technology will impact their business going forward.
The less good news is that the scoreboards and balance sheets are still pretty fuzzy, so betting on a winner is still complicated. I suspect that we’ll continue to see many companies stand up at multiple keynotes over the next year, talking about their deployments with various technologies and companies.
For many years, I worked for technology vendors in roles that involved building new products and then trying to get businesses to buy and use them. A good portion of my time was spent talking to IT organizations about all the new features we had available and why they would want to use those features to solve a problem they had. To support this effort, we made tons of PPT slides and brought out extremely detailed roadmaps of what the next 6-12-18 months would look like.
But here’s where reality used to set in:
- About 70-80% of customers were running software that was at least 2-3 releases behind the “latest version”.
- Regardless of what version of software the customers were running, the majority of them would only turn on about 30% of the features (mostly the defaults).
- It was not unusual to see customers take 3-6 months, and sometimes longer, to test and validate a new release before it would go into production. And then there was the waiting period before an “outage window” was available for the update.
While this was equally frustrating for the vendors and the customers, the give and take of features vs. deployments sort of settled into a groove, and the industry learned to deal with these realities.
Then software-defined-<whatever> and open source started to become more mainstream, which brought with them a completely different update model.
- VMware hypervisor has a major release every 12 months, with a minor bug-fix release every 6 months.
- OpenStack has a major release every 6 months.
- Docker has updates every 1-2 months.
- Technologies like Kubernetes and Mesos are releasing updates every 1-2 months.
Houston….we might have a problem.
The good news is that these newer technologies bring with them lots of tools and best-practices to adjust to the increased pace of updates. Stuff like:
- Continuous Integration and Continuous Deployment (CI/CD) – tools like GitHub, Jenkins, JFrog and others that help send new software into a pipeline of tests and get it deployed into Dev, Test, Staging and Production.
- Automation – lots of tools like Docker, Chef, Puppet, Ansible, SaltStack to help automate deployments.
- DevOps – the cultural phenomenon where Devs and Ops groups (or integrated into one group) work more closely together to collaborate around deployments.
- Blue / Green Deployments – the model where updates are deployed to a small % of the available resources to validate if the changes work in production – and then if things are good then the updates can be deployed to more / all the resources.
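The routing decision behind that last model (strictly speaking, a canary-style rollout) can be sketched in a few lines. Real platforms implement this in the load balancer or router, so this is only illustrative:

```python
import random

# Toy sketch of a canary / blue-green routing decision: send a small
# percentage of requests to the new "green" version and the rest to the
# proven "blue" one. If green looks healthy in production, the
# percentage is raised toward 100 and blue is retired.
def pick_version(green_percent: int, roll: int = None) -> str:
    # 'roll' lets a deterministic value be injected for testing;
    # otherwise we draw a random number between 1 and 100.
    r = random.randint(1, 100) if roll is None else roll
    return "green" if r <= green_percent else "blue"
```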
The bad news is that IT organizations will need to learn all these new techniques and tools if they want to take advantage of these new technologies.
…or they can look to various vendors and cloud providers that will deliver those technologies as a service.
…or they can just use public cloud services and not worry about maintaining any of them (updates included).
So before IT organizations start evaluating these new technologies, they need to evaluate how well they can absorb a very rapid learning curve of new operational models. Just figuring out upgrades might be enough to make them think twice about any DIY projects.
Lately, I’ve been trying to connect a few dots that seem to be complicated to connect:
- VCs (and others) have publicly said that there will never be another Red Hat in the open source software world – namely that there will never be another company that makes large amounts of money (at $1B+ levels) supporting open source software.
- The most widely used services on the Internet wouldn’t exist without open source software.
- The majority of the revenue that comes from activities associated with open source software are driven by companies where open source is just an input into a cloud-delivered service.
- VCs continue to pour millions of dollars into companies that lead open source projects (e.g. Pivotal ($250M+), CoreOS ($28M), Rancher Labs ($20M), Weave ($15M)) – $300M+, and that’s just in the last week. Fairly recently, we’ve seen Mesosphere get $73M, Mirantis get $200M….and the list goes on and on.
- It’s not clear that these same VCs have any idea what business models are viable when the core technology is based on open source software. The only model they discussed in the podcast was a SaaS vendor that included aspects of open source in their offering – e.g. managed open source.
As I wrote last week, the 2016 Open Source Jobs Report highlights that 87% of companies are struggling to find talent for emerging open source technologies. And as I talk to more and more of these companies, many of them will tell you that they initially thought they were selling to developers, but it was often the operations or security teams that held the budget to make the buying decisions.
Here’s where I’m confused:
- It seems like the best way to monetize open source is to either deliver it as a cloud service, or be in services/support (e.g. Red Hat, Chef, Puppet, etc.). The services/support model is limited in scale because it’s people-centric.
- The largest cloud providers (AWS, Azure, Google) are getting more proficient at taking open source projects and turning them into services (e.g. container schedulers, etc.). They are eliminating the operations skills-gap that’s called out in the Open Source Jobs Report.
- The large cloud providers are not acquiring the open source startups, especially in the infrastructure domains.
- The largest contributors to open source projects tend to be the largest vendors, who can afford to pay engineers to stay focused on open source projects (e.g. Intel, IBM, Red Hat, Cisco, HPE).
So why do we continue to see all this VC funding?
- Do the VCs expect the large traditional vendors (except Red Hat) to try and use open source acquisitions as a way to prevent the ongoing commoditization of hardware? This doesn’t seem to work, as most of these acquisitions have been < $200M (except Citrix buying Cloud.com for $400M) and most don’t align with their existing go-to-market models.
- Do the VCs expect the cloud providers to start acquiring the startups for talent? That could work, but probably not at the large valuations that now exist.
- Do the VCs expect that developers, who have traditionally not held large budgets, will start becoming large buying centers? And will they be using that software on-premises, or in the public cloud?
- Do the VCs expect the startups to reach IPO? At least for the infrastructure companies, that path has not yet shown success.
So many of these investments are in areas that have overlapping technology and present way too many choices to Enterprises that don’t have the skill-sets in-house to make those long-term decisions.
At the end of the day, the only thing I can logically think of is that the VCs see these investments as relatively small, but strategic enough to kick-start enough development that the cloud providers will someday add as a valuable service. Then the VC investments in application-centric services can take advantage of those services and hopefully scale faster.
I continue to not be able to connect some of these dots together. Would be interested in hearing how others see these investments evolving…
In the past, I’ve written a number of times (here, here, here) about how it is getting easier to learn new technologies. It’s true – between free software, tons of online blogs and tutorials, and free public cloud services, there has never been a better time to learn about new technologies.
But cost is rarely the inhibitor to learning anymore. It’s things like time and priorities and motivation. And most importantly, learning new stuff can be difficult. I was on a run recently, listening to my friend Jason Edelman (@jedelman8) on the Datanauts podcast. Jason is one of my favorite stories about learning new stuff: he started at night, out of a passion for automation. It was a hobby for a while, but it evolved into his full-time business…..which is going to get really cool very soon (keep an eye on it!!).
As I listened to Jason talk about his journey, it dawned on me that there isn’t a roadmap for this stuff. And even when you think there is a roadmap, it probably has a few dozen forks in it because technology is evolving so rapidly and far too often engineering communities try and solve the same type of problem about 6 different ways.
Is there a Formula for Learning?
As I continued running, I wondered if there was an easier way to help people think about what to learn, or how to learn it. Is there a simplified formula that could be used, similar to how financial planners tell you what mix of stocks and bonds to invest in for retirement?
It’s not perfect by any means, but here’s what I came up with.
- Take your current age and subtract it from 60 (e.g. 60-35=25).
- Then take the answer, and round it to the nearest decade (e.g. 32=30, 35=40)
- Divide that number by 10, and then subtract 1.
- This final number is the suggested number of new technologies or skills you should learn every couple years if you want to stay relatively competitive in technology markets.
Example: Let’s say you’re 35 years old. So 60-35 = 25. We round that up to 30, divide by 10 (=3) and subtract 1 (=2). At age 35, you’re in a peak earning range and upwardly mobile, so learning 2 new technologies (or skills) should keep you ahead of the pack.
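For the curious, the back-of-the-napkin formula above works out to this (rounding to the nearest decade with halves rounding up, per the 35 → 40 example):

```python
# The learning formula: (60 - age), rounded to the nearest decade
# (halves round up, matching the 35 -> 40 example), divided by 10,
# minus 1.
def skills_to_learn(age: int) -> int:
    remaining = 60 - age                   # e.g. 60 - 35 = 25
    decade = ((remaining + 5) // 10) * 10  # 25 -> 30, 32 -> 30, 35 -> 40
    return decade // 10 - 1                # 30 -> 3 - 1 = 2
```

So a 35-year-old gets 2, and a 25-year-old (with more career runway ahead) gets 3.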
Where Should I Focus?
This is always a complicated question, so let’s try and simplify it a little bit. Instead of choosing a specific technology, focus on a type of technology. For example, let’s suppose that you want to learn about data center automation. Instead of stressing over Chef vs. Puppet vs. Ansible vs. SaltStack vs. whatever, just pick one of them. The biggest thing you’ll learn is ‘how to learn’. Becoming an expert in a year or two isn’t the goal; in fact, Gladwell would say you’d need 5 years of dedicated focus to become an expert (the 10,000-hour rule). Instead, the goal is to learn the new paradigm of learning. The same thing goes for learning a new business skill or other relevant skill for your industry.
Just Get Started
What I’ve always learned when trying new things isn’t that the new stuff is so hard to learn, it’s your own ability to deal with frustration. There’s some personal ego to get over. But if you can find a buddy to learn with, or a great tutorial, or just that you’re willing to laugh at your early failures, it usually works out alright.
The key is to get started, try not to be overwhelmed, and stay persistent with the baby steps. Our industry is constantly changing, which means that you need to constantly change as well. Learning how to learn is the most important thing you can learn.
One of the most frequent questions asked by IT organizations is, “Which applications should run in the cloud?”. While there are no definitive answers, and various people will try to put this into a bunch of categories, it is a question that continues to be asked by many IT organizations.
To some extent, the answer often follows how the person answering makes money in the industry. Are they aligned more to public cloud (and software developers), or more to private cloud (and often hardware upgrades)?
Today, as I scrolled through my Twitter timeline, I saw two opinions that got me thinking that maybe there will be two schools of thought going forward.
One school of thought was that most applications need to live on-premises, because that’s where the majority of “users” reside, and that users need to be close to the applications. I can understand this mindset if someone sells technology that primarily lives on-premises, or they are primarily focused on IT-centric applications (Microsoft, SAP, Oracle) or things like VDI. While these applications might not live on a mainframe, they tend to adopt the mainframe mindset of focusing on local applications, local responsiveness and very strict response-times and availability levels.
Another school of thought is that most applications will eventually live in the public cloud. Recently, I’ve been listening to the Engineers.Coffee podcast, which is run by Donnie Flood and Larry Ogrodnek, who built the Bizo business (sold to LinkedIn) entirely on AWS – go download it now! In a recent show, they talked about how easy it was to work with AWS’ Machine Learning (ML) system.
This example builds on the recent news and momentum from Microsoft Azure and Google Cloud Platform around ML and AI.
If the focus of digital business applications is moving to mobile platforms, and ML services can significantly simplify the ability to get “data science” services into an application, there is an excellent chance that we’ll see this area begin to rapidly expand in usage over the next 2-3 years. Even Google’s CEO called out Artificial Intelligence as the future of his company.
What is the Priority of the Business?
By no means is this an exhaustive set of options for customers – Mainframe-mindset optimization or Machine Learning – but it does look at a customer’s priorities. Are they looking to optimize existing environments through things like Virtualization and Converged Infrastructure, or are they focusing budgets on market-facing mobile applications that can be augmented by ML?
One of my kids turned 10 years old recently and it got me thinking about some of the things that have happened in their lifetime. My goal of this exercise wasn’t to remember lost teeth or their 1st bike ride, because those events are captured in 1000s of pictures – as parents seem to do these days. Instead, my goal was to put in perspective the amount of their life that was affected by the things that happened since 2006.
Here’s a short list:
- President Obama is elected
- AWS is created
- iPhone is created
- Tesla ships their first car
- Netflix begins streaming content
- Twitter/Facebook/Instagram/WhatsApp/Pinterest are created.
In just ten years, the IT industry, Automotive industry, Telecommunications industry and Media industry have been significantly disrupted. Multiple trillions (yes, that’s “T” trillions) of dollars in economic value have been radically changed in just a decade. In many ways, several of these accomplishments have built upon each other, especially because of AWS and the iPhone. And those technologies were the foundation of getting a US presidential campaign funded and ultimately leading to two elections.
The 1990s and 2000s were all about building out the foundation of the Internet. This created an environment where network connectivity was everywhere (HQ offices, remote offices, coffee shops, your couch and eventually airplanes). This removed the barrier of geography for businesses and communities around the world.
The last ten years have been built on that ubiquitous foundation into a world where all the information is now in everybody’s pocket, and the barriers to technology resources are never more than a credit card swipe away.
In 2001, we couldn’t imagine a world where 100Mb to your desktop (or house) wouldn’t be bandwidth overkill. Now 1Gb/s fiber to the home is becoming commonplace in many cities. In 2008, we couldn’t imagine a set of applications more compelling than some of the web “mashups” that had emerged out of the Internet bubble. Now $1B+ companies are being built with a storefront that is just an “app” on a 6″ piece of smartphone real estate.
The access barriers are gone. The infrastructure barriers are gone. And with the spread of open source software, the experimentation barriers are gone. So we’re now at a stage where the ability to be creative far outweighs the ability to fund a great idea.
While I still worry about a future where jobs may be dominated by drones, robots and artificial intelligence, the possibilities to build on those new frameworks are very exciting as well.
The pace of change has never been faster. It’s hard to know what the future will bring, but there’s never been a time when individuals can control their destiny as much as now. The transition from thinking about technology for technology’s sake, to thinking about technology for new business ideas is going to be fun for the next 5-10 years.
In 2014, IT analyst firm Gartner introduced the concept of Bi-Modal IT as a way to explain that legacy applications and next-generation applications are extremely different in how they are developed and operated. The framework suggests that IT should be separated into two groups, one focused on existing applications and the other on future applications.
Advocates of Bi-Modal IT tend to align with Gartner’s “Mode 1” group, those invested in existing applications. They argue that existing IT (and business) organizational dynamics will create too much friction to sustain rapid change.
The opponents tend to have experience with the new “Mode 2” applications. They argue that learning to be agile is a critical survival technique for any company hoping to survive the digitization of the 21st century.
How should CIOs think about the digital transition?
While there is no one-size-fits-all approach to the Bi-Modal IT debate, there are three areas that should be considered in any strategy focused on balancing the old and the new:
- Focus on Inward-Facing Context – Internal applications tend to focus on driving productivity for the business. Once these applications are deployed, IT organizations focus on making them highly available, measuring their success by uptime metrics. These applications (e.g. Email, CRM, ERP, HCM, Collaboration) rarely provide competitive advantage for the business in today’s world, but they can still be optimized to reduce overall IT costs. This is an area where CIOs should be looking at virtualized, converged infrastructure systems and flash storage to reduce ongoing operational costs, as well as automating repetitive tasks.
- Focus on Outward-Facing Context – As the market evolves and customers’ buying habits change, CIOs need to find ways to manage new and unexpected opportunities. For example, who could have predicted that Pinterest would have such an impact on consumer purchasing habits? As IoT projects emerge, how large will they scale? These opportunities bring unexpected challenges to IT that are outside of its normal comfort zone. They demand that IT understand APIs, Cloud Computing and the DevOps principles of agility and frequent updates. These applications offer the opportunity to significantly change the business and bring new levels of differentiation to the market.
- Focus on Bridging the Internal and External Applications – While Inward-Facing and Outward-Facing applications often have very different characteristics, the reality is that many companies need to build a bridge between them. For example, how do you bring credit card transactions to a new mobile app? Or integrate 3rd-party data from a partner into a new analytics application for the sales team?
The best CIOs will recognize that all three areas will need focus and execution to be successful over the next 3-5 years of technology transitions. Each area brings unique characteristics, and they each need skills that are able to work closely with the other two areas. Keeping these focus areas (or teams) isolated will certainly lead to more complicated integration in the future.
The Bi-Modal IT debate will continue over the next few years, as established companies attempt to compete with new startups in every industry. Successful companies will realize that their legacy can be their advantage, but bridging it with the technology of the future will give them a way to move the business forward.