From Silos to Services: Cloud Computing for the Enterprise


March 21, 2016  7:40 PM

Sometimes I hate the Internet…

Brian Gracely
AOL, Cloud Computing, Internet, mobile apps, Uber

When I first started using the thing called “the Internet”, it was via something called America Online (AOL). At the time, it was mostly about sending email or reading message boards. There weren’t very many websites yet, and almost no business transactions were happening.

Several years later, when I was working at Cisco during the Internet-growth-then-bubble days, we watched the Internet transform into a global platform where commerce was a natural extension. We couldn’t completely understand it at the time, but we had a sense that John Chambers’ (Cisco CEO) famous line, “the Internet will change the way we live, work, learn and play” would come true in some way.

When I was about 10 years old, I got my first real job as a paperboy for a local newspaper outside of Detroit, Michigan. Delivering those papers in rain or snow or sunshine, and collecting money from customers each month, I learned a lot about discipline and responsibility. I kept that job for three years, before moving on to other teenage jobs like being a stock clerk at a grocery store, mowing lawns, or folding shirts at a mall.

I bring this up for a couple of reasons. I remember the first time I heard about a newspaper shutting down its daily circulation because readership had dropped. All of its daily content was now on the Internet and people no longer wanted that delivery service. It was a strange moment, because it was the first time I connected the dots that my new profession was putting my old profession out of existence. Sort of a cars vs. horse-and-carriage moment for me. And even understanding the evolution of work, it hit close to home how technology can displace jobs.

Now that I have children, we sometimes discuss the types of jobs that they might have someday. We talk about studying hard in school and things like college, etc. But we also talk about things like how to manage money, and jobs that they might have to make enough money to go out with friends or buy some new clothes.

And then I read about things like the grocery store without any employees. Or Uber for Lawn Care. And then I’m torn about how much of a good thing all this technology is. Sure, convenience is great for consumers, but there is a broader ecosystem of activities in play here. Even the most basic jobs teach kids responsibility, accountability, and how to have basic human interactions. And if they don’t think they make enough money, they may become motivated to work harder, or become the owner of the shop themselves.

It’s great that kids are learning to code at an early age, but I don’t know that I really want to live in a world where the goal of every kid is to become a data scientist. Or that the goal of every entrepreneur is to replace a bunch of human interactions with a mobile app.

I understand that technology evolves. But sometimes I wonder if the evolution is really a good thing…

March 19, 2016  3:12 PM

Will IT Jobs Evolve with Public Cloud?

Brian Gracely
API, AWS, Azure, Cloud Computing, Google Cloud, Public Cloud, Security

This past week, I had the opportunity to host a CrowdChat discussion about Cloud Computing as a preview of the Cloud Connect track at Interop. One of the questions I asked the audience was:

Interop Cloud Connect – CrowdChat – March 2016

Obviously, this is a hypothetical question and somewhat extreme, as it would be enormously complicated and expensive for any company to move 100% from on-premises to a public cloud. But my goal was to see how IT organizations view their role as more of their company’s applications move to public cloud services (IaaS, PaaS or SaaS). Far too often, I hear people more concerned about whether their role will be eliminated than focused on how it could evolve.

So let’s look at some roles in the IT industry and how they could evolve as more applications move to public cloud:

Application Developer: If we look at the results of the 2016 Developer Survey from StackOverflow, it’s difficult to see how those roles will change that much. Many are trying to evolve from Waterfall processes to more Agile processes, but the trend toward hiring more application developers keeps growing.

Enterprise Architect: Regardless of where applications are deployed, there is still a need for Architects to connect the business challenges to the technical possibilities. If anything, the breadth of services offered in the public cloud could make their evolution more interesting.

IT Managers: Regardless of how much an IT organization evolves to more integrated DevOps collaboration, there is still a need to manage teams, manage budgets, manage projects and work closely with vendors (or open communities). IT managers may also pick up more work as companies migrate to using more SaaS applications.

Security Teams: The borders for security have been breaking down for at least a decade, as people work remotely from central offices, use smartphones, and connect over WiFi from just about anywhere. So the need for security teams in the cloud continues to be a high priority, and those skills are in high demand.

Networking Teams: Networking people tend to worry about who will manage the deployment and operations of the network if it’s running in the cloud. While the rack-and-stack pieces go away, most other functions will remain in place. Plus, many applications will be deployed in a hybrid model (public and private), so they will need to manage remote interconnects and security across new boundaries. In the interim, networking professionals should be building a better understanding of software-defined networking, as that is essentially what the public cloud uses.
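To make that last point concrete, here’s a rough sketch of what a “network build” looks like when it’s all APIs. This is just my own illustration (Python with boto3; the region, CIDR blocks and tag name are made-up examples), not something from the CrowdChat:

```python
# Minimal sketch: provisioning a cloud network entirely through APIs (boto3 assumed).
# Region, CIDR ranges and tag names below are hypothetical examples.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Rack and stack" becomes a single API call that returns a virtual network.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Subnets, routing and security are likewise defined in software.
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Tagging stands in for the documentation/CMDB work networking teams still own.
ec2.create_tags(
    Resources=[vpc_id, subnet_id],
    Tags=[{"Key": "Name", "Value": "hybrid-interconnect-test"}],
)

print(f"Created VPC {vpc_id} with subnet {subnet_id}")
```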

Storage Teams: While the provisioning of storage is significantly easier in the public cloud, data still needs to be managed over its lifecycle – this means backups, snapshots, and synchronization across geographic regions. Many of these functions are beginning to get automated within Public Cloud services, as well as becoming integrated features within other services (e.g. Database-as-a-Service). Of all the teams, storage is one of the most heavily impacted by public cloud.
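Here’s a small sketch of the kind of lifecycle task that stays with the storage team even when provisioning is a one-liner – a backup plus a cross-region copy for DR. Again, this is my own illustration (Python with boto3; the volume ID and regions are hypothetical):

```python
# Minimal sketch: the data-lifecycle work (backup + cross-region copy) that remains
# even when storage provisioning is trivial. boto3 assumed; IDs/regions hypothetical.
import boto3

source_region, dr_region = "us-east-1", "us-west-2"
volume_id = "vol-0123456789abcdef0"  # hypothetical volume

ec2_src = boto3.client("ec2", region_name=source_region)
ec2_dr = boto3.client("ec2", region_name=dr_region)

# Take a point-in-time backup of the volume.
snap_id = ec2_src.create_snapshot(
    VolumeId=volume_id, Description="nightly backup"
)["SnapshotId"]

# Wait until the snapshot is usable before replicating it.
ec2_src.get_waiter("snapshot_completed").wait(SnapshotIds=[snap_id])

# Synchronize it to another geographic region for disaster recovery.
dr_snap_id = ec2_dr.copy_snapshot(
    SourceRegion=source_region,
    SourceSnapshotId=snap_id,
    Description=f"DR copy of {snap_id}",
)["SnapshotId"]

print(f"Backed up {volume_id} as {snap_id}; replicated to {dr_region} as {dr_snap_id}")
```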

Virtualization Teams: Even more so than storage, virtualization is heavily impacted by public cloud. Virtualization is essentially invisible in the public cloud. Things like “vMotion” or “Live Migration” just happen if they are supported on a specific cloud. This is an interesting turn of events, because virtualization was considered “moving up the stack” within the data center just a few years ago.

I’ve discussed before that some other functions, such as managing APIs, managing cloud costs, understanding data-sovereignty law and managing compliance, along with many new ones, will be in high demand. As people have been saying for 10-15 years, being able to evolve skills “up the stack” will be even more valuable as more applications move to the public cloud.
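As one example of what “managing cloud costs” can look like as a hands-on skill, here’s a sketch that pulls per-service spend from a billing API. I’m using the AWS Cost Explorer API via boto3 purely as an illustration (my choice, and the dates are hypothetical):

```python
# Minimal sketch: "managing cloud costs" as a scriptable task, using the AWS Cost
# Explorer API via boto3 (my example, not something named in the post).
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2016-02-01", "End": "2016-03-01"},  # hypothetical month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per service, highest first, so a cost owner can spot the big movers.
for period in resp["ResultsByTime"]:
    groups = sorted(
        period["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for g in groups:
        service = g["Keys"][0]
        amount = float(g["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")
```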

How has your IT organization changed as applications have moved to the public cloud?


March 12, 2016  11:12 AM

6 Things I Don’t Understand about Internet of Things

Brian Gracely
AOL, AWS, Azure, Cloud Computing, Google, IBM, iot, Oracle

As I cover Cloud Computing, we’re always looking slightly ahead to see what’s on the horizon in terms of new features or trends. Over the last 6-9 months, all the major Cloud Computing providers (AWS, Azure, Salesforce, Google, Oracle, IBM, etc.) have either announced or implemented the early stages of their IoT (Internet of Things) offerings. In some cases the focus is on streaming data or data processing services, in other cases it’s new security services, and for others it’s unique new capabilities like serverless computing or device virtualization.

We’ve all heard the predictions about the size of IoT, from 50B devices to $4T in new global value creation. We’re even beginning to see some predictions about the growth in the infrastructure needed to support it on the backend.

Personally, I’ve been “doing Internet stuff” for over 20 years now, and it’s been incredible to see how it’s grown and changed the way people live, work, play and learn (credit to my old boss John Chambers for that quote). Thinking about how IoT will impact the next 20 years is an exciting prospect.

But even with all the progress that’s been made so far, I still have some basic questions that I haven’t quite resolved in my head:

[1] Will standards exist, or will IoT go through a walled-garden phase? We saw the early stages of the Internet move from University/Research networks to the walled gardens of AOL and Compuserve and MSN. Today we have the Apple AppStore and Google Play for mobile apps. Will there be open standards for IoT, or will we go through phases of proprietary protocols and marketplaces?

[2] How to Power the Devices? If you’ve ever been anywhere with a dying smartphone, you begin to realize that power ports aren’t always accessible. You’ve seen the huddled masses at airports, or cord-sharing in the backseat of a car. Now put yourself on a farm, or a two-lane road outside of town, or the middle of the ocean. The ability to easily get power to these locations becomes much more complicated. This will either become a massive bottleneck to IoT progress, or we’re going to see some incredible innovation in battery technologies over the coming years. Hopefully it’s the latter.

[3] How to Network the Devices? Going hand-in-hand with the power challenge is the networking challenge. Unless WiFi ranges get significantly better, this communication will need to be carried over cellular signals. Not only is this a power draw, but the bandwidth is often limited in high-density areas (e.g. been in a packed stadium before?) or remote locations. Is cached data useful to IoT applications, or will it need to be real-time to provide value? This will need to be considered by IoT application architects and the associated network architects.
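One common answer to the cached-vs-real-time question is to buffer readings at the edge and ship them in batches whenever the link is up, trading a bit of latency for bandwidth and battery. Here’s a generic sketch of that pattern (plain Python; send_batch and link_up are hypothetical stand-ins, not any particular IoT platform):

```python
# Minimal sketch of edge-side buffering: readings are cached locally and flushed in
# batches when connectivity allows, trading latency for bandwidth and battery.
# Plain Python; send_batch() and link_up() are hypothetical stand-ins for a real uplink.
import time
from collections import deque

class EdgeBuffer:
    def __init__(self, max_batch=100):
        self.max_batch = max_batch
        self.readings = deque()

    def record(self, reading):
        """Cache a reading locally (always cheap, works offline)."""
        self.readings.append(reading)

    def flush(self, send_batch, link_up):
        """Ship cached readings upstream only when the (cellular/WiFi) link is up."""
        while link_up() and self.readings:
            batch = [self.readings.popleft()
                     for _ in range(min(self.max_batch, len(self.readings)))]
            send_batch(batch)

# Hypothetical usage: sample a sensor periodically, flush opportunistically.
if __name__ == "__main__":
    buf = EdgeBuffer(max_batch=50)
    for i in range(5):
        buf.record({"ts": time.time(), "soil_moisture": 0.31 + i * 0.01})
    buf.flush(send_batch=lambda b: print(f"uplinked {len(b)} readings"),
              link_up=lambda: True)
```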

[4] Where does the Data go? At its core, IoT is about collecting data and making decisions. But where does the data go? Does it stay local, at the edge or a nearby cluster? Or does it get centralized in a cloud data center? This is where bandwidth challenges come into play, as well as data management. Wikibon’s David Floyer recently looked at the cost of edge computing vs. cloud computing for a video surveillance application. I’d love to see some insight about where data goes for various types of applications.

[5] How to Secure all those devices? Every day now, there’s a story or two about a security issue with a device that could be considered IoT. Whether it’s voice recognition with your new HDTV, or a security bug in the Linux kernel, the fear of a massive security threat is balancing the hype of IoT progress. Internet 1.0 is sort of a hodge-podge of security, so how will Internet 2.0 do?

[6] How to Manage all those devices? It wouldn’t be a proper IT discussion without putting management last. Beyond network bandwidth, management and operations are where all the cost resides, but our industry has a tendency to talk about them last. Managing 1000s of servers is difficult. Managing 10,000s of mobile devices is difficult. Now multiply that by several orders of magnitude. The existing tools aren’t designed for that scale. So how will companies manage all those devices?

The hype of IoT is fun to think about. It will create lots of new industries and new businesses. It’ll take a while for some of these challenges to get solved, so I’ll be watching to see how quickly the industry makes progress.


February 29, 2016  3:52 PM

The Cost of AWS Training Just Got 50x Cheaper

Brian Gracely

Every week, when I go out to my mailbox, there is a blue card from Bed Bath & Beyond that offers a 20% coupon on a single item. Most of the time, it goes in the trash because I don’t have an immediate need for new towels or a waffle maker.

I mention this because that coupon has sort of become my barometer for evaluating technology claims. If something is 20-50% cheaper, as is often advertised, I tend to ignore it because it doesn’t really make a material dent in the economics of technology. Those levels of savings are typically only available as a Day 1 cost, or are measured against an old technology (or business process). If I don’t have that savings in my current technology, I’ll usually get it by default in the next buying cycle, across many technology choices. This is the beauty of commoditizing hardware and the evolution of open-source software. Vendors no longer chase each other’s R&D, but rather they chase the open communities or as-a-Service offerings from the cloud. The differentiators are moving to people skills, improved process and operational efficiency.

Last week, I wrote that AWS is changing the rules of the IT industry. I didn’t say they were winning, since plenty of other IT companies make more revenues, but they are definitively driving the changes on the large chess board.

When I saw that the AWS Certified Solutions Architect certification was the #1 top-paying certification, it got me thinking. That title always used to be held by Cisco (CCIE, CCNA) or Microsoft (MCSE, MCSA, etc.). Those were large companies that had dominant offerings in their respective markets. Smart people went where the money and jobs were. Now we’re seeing that shift towards AWS. But why? Here are a couple of thoughts:

  • For a larger-sized company ($5B+ in revenues), AWS is growing faster than anyone else in the IT industry.
  • Companies are trying to determine if AWS could be a potential alternative to their IT department, which has given them high levels of frustration for many years.
  • Companies are trying to figure out a “digital business” strategy, and they are seeing that the popular examples are currently running on AWS (Netflix, AirBnB, Uber, etc.). Maybe that’s the place to get started, instead of within their own data center or with their existing IT team?

AWS offers a great set of free training on a per-product basis. When this is married to their Free Tier of service for most AWS services, it’s an excellent starting point for learning about AWS. But it doesn’t align itself to structured training or the specific topics needed to pass certifications. This is where a new company, A Cloud Guru, comes into play.

Started by some AWS experts, A Cloud Guru focuses not only on helping students understand AWS technologies, but specifically on helping them pass the various levels of exams. They are like the Kaplan of AWS certifications. But there are lots of places to learn, so what makes A Cloud Guru so interesting?

  • The training is extremely cost effective. Courses start at $29 (USD). This is about 50x less expensive than most instructor-led courses. [See my note on evaluating cost-savings above.]
  • The UI is user-friendly. It’s instructor-led, but allows me to go at any pace I want: 1x, 1.5x, 2x. It also allows me to easily skip ahead or go back in 15-second increments. It’s like the iTunes player for training.
  • The AWS experience isn’t simulated. Every student gets an AWS account; all learning is done on the real AWS systems. This isn’t simulated, or limited to the equipment dedicated to a lab environment.
  • Since it’s live AWS, the student can take a snapshot and come back to the resources at any time. Everything is done on the student’s schedule and can be as interrupted as needed. This is important because if you’re doing this outside your normal job, life is full of interruptions.
  • Once you purchase a course, you own the rights to it for life. This means that $29 will get you through today’s certification, and the renewal in 2yrs (and beyond that).
  • Courses get updated as AWS adds or changes their services.

This service is a great way to learn about the most popular and fastest-growing technology in the industry. It’s simple, inexpensive, and very professional in how it’s delivered. It’s an investment in your future, and cost effective enough to be worth your time.


February 20, 2016  10:10 PM

Why AWS Makes the New Rules in IT

Brian Gracely
AWS, Azure, EMC, HP, Microsoft, Oracle, Rackspace, Verizon, VMware

Nothing gets the IT industry more riled up than a perspective that puts Amazon AWS at the forefront of anything. Even though most people will admit that Cloud Computing is a legitimate trend in our industry, there is a strange binary reaction to any implications of changes in the status quo. What do I mean by “binary reactions”? Even though there are typically dozens of companies (or open-source projects) that compete in any given segment of the IT market, people tend to think that everything is a binary, zero-sum game. Must be an engineering 1s and 0s thing. Meaning that the new kills the old, and that EVERYTHING will move to the new immediately.

While nobody is actually implying that AWS will be the only major player going forward, there are some interesting trends that seem to imply that the balance of power is beginning to shift more in AWS’ direction. Does this mean they will be the big winner? Who knows. But it’s (IMHO) beginning to feel more and more like the IT game is now being played by AWS’ rules instead of the incumbents’.

  • For many years now, Venture Capitalists (VCs) have not provided funding to startups for CAPEX spending on IT resources; rather, they expect them to use public cloud resources to get started. The game has changed from giving them $50M in funding, of which the first $5M went to Intel, EMC, Cisco and Sun, to giving them $5M and expecting them to focus on hiring and AWS resources. That’s a 10x change in how funding gets allocated.
  • During their Q4’15 earnings call, EMC CEO David Goulden said, “As we look at the external environment, customers continue to be in either transactional or transformational spending mode and in some cases both at the same time. Customers in a transactional spending mode are buying just enough and just-in-time for their traditional environments and we saw this in our stronger maintenance renewal bookings throughout last year. Customers in transformational mode are either transforming their existing IT systems towards a hybrid cloud or building and deploying new digital applications to transform their business.”
  • Cloud providers of all sizes (Rackspace, HP, Verizon) are exiting the market and choosing to no longer compete with AWS. Great customer support, world-class branding and massive network pipes were just not enough to overcome AWS’s years of web-scale cloud engineering. It’s not hard to predict who will be added to this list in 2016-2017.
  • The Wall Street Journal is questioning if AWS is having an impact on the global economy, as IT spending slows for hardware/infrastructure.
  • Engineers from top IT vendors are wondering how AWS is able to keep making money by offering a portfolio that seems to not make money for existing vendors.
  • AWS Certified Solutions Architect is now the #1 Top-Paying certification in the industry. Engineers and customers are voting with their career paths and wallets.

So let’s see:

  1. The source of startup funding is different and leaning towards AWS. Check.
  2. The traditional vendors are not only consolidating because profit margins are falling and it’s difficult to transition an existing business model, but their customers are starting to buy traditional equipment in ways that more closely align to AWS buying patterns. Check.
  3. Major Cloud Provider competition is leaving the market because they couldn’t keep up with the pace of growth and capital funding needed to compete. Check.
  4. The global press is now starting to understand the broader impact that AWS will have on the IT industry, which is a major indicator of economic trajectory. Check.
  5. IT vendors can’t figure out how their competition is making money in areas and ways that they can’t. Check.
  6. IT engineers and customers are betting their careers and business projects on AWS. Check.

My colleague, Dave Vellante, was talking about this a couple years ago. The marginal economics of web-scale computing, at least in AWS’ case, is nearing the economics of software in the 1990s-2000s. We saw what this did to shape the client-server world for companies like Microsoft and Oracle. And maybe those same economics will apply to AWS longer-term as well.

Or maybe they won’t.

Maybe someone else will figure out how to better compete with AWS. But I’m guessing that they will be playing a game that tends to align to the new rules that AWS is rewriting for the IT industry.


February 14, 2016  9:31 PM

4 Simple Ways to Fix Twitter

Brian Gracely
Collaboration, Facebook, Instagram, twitter

It’s been a rough year for Twitter. The stock price is down 75%. The user base is down several hundred million accounts (how many bots?) and the leadership team is going through a significant set of changes and challenges.

TWTR Stock Price – Feb/15 to Feb/16

Last week I wrote that I was concerned about the decline of Twitter and its importance as a community collaboration and learning tool for business. It takes a long time to build a community, or to find a community where you can extract enough value vs. the enduring noise. And it takes quite a while to get used to a service like Twitter, which doesn’t really follow any natural forms of human communication.

There are some ugly issues to fix, especially in the areas around harassment, stalking and bullying – great write up here.

If I were tasked with growing the user base and making it a simpler product to use, here are some suggestions:

[1] Make the Sign-Up process about more than following Celebrities or Brands – Many new users don’t get immediate value because they don’t know who to follow. Right now, the initial sign-up process suggests lots of celebrities and brands (sports teams, fashion brands, etc.) to follow. Instead, suggest a set of curated lists of people that share similar work or personal interests. Allow a person to follow a list of people that work at the same company, or at a past company.

[2] Expand the Character Limit or Group Tweets Together – We’ve already seen Twitter offer a way to “See What You Missed”, so it’s not hard to alter the timeline that someone sees. People should be able to more easily follow threaded tweets and conversations, without being distracted by all the other noise at a given time. Facebook is much more conversational in structure. Twitter could easily allow that as a configurable mode for users that primarily consume information rather than post.

[3] Buy Apps for Basic Interactions, Not Weird Interactions – Twitter owns Vine and Periscope. Video is a great medium and is becoming a major part of how people consume information. But Vine is a 6-second loop and Periscope disappears after either 8 or 24hrs. On the other hand, Facebook owns Instagram, which lets you do basic pictures and longer videos that don’t go away. Most people want those services. You could wrap ads around those services. But Facebook owns them, not Twitter. Twitter owns the quirky apps because it can’t figure out if they are core communications or artistic expression. It’s the difference between being mainstream and being a niche. Both are fine, but know what you want to be.

[4] Turn Likes/Favs into Searchable Pages – People use the Fav/Like in different ways. I prefer to use them like bookmarks for later reading. Twitter should turn them into a searchable, regularly updated page, like a magazine, and allow others to subscribe to those pages as well as to people. It’ll help people find interesting topics, as well as the people driving those conversations.

So that’s my short list of things Twitter could do to become a tool that is easier for people to adopt and gain knowledge from. What are some of your ideas?


February 7, 2016  5:49 PM

Is the Next Era of Collaborating Coming Soon?

Brian Gracely
Collaboration, Facebook, Github, Slack, twitter

Let me begin by saying that I have no inside information or insight into any of these companies’ strategies. Instead, I’m just mentally connecting some dots and starting to wonder if any of these recent trends will have an effect on me or the communities I interact with.

Lots of Headlines in the News recently

Are Your Communities Changing?

While meeting with various people in Silicon Valley last week, one trend (amongst many) kept coming up in discussions – something was wrong with Twitter. Usage rates were down; people didn’t like all the UI changes; they might get acquired, etc. As a self-proclaimed Twitter junkie, I find this somewhat concerning. Twitter is a very valuable service to me, especially as someone that doesn’t live in the heart of Silicon Valley. I’d be more than happy to pay a monthly/annual fee to use the service, but Twitter doesn’t offer this option. It would have a huge impact on my professional life if it went away. Rebuilding those communities would take a very long time on another service.

A couple weeks ago, GitHub had a very visible outage. I heard about it from a few people that were doing technical demonstrations, but I’m sure it had a wider impact on many developer and operational organizations. As a public service, it’s now massively integrated into the workflow of many companies. It’s where software lives today. Full stop. And to hear that they are going through some changes and potential pivots makes you begin to wonder what impact that will have on developers moving forward. Rebuilding those workflows and code repositories (and collaboration services) somewhere else would take a very long time on another service.

Slack has grown to widespread prominence over the last couple of years, becoming one of the de facto places where group collaboration happens within companies and across communities. When I started the EMC {code} group, we used it extensively and it essentially replaced email for 90+% of our communications. Having to move back to email as a central communication channel has been somewhat painful. Slack is often listed as one of the Silicon Valley unicorns, but how well is it really prepared to take on companies like Google or Microsoft in a long-term competitive fight? Could it eventually go away? Rebuilding those communities would take a very long time on another service.

Facebook. Ugh, Facebook. It’s great for sharing family pictures. It’s also clogged with endless political rants and garbage surveys and mind-numbing clickbait articles. Unfortunately it continues to grow and grow and grow. It’s a web property, but it feels more painful than a bad Microsoft application from the 2000s – cluttered with mostly garbage, and constantly changing the UI or adding features that you’re not really sure how to use. And as it continues to grow, there is a chance that people might start using it for more than personal communications. We might eventually get forced to further blur our personal and professional lives.

The Future of Collaboration / Community Tools?

The last 8 years of collaboration tools have been pretty awesome. Between smartphones and interactive communities, it’s been very good. But for some reason, it’s starting to feel like maybe we enjoyed them a little too much, didn’t pay for them enough, and now the perpetual search for revenues might leave many of us searching for new places to communicate. I hope I’m wrong, but there are a few dots that are starting to line up…in a bad way.


January 30, 2016  4:43 PM

Building the Cloud Computing track at Interop

Brian Gracely
AWS, Azure, IaaS, Interop, iot, PaaS, SaaS, Security


In the past, I’ve had the opportunity to help shape specific tracks at large vendor events (EMCWorld, DevOps Day @ EMCWorld, Cisco Live, VMworld), but those experiences were all driven from within the specific vendor’s view of the world. I had never had the opportunity to help shape a track at a large independent event, but that changed when the Interop committee reached out to invite me to chair the Cloud Connect track. As there are many events happening every week, around the world, I thought it might be useful to share some of my experiences in how we shaped the Cloud Connect track at Interop (May, 2016).

Finding Speakers

The best events are driven by community, usually through a Call for Papers (CFP) process, sometimes called a Call for Speakers. With so many events happening each year, I wish there were a more centralized service to keep track of these dates and deadlines across any technical event. Sites like Lanyrd exist, but I’d like to see them allow people to plug in their upcoming schedules and give more proactive notifications. Many people reached out to me after the deadline and selection process had completed, unaware that the CFP process had taken place. Nevertheless, it’s important to let lots of ideas flow into the process and gather feedback about the types of topics that people are passionate about discussing.

Selecting Speakers, Sessions and Topics

For the Cloud Connect track, I believe we had about 100-150 submissions for approximately 12 slots. This is a great thing in that there is a huge amount of variety to choose from. The downside is that you have to tell a large number of people that their sessions were not selected. As I began the selection process, I used a few basic “elimination” criteria:

  1. If it looks like a vendor/product commercial, it’s probably not going to be included. That’s what vendor websites and webinars are for.
  2. If it overlaps with several other sessions (same idea, same topic), I’ll tag it, and sort through to find the best within that topic.
  3. If it includes words or phrases like “a journey…”, “the road to…”, “in a box…”, it gets eliminated, because people want to implement things, not listen to another version of a journey that is really a vendor pitch (see #1 above).

Once I had narrowed it down to 30-40 selections, I started crafting a framework of topics and concepts that I wanted to fill. I had several goals in mind:

  1. Make sure attendees are getting educated on the leading topics, technologies and trends.
  2. Make sure that we include a mix of technology and business topics, especially the more complex and pragmatic business areas (pricing, budgeting, compliance, etc.). Give attendees things they can go back and implement when they return from the event.
  3. Make sure the speakers are a mix of vendors and end-users. Vendors bring insight into areas where major investment and innovation is happening. End-users bring the reality of how technology aligns to their specific business problems.
  4. Don’t be afraid to include a couple topics that are a little farther out, but are disruptive enough to get attendees thinking about possibilities in the future.
  5. Look for dynamic speakers and diversity of speakers. Create engaging sessions. Sometimes their topics weren’t great, but I could work with them to alter them to fit into the overall framework of the track.

Would YOU want to Attend?

This was the last stage that I went through – the sniff test. Would I want to attend these sessions?

At the end of the day, I’m proud of the track that we pulled together. We have a great mix of topics – education about Public Cloud services (IaaS/PaaS/SaaS); education about Private and Hybrid Cloud services; discussions on Cloud Native Application technologies (Containers, PaaS); how to purchase Cloud services; how to secure Cloud services; how to think about the organizational impact of building/buying Cloud services – as well as some stuff about IoT to keep people thinking about the future.

Hopefully some of that thought process is helpful to you if you’re planning an event or coordinating a set of sessions at your local meetup.


January 24, 2016  4:34 PM

A Look at 2016 – “Hey You Kids, Get Off My Lawn!!” Edition

Brian Gracely
AWS, Cisco, Dell, DevOps, Docker, Hardware, HP, IaaS, PaaS, Public Cloud, VCE, VMware

2016 is going to be an interesting year in technology. I’ve predicted that it’s the year where the Public Cloud markets begin to make the rules of the IT industry and everyone will need to figure out how they survive or fail under those new rules.

  • There are Presidential Elections happening in the US, which causes leaders to make projections on how a new administration might impact the economy.
  • Interest rates have recently risen (albeit slightly) in the US, which impacts investments and overall risk-tolerance for companies.
  • It was about 8 years between the tech-bubble burst of 2000/2001 and the housing-market crash of 2008, and now it’s been 8 years since that event. VCs are already beginning to back off new funding rounds and people are calling for the end of the Unicorn Era.

So I thought that I’d put on my “Hey You Kids, Get Off My Lawn!!” hat and take a look at the technology landscape of 2016. NOTE: I’m not endorsing any/all of these perspectives, but it’s a useful exercise to occasionally view the world from a 180° different, contrarian perspective.

Hardware: Yes, it’s a commodity. Yes, the leading companies that supply it are slowing their growth and beginning to pay dividends in a model that seems more like a public utility and less like a tech rocketship. But all that software needs to run on something, and the consolidation within this segment of the industry is already happening. And customers have tons of inertia (buying patterns, technical skills, existing data-center facilities, compliance models, etc.). All of the major companies now sell almost all the hardware elements, and many systems are consolidating around common x86 or ODM elements. We’ll probably see a few more companies fall off the playing field, but 5-6 big ones should remain for quite a while.

Public Cloud: AWS might be bigger than the next 14 competitors combined (see: Gartner IaaS MQ), but it’s still only expected to have done $7-8B in revenues in 2015, and it’s an 8-year-old organization. It’s trying to displace IT leaders such as Microsoft, Oracle, EMC, HP, Dell, Cisco and others, who have massive cash reserves to fight long battles. Competitors like Google still haven’t gotten fully engaged, and large potential threats like Facebook and Apple haven’t really entered the game. Then throw in the inertia of trillions of dollars of legacy applications, systems and people-skills, and it makes Public Cloud a long game that is nowhere near being decided. Has the industry ever seen a single company command such a dominant perspective from a sub-$10B revenue base? Which sets of rules will everyone play from in 2016, or do we continue playing a game with multiple sets of rules?

Cloud Native Applications: While some experts are calling Pivotal (and Cloud Foundry) the early leader, we still don’t see many revenue announcements from the leading PaaS players. Most announcements are still focused on vendor investments, community memberships, code contributions and early customer logos. And the market seems to have moved away from the polyglot, cool-new-languages focus of 2013/2014 and is now re-focused on Java and .NET for on-premises Enterprise applications. It’s a middleware replacement that requires operators to learn how to manage the underlying system. And the container management argument seems to dominate the discussion (structured vs. unstructured; DIY vs. pluggable vs. containers-as-a-service) – does this create too much distraction from the previous goals of “software is eating the world”, “digitizing the economy” and building applications faster? Does the Enterprise spend more on Public IaaS vs. Private PaaS, or does it follow the Public trend up the stack, or is Public too risky for large Enterprise spending?

Containers: The leading company, Docker, has a (reported) $2-3B VC valuation, but hasn’t made any earnings announcements or given earnings guidance – and they just bought a company focused on “the next thing” – unikernels. The market is getting extremely crowded with companies that do somewhat similar things – various forms of infrastructure for containers or microservices-based applications – such as Cisco Mantl, CoreOS, Hashicorp, Rancher, Red Hat OpenShift, and many, many others. And some early data (here, here) suggests that adoption in production environments is still not at levels that will disrupt VM usage. Does it disrupt VMs, or Infrastructure, or Config-Management, or all of the above, or just the PaaS ecosystem….or none of the above?

DevOps: Does it come in a box, and what size do I need to order to get my technical organizations onto a single-sheet-of-paper org chart? If a SKU for DevOps doesn’t exist, can I get a SKU for NoOps, or OpsDev, or SecOpsDev? Where is the macro on my spreadsheet to summarize the ROI for empathy, or the HR policy that’s needed to remedy the need for counseling if burnout from pager-duty exceeds the unlimited vacation policy of my SREs?

What other areas need the “Old Man Shakes Fist at Cloud” treatment?

A Dose of Reality?

More than anything, I’d just like to see some revenue numbers out of companies chasing the buffet line of software that is eating the world. We got that in 2015 from AWS and a few others, which made many people rethink how they thought about the shifts in the marketplace. Will we see that in 2016 for other segments of the market?


January 24, 2016  2:52 PM

How Many Engineers Does it Take to Build a Cloud?

Brian Gracely
AWS, Azure, OpenStack, Operations, Private Cloud, Public Cloud, Red Hat

I came across this old picture the other day, which showed a group of people (circa 2010-2011) that were assembled in a room with the task of building a VCE Vblock. This was a team of EMC vSpecialists doing a training session on this “newer” technology. I did some sloppy editing to obscure their faces – but the names and faces aren’t important here.

Building a Vblock (circa 2010-2011)

By my count, there are 12 people in the room, and that doesn’t include anyone that’s outside the frame of the picture. This was a collection of highly specialized engineers, with backgrounds in servers, networking, virtualization and storage. With this training structure, it typically took them a week to build a Vblock system. All of this coordination was needed just to get the system to the point where a company could begin to deploy applications on this rack of equipment. Essentially Day 0.

Fast-forward to 2016 and now most of that configuration and complexity gets done at the factory. In essence, 12 people replaced by a set of scripts. And there are now many other offerings in the marketplace today that will deliver a similar Day 0 experience, without all the people that need to come on-site to build it.

This isn’t a commentary on the technology or the company behind it.  It’s an evolution of our marketplace, and the evolution of business expectations. The days of business patience to wait for infrastructure to get built or prepared in order to run an application are gone. SaaS applications and public IaaS services (e.g. AWS, Azure, etc.) are defining the expectations for business users, not IT departments.

IT Inefficiencies?

Maybe you look at that example and think, “uh oh, that’s going to mean a lot fewer jobs for IT in the future.” While this is possible (although not likely, due to things like the Jevons Paradox), let’s look at this through another lens. Let’s look at it through inefficiencies of cost. With the example above, there were inefficiencies of cost in building data center systems. The cost of having all 12 of those people in a room for a week would be $33,600 (at $100k/person – calculator), and more complex skills could easily push it to $50,000. That’s before any applications were running.
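For what it’s worth, here’s a quick back-of-the-envelope check of that $33,600 number. The loaded-cost multiplier and the 50 working weeks are my assumptions, not figures from the calculator linked above:

```python
# Back-of-the-envelope check of the $33,600 build cost.
# Assumptions (mine): ~1.4x fully-loaded cost on a $100k salary, ~50 working weeks/year.
salary = 100_000
loaded_cost = salary * 1.4        # assumed fully-loaded annual cost per engineer
weekly_cost = loaded_cost / 50    # = $2,800 per engineer-week
build_cost = 12 * weekly_cost     # 12 engineers for one week
print(f"${build_cost:,.0f}")      # -> $33,600
```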

But what about the costs of day-to-day operations? This past week, Red Hat released an interesting study of the operational costs of running a Private Cloud. At the core of the study are metrics that show the operations cost of a Private Cloud (in this case, based on Red Hat technology). In Year 1, the cost is $13,609 per VM. In Year 2, the cost is $8,043 per VM. In Year 3, the cost is $6,264. By Year 6, the cost is $5,200 per VM.

Three years to be able to gain the operational expertise needed to reduce the cost by 50%. Another three years to reduce an additional 15% from those operational costs. 

To put that in perspective, the Year 1 cost is equivalent to $1.55/hour. For $0.662/hour, someone could get an m3.2xlarge AWS EC2 RHEL instance in US-East using On-Demand pricing. Reserved Instance pricing for that instance would be $3,979. That pricing is for a large virtual server that could then be subdivided into many VMs.
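Here’s the arithmetic behind those figures, using the numbers quoted above (I’m treating the $3,979 Reserved figure as a one-year cost, which is my assumption):

```python
# Arithmetic behind the per-VM cost comparison (8,760 hours in a non-leap year).
HOURS_PER_YEAR = 24 * 365

private_year1 = 13_609    # Red Hat study: Year 1 cost per VM
on_demand_hr = 0.662      # m3.2xlarge RHEL, US-East, On-Demand rate
reserved_year = 3_979     # same instance, Reserved pricing (assumed to be per year)

print(f"Private cloud, Year 1: ${private_year1 / HOURS_PER_YEAR:.2f}/hour")   # ~$1.55/hour
print(f"On-Demand for a year:  ${on_demand_hr * HOURS_PER_YEAR:,.0f}")        # ~$5,799
print(f"Reserved, per hour:    ${reserved_year / HOURS_PER_YEAR:.3f}/hour")   # ~$0.454/hour
```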

Will businesses put up with those cost levels, when external options to manage VMs are readily available in the marketplace today?

It’s been happening for a while, but expect to see a much greater push by the marketplace to attack those levels of operational costs, and the learning curves of so many individual companies trying to gain those capabilities themselves.

Do these cost levels represent an inefficiency that we’ll talk about in 5 years, like the room full of engineers it took to build a converged Vblock system in 2010-2011? I’m curious about your feedback…


