From Silos to Services: Cloud Computing for the Enterprise


February 20, 2016  10:10 PM

Why AWS Makes the New Rules in IT

Brian Gracely
AWS, Azure, EMC, HP, Microsoft, Oracle, Rackspace, Verizon, VMware

Nothing gets the IT industry more riled up than a perspective that puts Amazon AWS at the forefront of anything. Even though most people will admit that Cloud Computing is a legitimate trend in our industry, there is a strange binary reaction to any implication of changes to the status quo. What do I mean by “binary reactions”? Even though there are typically dozens of companies (or open-source projects) competing in any given segment of the IT market, people tend to think that everything is a binary, zero-sum game (must be an engineering 1s-and-0s thing): the new kills the old, and EVERYTHING moves to the new immediately.

While nobody is actually implying that AWS will be the only major player going forward, there are some interesting trends that seem to imply that the balance of power is beginning to shift more in AWS’ direction. Does this mean they will be the big winner? Who knows. But it’s (IMHO) beginning to feel more and more like the IT game is now being played by AWS’ rules instead of the incumbents’.

  • For many years now, Venture Capitalists (VCs) have stopped funding startups that plan to spend it on CAPEX for IT resources; instead, they expect them to use public cloud resources to get started. The game has changed from giving a startup $50M in funding, of which the first $5M went to Intel, EMC, Cisco and Sun, to giving them $5M and expecting them to focus on hiring and AWS resources. That’s a 10x change in how funding gets allocated.
  • During their Q4’15 earnings call, EMC CEO David Goulden said, “As we look at the external environment, customers continue to be in either transactional or transformational spending mode and in some cases both at the same time. Customers in a transactional spending mode are buying just enough and just-in-time for their traditional environments and we saw this in our stronger maintenance renewal bookings throughout last year. Customers in transformational mode are either transforming their existing IT systems towards a hybrid cloud or building and deploying new digital applications to transform their business.”
  • Cloud Providers of all sizes (Rackspace, HP, Verizon) are exiting the market and choosing to no longer compete with AWS. Great customer support, world-class branding and massive network pipes were just not enough to overcome AWS’ years of web-scale cloud engineering. It’s not hard to predict who will be added to this list in 2016-2017.
  • The Wall Street Journal is questioning if AWS is having an impact on the global economy, as IT spending slows for hardware/infrastructure.
  • Engineers from top IT vendors are wondering how AWS is able to keep making money by offering a portfolio that seems to not make money for existing vendors.
  • AWS Certified Solutions Architect is now the #1 Top-Paying certification in the industry. Engineers and customers are voting with their career paths and wallets.

So let’s see:

  1. The source of startup funding is different and leaning towards AWS. Check.
  2. The traditional vendors are not only consolidating because profit margins are falling and it’s difficult to transition an existing business model, but their customers are also starting to buy traditional equipment in ways that more closely align with AWS buying patterns. Check.
  3. Major Cloud Provider competition is leaving the market because they couldn’t keep up with the pace of growth and capital funding needed to compete. Check.
  4. The global press is now starting to understand the broader impact that AWS will have on the IT industry, which is a major indicator of economic trajectory. Check.
  5. IT vendors can’t figure out how their competition is making money in areas and ways that they can’t. Check.
  6. IT engineers and customers are betting their careers and business projects on AWS. Check.

My colleague, Dave Vellante, was talking about this a couple years ago. The marginal economics of web-scale computing, at least in AWS’ case, is nearing the economics of software in the 1990s-2000s. We saw what this did to shape the client-server world for companies like Microsoft and Oracle. And maybe those same economics will apply to AWS longer-term as well.

Or maybe they won’t.

Maybe someone else will figure out how to better compete with AWS. But I’m guessing that they will be playing a game that aligns with the new rules that AWS is re-writing for the IT industry.

February 14, 2016  9:31 PM

4 Simple Ways to Fix Twitter

Brian Gracely
Collaboration, Facebook, Instagram, twitter

It’s been a rough year for Twitter. The stock price is down 75%, the user base is down several hundred million accounts (how many are bots?), and the leadership team is going through a significant set of changes and challenges.

TWTR Stock Price – Feb/15 to Feb/16

Last week I wrote that I was concerned about the decline of Twitter and its importance as a community collaboration and learning tool for business. It takes a long time to build a community, or to find one where you can extract enough value relative to the enduring noise. And it takes quite a while to get used to a service like Twitter, which doesn’t really follow any natural form of human communication.

There are some ugly issues to fix, especially in the areas around harassment, stalking and bullying – great write-up here.

If I were tasked with growing the user base and making Twitter a simpler product to use, here are some suggestions:

[1] Make the Sign-Up Process About More than Following Celebrities or Brands – Many new users don’t get immediate value because they don’t know who to follow. Right now, the initial sign-up process suggests lots of celebrities and brands (sports teams, fashion brands, etc.) to follow. Instead, suggest a set of curated lists of people who share similar work or personal interests. Allow a person to follow a list of people who work at the same company, or at a past company.

[2] Expand the Character Limit or Group Tweets Together – We’ve already seen Twitter offer a way to “See What You Missed”, so it’s not hard to alter the timeline that someone sees. People should be able to more easily follow threaded tweets and conversations, without being distracted by all the other noise at a given time. Facebook is much more conversational in structure. Twitter could easily allow that as a configurable mode for users who primarily consume information rather than post.

[3] Buy Apps for Basic Interactions, Not Weird Interactions – Twitter owns Vine and Periscope. Video is a great medium and is becoming a major part of how people consume information. But Vine is a 6-second loop and Periscope content disappears after either 8 or 24 hours. On the other hand, Facebook owns Instagram, which lets you post basic pictures and longer videos that don’t go away. Most people want those services. You could wrap ads around those services. But Facebook owns them, not Twitter. Twitter owns the quirky apps because it can’t figure out if they are core communications or artistic expression. It’s the difference between being mainstream and being a niche. Both are fine, but know what you want to be.

[4] Turn Likes/Favs into Searchable Pages – People use the Fav/Like feature in different ways. I prefer to use them like bookmarks for later reading. Twitter should turn them into searchable, regularly updated pages, like a magazine, and allow others to subscribe to those pages as well as to people. It would help people find interesting topics, as well as the people driving those conversations.

So that’s my short list of things Twitter could do to become a tool that is easier for people to adopt and gain knowledge from. What are some of your ideas?


February 7, 2016  5:49 PM

Is the Next Era of Collaborating Coming Soon?

Brian Gracely
Collaboration, Facebook, Github, Slack, twitter

Let me begin by saying that I have no inside information or insight into any of these companies’ strategies. Instead, I’m just mentally connecting some dots and starting to wonder if any of these recent trends will have an effect on me or the communities I interact with.

Lots of Headlines in the News recently

Are Your Communities Changing?

While meeting with various people in Silicon Valley last week, one trend (amongst many) kept coming up in discussions – something was wrong with Twitter. Usage rates were down; people didn’t like all the UI changes; they might get acquired; etc. As a self-proclaimed Twitter junkie, I find this somewhat concerning. Twitter is a very valuable service to me, especially as someone who doesn’t live in the heart of Silicon Valley. I’d be more than happy to pay a monthly/annual fee to use the service, but Twitter doesn’t offer that option. It would have a huge impact on my professional life if it went away. Rebuilding those communities would take a very long time on another service.

A couple weeks ago, GitHub had a very visible outage. I heard about it from a few people who were doing technical demonstrations, but I’m sure it had a wider impact on many developer and operations organizations. As a public service, it’s now massively integrated into the workflow of many companies. It’s where software lives today. Full stop. And to hear that they are going through some changes and potential pivots makes you begin to wonder what impact that will have on developers moving forward. Rebuilding those workflows and code repositories (and collaboration services) somewhere else would take a very long time on another service.

Slack has grown to widespread prominence over the last couple years, becoming one of the de facto places where group collaboration happens within companies and across communities. When I started the EMC {code} group, we used it extensively and it essentially replaced email for 90+% of our communications. Having to move back to email as a central communication channel has been somewhat painful. Slack is often listed as one of the Silicon Valley unicorns, but how well is it really prepared to take on companies like Google or Microsoft in a long-term competitive fight? Could it eventually go away? Rebuilding those communities would take a very long time on another service.

Facebook. Ugh, Facebook. It’s great for sharing family pictures. It’s also clogged with endless political rants, garbage surveys and mind-numbing clickbait articles. Unfortunately it continues to grow and grow and grow. It’s a web property, but it feels more painful than a bad Microsoft application from the 2000s – cluttered with mostly garbage, and constantly changing the UI or adding features that you’re not really sure how to use. And as it continues to grow, there is a chance that people might start using it for more than personal communications. We might eventually be forced to further blur our personal and professional lives.

The Future of Collaboration / Community Tools?

The last 8 years of collaboration tools have been pretty awesome. Between smartphones and interactive communities, it’s been very good. But for some reason, it’s starting to feel like maybe we enjoyed them a little too much, didn’t pay for them enough, and now the perpetual search for revenues might leave many of us searching for new places to communicate. I hope I’m wrong, but there are a few dots that are starting to line up…in a bad way.


January 30, 2016  4:43 PM

Building the Cloud Computing track at Interop

Brian Gracely
AWS, Azure, IaaS, Interop, iot, PaaS, SaaS, Security


In the past, I’ve had the opportunity to help shape specific tracks at large vendor events (EMCWorld, DevOps Day @ EMCWorld, Cisco Live, VMworld), but those experiences were all driven from within the specific vendor’s view of the world. I had never had the opportunity to help shape a track at a large independent event, but that changed when the Interop committee reached out to invite me to chair the Cloud Connect track. As there are many events happening every week, around the world, I thought it might be useful to share some of my experiences in how we shaped the Cloud Connect track at Interop (May, 2016).

Finding Speakers

The best events are driven by community, usually through a Call for Papers (CFP) process, sometimes called a Call for Speakers. With so many events happening each year, I wish there were a more centralized service to keep track of these dates and deadlines across any technical event. Sites like Lanyrd exist, but I’d like to see them allow people to plug in their upcoming schedules and give more proactive notifications. Many people reached out to me after the deadline and selection process had completed, unaware that the CFP process had taken place. Nevertheless, it’s important to let lots of ideas flow into the process and gather feedback about the types of topics that people are passionate about discussing.

Selecting Speakers, Sessions and Topics

For the Cloud Connect track, I believe we had about 100-150 submissions for approximately 12 slots. This is a great thing in that there is a huge amount of variety to choose from. The downside is that you have to tell a large number of people that their sessions were not selected. As I began the selection process, I used a few basic “elimination” criteria:

  1. If it looks like a vendor/product commercial, it’s probably not going to be included. That’s what vendor websites and webinars are for.
  2. If it overlaps with several other sessions (same idea, same topic), I’ll tag it, and sort through to find the best within that topic.
  3. If it includes words or phrases like “a journey…”, “the road to…” or “in a box…”, it gets eliminated, because people want to implement things, not listen to another version of a journey that is really a vendor pitch (see #1 above).

Once I had narrowed it down to 30-40 selections, I started crafting a framework of topics or concepts that I wanted to fill. I had several goals in mind:

  1. Make sure attendees are getting educated on the leading topics, technologies and trends.
  2. Make sure that we include a mix of technology and business topics, especially the more complex and pragmatic business areas (pricing, budgeting, compliance, etc.). Give attendees things they can go back and implement when they return from the event.
  3. Make sure the speakers are a mix of vendors and end-users. Vendors bring insight into areas where major investment and innovation is happening. End-users bring the reality of how technology aligns to their specific business problems.
  4. Don’t be afraid to include a couple topics that are a little farther out, but are disruptive enough to get attendees thinking about possibilities in the future.
  5. Look for dynamic speakers and diversity of speakers, and create engaging sessions. Sometimes their topics weren’t great, but I could work with them to alter the topics to fit into the overall framework of the track.

Would YOU want to Attend?

This was the last stage that I went through – the sniff test. Would I want to attend these sessions?

At the end of the day, I’m proud of the track that we pulled together. We have a great mix of topics – education about Public Cloud services (IaaS/PaaS/SaaS); education about Private and Hybrid Cloud services; discussions on Cloud Native Application technologies (Containers, PaaS); how to purchase Cloud services; how to secure Cloud services; how to think about the organizational impact of building/buying Cloud services… as well as some stuff about IoT to keep people thinking about the future.

Hopefully some of that thought process is helpful if you’re planning an event or coordinating a set of sessions at your local meetup.


January 24, 2016  4:34 PM

A Look at 2016 – “Hey You Kids, Get Off My Lawn!!” Edition

Brian Gracely
AWS, Cisco, Dell, DevOps, Docker, Hardware, HP, IaaS, PaaS, Public Cloud, VCE, VMware

2016 is going to be an interesting year in technology. I’ve predicted that it’s the year when the Public Cloud markets begin to make the rules of the IT industry, and everyone will need to figure out how they survive or fail under those new rules.

  • There is a Presidential Election happening in the US, which causes leaders to make projections on how a new administration might impact the economy.
  • Interest rates have recently risen (albeit slightly) in the US, which impacts investments and overall risk-tolerance for companies.
  • It was about 8 years between the tech-bubble burst of 2000/2001 and the housing market crash of 2008, and now it’s been 8 years since that event. VCs are already beginning to back off new funding rounds and people are calling for the end of the Unicorn Era.

So I thought that I’d put on my “Hey You Kids, Get Off My Lawn!!” hat and take a look at the technology landscape of 2016. NOTE: I’m not endorsing any/all of these perspectives, but it’s a useful exercise to occasionally view the world from a 180-degree, contrarian perspective.

Hardware: Yes, it’s a commodity. Yes, the leading companies that supply it are slowing their growth and beginning to pay dividends in a model that seems more like a public utility and less like a tech rocketship. But all that software needs to run on something, and the consolidation within this segment of the industry is already happening. And customers have tons of inertia (buying patterns, technical skills, existing data-center facilities, compliance models, etc.). All of the major companies now sell almost all the hardware elements, and many systems are consolidating around common x86 or ODM elements. We’ll probably see a few more companies fall off the playing field, but 5-6 big ones should remain for quite a while.

Public Cloud: AWS might be bigger than the next 14 competitors combined (see: Gartner IaaS MQ), but it’s still only expected to have done $7-8B in revenues in 2015 and it’s an 8yr old organization. It’s trying to displace IT leaders such as Microsoft, Oracle, EMC, HP, Dell, Cisco and others, who have massive cash reserves to fight long battles. Competitors like Google still haven’t gotten fully-engaged and large potential threats like Facebook and Apple haven’t really entered the game. Then throw in the inertia of trillions of dollars of legacy applications, systems and people-skills and it makes Public Cloud a long game that is nowhere near being decided. Has the industry ever seen a single company command such a dominant perspective from a sub-$10B revenue base? Which sets of rules will everyone play from in 2016, or do we continue playing a game with multiple sets of rules?

Cloud Native Applications: While some experts are calling Pivotal (and Cloud Foundry) the early leader, we still don’t see many revenue announcements from the leading PaaS players. Most announcements are still focused on vendor investments, community memberships, code contributions and early customer logos. And the market seems to have moved away from the polyglot, cool-new-languages focus of 2013/2014 and is now re-focused on Java and .NET for on-premises Enterprise applications. It’s middleware replacement, and it needs operators who must learn how to manage the underlying system. And the container management argument seems to dominate the discussion (structured vs. unstructured; DIY vs. pluggable vs. containers-as-a-service) – does this create too much distraction from the previous goals of “software is eating the world”, “digitizing the economy” and building applications faster? Does the Enterprise spend more on Public IaaS vs. Private PaaS, or does it follow the Public trend up the stack, or is Public too risky for large Enterprise spending?

Containers: The leading company, Docker, has a (reported) $2-3B VC valuation, but hasn’t made any earnings announcements or given earnings guidance – and they just bought a company focused on “the next thing” – unikernels. The market is getting extremely crowded with companies that do somewhat similar things – various forms of infrastructure for containers or microservices-based applications – such as Cisco Mantl, CoreOS, Hashicorp, Rancher, Red Hat OpenShift, and many, many others. And some early data (here, here) suggests that adoption in production environments is still not at levels that will disrupt VM usage. Does it disrupt VMs, or Infrastructure, or Config-Management, or all of the above, or just the PaaS ecosystem… or none of the above?

DevOps: Does it come in a box, and what size do I need to order to get my technical organizations onto a single-sheet-of-paper org-chart? If a SKU for DevOps doesn’t exist, can I get a SKU for NoOps, or OpsDev, or SecOpsDev? Where is the macro on my spreadsheet to summarize the ROI of empathy, or the HR policy that’s needed to remedy the need for counseling when burnout from pager-duty exceeds the unlimited vacation policy of my SREs?

What other areas need the “Old Man Shakes Fist at Cloud” treatment?

A Dose of Reality?

More than anything, I’d just like to see some revenue numbers out of companies chasing the buffet line of software that is eating the world. We got that in 2015 from AWS and a few others, which made many people rethink how they thought about the shifts in the marketplace. Will we see that in 2016 for other segments of the market?


January 24, 2016  2:52 PM

How Many Engineers Does it Take to Build a Cloud?

Brian Gracely
AWS, Azure, OpenStack, Operations, Private Cloud, Public Cloud, Red Hat

I came across this old picture the other day, which showed a group of people (circa 2010-2011) assembled in a room with the task of building a VCE Vblock. This was a team of EMC vSpecialists doing a training session on this “newer” technology. I did some sloppy editing to obscure their faces – but the names and faces aren’t important here.

Building a Vblock (from 2010-2011)

By my count, there are 12 people in the room, and that doesn’t include anyone outside the frame of the picture. This was a collection of highly specialized engineers, with backgrounds in servers, networking, virtualization and storage. With this training structure, it typically took them a week to build a Vblock system. All of this coordination was needed just to get the system to the point where a company could begin to deploy applications on this rack of equipment. Essentially Day 0.

Fast-forward to 2016 and now most of that configuration and complexity gets done at the factory. In essence, 12 people replaced by a set of scripts. And there are now many other offerings in the marketplace today that will deliver a similar Day 0 experience, without all the people that need to come on-site to build it.

This isn’t a commentary on the technology or the company behind it. It’s an evolution of our marketplace, and of business expectations. The days when the business would patiently wait for infrastructure to be built or prepared before an application could run are gone. SaaS applications and public IaaS services (e.g. AWS, Azure, etc.) are defining the expectations for business users, not IT departments.

IT Inefficiencies?

Maybe you look at that example and think, “uh oh, that’s going to mean a lot fewer jobs for IT in the future.” While this is possible, although not likely due to things like Jevons Paradox, let’s look at this through another lens: the inefficiencies of cost. With the example above, there were inefficiencies of cost in building data center systems. The cost of having all 12 of those people in a room for a week would be about $33,600 (at $100k/person – calculator), and more specialized skills could easily push it to $50,000. That’s before any applications were running.
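For what it’s worth, here is that math as a minimal Python sketch. The ~$70/hour fully loaded rate is my assumption (the calculator linked above may use different overhead numbers), but it reproduces the $33,600 figure:

```python
# Back-of-the-envelope cost of the week-long Vblock build described above.
# Assumptions (mine, not from the calculator linked in the post):
#   - a ~$100k salary works out to roughly $70/hour fully loaded (benefits, overhead)
#   - a standard 40-hour work week
engineers = 12
hours_per_week = 40
fully_loaded_rate = 70  # dollars per hour (assumed)

day_zero_build_cost = engineers * hours_per_week * fully_loaded_rate
print(f"Estimated Day 0 build cost: ${day_zero_build_cost:,}")  # -> $33,600
```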

But what about the costs of day-to-day operations? This past week, Red Hat released an interesting study of the operational costs of running a Private Cloud. At the core of the study are metrics that show the operations cost of a Private Cloud (in this case, based on Red Hat technology). In Year 1, the cost is $13,609 per VM. In Year 2, the cost is $8,043 per VM. In Year 3, the cost is $6,264. By Year 6, the cost is $5,200 per VM.

That is three years to gain the operational expertise needed to cut that cost roughly in half, and another three years to shave roughly another 15% off those operational costs.

To put that in perspective, the Year 1 cost is equivalent to about $1.55/hour. For $0.662/hour, someone could get an m3.2xlarge AWS EC2 RHEL instance in US-East using On-Demand pricing. Reserved Instance pricing for that instance would be $3979. And that pricing is for a large virtual server that could host many smaller, VM-sized workloads.
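Here is the same arithmetic as a small Python sketch, for anyone who wants to plug in their own numbers. The EC2 figures are the early-2016 prices quoted above and will have changed since, and treating the $3979 Reserved Instance figure as a one-year cost is my assumption:

```python
# Quick sanity check of the per-VM cost comparison above.
HOURS_PER_YEAR = 24 * 365  # 8,760

private_cloud_year1 = 13609   # Red Hat study: Year 1 cost per VM
print(f"Private cloud, Year 1: ${private_cloud_year1 / HOURS_PER_YEAR:.2f}/hour")  # ~$1.55

ec2_on_demand_hourly = 0.662  # m3.2xlarge RHEL, US-East, On-Demand (quoted in the post)
ec2_reserved_annual = 3979    # Reserved Instance figure from the post (assumed annual)
print(f"EC2 On-Demand:         ${ec2_on_demand_hourly:.3f}/hour")
print(f"EC2 Reserved:          ${ec2_reserved_annual / HOURS_PER_YEAR:.3f}/hour")  # ~$0.45
```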

Will businesses put up with those cost levels, when external options to manage VMs are readily available in the marketplace today?

It’s been happening for a while, but expect to see a much greater push by the marketplace to attack those levels of operational costs, and the learning curves of so many individual companies trying to gain those capabilities themselves.

Do these cost levels represent an inefficiency that we’ll talk about in 5 years the way we now talk about the room full of engineers it took to build a converged Vblock system in 2010-2011? Curious about your feedback…


January 9, 2016  3:14 PM

The 5 Cs of Solutions

Brian Gracely

Looking back at my career, I’ve had the opportunity to work for a number of groups (within companies) that decided to expand their focus from product-centric offerings to more solutions-based offerings. In some cases, this combined more of their own technologies together. In other cases, it combined their technology with that of industry leaders.

We continue to see the IT industry attempt to bring more solution-centric offerings to market, both for existing application environments, as well as more modern Cloud-Native environments.

A move to solutions is usually done for a couple reasons:

  • Competitive pressures within the market, where customers are looking to reduce the complexity of what they buy and implement.
  • Sales/Revenue pressure to sell more products, or get better leverage from a broad portfolio.
  • Aligning your products with a large industry trend (e.g. virtualization, cloud computing, containers, etc.).

While it may seem logical to try and integrate (or just bundle together) more products within a portfolio, a move to solutions can often be difficult for companies. Here are my 5 C’s to getting solutions-based selling right at any technology company.

Coverage – By their nature, solutions are often more complex to sell than individual products. They often cross over between buying centers and budgets. Some companies ask their core sales team to also sell solutions, while others will create a specialized “overlay” team to focus on solutions-selling. It’s important to note that most existing sales teams won’t have the depth of knowledge to immediately sell a solution, so having an overlay or specialized team to augment their field coverage is essential. But it’s equally important to plan for that overlay/specialized team to eventually either be disbanded, or folded into the core sales team over time. Their role is as much about training the core team as it is about selling the solution.

Compensation – As solutions grow larger, more complex or more costly, they often take longer to sell. When solutions extend the traditional sales cycle, sales reps are put in the complicated situation of putting their quotas at risk by trading off short-term deals against longer-term strategic deals. If companies want solutions-based selling to be successful, they need to adjust their compensation structure to encourage the sales teams to find a better balance of short- and long-term goals.

C-Level – This area is both an internal and external focus. Internally, companies need buy-in from their C-Suite that solution-focused development and selling is a priority, because it will disrupt the current way of building and selling. Externally, one common mistake for companies that attempt to sell solutions vs. products is to continue to target the same buying-level within an organization. For example, hoping that the storage or networking team (alone) will own the budget for a converged stack is a mistake. Solutions-selling is done at a higher level in the organization, to groups that own large chunks of budget and a broader architectural responsibility. If that model doesn’t exist at customers, then selling solutions will be very difficult or impossible. Solution offerings not only need to align to customer problems, but also to customers’ ability to buy solutions to those problems. Don’t underestimate the complexities of engaging new purchasing departments at customers.

Cracking the (internal) Culture – This sounds terrible, but most technology companies are organized according to a Business Unit or Technology Unit structure, and those groups are rarely incentivized to work closely together. And solutions typically try to pick “the best” from internal technologies, which can often create animosity amongst the groups that are not included in a given solution. This can lead to all sorts of unexpected internal politics and competition. This is an area that is often overlooked or ignored (not on purpose) until the problem becomes a big problem. Formal models should be put in place to ensure that the solutions teams and product teams have some sort of common goals that are measurable, incentivized and tracked to drive collaboration between those teams.

(Ac)Counting – If products A, B, C and D are all pulled into a single SKU and sold as a solution, who should get to account for that revenue within a company? Not sure? Welcome to the challenge of solutions accounting. Since revenues drive many upstream decisions (Sales coverage, Marketing budgets, R&D budgets, etc.), it’s important to consider which groups will take credit for sales. Trying to assign a metric to “influenced a deal” is very difficult and typically leads to more disputes than collaboration.

As the Public Cloud continues to grow, expect to see more (non-Public-Cloud) companies expand their solutions offerings. Getting solutions right is a complicated model, but the good news is that there are plenty of success and failure stories to learn from.


December 23, 2015  9:32 PM

2016 Guarantees for the Tech Industry

Brian Gracely


  1. Every vendor will have a “Digital Transformation” story, regardless of whether they help customers build applications or not.
  2. Every vendor will have an IoT story, regardless of whether they sell anything IoT-related.
  3. Every vendor will package their software in a Docker container for easier distribution and early demos.
  4. Every (other) vendor will attempt to downplay the growth or profitability of AWS.
  5. There will be lots of discussion about Cloud-Native Apps that integrate with legacy systems instead of just being greenfield.
  6. While new examples will (hopefully) emerge that aren’t greenfield, expect to see Uber, AirBnb and Tesla mentioned in the majority of presentations as examples of how to run your IT organization or avoid being disrupted.
  7. While interest rates are expected to rise, we’ll see just as much or more M&A activity in 2016 as we did in 2015.
  8. Some vendors will incorporate US political campaign issues or ISIS fears into their marketing campaigns, most likely around security or disaster-recovery.
  9. Microsoft will announce something that makes you say, “That’s not the Microsoft we’re used to under the old CEOs.”

So what’s #10? Let me know your guarantees in the comment section.

EDIT: I thought about #10 for a while last night. Let me go ahead and add “Company executives will publish blogs at this time next year about how all of their predictions are exactly aligned to their company’s portfolios, and how they were 75-95% right, even though VCs only get technology transitions right 10% of the time.”


December 23, 2015  9:17 PM

Applying 2015 Life Lessons to IT Planning

Brian Gracely
Compliance, DevOps

For me personally, 2015 was a VERY INTERESTING year in lots of ways. As I take a little time off, I started wondering if there was anything I learned from my personal life that could be applied to the technology world.

TEST YOUR COMPLIANCE PLANS – ESPECIALLY THE LEGAL STUFF

We lost some family members in late 2014 and as a result I was asked to take over some legal and financial responsibility for the remaining family members. I use the word “family” loosely, because this was family that was several levels removed and we had different last names. When I agreed to this responsibility, I thought I had reviewed the associated paperwork (wills, trusts, policies) properly. And then I got the opportunity to put my thoroughness into action by having to go to battle with multiple law firms, insurance companies and government agencies. While I was fully capable of executing the necessary actions, we quickly learned that the execution path was much more complicated than the original paper trail would have suggested. LESSON LEARNED – Lots of people will tell you that the “Recovery” part of Backup and Recovery is the most important aspect. I’d also throw in that simply testing recovery might not be enough. Test things like “person in charge of encryption keys has left the company” or “our company went through a merger recently and wants to change the naming scheme for internal systems“. Did your recovery model hold up to those semi-technical issues as well as the heavily-technical aspects?

DON’T IGNORE INFRASTRUCTURE ENGINEERS AND PROJECT MANAGERS

We decided to renovate portions of our house. In today’s world of Houzz, 24×7 DIY shows on TV and Angie’s List, it could be very easy to believe that anyone with basic organizational skills or the ability to click on pictures could produce a beautiful new kitchen or sunroom. Just use the on-demand services that are available on the Internet. What could possibly go wrong? LESSON LEARNED – The world is becoming enamoured with DevOps and Agile development and things like “infrastructure is a commodity”, but many DevOps teams are still small (see Slide: 37) and co-located in one main location. Once many groups get involved in building new technologies and services, communication and documentation of standards become even more critical. CI/CD and Cloud-Native App platforms help with this, but don’t underestimate the need for great people both in planning and in building the foundational infrastructure.

OWN A WORKHORSE

Throughout 2013 and 2014, I found myself frequently needing to borrow large vehicles to move or haul “stuff” (firewood, furniture, building materials, etc.). While I had friends that would allow me to borrow their vehicles, it became a time-consuming and often complicated engagement. So in 2015 I bought myself a pickup truck. It’s an older model (1994 F150), but it runs consistently and does the “heavy lifting” and “dirty jobs”. No bells or whistles needed. LESSON LEARNED – While the industry is often caught up in the latest trends and fads, there will always be a need for equipment that does the ugly work. Maybe it’s batch processing or security monitoring or just structured cabling. Whatever it is, it’s OK to make basic investments that are specific to those types of needs. They don’t get any headlines, but they cover the necessities of the business.


December 12, 2015  12:51 PM

10 Important Container Areas to Watch

Brian Gracely
AppDynamics, AWS, Azure, CoreOS, EMC, Google, IBM, Kubernetes


[1] Docker’s “Batteries Included But Removable” Strategy – Before there was Docker, there was dotCloud. dotCloud was a PaaS company. Eventually they decided to separate out the technology that made setting up containers easy, and Docker was born. But that team knows how to integrate all the other elements needed to build a platform (networking, storage, scheduling, security, etc.), and they have been adding those elements piece-by-piece into the Docker, Inc. portfolio. But instead of making them a monolithic piece of software, they are making them modular and removable (or interchangeable), with 3rd-party extensions that integrate with Docker’s APIs. This is a similar approach to what VMware did in the past with vCenter plugins and APIs like VAAI. It will be interesting to watch how the market adopts the native Docker elements (Docker Networking, Swarm, etc.) vs. 3rd-party extensions.
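To make the “removable batteries” idea concrete, here is a minimal sketch using the Docker SDK for Python (a client library that post-dates this post, shown purely as an illustration). The “weave” driver name is a placeholder for whichever 3rd-party network plugin might be installed on the host:

```python
import docker

client = docker.from_env()

# "Battery included": Docker's built-in bridge networking driver.
app_net = client.networks.create("app-net", driver="bridge")

# "But removable": the exact same API call can point at a 3rd-party plugin
# instead (e.g. a hypothetical "weave" driver installed on the host).
# app_net = client.networks.create("app-net", driver="weave")

# The application container attaches to whichever network was created; only
# the driver behind Docker's API changes, not the application.
web = client.containers.run("nginx:alpine", detach=True, network=app_net.name)
print(web.short_id, app_net.name)
```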

[2] VMware’s Container Strategy – As Docker grew in popularity, many “Docker is a VMware killer” headlines were written. While VMs and Containers serve different functions and are mostly used by different groups (Ops vs. Devs), the narrative was out there. But VMware came back strong in 2015 with their VMware Integrated Containers strategy and products (some commercial, some open-source). VMware is quickly evolving to understand containers, open-source and the needs of developers.

[3] Microsoft’s Container Strategy – Microsoft and Docker have had an evolving relationship throughout 2015, and Microsoft has continued to add container-centric functionality to both Windows Server and Azure throughout the year. As it becomes more OS-agnostic, Microsoft has the ability to rekindle its relationships with new and previous developer and ISV groups.

[4] Container Networking – While Docker solidified their networking stack in 2015 with the acquisition of Socketplane, 3rd-party companies such as WeaveWorks have built excellent native-container networking stacks that are being used by many Enterprises and Service Providers. And with the libnetwork functionality, Project Calico and Docker Networking APIs, additional 3rd-party companies can integrate networking.

[5] Container Storage – Initially, the thinking around container storage was that either a file-system was sufficient (e.g. NFS, BTRFS, etc.) or the container would be stateless and the data would be kept in non-container locations (e.g. bare-metal or a VM). But as 2015 evolved, companies and projects like ClusterHQ (Flocker), Portworx, Rancher Persistent Storage Services and EMC REX-Ray emerged to offer persistent storage that is deeply integrated with container environments. Docker also extended their Storage API.
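As a sketch of what that integration looks like from the application side (again using the Docker SDK for Python, which post-dates this post), the built-in “local” driver below stands in for whichever volume plugin, e.g. Flocker or REX-Ray, might actually be installed:

```python
import docker

client = docker.from_env()

# Create a named volume. Swapping "local" for a 3rd-party volume plugin's
# driver name is what turns this into externally managed, persistent storage.
data_vol = client.volumes.create(name="pg-data", driver="local")

# The database writes into the volume, so the data outlives the container.
db = client.containers.run(
    "postgres:9.5",
    detach=True,
    volumes={data_vol.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(db.short_id, data_vol.name)
```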


