Attending an AWS event is a strange experience, especially if you’ve been attending technology events for many years and have gotten used to a certain pattern of announcements, demos, show floors, etc. I’ve attended several AWS events in the past, primarily AWS re:Invent, but the AWS Summit in NYC seemed a little different this time.
First, the event is primarily attended by people who wouldn’t classify themselves as “IT Professionals”. It’s mostly people who would associate themselves with the “products” that a business makes or sells, and they leverage technology as a means of creating and delivering those products or services.
Second, the host (AWS) isn’t speaking to IT professionals and telling them about transformations and changes that are needed before they can get to the next stage in delivering IT. They focus on what is available now, to help their business do something now. And the context isn’t speeds and feeds, it’s business outcomes. How to get from ideas to execution.
Third, they have mastered the art of speaking in ways that ease the mind about the challenges and complexities of building applications and running technology environments. And make no mistake, making technology work for a business is difficult, often very difficult. They speak in terms of pennies-per-hour and massive long-term savings, and “undifferentiated heavy lifting”. Note to IT departments: they are talking about you, the purveyors of that “undifferentiated heavy lifting”. AWS is in the IT replacement business.
Finally, they don’t expose their org chart when discussing their offerings. It’s not the Storage division or the Compute division or the Desktop division. They’ve figured out how to link together services, without mandating that they be bundled for pricing or renewals. And the services/products are all offered with public APIs, so the ecosystem isn’t initially limited from building adjacent capabilities around the core AWS services (more about this later). Continued »
There was quite a bit of buzz today with the announcement that Tesla Motors would release their patents to the world. The automaker and its CEO (Elon Musk), who have become the envy of the automotive industry, are giving away their most prized possessions? They are playing a technology game with Ford, Mercedes, Toyota and others, so why let them have the chance to catch up? Having grown up in Detroit, but having spent the bulk of my professional career in technology, I find this a fascinating turn of events and crossing of the streams. [Note: Because of the wording of this announcement, some are questioning whether the patents will have any real applicable usage by outside companies.]
In today’s world, whether it’s automobiles or Cloud Computing, there are some massive platforms in the market. Competing against the largest platforms, or introducing new technology (or business models) can be extremely complex and costly. So how does a smaller company possibly compete against the industry giants?
One thing to do is to play by a different set of rules. In the case of Tesla Motors, they have developed the knowledge and experience to create an incredible set of products. But for them to scale, they would need to invest massive amounts of capital to build out a nationwide network of charging stations. And build service stations. And optimize their supply chain. So instead of owning a vertical business model, they are choosing to let the market share in the opportunity of electric vehicles. They are sharing the costs and sharing the potential. In a game against competitors that are 10-100x as large, it may be the only way to remain more than a small niche in the overall transportation market. Electric vehicles are a small market segment, at this time. It will be interesting to watch and see if the market helps it grow, establishing Tesla technology as the standard for any future vehicle.
But this approach doesn’t always work. For example, several years ago Rackspace decided that the future would be on-demand cloud computing, a shift from their traditional hosting business. But the industry leader, Amazon Web Services (AWS), was the 800 lb. gorilla and many years ahead of them. While Cloud Computing is still in its early stages, Rackspace decided that leveraging open-source was their best chance to compete. Instead of having to hire 100s of top-flight engineers to build their environments, they created and enlisted the OpenStack community. 100s of “free” engineering resources, creating Cloud Computing software that they could use within their own data centers. But several years into OpenStack, that strategy hasn’t closed the gap between them and AWS. Continued »
It’s the dog days of summer, so things are a little quiet in the Cloud Computing world. Spring trade-show season is done, school is out, people are taking vacations, and this one post is slightly off the topic of Cloud Computing. But in another 3-5 years we will just go back to calling it Computing or Technology, so I think this one will be OK.
Unfortunately, we had another week in IT and another week of people complaining about poor behavior in Silicon Valley. The so-called “Bro Culture”. And as always, it comes with the calls for sensitivity, and people hoping that their daughters won’t have to work in this industry. I agree with the former, but I find the latter short-sighted.
As the father of two daughters, it’s difficult to read about some of the stupidity. It’s not hard to make an argument that segments of the technology industry need to take a good, hard look in the mirror and figure out why this culture of moronic, fraternity-house driven behavior has become so common-place. But let me be clear – it’s just a small segment of the technology industry.
This is because the technology industry has become all-encompassing. Every company that begins, grows or survives over the next 5-10 years (and beyond) will fundamentally be a technology company. Regardless of industry, regardless of geography, all companies will be technology companies at their core. There’s a reason the terms “Internet of Things” and “Internet of Everything” are starting to take hold. So what does this have to do with ‘Bro’ culture? In a word, everything. If we take the failures of the ‘Bro’ culture and scare off women and girls from joining “the technology industry”, we will all suffer. They are the majority of the population. They are inherently creative and caring and add great insight and viewpoints to our teams and communities.
There’s a reasonable chance that the next President of the United States will be a woman. The number of women leading major corporations and organizations has never been higher. They are in positions that should inspire young women to reach for their goals and believe they can accomplish anything. But we might screw that up by amplifying the message to our daughters that the “technology industry” is a toxic place. Just as we’ve learned from history, the best way to stop oppression or ignorance is to keep pushing forward. Sticks and stones. We need to keep including technology in the conversations we have with our daughters, as well as teaching them how business models change with technology. We must teach them how to embrace the communities that move technology forward, even if it means turning a shoulder to the occasional stupid comment or inappropriate video. Encourage them to think big and act bigger. Give them the opportunity to solve our health problems, our environmental problems, our educational problems, and our financial problems.
Given that the majority of stories we hear about the ‘Bro’ culture happen around Silicon Valley, I continue to wonder when we’ll see smart, innovative women begin to move and cluster in other areas. Remove the frustrations and create the future without all the hassle. Or maybe they won’t have to if the Bros stop acting a fool and the rest of us encourage the girls to keep focusing on the positive contributions they are making.
If you’ve been around the Cloud Computing industry long enough, you know the joke about how it’s not a real Cloud Computing conference if Adrian Cockcroft (@adrianco; former architect at Netflix, now VC at Battery Ventures) isn’t giving a keynote. It’s a well-deserved accolade given what has been built at Netflix, what the Netflix OSS team gives back to the community and what Adrian gives back. It’s a great example of one end of the spectrum – 100% on AWS, 100% open-source, small agile code that was designed up front to scale to the web, everything on GitHub, etc. etc.
But if you do a little bit of digging around other aspects of the web and cloud computing operations, you realize that it’s extremely rare to find other scenarios like that in the wild. For example, here are a few scenarios:
- I spoke with Ed Bowman (@ebowman) from @GiltTech about how they design and run their “flash auction” site. It was a great discussion about how they have evolved over the past few years from monolithic code to micro-services, and how they are giving back more to the open-source community. But he was clear that not everyone open-sources all their code; not all code is on GitHub; not all the technology they use is from open-source (some problems aren’t fixed, easily, using available open-source); they use a micro-service framework but aren’t using PaaS; they mentioned the word “ERP” (because, you know…supply chain is important when managing goods and suppliers).
- I spoke with Michael Ducy (@mfdii, Evangelist at Chef) about how he evolved his DevOps skills. He told me about the environment that runs at travel site Orbitz and how some of the applications that allow you to book plane flights still go through a mainframe. Yes, a mainframe, the original container before Docker became popular.
- I spoke with another large web-scale company recently about their storage infrastructure and migration to greater usage of OpenStack. Turns out that they use a mix of open-source code, small vendor code (open-source but not free) and large vendor products. And they mix and match for boot images, NAS, Block, performance tiers and Object/HDFS for Images/Hadoop/Analytics. Not the simple “all in one box” diagram we too often see in PPT. And none of it was part of the billions of AWS S3 objects reported each quarter, although they did manage billions of objects in their Private Cloud environment.
- Cloud Foundry co-creator Derek Collison (@derekcollison, now CEO at @apcera) recently wrote about how PaaS platforms, by themselves, are not enough for Enterprise customers. Some of this was obviously a way to highlight his new company, but the explanation was founded in the reality that we don’t yet live in a 100% DevOps-fueled, PaaS-centric world – in fact, far from it.
This post is going to be a little bit off-topic (not Cloud Computing centric), but since this post, I’ve seen quite a bit of discussion by people trying to figure out some guidelines, or career guidance, in this new world of self-publishing, social media and other outlets for learning and sharing. It’s definitely not simple, since there are no longer any defined lines between work and life, between media and non-media.
Earlier today, a friend started a thread on Facebook because they host a technical blog which allows ad placement by sponsors. He had been approached by a certain company and wanted some guidance on if he should allow that company to sponsor his blog. The central point was that there were some inherent conflicts of interest between the advertising company and this person (and his employer). After a bunch of feedback, advice and suggestions from friends, the issue was resolved. But one of the commenters asked if some of us that live in this new media world would discuss how we deal with the potential conflicts that can arise when you have a full-time job and also have a presence on external media sites (blogs, podcasts, etc.).
This is my experience. Others’ mileage will vary and I expect others will have different situations depending on their employers and how they handle “gray areas”.
For full disclosure, my employer details are here. In addition, I’ve co-hosted a podcast for almost 4 years, as well as written on several technology blogs. Before getting into any “guidance” to others, I’ve always found that it’s important to establish some personal rules to guide how I manage the interaction between personal and professional. These are MY PERSONAL rules. Others will have different opinions. I’ve written about a few of these opinions previously (here, here, here). Continued »
[Disclosure: While my employer, EMC, is a majority stakeholder in VMware, I have no insight into VMware’s long-term strategic direction. That’s way above my pay grade. All information contained in this blog is based on publicly available information – sometimes you just got to know where to look.]
Evolution, transformation, disruption. Regardless of what segment of the IT industry you follow, these buzzwords are dominating the conversation. And a common thread is that all “legacy” (read: “existing”) vendors and technology will struggle with these transitions. But what happens when vendors actively invest in technology and expertise that run in parallel to those disruptions? Isn’t that the classic playbook from The Innovator’s Dilemma?
We all know that VMware took a bold step away from their traditional business when they launched the VMware vCloud Hybrid Service (vCHS), offering an alternative way for companies to consume IT resources from the cloud – instead of within their own data centers. They became a Cloud Service Provider, beginning to disrupt the ecosystem that had been built up around their hypervisor business. It allowed them to leverage their existing technology and installed base to expand into a different segment of the IT market. With vCHS, they not only offered on-demand IaaS services, but also Desktop-as-a-Service, DR-as-a-Service and many other services on the roadmap.
But what about the technologies that didn’t have VMware’s logo on them? Continued »
Software, software, software….
It’s hard to go anywhere in tech these days without someone highlighting the virtues of software. Software is eating the world. It’s a Software-Defined Economy. It’ll run in your Software-Defined Data Center.
But it feels like there is a gap between the things that developers are using to change the economy, and the tools that fall into the definition of “Software-Defined”. There are the tools of the Devs, and then the tools of the Ops crowd. In the past, the IT community often thought of them as isolated, but that thinking is beginning to change. Examples of “Infrastructure as Code” and “Operations at Scale” are beginning to become more visible and common.
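The “Infrastructure as Code” idea can be illustrated with a minimal, purely hypothetical sketch (the names and structure below are made up for illustration, not taken from any real tool): desired state is declared as data, and a reconcile step computes the actions needed to converge the running environment toward it.

```python
# A minimal sketch of the "Infrastructure as Code" pattern:
# declare the desired state as data, then reconcile against reality.
# All role names and counts are hypothetical.

desired = {"web": 3, "cache": 1}   # instance counts we want, per role
current = {"web": 1}               # what is actually running right now

def reconcile(desired, current):
    """Return the provisioning actions needed to reach the desired state."""
    actions = []
    for role, want in desired.items():
        have = current.get(role, 0)
        if want > have:
            actions.append(("launch", role, want - have))
        elif want < have:
            actions.append(("terminate", role, have - want))
    return actions

print(reconcile(desired, current))
# → [('launch', 'web', 2), ('launch', 'cache', 1)]
```

Real tools in this space add idempotency, dependency ordering and drift detection on top of this core declare-and-converge loop, but the essential shift is the same: the environment is described in versionable code rather than assembled by hand.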
Even the definition of a “developer” is beginning to change. In the past, developer only applied to the people writing the applications that we interacted with as business users. This now applies to infrastructure and operations teams as well. Think of it as the evolution of the SysAdmin, but across all the functional areas of infrastructure and operations.
But there is still opportunity to do better: making it easier for all kinds of developers to use these new software-defined tools and environments, and eliminating all the friction between getting the software and making it useful – integrating it into their environments and making it part of their continuous application builds. Just making software free to download is not enough. Some questions worth asking:
- Is the software available via GitHub?
- Are the APIs well documented?
- Does sample code exist across a broad range of languages and frameworks?
- Can a developer run the software locally?
- Can the software be up and running in minutes, using the native deployment tools of the developer?
- Is there an online community to answer questions for developers?
- Is there an online environment that allows the developers to validate that the software can integrate and scale beyond their local machine?
- Does the software creator actively participate in the community, both online and in real life?
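The checklist above can even be treated as a rough scorecard. A tiny sketch, with entirely hypothetical criteria names and an invented example product:

```python
# A hypothetical scorecard applying the developer-friction checklist above.
# Criteria names are invented shorthand for the questions in the list.
CRITERIA = [
    "on_github", "apis_documented", "sample_code", "runs_locally",
    "minutes_to_deploy", "online_community", "hosted_sandbox",
    "vendor_participates",
]

def friction_score(product):
    """Count how many developer-friendliness boxes a product ticks."""
    return sum(1 for c in CRITERIA if product.get(c))

# An imaginary product that only ticks three of the boxes:
demo = {"on_github": True, "apis_documented": True, "runs_locally": True}
print(f"{friction_score(demo)}/{len(CRITERIA)} criteria met")
# → 3/8 criteria met
```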
Maybe you clicked on this link and thought that I was going to write about the OpenStack Marketplace that was launched during the OpenStack Summit in Atlanta. It’s a natural progression for the OpenStack community to drive awareness of applications and services. But that’s not the marketplace I observed in Atlanta last week.
btw – In case you missed OpenStack Summit, here are all the videos – lots of great technical content and discussions.
The marketplace I observed was the reality of the IT marketplace, with vendors beginning to make moves and announcements which show that OpenStack is no longer a dream about “interoperable open clouds”, but instead is just another set of tools, products and APIs that will vigorously attempt to compete for the hearts, minds and wallets of IT professionals, developers and systems integrators.
This isn’t to say that the OpenStack dream is fading. It was alive and well in keynotes which drew analogies to the Star Wars Rebel alliance, fighting against the Evil Empire and the Death Star. But diving deeper, we saw that there is still division amongst contributors and developers about what should really be considered “Core OpenStack”. Will developers still actively and passionately work on projects if they are not considered “core”? Will some vendors try and claim to be “more core” in their distributions or offerings than others? Continued »
The big trend with Cloud Computing providers these days is a focus on “Enterprise” customers and workloads. We see this from AWS, VMware, EMC, Cisco, Microsoft, IBM, HP and others. Regardless of how each of those companies define “Enterprise”, this means that they are now subject to the rigors of TCO calculations prior to major purchases being made. Welcome to the wonderful world of Enterprise IT.
Now for the fun part. Building a TCO model that properly explains how much the offering will cost over a standard period of time – typically 3-5 years. Having built several of these models in the past, I know that they are always held up to huge amounts of scrutiny. Whether customers think you’re trying to hide or mislead them on costs, or whether customers believe you don’t understand realistic use-cases, no two TCO models are ever the same. They all make assumptions and they all have to make tradeoffs between completeness of information and usability (e.g. too many inputs, too many calculations). With that in mind, let’s take a look at a recent attempt at a TCO calculator by AWS, comparing their services vs. traditional data centers.
Let’s begin by asking a few basic questions:
- What type of use-cases or applications does the TCO tool allow to be modeled? (e.g. redundancy, security, availability, performance, etc.)
- What assumptions does the TCO tool make about how costs are calculated? (e.g. cost of equipment; cost of IT resources; efficiency of IT resources – both people and equipment)
- Does the TCO tool make reasonable comparisons between each model? Too many tools compare one approach’s “best” vs. another approach’s “worst” scenario.
- Does the TCO tool surface all the assumptions of both approaches, or just the approach of the tool originator?
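To make the assumptions problem concrete, here is a deliberately naive back-of-the-envelope sketch of a 3-year comparison. Every number below is invented for illustration – a real model must surface and justify each one, which is exactly what the questions above probe:

```python
# A hedged, back-of-the-envelope 3-year TCO comparison.
# ALL figures are assumptions for illustration only, not real pricing.

YEARS = 3
servers = 20

# On-premises assumptions (hypothetical)
server_capex = 8000        # purchase price per server, amortized up front
power_cooling_yr = 600     # power/cooling per server, per year
admin_cost_yr = 30000      # fraction of an admin's loaded salary, per year
onprem = (servers * server_capex
          + servers * power_cooling_yr * YEARS
          + admin_cost_yr * YEARS)

# Cloud assumptions (hypothetical), running 24x7 on-demand
instance_hr = 0.25         # hourly rate per instance
hours_yr = 8760
cloud = servers * instance_hr * hours_yr * YEARS

print(f"on-prem 3yr TCO: ${onprem:,.0f}")   # → on-prem 3yr TCO: $286,000
print(f"cloud   3yr TCO: ${cloud:,.0f}")    # → cloud   3yr TCO: $131,400
```

Notice how easily the conclusion flips: change the utilization assumption (e.g. instances that only run business hours, or servers that are refreshed in year 4 instead of year 3) and the “winner” can reverse. That sensitivity is why no two TCO models ever agree.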
With the announcement of the release of OpenStack “Icehouse”, the 9th version of the series of OpenStack projects, it’s now time for the community to focus their attention on the 2014 (Spring) OpenStack Summit in Atlanta, GA. This is an opportunity for the OpenStack Foundation to provide their version of the “State of the Stack”, by highlighting customers using the software and interesting trends in the marketplace. It’s also a multi-day set of design sessions for engineers involved in the “Juno” (J) release, which is targeted for Fall 2014. Last but not least, it’s a massive recruiting and networking event, where every company has a vacancy sign out front for anyone with OpenStack-centric skills or experience.
This year, I’ll be interested to see a few things:
The Marquee Customer
There have been rumors for over a year that a Fortune 10 customer would be announced as a huge OpenStack user, for production applications. That name hasn’t emerged yet. I’ll be looking for big names building internal Private Clouds on OpenStack, and at least a few names using Service Provider clouds powered by OpenStack.
OpenStack has long been touted as the alternative to AWS (and often VMware), with the promise of open clouds for customers. I’ll be looking for progress on overall interoperability BOTH between cloud providers and between distributions of OpenStack. This is still enough of an issue that Red Hat’s General Manager of Open Hybrid Cloud Programs (Alessandro Perilli, @giano) took to Twitter to highlight his belief that start-up OpenStack distributions (or products) should not be trusted by customers because the risk of start-up failure is too high. Indirectly, this is also a statement on the interoperability of various OpenStack distributions, if he’s claiming that switching costs could run into the $$ millions. Continued »