From Silos to Services: Cloud Computing for the Enterprise


March 29, 2015  1:36 PM

The Many Angles of Open Source in the Enterprise

Brian Gracely
Linux, Open source

Every year, the landscape of open source in the Enterprise shifts subtly as IT organizations struggle to balance becoming more agile with building the skills to engage with open source software and communities. Even the traditional vendors are getting into the game (EMC, Cisco, HP, Juniper, VMware, etc.).

Agility

Why does open source appeal to Enterprise IT organizations?

  • Acquisition Cost – In theory, the acquisition cost of open source software should be either $0 or much lower than commercial software. Of course, this can vary widely across the options: "commercial support," "open core," or commercial software "built on open source" (e.g., OpenStack) rather than on open standards (e.g., IETF, IEEE).
  • Licensing – It goes hand-in-hand with acquisition costs, but has some nuanced differences. More and more, business leaders and developers understand the power of accelerating the "idea-to-execution" paradigm, which means they need to be willing to experiment. Flexible, open licensing means that more projects can be started. When the license cost is $0 (or close to it), costs align better with the value realized by the IT organization or business. It flips "vendors get paid up-front" to "vendors get paid with usage/consumption, or when the business realizes value."
  • Community Roadmaps & Timelines – The pace of software projects coming out of the open source communities (Apache, Linux, etc.) is typically much faster than commercial vendors – a release every 3 to 6 months vs. once a year. The ability to leverage all the creative resources that are passionate about a project is an excellent way to gain leverage and speed for new projects.
  • Open Interfaces – To succeed in open source, a project needs to be flexible about the components around it. It must support open APIs and be pluggable into various architectures. In more and more cases, this provides the "solution" vs. "components" trade-off and the lock-in avoidance that many companies desire.

The Challenges

Got Skills? – There are many skills that a typical Enterprise IT organization may not have readily available:

  • Linux – Most open source projects highly leverage aspects of Linux. Free courses are available online.
  • GitHub – Being able to interact with the source-control system that houses most of today's open source software. Lots of free resources and tutorials are available online.
  • Open source licensing – In most cases, Enterprise IT won't attempt to sell the software they create, but they should be knowledgeable about the different types of licensing and when they may/must contribute back the changes they make to a given project. Educating yourself on the options is important.
  • Writing to APIs – Does your IT organization primarily rely on GUIs, CLIs and some scripts? Evolving to interact with APIs requires more advanced development skills, or a willingness to work with new tools that talk to those APIs (see the short sketch after this list). This is a good introductory tutorial on REST APIs.
  • Funny project names – Don’t expect to see “Enterprise Edition vX.X”, instead you need to get used to things like “Pig”, “BOSH”, “Hive”, “Swift” or “Clocker”.
  • Need Documentation? – If there's anything developers dislike more than meetings, it's writing external documentation. "Read the code" replaces "RTFM," but either way this can lead to frustration for Enterprise IT groups that have come to expect more complete examples in documentation. This is an area where paid support might add value to IT shops.
  • Learn the New Application Models – A great place to start is The Twelve-Factor App model. It is the basis of the microservices trend so frequently discussed in modern application development.
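
As a taste of what "writing to APIs" looks like in practice, here is a minimal sketch in Python using the requests library. The endpoint URL and JSON field names are hypothetical, purely for illustration.

    # Minimal sketch: call a (hypothetical) REST API with Python's requests library.
    import requests

    resp = requests.get("https://api.example.com/v1/devices", timeout=10)
    resp.raise_for_status()        # raise an exception on any HTTP 4xx/5xx error

    for device in resp.json():     # assumes the API returns a JSON array of objects
        print(device.get("name"), device.get("status"))

The point is less the syntax than the workflow: instead of clicking through a GUI, the interaction is scriptable, repeatable, and easy to keep in version control alongside everything else.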

What are you doing within your company regarding open source projects? Is it a company-level priority yet, or just something that you’re exploring outside of work?

March 28, 2015  3:17 PM

Existing Apps – Ignore, Rewrite or SaaS?

Brian Gracely
Collaboration, CRM, Email, ERP, HCM, IPT, Microsoft, Oracle, SaaS, SAP, VMware, Windows

For the last 9-12 months, my day job has kept me pretty focused on next-gen technologies and open source. While this is fun and exciting and involves a lot of new learning, it also creates an interesting dynamic when I'm out talking to customers and communities that aren't based in Silicon Valley (or Seattle, Austin and a couple of other tech hotspots). Whether we're talking about open source or automation or microservices or containers, the inevitable question comes up: "This doesn't seem to be aligned to our current apps, so what will we do with them?" In many cases, these are packaged applications (Microsoft, SAP, Oracle) that may or may not have customizations.

Like all the great mysteries of IT, the answer to this question is "it depends." And, like anyone giving advice on a situation with limited context or details (since we just met), it's important to provide a framework of possibilities.

Ignore

Many existing applications will not be rewritten to take advantage of modern frameworks, nor should they be. Maybe there are ways to "wrap" services-centric architectures around them (there's a good discussion on Ep. 6 of The Goat Farm podcast), but often these apps just need basic care and feeding to continue providing their service to the business. If anything, I suspect we'll begin to see many UNIX/SPARC environments migrated to x86/VMware as that older hardware goes EoL – get those apps onto lower-cost hardware as part of a cost-reduction project. It'll be a big, but boring, business for VMware. All the elements needed to handle those large apps (lots of CPU cores, lots of RAM, dedicated I/O, node-level HA) are now embedded in VMware vSphere 6.

Rewrite

Depending on how old the applications are, there is some chance this happens – if some of the previous development team still exists and can explain the legacy code. If the application is part of a business transition, such as a move to mobile-centric devices, then a rewrite becomes more likely. Maybe the entire application isn't rewritten, but enough is modified so it functions properly on the new devices – touch screens expect different interactions than a mouse/keyboard. If these rewrites happen, it's also an opportunity for IT and business groups to look at the corresponding culture shifts that align to DevOps, creating a more agile environment in which to operate those applications.

Get SaaS-y 

Other than a few specialty, vertical applications (clinical trials?), there is basically a SaaS application for everything you do in-house today. Aside from a few giants (Salesforce, WebEx/GoToMeeting, Workday, Concur, etc.), it's an extremely fragmented market segment. Some areas will grow extremely large over time (e.g., Office 365, Adobe Creative Cloud, Box, Dropbox) as installed bases are migrated. Others will offer unique value-add on top of other applications (e.g., Twilio). And almost every major packaged-software vendor is looking to make its major offerings more attractive as SaaS applications (Oracle, SAP, Microsoft, Adobe, etc.). Given that the UI and UX are almost always better for SaaS than for on-prem applications (both web and mobile), you'll rarely find an end-user who doesn't prefer a SaaS application.

[NOTE: We didn't talk about SaaS applications that add value around more modern environments. We'll save that for another post.]

If I had to bet on where the majority of effort with legacy apps will go over the next 3-5 years, I'd put the odds at something like this:

  • Ignore: 60-70%, with a focus on UNIX/RISC migrations to x86/VMware
  • Rewrite: 15-20%, with a focus on integration with mobile apps
  • SaaS-ify: 30-50%, with a focus on applications that drove "productivity" in the 90s and 00s (email, collaboration, etc.), now recognized as non-differentiated commodities that nobody really wants to manage.



March 28, 2015  12:47 PM

Microsoft and Docker – Strategic or Strange-tegic?

Brian Gracely
cloud, Docker, Microsoft, Windows

Ever since Docker raised their most recent funding round ($40M Round B at ~$300M valuation), many people have speculated about the future of the company. Do they evolve to become the next VMware? Do they have a monetization model that could lead to an eventual IPO? Do they get acquired by a larger company – and if so, by whom?


From Scott Gu's blog – http://weblogs.asp.net/scottgu/docker-and-microsoft-integrating-docker-with-windows-server-and-microsoft-azure

Given that Docker is usually discussed in the context of building/deploying/managing Linux containers, it was surprising to see Docker announce a partnership with Microsoft to enable Docker on Windows (and not just Boot2Docker for Windows). Interesting. Strange bedfellows? This isn't the first time Microsoft has announced support for the new hotness many months or years after the initial buzz was created in the community (Zune, Phone, Tablet, Azure IaaS, etc.). But given Microsoft's resources and reach, being a fast-follower is essentially their business model these days – and "v3" is their new beta or GA.

But the more I thought about it, the more this makes sense for Microsoft. As more computing activity moves to either mobile devices (tablets, phones) or public clouds, the underlying OS is becoming less Microsoft-centric. But under the new Microsoft leadership, the willingness to embrace things like iOS or Linux (in Azure) is becoming more commonplace. It appears they are embracing (or re-embracing) the value of monetizing the applications and frameworks above the OS.

So why does Docker make sense for Microsoft? In a nutshell, because Docker is becoming the element nearest to the OS where developers still care about the technology. And because Docker provides a truly portable format (unlike VMs) across many environments – laptops, clouds – it has the potential to help future developers make a (more) seamless transition of their applications onto Microsoft's platforms. If future applications can be written to use native Linux or Windows Docker containers, it not only removes friction for the developers, but it also doesn't create revenue friction (loss) for Microsoft. That was always the burden when Microsoft adopted the new hotness in the past – it had to be adapted to and locked into Windows.

Whether Microsoft becomes an active community contributor, a proprietary extender (MS Containers 2015?) or an acquirer is yet to be seen. Any of those outcomes is possible. There's new leadership in place that is doing things very differently than in the past. Or maybe we'll see (generic) Docker support in Windows alongside a Microsoft-specific version, giving developers choice with the potential for a licensing uplift.

The nice thing about open source is that we can all follow along to see how things are progressing (Docker + Azure; ASP.NET + Docker). This doesn't mean we'll see everything, as some features/functions could be held back from upstream contributions (i.e., "kept private"), but it's a big step forward for Microsoft.


March 14, 2015  12:11 PM

Thoughts on GigaOm…

Brian Gracely
Uncategorized

Earlier this week, GigaOm ceased operations. The news came quickly on Monday night, surprising most of us that have been making it a daily read since 2006. We read their site, we attended their events and we listened to their podcasts. And for those of us that don’t live in Silicon Valley, it was a nice way to get updates on the new things happening each week.

Sadly, as frequently happens in our industry, people quickly began piling on about why GigaOm ended. Rival sites immediately and vaingloriously posted articles to kick dirt on the tombstone. Others complained about financial mismanagement. And media pundits all tried to decode the economics of the disconnected dots that weren't profitable enough.

I don’t know what happened at GigaOm. I have a few friends that worked there, but I’m not going to ask them for any inside dirt. It doesn’t matter to me. Especially in our industry, the past is the past.

If I think back on GigaOm, all the way to 2006, so many macro-level things have changed. Back then, many of the things we take for granted now didn't exist (or were in their infancy):

  • Blogs – Blogger (2003); WordPress (2003)
  • Twitter – (2006)
  • YouTube popularity and fast broadband – for uploads – (2005)
  • A conference every day, everywhere
  • A meetup every day, everywhere
  • Streaming video from events
  • Online sites to easily share presentations and talks

But we don't live in that world anymore. Everyone has a blog. Everyone is on Twitter, driving the news and conversations. (Almost) everyone has a podcast. And if you haven't experienced FOMO over daily or weekly meetups, you don't live in the tech industry. We know the speakers and keynote presenters because their brand is accessible on Twitter. We get leaked press releases and announcements from bloggers that are part of vendor-led "influencer groups."


February 28, 2015  2:24 PM

The Changing Cloud Supply-Chain

Brian Gracely
AWS, EMC, Open source, OpenStack, VMware

There's a framework that's often taught in undergrad and graduate-level business schools called "Porter's Five Forces," created by Harvard professor Michael Porter. It looks at the elements within a competitive model and attempts to help students think about the cause/effect relationships as those forces change.

One element of this model looks at replacement or substitution within the supply-chain – the path between the creator of value and the consumer of value. In some industries, the path is long and involves many intermediaries. In others, the path is much shorter. Now, aside from academics who study supply-chains for research, most people only care about supply-chains as they relate to their ability to create revenue/profit/value from their place in a given chain. And if they're smart, they are also considering what happens when that chain gets longer or shorter – potentially replacing some element that provided value before (see: Blockbuster physical stores in the movie supply-chain).

The interesting thing about supply chains is that they look different depending on your point of view. For example, in the IT supply chain, many of us are used to a model like this:

VC>>>VENDOR>>>DISTRIBUTOR>>>SYSTEM-INTEGRATOR or VAR>>>CUSTOMER 

But that model is going through radical change because of two elements – open source software and public cloud services.

For example, let’s look at the basic AWS (Amazon Web Services) supply chain:

AWS>>>>>CUSTOMER

Now, let's look at the basic supply-chain of an open source project/company, Docker:

OPEN(DOCKER)<<<|>>>>CUSTOMER>>>|<<<DOCKER(INC)

Pretty different from the IT supply-chain above, isn't it?


February 28, 2015  1:00 PM

Rethinking the CCIE – 20 years later…

Brian Gracely
CCIE, Cisco, Linux, Networking, VMware

Sometimes you just can't escape your past. It hangs around, popping its head out from time to time, reminding us of decisions we made years ago. I started out life in this crazy world of IT in the networking space, working for Bay Networks and then Cisco. I did tech support in Cisco's TAC. One of the requirements of the job was that you needed to earn your CCIE within one year. Working at Cisco, we had a distinct advantage: we were surrounded by tons of people with deep Cisco expertise and had unlimited access to labs full of equipment.

So for about four months, I spent a few hours every day, and every weekend, locked in those labs trying to learn everything from BGP and OSPF to obscure things like floating routes with async modems and DECnet. Plugging in cables, racking new gear, and spending lots of time on green-screen consoles. Eventually I sat for the two-day test and earned my CCIE.

Like many classes we take in school, I don’t think much about that CCIE anymore. It was great to learn how to learn, but I never really used the certification for career advancement.

And then a few weeks ago, I read this really good article by Matt Oswalt (@mierdin) about Five Next-Gen Networkers Skills. I agreed with everything on Matt’s list, and the great thing is that those skills are easier than ever to learn (here, here).

It got me thinking about how someone would approach a certification like the CCIE today. Given the rise of online communities, recognition programs and things like StackOverflow and GitHub, I'm not sure certifications will be a big deal in the future. Employers or clients can find out a ton about your background, your work, your personality and your ability to communicate with just a few searches and clicks. Do they really need a certification to evaluate you anymore?


February 28, 2015  11:34 AM

Container Frameworks – Tools, Products or Solutions?

Brian Gracely
DevOps, Docker, Linux, Unix

Earlier, I covered some of the basics of the container technologies that are quickly evolving within web-scale clouds and will eventually migrate into Enterprise and Mid-Market data centers. Once you begin to understand the basics and have played with the technology, you'll probably start asking yourself a few basic questions:

  1. How should I manage a growing number of containers?
  2. What tools are available to coordinate containers within my datacenter or on public cloud environments?
  3. Are all of these different container technologies compatible?
  4. If I want an alternative to pure open source consumption, are there commercial alternatives for containers that I could use?

We talked about some of these things on a recent podcast with Nick Weaver (@lynxbat), from Intel’s Software-Defined Infrastructure group. Matt Asay also dove into this a little bit recently, looking at Docker + Mesos.

What it boils down to is the common tradeoff between using a set of "interchangeable" tools (the UNIX/Linux philosophy) or using a more complex system/solution. The discussion has centered around a phrase that Docker's Solomon Hykes (@solomonstre) uses: "batteries included, but removable."


February 28, 2015  10:17 AM

The Evolution of Container Frameworks

Brian Gracely
Cloud Foundry, DevOps, Docker, Google, twitter

Behind the scenes of some of the largest public clouds, there's a transformation happening (or already complete) that many people in the Enterprise or Mid-Market space have never heard of. The transformation started with the use of Linux containers instead of heavier virtual machines (VMs) to host applications. Using LXC and cgroup technology, native to Linux, companies such as Google, Twitter, Facebook and many other web-scale companies run their applications in lightweight "containers" to speed boot times and simplify management of application stacks.
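
To make the cgroup piece concrete, here is a minimal sketch of one primitive that container runtimes build on: capping a process's memory by writing to the cgroup filesystem. It assumes a cgroup-v1 memory controller mounted at the usual path, the group name "demo" is made up, and it requires root privileges.

    # Minimal sketch: cap a process's memory using the cgroup-v1 sysfs interface.
    import os

    CGROUP = "/sys/fs/cgroup/memory/demo"   # hypothetical group name
    os.makedirs(CGROUP, exist_ok=True)      # creating the directory creates the group

    # Limit the group to 256 MB of RAM.
    with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
        f.write(str(256 * 1024 * 1024))

    # Move the current process into the group; child processes inherit the limit.
    with open(os.path.join(CGROUP, "tasks"), "w") as f:
        f.write(str(os.getpid()))

Runtimes like LXC and Docker wrap primitives like this (plus kernel namespaces for isolation) so operators never have to touch the sysfs interface directly.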

And from those core technology elements, a vibrant ecosystem has been forming over the last couple of years. Lots of timelines exist to explain container usage, but here are a few key milestones:

  • Cloud Foundry began using containers under the covers of its Platform-as-a-Service project, through a framework called Warden.
  • dotCloud, another early PaaS platform, used a technology it called "Docker" as its underlying container management framework and runtime. Eventually, the Docker technology was spun off and became not only its own independent open source project, but also a commercial company.
  • Many variants and ecosystem projects have spun up around the Docker project. In 2014, most of the major public clouds (AWS, Google, Digital Ocean, Rackspace, etc.) created services that enable containers (of various formats) to be natively consumed and managed on their clouds.
  • CoreOS was started in 2013 as a small Linux OS distribution that was optimized for web-scale deployments, security, managed updates and running containers.
  • Canonical created an alternative framework for containers + VMs, called LXD (or “lex-dee”).
  • CoreOS created an alternative container runtime called Rocket.
  • Twitter popularized Mesos, a framework for managing cluster resources and containers that originated as a UC Berkeley research project. An ecosystem has grown around Mesos, including MesosCon, the commercial company Mesosphere, and Mesosphere's Marathon framework.
  • Google open sourced their container management framework, called Kubernetes.
  • Even Microsoft has gotten onto the container bandwagon and committed to supporting containers.



January 24, 2015  12:21 PM

DevOps is a Reorg, Takes 6-12 Months

Brian Gracely
DevOps

From time to time, I'll link to episodes of the podcast that reinforce topics discussed on the blog. If nothing else, it saves me a bunch of time transcribing the more interesting lessons. We recently did a show with Adrian Cockcroft (@adrianco), currently a VC at Battery Ventures and formerly the Cloud Architect at Netflix. That show was so full of great insight that some of the quotes need to be brought directly over to the blog.

  • “DevOps is a re-org. It takes 6-12 months to happen. Mobile apps often lead the change.”
  • “The mobile team deploys to the AppStore, so they can bypass Ops.”
  • “Part of Docker’s success was because they weren’t threatening (at first).”
  • "Look at the SaaS ecosystems that can be built around open-source company technology."

If you're a CIO and you read the blog headline, you might be scared off from anything to do with DevOps. Reorgs are typically not fun, and who knows if 6-12 months is an accurate timeline for making the changes necessary to respond faster to the business. So what do you do?

Here’s a tip that we’ve learned in creating the EMC {code} team. Start small, and give them enough time to really focus on accelerating the learning curve.

What does this mean in reality?

Our team is very small, and by their own admission (here, here, here), they didn't start off as great hackers/coders/DevOps people. Some decent scripting skills, some basic experience working with APIs. Nothing that screams 10X ENGINEER!! But they had a few basic things that could help any group trying to evolve its DevOps capabilities:

  • Strong Levels of Curiosity – wanting to learn new things and solve problems with those new skills (just pick a program idea)
  • Dedicated Time to Learn the Skills – this is now their primary job

In just a few months of dedicated focus, I can clearly say that their progress across a bunch of these DevOps technologies has been significant. And not just on one tool or language, but on a bunch of them. It's as if the process of learning itself has been simplified. They are building small, loosely coupled projects that will significantly reshape how EMC will LEARN|CODE|DEPLOY technology in the future and interact with customer/partner DevOps teams. And along the way, they are learning the process needed to be successful in a world where there should be new, visible output every week, instead of every few months.

This is something CIOs could do within their teams as well. Yes, it would require that you carve out some resources to be more dedicated. And it would require that those accelerating their learning should be responsible for giving back knowledge to the broader team. But this is how successful reorgs work – create a lighthouse, show what’s possible, and then attempt to make the lighthouse model the new normal. It creates a flywheel – early learning, teaching, new learning, sharing, new learning.

And DevOps is not just for the cool start-ups in Silicon Valley anymore.


January 24, 2015  11:32 AM

Of DevOps and Craft Beers

Brian Gracely
DevOps, Docker, Puppet

Lots of people are trying to figure out DevOps these days. Where should I get started? What tools do I use? What processes within our business have to change? What language do those “developer” people speak?


One of the most common starting points for these discussions is an automation tool. Many people will argue that the DevOps journey isn't about tools, but the reality is that tools are tangible, and bits make techies more comfortable than talking about process and emotions.

If you’re a Unix/Linux SysAdmin, things may not seem too weird. You’re used to unusual names – Sed, Awk, Grep. Project names like Pig, Zookeeper, SnakeCharmer, or Gump just sound like hostnames you might have in your datacenter. And you’re used to the endless debate over things that appear to be the same (such as a text editor), but have dozens of variants because everyone wants things just the way they like it. Apparently it’s unacceptable to have snowflake machines in a DevOps environment, but everyone is a snowflake when it comes to their tools.

And if you've been to a new bar/restaurant lately, you probably know the feeling. Dozens or hundreds of craft beers on tap, all with unique names, and to the naked eye they can look very much the same. What makes the Hog's Flatulence IPA any different from the Naked Conspiracy IPA? Where do I start? How do I know which one will go down easy and which one will have you up at 3am regretting all of your life's prior decisions? Will your friends look down on you if you're more of a Portland or Brooklyn or Austin or Raleigh brewery fan?

So where do you start?

The easiest thing might be to take the path well-travelled. Pick one tool from a couple of categories (e.g., Puppet, Docker, GitHub) and see how far you can get. Most likely you'll find that you can use common examples from the communities to solve some challenges. You'll begin to see how others link the common tools together (Puppet Forge, Docker Hub, GitHub repos) and you'll get comfortable with the basics. And just like craft beers, once your palate adjusts, you can start to experiment with others. Maybe you're not grasping Puppet's DSL and you want to try something simpler like Ansible or Vagrant? Maybe you feel comfortable enough with Docker to try some of the resource-management tools such as Mesos?

And working through this learning is going to help you better understand the processes behind successful DevOps environments. How do people interact on community forums? What are good blogs to read to connect the dots? What's an acceptable level of research to do before asking a question? Where are the common intersections between the "devs" and the "ops" people?

So give it a try. The first step towards DevOps has never been easier to learn.

And if you really want to go back to that Miller Lite, I’m sure there is a vendor sales-rep that would like to take you to lunch soon…



