From Silos to Services: Cloud Computing for the Enterprise


February 28, 2015  1:00 PM

Rethinking the CCIE – 20 years later…

Brian Gracely
CCIE, Cisco, Linux, Networking, VMware

Sometimes you just can’t escape your past. It hangs around, popping its head out from time to time, reminding us of decisions we made years ago. I started out life in this crazy world of IT in the networking space, working for Bay Networks and then Cisco. I did tech support in Cisco’s TAC. One of the requirements of the job was that you needed to have your CCIE within one year. Working at Cisco, we had a distinct advantage: we were surrounded by tons of people with deep Cisco expertise, and we had unlimited access to labs full of equipment.

So for about four months, I spent a few hours a day, and every weekend, locked in those labs trying to learn about everything from BGP and OSPF to obscure things like floating static routes with async modems and DECnet. Plugging in cables, racking new gear, and spending lots of time on green-screen consoles. Eventually I sat for the two-day test and received my CCIE.

As with many classes we take in school, I don’t think much about that CCIE anymore. It was great for learning how to learn, but I never really used the certification for career advancement.

And then a few weeks ago, I read a really good article by Matt Oswalt (@mierdin) about Five Next-Gen Networker Skills. I agreed with everything on Matt’s list, and the great thing is that those skills are easier than ever to learn (here, here).

It got me thinking about how someone would approach a certification like the CCIE today. Given the rise of online communities, recognition programs and things like StackOverflow and GitHub, I’m not sure that certifications will be a big deal in the future. Employers or clients can find out a ton about your background, your work, your personality and your ability to communicate with just a few searches and clicks. Do they really need a certification to evaluate you anymore?

February 28, 2015  11:34 AM

Container Frameworks – Tools, Products or Solutions?

Brian Gracely
DevOps, Docker, Linux, Unix

Earlier, I covered some of the basics of the container technologies that are quickly evolving within web-scale clouds and will eventually migrate into Enterprise and Mid-Market data centers. Once you begin to understand the basics and have played with the technology, you’ll probably start asking yourself a few basic questions:

  1. How should I manage a growing number of containers? (a minimal sketch follows this list)
  2. What tools are available to coordinate containers within my datacenter or on public cloud environments?
  3. Are all of these different container technologies compatible?
  4. If I want an alternative to pure open source consumption, are there commercial alternatives for containers that I could use?
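
On that first question, it’s worth seeing how far plain scripting gets you before reaching for a full framework. Here’s a minimal, hedged sketch that shells out to the Docker CLI to list containers and clean up the exited ones; it assumes only that Docker is installed and the daemon is running.

```python
# A minimal sketch of container housekeeping via the Docker CLI.
# Assumes: Docker installed, daemon running, current user may run "docker".
import subprocess

def list_containers():
    """Return (id, image, status) for every container, running or exited."""
    out = subprocess.check_output(
        ["docker", "ps", "--all", "--format", "{{.ID}}\t{{.Image}}\t{{.Status}}"]
    ).decode()
    return [tuple(line.split("\t")) for line in out.splitlines()]

def remove_exited():
    """The simplest possible 'management': delete containers that have exited."""
    for cid, image, status in list_containers():
        if status.startswith("Exited"):
            subprocess.check_call(["docker", "rm", cid])
            print(f"removed {cid} ({image})")

if __name__ == "__main__":
    for row in list_containers():
        print(*row)
    remove_exited()
```

If a script like this is all the “management” you need, you may not need a framework at all. The frameworks enter the picture when the container count, and the host count, outgrow this kind of housekeeping.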

We talked about some of these things on a recent podcast with Nick Weaver (@lynxbat), from Intel’s Software-Defined Infrastructure group. Matt Asay also dove into this a little bit recently, looking at Docker + Mesos.

What it boils down to is the common tradeoff between using a set of “interchangeable” tools (the UNIX/Linux philosophy) and using a more complex, integrated system or solution. The discussion has centered around a phrase that Docker’s Solomon Hykes (@solomonstre) uses: “batteries included but removable”.


February 28, 2015  10:17 AM

The Evolution of Container Frameworks

Brian Gracely
Cloud Foundry, DevOps, Docker, Google, Twitter

Behind the scenes of some of the largest public clouds, there’s a transformation that’s happening (or has already happened) that many people in the Enterprise or Mid-Market space have never heard of. The transformation started with the use of Linux containers instead of heavier Virtual Machines (VMs) to host applications. Using LXC and cgroups, technologies native to Linux, companies such as Google, Twitter, Facebook and many other web-scale companies run their applications in lightweight “containers” to speed boot times and simplify management of application stacks.
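
If you want to see the cgroup side of this for yourself, here’s a small illustrative sketch that inspects the control groups of the current process on a Linux machine; the file path in the second function assumes the cgroup v1 layout that was common at the time.

```python
# A peek at the cgroup plumbing underneath containers (Linux only).
# Paths assume the classic cgroup v1 layout; purely illustrative.

def show_my_cgroups():
    """Every Linux process lives in cgroups; print the ones we belong to."""
    with open("/proc/self/cgroup") as f:
        for line in f:
            hierarchy, subsystems, path = line.strip().split(":", 2)
            print(f"{(subsystems or '(v2 unified)'):<25} {path}")

def show_memory_limit():
    """Read the memory cap the root memory cgroup advertises (cgroup v1)."""
    with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:
        print("memory limit (bytes):", f.read().strip())

if __name__ == "__main__":
    show_my_cgroups()
    show_memory_limit()
```

A container runtime automates exactly this kind of kernel bookkeeping (plus namespaces for isolation), which is why a container can start in milliseconds while a full VM has to boot an entire operating system.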

And from those core technology elements, a vibrant ecosystem has been forming over the last couple of years. Lots of timelines exist to explain container usage, but here are a few key milestones:

  • Cloud Foundry began using containers under the covers of its Platform-as-a-Service project, through a framework called Warden.
  • dotCloud, another early PaaS platform, used a technology it called “Docker” as its underlying container management framework and runtime. Eventually, the Docker technology was spun off to become not only its own independent open source project, but also the basis of a commercial company.
  • Many variants and ecosystem projects have spun up around the Docker project. In 2014, most of the major public clouds (AWS, Google, Digital Ocean, Rackspace, etc.) created services that enable containers (of various formats) to be natively consumed and managed on their clouds.
  • CoreOS was started in 2013 as a small Linux OS distribution that was optimized for web-scale deployments, security, managed updates and running containers.
  • Canonical created an alternative framework for containers + VMs, called LXD (or “lex-dee”).
  • CoreOS created an alternative container runtime called Rocket.
  • Twitter helped popularize Apache Mesos, a cluster manager that began as a UC Berkeley research project, by running it at massive scale. An ecosystem has since grown up around Mesos, including MesosCon, frameworks such as Marathon, and the commercial company Mesosphere.
  • Google open sourced their container management framework, called Kubernetes.
  • Even Microsoft has gotten onto the container bandwagon and committed to supporting containers.



January 24, 2015  12:21 PM

DevOps is a Reorg, Takes 6-12 Months

Brian Gracely
DevOps

From time to time, I’ll link to episodes of the podcast that reinforce some of the topics discussed on the blog. If nothing else, it saves me a bunch of time transcribing the more interesting learnings. We recently did a show with Adrian Cockcroft (@adrianco), currently a VC at Battery Ventures and formerly the Cloud Architect at Netflix. That show was so full of great insight that some of the quotes deserve to be brought over to the blog directly.

  • “DevOps is a re-org. It takes 6-12 months to happen. Mobile apps often lead the change.”
  • “The mobile team deploys to the AppStore, so they can bypass Ops.”
  • “Part of Docker’s success was because they weren’t threatening (at first).”
  • “Look at the SaaS ecosystems that can be built around open-source company technology.”

If you’re a CIO and you read the blog headline, you might be scared off from anything to do with DevOps. Reorgs are typically not fun, and who knows whether 6-12 months is an accurate timeline for making the changes necessary to respond faster to the business. So what do you do?

Here’s a tip we’ve learned in creating the EMC {code} team: start small, and give the team enough time to really focus on accelerating the learning curve.

What does this mean in reality?

Our team is very small, and by their own admission (here, here, here), they didn’t start off as great hackers/coders/DevOps people. Some decent scripting skills, some basic experience working with APIs. Nothing that screams 10X ENGINEER!! But they had a few basic things that could help any group trying to evolve its DevOps capabilities:

  • Strong Levels of Curiosity – wanting to learn new things and solve problems with those new skills (just pick a program idea)
  • Dedicated Time to Learn the Skills – this is now their primary job

In just a few months of dedicated focus, I can clearly say that their learning across a bunch of these DevOps technologies has accelerated significantly. And not just on one tool or language, but a bunch of them. It’s as if the process of learning has been simplified. They are building small, loosely coupled projects that will significantly reshape how EMC will LEARN|CODE|DEPLOY technology in the future and interact with customer/partner DevOps teams. And along the way, they are learning the process needed to be successful in a world where there should be new, visible output every week, instead of every few months.

This is something CIOs could do within their teams as well. Yes, it would require that you carve out some resources to be more dedicated. And it would require that those accelerating their learning should be responsible for giving back knowledge to the broader team. But this is how successful reorgs work – create a lighthouse, show what’s possible, and then attempt to make the lighthouse model the new normal. It creates a flywheel – early learning, teaching, new learning, sharing, new learning.

And DevOps is not just for the cool start-ups in Silicon Valley anymore.


January 24, 2015  11:32 AM

Of DevOps and Craft Beers

Brian Gracely
DevOps, Docker, Puppet

Lots of people are trying to figure out DevOps these days. Where should I get started? What tools do I use? What processes within our business have to change? What language do those “developer” people speak?


One of the most common starting points for these discussions is an automation tool. Many people will argue that the DevOps journey isn’t about tools, but the reality is that tools are tangible, and bits make techies more comfortable than talking about process and emotions.

If you’re a Unix/Linux SysAdmin, things may not seem too weird. You’re used to unusual names – Sed, Awk, Grep. Project names like Pig, Zookeeper, SnakeCharmer, or Gump just sound like hostnames you might have in your datacenter. And you’re used to the endless debate over things that appear to be the same (such as a text editor), but have dozens of variants because everyone wants things just the way they like it. Apparently it’s unacceptable to have snowflake machines in a DevOps environment, but everyone is a snowflake when it comes to their tools.

And if you’ve been to a new bar/restaurant lately, you probably know the feeling. Dozens or hundreds of craft beers on tap, all with unique names, and to the naked eye they can look very much the same. What makes the Hog’s Flatulence IPA any different from the Naked Conspiracy IPA? Where do I start? How do I know which one will go down easy and which one will have you up at 3am regretting all of your life’s prior decisions? Will your friends look down on you if you’re more of a Portland or Brooklyn or Austin or Raleigh brewery fan?

So where do you start?

The easiest thing might be to take the well-travelled path. Pick one from a couple of categories (e.g., Puppet, Docker, GitHub) and see how far you can get. Most likely you’ll find that you can use common examples from communities to solve some challenges. You’ll begin to see how others are linking the common tools together (Puppet Forge, Docker Hub, GitHub repos) and you’ll begin to find a level of comfort with the basics. And just like craft beers, once your palate adjusts, you can start to experiment with others. Maybe you’re not grasping Puppet’s DSL and you want to try something simpler like Ansible or Vagrant? Maybe you feel comfortable enough with Docker to try some of the resource management tools such as Mesos?
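
To make “see how far you can get” concrete, here’s roughly what the first pour looks like — a hedged sketch assuming only that the Docker CLI is installed and its daemon is running; hello-world is Docker’s own demo image on Docker Hub.

```python
# The "first craft beer" of containers: pull a public image and run it once.
# Assumes the Docker CLI is installed and its daemon is running.
import subprocess

def run_once(image, *cmd):
    """Run a throwaway container (--rm deletes it on exit) and return stdout."""
    result = subprocess.run(
        ["docker", "run", "--rm", image, *cmd],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # Docker pulls "hello-world" from Docker Hub automatically if needed.
    print(run_once("hello-world"))
    # Then start experimenting with a stock Linux image.
    print(run_once("ubuntu", "echo", "my first container"))
```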

And working through this learning is going to help you better understand the processes behind successful DevOps environments. How do people interact on community forums? What are good blogs to read to connect the dots? What’s an acceptable level of research to do before asking a question? Where are the common intersections between the “devs” and the “ops” people?

So give it a try. The first step towards DevOps has never been easier to take.

And if you really want to go back to that Miller Lite, I’m sure there is a vendor sales-rep that would like to take you to lunch soon…

 


December 28, 2014  10:35 PM

IT Jobs in the Next 3-5 Years

Brian Gracely
Agile, DBA, DevOps

From time to time, I run into colleagues who are trying to figure out what the next few years of IT will entail and what skills to focus on to improve their jobs. During a recent conversation, we came up with the following “buckets” of technology jobs that we believe will be highly valuable. The timeframe isn’t immediate; these are areas for people to focus on over the next couple of years, as they will be relevant for the next 3-5 years.

NOTE: Beyond 5yrs, I have no idea what will happen with the technology market – I don’t own that crystal ball.

Application Developers (“The New Kingmakers”)

This area really never changes. Applications are the most important part of IT, or of technology to a business, so application developers will always be in demand. Mobile developers, back-end developers, data analytics, etc.

Data Manipulators (“Follow the Data, Follow the Money”)

This is where the DBAs (Database Admins) used to be. But things like RDS or NoSQL services via the public cloud exist now, so fewer people will maintain databases (uptime & infrastructure). We believe this will evolve toward people who can build the connectors between lots of public and private data sources, as well as all the ETL (Extract, Transform, Load) functionality to get data to a point where it can be useful across applications. This will require the ability to work across many APIs, and potentially many languages (or writing wrappers for other languages).
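
To put some code behind that job description, here’s a hedged sketch of the shape of the work. The API endpoint and field names are hypothetical; the extract-transform-load pattern is the point.

```python
# A sketch of the "data manipulator" role: extract from a (hypothetical)
# HTTP API, transform the records, load them somewhere queryable.
import json
import sqlite3
import urllib.request

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

def extract(url=API_URL):
    """Pull raw JSON records from a source API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def transform(records):
    """Keep only the fields downstream apps need; normalize cents to dollars."""
    return [(r["id"], r["customer"], r["total_cents"] / 100) for r in records]

def load(rows, db_path="warehouse.db"):
    """Land the cleaned rows in a local table other applications can query."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT, customer TEXT, total REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract()))
```

Real pipelines add retries, pagination, scheduling and schema management, but the glue work itself looks a lot like this.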

DevOps (“Agile Operations”)

We understand that DevOps isn’t a technology or a skill, but rather an operational model and a culture. But we do believe that the tools/technology at the center of DevOps will be the new normal for IT groups (and business groups). This will be a mix of on-prem and off-prem (SaaS) tools (here, here, here). This area will overlap with the infrastructure teams, which will be evolving to figure out how to deal with Networking and Storage in “software” – yep, that “Software Defined” everything. This is the most logical role for the current SysAdmin and Infrastructure groups, with the SysAdmin teams having closer ties to the Application and Data teams. This team would likely operate a PaaS platform, if that’s the model that the Application and Data teams choose to develop applications against.

So what about the Infrastructure people? We do believe that most of that technology will evolve to being Software-Defined, allowing it to take advantage of Moore’s Law on CPUs and whitebox networking platforms (merchant silicon). Will these roles still exist in a few years? Probably. But they will mostly be focused on maintaining legacy environments and legacy applications.

[Update]: The more I think about this breakdown, the more I realize that it might look a lot like the typical three tiers we have today: Apps, Databases, and Infrastructure. Fair criticism, but I’d argue that a few significant things have changed:

  • Public cloud resources and services – For the infrastructure teams, this means they aren’t focused on rack-and-stack and vendor-specific HW functionality; instead, they are focused on using those resources. For the DBA teams, they can begin using DB-centric services to provide a foundation for what they do, freeing up cycles to focus on data-centric capabilities.
  • SaaS services – Whether you’re building, deploying or managing apps, the ability to link to intelligent SaaS services (logging, cost-management, monitoring, analytics, security, etc.) is incredibly powerful, and it’s another set of things you don’t have to manage day-to-day.
  • Open-Source Software – The ability to interact with open-source software will be a critical skill across all the buckets, since so much sample code, training and functionality is available via communities. Even if IT people aren’t using open-source software themselves, they can benefit from the communities that are solving problems similar to what they might be experiencing with closed software.


December 28, 2014  8:48 PM

2015 Predictions Sure to be Wrong

Brian Gracely
AWS, DevOps, Docker, SaaS, VMware

Let’s take a look at a few 2015 Predictions:

  1. Containers are Reality – While things are chaotic around containers, 2014 was the tipping point: the technology advanced, VCs invested, and major companies laid out their strategies. And given the competitive landscape that’s evolving, we can expect to see the pace of change increase in 2015. Expect to see major IT organizations making a “container strategy” one of their top priorities in 2015.
  2. Hyper-Confusion – Hyper-Converged Infrastructure, at least in terminology, has been around since 2009, when VCE launched the Vblock and things like NetApp FlexPod soon followed. Things got more converged with offerings like Nutanix, SimpliVity and Scale Computing. But it tipped to another level when VMware announced EVO:RAIL. Almost every vendor has announced an EVO offering, but how will they differentiate themselves, and will there be offerings that don’t match the EVO framework (see: NetApp)? And will the EVO brand mean anything if every “supporting” vendor creates a different version by adding its own software (Backup, Management/Monitoring, WANOp, Security, etc.)?
  3. Son of Hyper-Converged – If we look at the evolving DevOps technologies, we find that while they are powerful, they are often complex to set up. I believe we’ll see some companies roll out pre-packaged and/or hyper-converged HW offerings that have many of the DevOps technologies pre-installed, along with several tools to make managing their environments easier.
  4. Do SaaS tools consolidate? – I’ve written several times (here, here, here) that I’m fascinated by the emergence of SaaS tools to simplify IT Ops. Services like ThousandEyes, Loggly, Boundary, Cloudability, Datadog, PagerDuty, CloudPhysics, etc. But they are all separate services and separate companies, meaning that their customers need to learn different pricing models, operating models and APIs for each service. Will we see a larger vendor (or cloud provider) start to consolidate some of these services?
  5. VMware Professionals will learn lots of non-VMware technologies – We’ve already seen VMware create a Cloud-Native Applications group, signifying that technologies like OpenStack, Docker, Mesos, Kubernetes and many others could be integrated with native VMware technologies. IT professionals who spent much of the last 3-5 years getting familiar with VMware virtualization and management tools will probably be getting crash courses in Linux, open source and many non-native VMware technologies over the next couple of years.

Those are just a few of the areas I believe will be worth watching in 2015. What will you be looking forward to?


December 28, 2014  5:48 PM

DevOps Skills 101

Brian Gracely
AWS, DevOps, Docker, IOS, OpenStack, Puppet

In the past, I’ve written that it’s never been easier to learn about Cloud Computing and develop the skills needed for success. In parallel with learning about the underlying technology behind Cloud Computing, many IT professionals are trying to evolve the skills needed to create a successful DevOps environment for their business. And keep in mind, DevOps skills are in high demand.

NOTE: DevOps is a cultural/operational model, but there is still a set of technologies that will help enable those best practices.

Learning about these technologies is getting easier and easier, with many options for learning available via the Internet (no equipment necessary).

Creating a culture within your company that can effectively model themselves on DevOps principles can be difficult, but learning the technology skills needed has never been easier. Pick a tool, pick a tutorial and see what you can learn in a few days or a few weeks.
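
One concrete way to pick that first tool: see what’s already on (or missing from) your machine. Here’s a tiny sketch; the tool list is just an arbitrary sampling to choose a starting point from, not a prescription.

```python
# Which common DevOps CLIs are already installed? The list below is just
# a sampling of popular tools, not a recommendation of any one of them.
import shutil

TOOLS = ["git", "docker", "vagrant", "ansible", "puppet"]

if __name__ == "__main__":
    for tool in TOOLS:
        path = shutil.which(tool)
        print(f"{tool:<10} {path or 'not found -- a candidate first tutorial'}")
```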


December 28, 2014  4:43 PM

Container Wars

Brian Gracely
AWS, Docker, Google, IBM, Microsoft, VMware

The last few weeks and months have been very interesting in the world of Linux containers, culminating in a number of announcements at DockerCon Europe in November.

Earlier in the year, Docker made containers very interesting. After separating from dotCloud (an early PaaS company), the company announced a $40M round of funding and a number of interesting strategic moves.

And outside of the Docker-specific announcements, there was quite a bit of ecosystem expansion, a number of competitive announcements, and some confusion about the direction of Linux container technology.


November 26, 2014  9:23 PM

Unboxing an Amazon FireTV

Brian Gracely
Amazon, Android, AWS, Cloud Computing, IOS, Netflix

Usually I use this blog to write about Cloud Computing and modern apps, but since it’s the holiday season, I thought I’d write about some technology that I’m sure some readers are considering for purchase. As much as they love Enterprise technology, they love their gadgets just as much.

As an Amazon Prime member, I got an offer for early access to the new Amazon Fire TV “stick”, a small device that plugs into an HDMI port on your HDTV. Since I’ve never been happy with my Apple TV, I decided it was worth a try.

Here’s my experience.

  • Unboxing was nice. Simple, clean packaging. Couple devices, cable, batteries.
  • The Fire TV device doesn’t get power via HDMI, so it requires an external cable. The cabling is too short, and the connector has the cable at the wrong angle. The cable is 4-5′ long and should be more like 7′. And the power plug looks like an AC/USB-type connector, but the mini-USB port is on the side, not the top, so it’s a hassle to plug into any power strip that has a bunch of existing plugs.
  • Initial setup was simple. Plug it in, Fire TV tries to find the WiFi SSID, then you enter the WiFi password.
  • Fire TV initially tried upgrading software – this failed, and then told me to unplug/power-off.
  • Another software upgrade attempt. Completed, rebooted.
  • Another software upgrade; it required the WiFi password again, with a different keyboard. It seems they changed the keyboard layout between versions of the software. Of note, they changed how capital letters are entered.
  • Fire TV was able to communicate with Amazon.com, and it knew that this device was registered to my Amazon Prime account. Nice job by Amazon. It will be interesting to see whether the registration updates if I ever need to return or exchange the physical device.
  • The device is now working after 3 software updates.
  • Introduction Video. Nicely done. Simple, easy to understand. Basically a walk-through of the remote, plus an ad to buy Amazon Prime.
  • Fire TV includes integration with 5GB of Google Drive. I was sort of surprised it wasn’t linked to an AWS S3 service, but I suppose this is because S3 isn’t really an end-user consumer service (e.g., Google Drive, Dropbox, OneDrive, etc.).
  • Fire TV comes with a simple remote. There is also a remote app for Android and the Amazon Fire Phone (not yet available on iOS). This includes the voice-activated features, which replace typed search. It may also have some cool console-level functionality for games.
  • Second-screen capable with a Fire Tablet or the Fire TV app. I don’t have one of these devices, so I couldn’t test this.
  • You have to “download” certain apps, such as Netflix and Hulu Plus. Not sure how much storage the device has or how many apps can be downloaded. After downloading a few, I suspect each app is just a small chunk of code and most of it runs in the AWS cloud.
  • Had to adjust the TV sizing for Netflix – it didn’t auto-detect the screen size or display settings. It was fairly simple – they give you arrows on the screen and you just make sure they fit within the display.
  • The Netflix login uses a QWERTY keypad – yet another keypad for account entry (the others were alphabetical). Sort of odd that Amazon doesn’t mandate any user-experience consistency.
  • Netflix navigation is different than on Apple TV – there’s no concept of a “home” button; the home button on Fire TV (within the Netflix app) takes you all the way back to the Fire TV home screen.
  • The games are decent. They aren’t high-end console quality, but most are free and seem to be of the quality of the early Wii games, which is good enough for young kids. And no extra consoles or controllers to buy.

I’ve been playing with it for a couple of days. After some of the initial setup hiccups, it seems like a nice device. The responsiveness is better than an Apple TV, and the UI, while still clunky with the native remote, is also better than the Apple TV’s. Buying things through Amazon Prime, such as HD movies, is very simple, and it’s great that they play immediately – versus the long download times from Apple. Makes the kids happy, especially during holiday breaks and bad weather.

Overall I’d give it a thumbs up – a good value for under $40. Hopefully they release the iOS remote soon, and I’ll be a much happier user.

Nicely done Amazon.


