In the past, I’ve written that it’s never been easier to learn about Cloud Computing and develop the skills needed for success. In parallel to learning about the underlying technology behind Cloud Computing, many IT professionals are also trying to develop the skills needed to create a successful DevOps environment for their business. And keep in mind, DevOps skills are in high demand.
NOTE: DevOps is a cultural/operational model, but there is still a set of technologies that will help enable those best practices.
Learning about these technologies is getting easier and easier, with many options for learning available via the Internet (no equipment necessary).
- Try GitHub – a great way to experiment and learn about “Git” and the popular online service that is used by more and more developers (both Apps and Ops)
- Try Docker – a great way to learn the basics of building, deploying and managing Linux containers. Also, check out the Docker Book, a great resource in written format.
- Try OpenStack – let the OpenStack Foundation run the underlying resources you need to try out any of the OpenStack projects.
- Try Apache Mesos – the container scheduling technology that originated at Twitter
- Try Google Kubernetes – learn about the container management system that Google uses on the Google Cloud
- Zero to Docker (NetFlixOSS) – you’ve heard about all the cool things that NetFlix does to run their cloud, and now you can quickly experiment with any of the open-source projects they have created (and use in their own cloud)
- AWS Free Tier (and Free Training) – need to learn about why so many people are talking about and using AWS? All of those resources are available to you, on-demand, for free (at least at the smallest sizes)
- Beginner’s Guide – http://lifehacker.com/5744113/learn-to-code-the-full-beginners-guide – Don’t consider yourself a developer? Here are the basics that will help you if you want to develop applications, or just write some code to help you better manage the Ops side of your IT environment.
- Participate in a Community Project on Social Coding – 12 Days of Commitmas #commitmas
- Learn about Puppet – https://docs.puppetlabs.com/learning/
- Learn about Chef – https://learn.chef.io/
- Learn about Ansible – http://www.ansible.com/get-started
- Learn about Vagrant – http://www.erikaheidi.com/blog/a-begginers-guide-to-vagrant-getting-your-portable-development-e
- Code School – https://www.codeschool.com/
- Code Academy – http://www.codecademy.com/
- Write an iOS App – http://lifehacker.com/i-want-to-write-ios-apps-where-do-i-start-1644802175 – Maybe you’ve got a great idea and want to see if others agree? iOS is the mobile platform that has paid the greatest amount to application developers.
Creating a culture within your company that effectively models itself on DevOps principles can be difficult, but learning the needed technology skills has never been easier. Pick a tool, pick a tutorial and see what you can learn in a few days or a few weeks.
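To give a concrete taste of the Docker option above, the whole build/deploy loop starts from a Dockerfile. A minimal sketch – the base image and the stand-in web server here are just illustrative choices, not a recommendation:

```dockerfile
# Minimal example image: Ubuntu base plus Python, serving files over HTTP.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python
EXPOSE 8000
# Serve the current directory over HTTP -- a stand-in for a real application.
CMD ["python", "-m", "SimpleHTTPServer", "8000"]
```

Build it with `docker build -t hello-www .` and run it with `docker run -p 8000:8000 hello-www` – a few minutes of experimenting like this teaches most of the core image/container concepts.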
The last few weeks and months have been very interesting in the world of Linux containers, culminating in a number of announcements at DockerCon Europe in November.
Earlier in the year, Docker made containers very interesting. After separating from dotCloud (an early PaaS company), they announced a $40M round of funding and a number of interesting strategic moves:
- Docker Hub – a public, centralized marketplace and repository for storing (and distributing) Docker images.
- Docker Hub Enterprise – an on-prem version of Docker Hub that companies could run themselves for private Docker images. And when partners like IBM get on-board, it adds Enterprise credibility.
- Microsoft announced support for Docker late in the year, and it will be interesting to watch how “Linux container management” potentially gets integrated into Windows technology.
- Then the public cloud providers all jumped on-board to either offer Docker services, or broader container (and container management) services – Digital Ocean, Tutum Cloud, Google Cloud and AWS.
- The scheduling technologies for deploying containers at scale, Google Kubernetes and Apache Mesos (podcast) also had big announcements (and conferences)
- Docker (the company) finished the year with some new technology announcements at DockerCon Europe – including Docker Machine, Docker Swarm and Docker Compose.
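Of those three announcements, Compose gives the clearest picture of the direction: declaring a multi-container application in a single YAML file. A minimal sketch in the original (v1) Compose format – the service names and app details are hypothetical:

```yaml
# docker-compose.yml: a web app built from the local Dockerfile, linked to Postgres.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  image: postgres
```

A single `docker-compose up` would then build and start both containers together, which hints at why scheduling and multi-container tooling dominated the announcements.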
And outside of the Docker-specific announcements, there was quite a bit of ecosystem expansion, competitive announcements and some confusion about the direction of Linux container technology.
Usually I use this blog to write about Cloud Computing and modern apps, but since it’s the holiday season, I thought I’d write about some technology that I’m sure some of the readers are considering for purchase. As much as they love Enterprise technology, they love their gadgets just as much.
As an Amazon Prime member, I got an offer for early access to the new Amazon Fire TV “stick”, a small device which plugs into an HDMI port on your HDTV. Since I’ve never been happy with my Apple TV, I decided it was worth a try.
Here’s my experience.
- Unboxing was nice. Simple, clean packaging. Couple devices, cable, batteries.
- The Fire TV device doesn’t get power via HDMI, so it requires an external cable. The cabling is too short, and the connector has the cable at the wrong angle. The cable is 4-5′ long and should be more like 7′. And the power plug looks like an AC/USB type connector, but the mini-USB portion is on the side, not the top, so it’s a hassle to plug into any power strip that already has a bunch of plugs.
- Initial setup was simple. Plug it in, Fire TV tries to find the WiFi SSID, then enter the WiFI password.
- Fire TV initially tried upgrading its software – this failed, and it then told me to unplug/power-off.
- Another software upgrade attempt. Completed, rebooted.
- Another software upgrade; required another WiFi password with different keyboard. It seems that they changed the keyboard layout between versions of software. Of note, they changed how capital letters were entered.
- Fire TV was able to communicate with Amazon.com and knew that this device was registered to my Amazon Prime account. Nice job by Amazon. It will be interesting to see whether the registration updates if I ever need to return/exchange the physical device.
- The device is now working after 3 software updates.
- Introduction Video. Nicely done. Simple, easy to understand. Basically a walk-through of the remote, plus an ad to buy Amazon Prime.
- Fire TV includes integration with 5GB of Google Drive. I was sort of surprised it wasn’t linked to an AWS S3 service, but I suppose this is because S3 isn’t really an end-user consumer service (eg. Google Drive, Dropbox, One Drive, etc.)
- FireTV comes with a simple remote. There is also a remote app for Android and the Amazon Fire Phone (not yet available on iOS). This includes the voice-activated features, which replace typing out searches. It may also have some cool console-level functionality for games.
- 2nd Screen capable with Fire Tablet or FireTV app. I don’t have one of these devices, so I couldn’t test this.
- You have to “download” certain apps, such as NetFlix and Hulu Plus. Not sure how big the device is in terms of capacity, or how many apps can be downloaded. After downloading a few, I suspect each is just a small chunk of code and most of it runs in the AWS cloud.
- Had to adjust the TV sizing for NetFlix – not auto-detecting the screen size or screen display settings. Was fairly simple – they gave you arrows on the screen and you just made sure they fit within the screen display.
- NetFlix login uses QWERTY keypad – yet another keypad for account entry (others were alphabetical). Sort of odd that Amazon doesn’t mandate any user-experience consistency.
- NetFlix navigation is different than on AppleTV – no concept of a “home” button – home button on FireTV (within NetFlix app) takes you all the way back to FireTV home screen.
- The games are decent. They aren’t high-end console quality, but most are free and seem to be of the quality of the early Wii games, which is good enough for young kids. And no extra consoles or controllers to buy.
I’ve been playing with it for a couple days. After some initial setup hiccups, it seems like a nice device. The responsiveness is better than an Apple TV, and the UI (while still clunky with the native remote) is better than an Apple TV as well. Buying things off Amazon Prime, such as HD movies, is very simple, and it’s great that they play immediately – vs the long download times from Apple. Makes the kids happy, especially during holiday breaks and bad weather.
Overall I’d give it a thumbs up – a good value for under $40. Hopefully they release the iOS remote soon; then I’ll be a much happier user.
Nicely done Amazon.
Last week at AWS re:Invent, the AWS team introduced a huge number of new products/services. A few of them are available now, but many are still in beta or won’t be available until 2015. Here are my notes from reviewing the services.
AWS continues to grow, but it does appear that the growth is slowing somewhat – it’s always more difficult to sustain high-percentage growth as overall revenues grow. They seem to have a trend of being up in Q1, down in Q2/Q3 and then up in Q4 (historically).

Lots of longer-term, strategic announcements at this event, with many of the new services building on top of (and combining) foundational services – EC2, S3, SNS, CloudWatch, CloudTrail. Somewhat surprised that they announced so many services that are not yet available or don’t have GA dates, although that tends to happen the more you engage with larger Enterprise customers that ask for features to solve complex use-cases. AWS seems to have no issues cannibalizing the successful segments of their ecosystem of technology partners to further the number of direct services they can offer to customers – Oracle, GitHub, Puppet/Chef, Jenkins, Cloud Foundry, Heroku, Dell, Rightscale, VMware, etc. No explicit prices were announced, but I suspect that we’ll see greater analysis of pricing for some of the new services as they become GA, and overall cost/ROI will be slightly lower than building/managing all of those individually.
- 40% YoY revenue growth
- Several services only in limited availability (eg. alpha/beta) into 2015, with no specific GA dates announced
AWS growth (Ben Kepes – Forbes) – http://www.forbes.com/sites/benkepes/2014/07/29/just-how-big-is-amazons-cloud-business/
AWS growth (VentureBeat) – http://venturebeat.com/2014/07/24/aws-revenue-2q14/
Aurora – RDS – available in 2015 – [beta now]
- Next-gen MySQL RDS
- Stated as 4x performance of previous RDS
- Manages the sizing of underlying EC2 instances (eliminate EC2 instance confusion)
- Only available in VPC – targeted at the Enterprise
- Don’t provision storage ahead of time – allocated based on DB size (eliminate Storage Admin)
- Multi-AZ replication; Multiple Copies (eliminate Backup Admin)
- Need to check on pricing difference from existing MySQL RDS
- Write up from Ben Whaley (@AmTheWhaley; AWS Hero Award winner) on Aurora, KMS and Code
Code Management & Deployment (CodeDeploy, CodeCommit, CodePipeline) – [only CodeDeploy GA available now, others are TBD]
- Targeted at automation tools – Chef/Puppet/Ansible (CodeDeploy) – though it can also be used alongside those tools
- Requires an agent on each machine
- Focused on scalable deployments and the associated availability services (ELB, AZs, etc.)
- Blueprints (versioned “Deployments”) can be stored in S3, GitHub or CodeCommit
- Multiple deployment options (each machine, groups of machines, all-at-once) and customization options
- Targeted at GitHub (CodeCommit)
- Hosts Git repositories and interacts with existing Git tools.
- Targeted at Jenkins (CodePipeline)
- Graphical view of pipeline and deployment process
- Serial and Parallel actions
- Time-based and Manual actions
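Based on the bullets above, a CodeDeploy deployment revision is described by an AppSpec file that travels with the code and tells the per-machine agent what to copy and which lifecycle hooks to run. A minimal sketch – the paths and script names are hypothetical:

```yaml
# appspec.yml -- stored at the root of the revision (in S3, GitHub or CodeCommit)
version: 0.0
os: linux
files:
  - source: /app
    destination: /var/www/app
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/install_dependencies.sh
  ApplicationStart:
    - location: scripts/start_server.sh
```

The hooks are where CodeDeploy overlaps with (or hands off to) Chef/Puppet/Ansible – those scripts can invoke whatever configuration tooling a team already uses.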
Key Management Service (KMS)
- User managed key service
- Integrated with S3, EBS, RedShift
- Integrated with CloudTrail to view logs of key usage, changes – for regulatory & compliance
- Supports AWS IAM for multi-user environments
- AWS KMS – Cryptographic Details
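The “Cryptographic Details” referenced above center on envelope encryption: KMS holds a master key that never leaves the service, and it wraps per-object data keys so only the wrapped key travels with the data. A toy Python sketch of the pattern – the XOR “cipher” here is for illustration only, not real cryptography:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher driven by a SHA-256 keystream -- illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Envelope encryption pattern (conceptually what KMS does for S3/EBS/RedShift):
master_key = secrets.token_bytes(32)   # held by the key service, never exported
data_key = secrets.token_bytes(32)     # generated fresh per object
plaintext = b"customer record"

ciphertext = keystream_xor(data_key, plaintext)    # bulk data under the data key
wrapped_key = keystream_xor(master_key, data_key)  # data key under the master key
# Store (ciphertext, wrapped_key) together; discard the plaintext data key.

# To decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = keystream_xor(keystream_xor(master_key, wrapped_key), ciphertext)
assert recovered == plaintext
```

Because every unwrap requires a call to the key service, logging those calls (via CloudTrail, per the bullet above) gives auditors a record of exactly who used which key, and when.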
AWS Config – [in preview]
- Inventory Existing AWS Resources/Services
- Track Changes and Associations of Resources
- Pull Config data into 3rd party tools (Logging, Auditing, Compliance, Config-Mgmt, etc.)
- Stores triggers (and snapshots of triggers) in S3; uses SNS to distribute updates – additional costs for those services on top of Config charges – interesting that it isn’t bundled like resources in Aurora.
EC2 Container Service (ECS) – Container Management for AWS – [still in preview]
It has been said, “Beware of Geeks Bearing Gifts”, especially when those gifts come packaged as open-source software. It can easily be argued that the growth of the modern Internet and many of the largest players (Amazon, Google, Facebook, Twitter, etc.) has been built upon open-source technology. In some cases, open-source can be used as a competitive weapon (see: Android vs iOS). In other cases it reduces supply-chain costs for producing digital goods.
But just because it works for the consumer Internet, does that mean it will work for the Enterprise and Mid-Market segments of business that leverage IT for productivity and profit? And does it have to, considering the parallel rise of public cloud computing resources, which are fiercely competing for those same IT budgets?
It’s a challenging business model, but some vendors have made open-source the core of their business – such as RedHat, Canonical, Docker, Puppet, Chef, and many of the OpenStack distributions. Others have open-source as an option, but target their sales primarily towards commercial endeavors – Pivotal, Cloudera, Hortonworks. And still others are beginning to more actively contribute to open-source projects – Cisco, HP, EMC, VMware, etc. Alongside this, they are also adding programs focused on developers, providing code and resources around various technologies.
The other side of this equation is the Enterprise and Mid-Market companies that may choose to use these open-source technologies. While the economics of open-source can seem attractive on the surface (FREE??), the reality is that open-source is different from commercial software – support, documentation, integrations – and it requires changes to existing skills and processes. They also need to track the projects if they aren’t buying directly from a vendor. Hence why so many open-source centric companies also have healthy consulting and training practices to support the software distributions.
The Paris OpenStack Summit completed this week (Day 1, Day 2, Day 3) and while there were many new vendor and project announcements, there was also an underlying buzz that left me wondering if OpenStack is reaching a crossroads.
That buzz was people questioning how, or if, containers should fit into the framework. A project (Magnum) was kicked off, but the question on Twitter and around the show was whether this overly complicates OpenStack, or creates too much overlap with other (existing) schedulers such as Google’s Kubernetes and Apache Mesos.
As we’re seeing on a day-to-day basis, the growth and interest around Docker (container management) is accelerating rapidly. Google added to that acceleration by announcing a number of new container-centric services for Google Cloud Platform (@googlecloud). Many are expecting AWS to follow-up with a Docker announcement next week at AWS re:Invent.
Four years in, and the OpenStack crowd still seems to be figuring out what problems they are focused on solving – AWS competitor, VMware competitor, Hosting Services competitor? Maybe it’ll be all of them, but there isn’t massive momentum in any of those areas yet. And now Docker is the cool new kid on the block. And Docker doesn’t seem to be confused about what areas it’s focused on – modern applications.
Modern applications were supposed to be the focus area of OpenStack. But there are still too many customers hoping that it will evolve into “free VMware” – and struggling with the lack of “VMotion” and other so-called Enterprise features. OpenStack pundits don’t want to go down that path, because that’s just automated virtualization and not cloud computing.
It’s that time again, OpenStack Summit. The semi-annual gathering of the OpenStack masses to engineer the next set of projects (and “Kilo” release), and marketeers to tell us how OpenStack is real, has lots of Enterprise customers and will overtake the world soon…eventually…or already has.
And if we step back and look at how OpenStack has evolved over just the last four years, it’s been an “interesting” journey.
It started with Rackspace and NASA deciding to collaborate on Compute (Nova) and Storage (Swift) projects. And they would open-source the work. And while open-source projects were by no means new, the fact that this was called “OpenStack” threw lots of people into a tizzy – especially those that sold competitive non-open-source products.
For at least the first 18 months of public OpenStack existence (there was “secret” stuff happening behind the scenes well before this went public), you couldn’t attend an event or meetup without hearing the Rackspace/NASA history. And of course people tried to explain this new model, which used commodity hardware and was best aligned to these magical applications that handled failures in a new way.
Over time, more programs were added, and opinions varied about how OpenStack would survive in the real world. Should it innovate or clone? Should it create a compatible ecosystem of commodity providers, or work to create unique business opportunities?
And of course there was the question of who should drive the ship. Will OpenStack be driven by a community (opinion, opinion), or should there be a benevolent dictator (eg. Linus Torvalds in Linux)? What occurred was the creation of the OpenStack Foundation, to mute some of the influence that Rackspace had over the programs and bring new levels of transparency and governance into the community. And then all the major vendors jumped on the bandwagon to be sponsors and attempt to influence the project in various ways.
And as 2014 comes to a close, the landscape is far different than it was just a couple years ago. The small companies are being acquired (MetaCloud/Cisco, Cloudscaling/EMC, Inktank/RedHat, eNovance/RedHat) by the large vendors. Some of the early evangelists (Joshua McKenty, Jessie Andrews, Randy Bias, etc.) now work at other companies or are out of the OpenStack ecosystem. And while VC’s are still investing in the space (see Infographic), the exits have mostly been sub-$100M.
Eucalyptus has been acquired by HP, to power their OpenStack strategy. Apache CloudStack still has a strong community, but recent changes at Citrix have led some to question its future. The talk of open-source cloud wars has most definitely died down as Microsoft Azure, Google Cloud Platform and AWS continue to grow and add new functionality, without being powered by OpenStack. Cisco is pushing the OpenStack for SP’s agenda with their InterCloud strategy, and Mirantis seems to be behind many of the successful projects (in all markets).
People weren’t very happy when I asked if Paris would be the last major OpenStack Summit. While events in Vancouver and Tokyo have been planned for 2015, I still think it’s a valid question in the context of broad community involvement vs. vendor-specific efforts and activities. I believe there is still quite a bit of consolidation to happen in 2015.
OpenStack has come a long way since 2010. We no longer talk about the Rackspace/NASA history and the grandeur of disruptive movements. Now we talk about vendor strategies and if wide-scale deployments will happen. The market has changed and the largest players (both clouds and vendors) are placing heavy bets on the future. OpenStack will be part of those bets, but whether it’s a direct factor or indirect in deciding $$ winners and losers is still TBD.
About six months ago, we decided to switch the focus of The Cloudcast podcast from being about “cloud computing” to being more focused on DevOps, SaaS (the AWS ecosystem) and trends for developers. In particular, the focus on the SaaS ecosystem that enables services around AWS has been very interesting to watch evolve. They have broken up the mindset that Ops needs a “single-pane-of-glass” approach to tools. They allow customers to create the Ops model that works best for them by creating tons of native API-based integrations with other services.
The consumption model of these SaaS applications is different from what you’re used to in traditional IT environments. They charge based on usage, whether it’s in hours or in the volume of data analyzed, thus eliminating huge bills for management software that often becomes shelfware. And they allow the Ops environment to be flexible and “customized”, because most integrate with a massive number of other 3rd-party SaaS services via APIs (example).
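The shelfware difference is easy to see with a back-of-the-envelope comparison. All prices and usage numbers below are made up for illustration:

```python
# Compare a perpetual license (shelfware risk) against usage-based SaaS pricing.
# All figures are hypothetical, for illustration only.

LICENSE_UPFRONT = 120_000        # perpetual license + first-year maintenance
LICENSE_ANNUAL_MAINT = 24_000    # 20% maintenance in each later year
SAAS_PER_GB = 1.50               # per GB of log data analyzed, per month

def license_cost(years: int) -> float:
    """Total cost of the licensed tool, paid whether or not it is used."""
    return LICENSE_UPFRONT + LICENSE_ANNUAL_MAINT * max(years - 1, 0)

def saas_cost(gb_per_month: list[float]) -> float:
    """Usage-based cost: you only pay for the months/volume actually used."""
    return sum(gb * SAAS_PER_GB for gb in gb_per_month)

# A team that ramps up, then abandons the tool after 18 months:
usage = [200] * 6 + [800] * 12 + [0] * 18   # GB analyzed per month over 3 years
print(license_cost(3))   # 168000 -- paid in full, even while it sits as shelfware
print(saas_cost(usage))  # 16200.0 -- spend stops the moment usage stops
```

The exact crossover point obviously depends on real pricing, but the structural point holds: usage-based billing moves the shelfware risk from the customer back to the vendor.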
Some companies such as Cloudability (podcast), New Relic, DataDog (podcast), Loggly (podcast), PagerDuty (podcast), Evident (podcast), StackStorm (podcast) and BigPanda (podcast) focus primarily on the AWS environment. Others such as CloudPhysics, Platform9 and BlueBox take a broader view of the clouds and applications they support. But in every case, they are collecting tons of information about customer usage and gaining insight and experience about building out massively scalable infrastructures. In other words, they are creating learning curves that are orders of magnitude faster than any individual IT/DevOps group could achieve by themselves.
As a customer, being able to take advantage of that learning curve is incredibly powerful. It’s analogous to being able to hire rockstar engineers, except that it doesn’t matter where your business is based or what industry you’re in – since not everyone lives in Silicon Valley or wants to pay those rents/mortgages. You’re renting outstanding talent, and only paying for the software for as long as you want to use it. And when companies like Spanning (recently acquired by EMC) offer the ability to backup data from SaaS applications into your data center or another cloud, you begin to have data recovery or portability if a service goes out of business or you find another one you like better.
To me, the next step is figuring out how to gain insight and knowledge into those learning curves. As I’ve spoken with these SaaS vendors, the feedback has been mixed. Some host events and discussions with select customers to share insight. Others publish learnings to their blogs, or speak about their experience at meetups/events. Still others are taking a page out of AWS’ playbook and turning those customer trends and actions into new features or guidance to customers – such as what Cloudability does to help customers spend less or more intelligently.
As these SaaS services begin to offer links into more cloud environments (Azure, GCE, VMware, OpenStack, etc.), the possibilities to integrate them into your cloud environment will only expand in the near-term future. I believe they are worth exploring, especially if you have challenging areas where it’s complex to acquire talent, or your current management software isn’t giving you the insight you need. You can only benefit from the learning curves of these SaaS providers.
At least a couple times a week, colleagues or people within the industry will ask for career advice. What should I do next? Should I work for this company? Where do you think the industry is going next? What’s the next cool technology to learn? I’ve written about this a couple times before. It’s never a one-answer-fits-all conversation. There are always critical factors to take into consideration – What’s the opportunity? What skills do you have today? What skills are you learning? Where do you live, and does this matter? What’s the next step going to be after this one?
Before I get into the discussion I’ve been having with myself lately, I thought I’d share a story from many years ago. I went to college to study finance and marketing. When I graduated, the jobs in technology were more interesting than cold-calling for stock brokers, so I threw away an education (or so I thought) and jumped into technology. That was scary. I didn’t know the 7-layer OSI model from Maslow’s Hierarchy of Needs, but I studied like crazy and loved the pace of change and competition. After a couple years of doing sales and consulting, my boss came into my office on a Friday afternoon. He said that I had three choices: (1) move to Massachusetts for a corporate job (brrr…cold!!), (2) be fired, or (3) as a long-shot, take a couple engineering classes and become a field engineer installing networking equipment. I had 15 minutes to decide. Sometimes life is funny and complicated. I chose option #3. That was scary. For the next 6 months I flew almost every day of every week, reading manuals on the flights and learning by fire about the technology. It was painful, but I learned how to learn. This was the greatest experience I’ve ever had and I’m grateful to have stumbled into it. It was 20 years ago and I had no idea it was coming.
Fast forward 20 years and quite a lot has changed. I’ve been lucky to have been able to use that “ability to learn” to transition back and forth between technical, marketing and “other” jobs, across multiple technology companies. During that time, life changed and priorities changed. Learning became easier, but planning became more complicated. 20 years ago, technology transitions happened over 10-20 years. Mainframe to Mini to PC to Web. I now believe that similar transitions happen 2x as fast, taking 5-10 years. The economics and supply chains have been radically impacted by things like Open-Source Software (OSS) and Public Cloud Computing. [Tip: Download a copy of “The New Kingmakers” by Stephen O’Grady from Redmonk to get a better appreciation of that change.]
While it’s interesting to watch Oracle OpenWorld keynotes, it’s often more interesting to watch the commentary on Twitter from people that are directly or indirectly impacted by Oracle announcements. They have a (sort of) new CTO, a track record of acquisitions and unlimited amounts of cash for M&A, so it’s fun to consider who they might buy next.
Still, no word from Oracle on Docker during the keynotes. Even VMware, which could logically be fearful of containers replacing VMs (or not), mentioned Docker at VMworld. Docker is sort of important to infrastructure teams and sort of important to application teams, and Oracle cares about applications and sort of announced a Platform-as-a-Service (hint: WebLogic), but still no mention of Docker.
But what if CTO Larry Ellison decided that his last chapter was going to be filled with modern acquisitions in order to preserve his legacy and set up his company for the next decade or more? What if Oracle decided to buy Docker? Besides all the initial apocalyptic fury, it would create some very interesting questions and scenarios. Oracle does have a history of buying open-source technologies (directly or indirectly), such as Java (via Sun) and MySQL.