From Silos to Services: Cloud Computing for the Enterprise


December 28, 2014  8:48 PM

2015 Predictions Sure to be Wrong

Brian Gracely
AWS, DevOps, Docker, SaaS, VMware

Let’s review the 2014 Predictions:

Let’s take a look at a few 2015 Predictions:

  1. Containers are Reality – While things are still chaotic around containers, 2014 was the tipping point for the technology itself, for VC investment, and for major companies laying out their strategies. And given the competitive landscape that’s evolving, we can expect the pace of change to increase in 2015. Expect to see major IT organizations making “container strategy” one of their top priorities in 2015.
  2. Hyper-Confusion – Hyper-Converged Infrastructure, at least as a term, has been around since 2009, when VCE launched the Vblock and things like NetApp FlexPod soon followed. Things got more converged with offerings like Nutanix, SimpliVity and Scale Computing. But it tipped to another level when VMware announced EVO:RAIL. Almost every vendor has announced an EVO offering, but how will they differentiate themselves, and will there be offerings that don’t match the EVO framework (see: NetApp)? And will the EVO brand mean anything if every “supporting” vendor creates a different version by adding their own software (Backup, Management/Monitoring, WANOp, Security, etc.)?
  3. Son of HyperConverged – If we look at the evolving DevOps technologies, we find that while they are powerful, they are often complex to set up. I believe we’ll see some companies roll out pre-packaged &/or hyper-converged HW offerings that have many of the DevOps technologies pre-installed, along with several tools to make managing those environments easier.
  4. Do SaaS tools consolidate? I’ve written several times (here, here and here) that I’m fascinated by the emergence of SaaS tools to simplify IT Ops – services like ThousandEyes, Loggly, Boundary, Cloudability, DataDog, PagerDuty, CloudPhysics, etc. But they are all separate services and separate companies, meaning that their customers need to learn different pricing models, operating models and APIs for each service. Will we see a larger vendor (or cloud provider) start to consolidate some of these services?
  5. VMware Professionals will learn lots of non-VMware technologies – We’ve already seen VMware create a Cloud-Native Applications group, signifying that technologies like OpenStack, Docker, Mesos, Kubernetes and many others could be integrated with native VMware technologies. IT professionals who spent much of the last 3-5 years getting familiar with VMware virtualization and management will probably be getting crash courses in Linux, open-source and many non-native VMware technologies over the next couple of years.

Those are just a few of the areas I believe will be worth watching in 2015. What will you be looking forward to?

December 28, 2014  5:48 PM

DevOps Skills 101

Brian Gracely
AWS, DevOps, Docker, IOS, OpenStack, Puppet

In the past, I’ve written that it’s never been easier to learn about Cloud Computing and develop the skills needed for success. In parallel with learning about the underlying technology behind Cloud Computing, many IT professionals are trying to evolve the skills needed to create a successful DevOps environment for their business. And keep in mind, DevOps skills are in high demand.

NOTE: DevOps is a cultural/operational model, but there is still a set of technologies that will help enable those best practices.

Learning about these technologies is getting easier and easier, with many options for learning available via the Internet (no equipment necessary).

Creating a culture within your company that can effectively model themselves on DevOps principles can be difficult, but learning the technology skills needed has never been easier. Pick a tool, pick a tutorial and see what you can learn in a few days or a few weeks.
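
To give a sense of how low the barrier really is, here’s a minimal sketch of the kind of first exercise a tutorial might walk you through – launching and then terminating a single EC2 instance from Python. It assumes the boto3 library is installed and AWS credentials are already configured; the AMI ID below is a placeholder, not a real image.

```python
# Minimal sketch: launch and terminate a single EC2 instance with boto3.
# Assumes AWS credentials are already configured (e.g. via `aws configure`)
# and that the AMI ID below is replaced with a valid one for your region.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch one small instance (the AMI ID is a placeholder).
instances = ec2.create_instances(
    ImageId="ami-12345678",      # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
print("Launched:", instance.id)

# Wait until it is running, refresh the local view, then clean up.
instance.wait_until_running()
instance.reload()
print("State:", instance.state["Name"])

instance.terminate()
instance.wait_until_terminated()
print("Terminated:", instance.id)
```

A few dozen lines like this, run against a free-tier account, is often all it takes to start getting comfortable with infrastructure-as-code.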


December 28, 2014  4:43 PM

Container Wars

Brian Gracely
AWS, Docker, Google, IBM, Microsoft, VMware

The last few weeks and months have been very interesting in the world of Linux containers, culminating in a number of announcements at DockerCon Europe in November.

Earlier in the year, Docker made containers very interesting. After separating from dotCloud (an early PaaS company), they announced a $40M round of funding and a number of interesting strategic moves:

And outside of the Docker-specific announcements, there was quite a bit of ecosystem expansion, competitive announcements and some confusion about the direction of Linux container technology: Continued »


November 26, 2014  9:23 PM

Unboxing an Amazon FireTV

Brian Gracely
Amazon, Android, AWS, Cloud Computing, IOS, Netflix

Usually I use this blog to write about Cloud Computing and modern apps, but since it’s the holiday season, I thought I’d write about some technology that I’m sure some readers are considering for purchase. As much as they love Enterprise technology, they love their gadgets just as much.

As an Amazon Prime member, I got an offer for early access to the new Amazon Fire TV “stick”, a small device that plugs into an HDMI port on your HDTV. Since I’ve never been happy with my Apple TV, I decided it was worth a try.

Here’s my experience.

  • Unboxing was nice. Simple, clean packaging. Couple devices, cable, batteries.
  • The Fire TV device doesn’t get power via HDMI, so it requires an external cable. The cabling is too short, and the connector has the cable at the wrong angle. The cable is 4-5′ long and should be more like 7′. And the power plug looks like an AC/USB-type connector, but the mini-USB portion is on the side, not the top, so it’s a hassle to plug into any power strip that already has a bunch of existing plugs.
  • Initial setup was simple. Plug it in, Fire TV tries to find the WiFi SSID, then enter the WiFI password.
  • Fire TV initially tried upgrading software – this failed, and then told me to unplug/power-off.
  • Another software upgrade attempt. Completed, rebooted.
  • Another software upgrade; it required re-entering the WiFi password with a different keyboard. It seems that they changed the keyboard layout between software versions – of note, they changed how capital letters are entered.
  • Fire TV was able to communicate with Amazon.com and it knew that this device was registered to my Amazon Prime account. Nice job by Amazon. Will be interesting to see what happens if I need to return/exchange the physical device to see if it updates the registration.
  • The device is now working after 3 software updates.
  • Introduction Video. Nicely done. Simple, easy to understand. Basically a walk-through of the remote, plus an ad to buy Amazon Prime.
  • Fire TV includes integration with 5GB of Google Drive. I was sort of surprised it wasn’t linked to an AWS S3 service, but I suppose that’s because S3 isn’t really an end-user consumer service (e.g. Google Drive, Dropbox, OneDrive, etc.).
  • FireTV comes with a simple remote. There is also a remote app for Android and the Amazon Fire Phone (not yet available on iOS). This includes the voice-activated features, which replace typed search. It may also have some cool console-level functionality for games.
  • 2nd Screen capable with Fire Tablet or FireTV app. I don’t have one of these devices, so I couldn’t test this.
  • Have to “download” certain apps, such as NetFlix, Hulu Plus. Not sure how big the device is in terms of capacity and how many apps can be downloaded. After downloading a few, I suspect this is just a small chunk of code and most of it runs in the AWS cloud.
  • Had to adjust the TV sizing for NetFlix – not auto-detecting the screen size or screen display settings. Was fairly simple – they gave you arrows on the screen and you just made sure they fit within the screen display.
  • NetFlix login uses QWERTY keypad – yet another keypad for account entry (others were alphabetical). Sort of odd that Amazon doesn’t mandate any user-experience consistency.
  • NetFlix navigation is different than on AppleTV – no concept of a “home” button – home button on FireTV (within NetFlix app) takes you all the way back to FireTV home screen.
  • The games are decent. They aren’t high-end console quality, but most are free and seem to be of the quality of the early Wii games, which is good enough for young kids. And no extra consoles or controllers to buy.

I’ve been playing with it for a couple of days. After some of the initial setup hiccups, it seems like a nice device. The responsiveness is better than an Apple TV, and the UI, while still clunky with the native remote, is better than the Apple TV’s. Buying things off Amazon Prime, such as HD movies, is very simple, and it’s great that they play immediately – vs. the long download times from Apple. Makes the kids happy, especially during holiday breaks and bad weather.

Overall I’d give it a thumbs up and a good value for under $40. Hopefully they release the iOS remote soon and I’ll be a much happier user.

Nicely done Amazon.


November 17, 2014  7:44 PM

Thoughts on the AWS re:Invent Announcements

Brian Gracely
Aurora, AWS, Docker, MySQL, OpenShift, Oracle, PaaS, Pivotal, Puppet, RightScale, VMware

Last week at AWS re:invent, the AWS team introduced a huge number of new products/services. A few of them are available now, but many are still in beta or won’t be available until 2015. Here are my notes from reviewing the services.

AWS Growth:

AWS continues to grow, but it does appear that the growth is slowing somewhat – it’s always more difficult to sustain high-percentage growth as overall revenues grow. They seem to have a trend of being up in Q1, down in Q2/Q3 and then up in Q4 (historically). There were lots of longer-term, strategic announcements at this event, with many of the new services building on top of (and combining) foundational services – EC2, S3, SNS, CloudWatch, CloudTrail. I was somewhat surprised that they announced so many services that are not yet available or don’t have GA dates, although that tends to happen the more you engage with larger Enterprise customers that ask for features to solve complex use-cases. AWS seems to have no issues cannibalizing the successful segments of their ecosystem of technology partners to further the number of direct services they can offer to customers – Oracle, GitHub, Puppet/Chef, Jenkins, Cloud Foundry, Heroku, Dell, RightScale, VMware, etc. No explicit prices were announced, but I suspect we’ll see greater analysis of pricing for some of the new services as they become GA, and that the overall cost/ROI will be slightly lower than building/managing all of those pieces individually.

  • 40% YoY revenue growth
  • Several services only in limited availability (eg. alpha/beta) into 2015, with no specific GA dates announced

AWS growth (Ben Kepes – Forbes) – http://www.forbes.com/sites/benkepes/2014/07/29/just-how-big-is-amazons-cloud-business/

AWS growth (VentureBeat) – http://venturebeat.com/2014/07/24/aws-revenue-2q14/

Aurora – RDS – available in 2015 – [beta now]

  • Next-gen MySQL RDS
  • Stated as 4x performance of previous RDS
  • Manages the sizing of underlying EC2 instances (eliminate EC2 instance confusion)
  • Only available in VPC – targeted at the Enterprise
  • Don’t provision storage ahead of time – allocated based on DB size (eliminate Storage Admin)
  • Multi-AZ replication; Multiple Copies (eliminate Backup Admin)
  • Need to check on pricing difference from existing MySQL RDS
  • Write-up from Ben Whaley (@AmTheWhaley; AWS Hero Award winner) on Aurora, KMS and Code
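
For those who want to poke at it once it opens up, here’s a minimal sketch of provisioning an Aurora cluster from Python with boto3. All of the identifiers, the password and the subnet group are placeholders, and since Aurora is still in preview the exact parameters may change – treat this as an illustration of the RDS API shape, not a recipe.

```python
# Minimal sketch: provision an Aurora cluster plus one instance with boto3.
# All identifiers, passwords and instance classes below are placeholders;
# Aurora was still in limited preview when this was written, so check the
# current RDS API documentation before relying on these parameters.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# The cluster holds the shared storage volume and the endpoints.
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder only
    DBSubnetGroupName="demo-subnet-group",   # must already exist in your VPC
)

# Instances (writers/readers) are added to the cluster separately.
rds.create_db_instance(
    DBInstanceIdentifier="demo-aurora-node-1",
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora",
    DBInstanceClass="db.r3.large",
)

print(rds.describe_db_clusters(DBClusterIdentifier="demo-aurora-cluster"))
```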

Code Management & Deployment (CodeDeploy, CodeCommit, CodePipeline) – [only CodeDeploy GA available now, others are TBD]

CodeDeploy

  • Targeted at automation tools – Chef/Puppet/Ansible – but can also be used with those tools
  • Requires an agent on each machine
  • Focused on scalable deployments and the associated availability services (ELB, AZs, etc.)
  • Blueprints (versioned “Deployments”) can be stored in S3, GitHub or CodeCommit
  • Multiple deployment options (each machine, groups of machines, all-at-once) and customization options
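
As a rough illustration of what a scripted deployment might look like, here’s a minimal boto3 sketch that kicks off a CodeDeploy deployment from a revision bundle sitting in S3. The application name, deployment group and bucket/key are hypothetical and would need to exist already (along with the agent on each target machine).

```python
# Minimal sketch: trigger a CodeDeploy deployment from a revision stored in S3.
# The application, deployment group and S3 object names are hypothetical and
# must be created beforehand (and the agent installed on each target machine).
import boto3

cd = boto3.client("codedeploy", region_name="us-east-1")

response = cd.create_deployment(
    applicationName="demo-app",
    deploymentGroupName="demo-fleet",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "demo-deploy-bucket",
            "key": "demo-app-1.0.zip",
            "bundleType": "zip",
        },
    },
    description="Sketch of a scripted deployment",
)
print("Deployment started:", response["deploymentId"])
```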

CodeCommit

  • Targeted at GitHub (CodeCommit)
  • Hosts Git repositories and interacts with existing Git tools.
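
Since CodeCommit has no announced GA date, the following is only a guess at how scripted setup might look: a minimal boto3 sketch that creates a repository and prints its HTTPS clone URL so normal Git tooling can take over. The repository name is a placeholder.

```python
# Minimal sketch: create a CodeCommit repository and print its clone URL so
# existing Git tooling can push to it. CodeCommit had no announced GA date at
# the time of writing, so treat this call as illustrative only.
import boto3

cc = boto3.client("codecommit", region_name="us-east-1")

repo = cc.create_repository(
    repositoryName="demo-repo",
    repositoryDescription="Sketch repository for testing CodeCommit",
)
print("Clone with: git clone", repo["repositoryMetadata"]["cloneUrlHttp"])
```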

CodePipeline

  • Targeted at Jenkins (CodePipeline)
  • Graphical view of pipeline and deployment process
  • Serial and Parallel actions
  • Time-based and Manual actions

Key Management Service (KMS)

  • User managed key service
  • Integrated with S3, EBS, RedShift
  • Integrated with CloudTrail to view logs of key usage, changes – for regulatory & compliance
  • Supports AWS IAM for multi-user environments
  • AWS KMS – Cryptographic Details
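
Here’s a minimal sketch of the basic encrypt/decrypt round-trip with boto3, assuming a customer master key already exists and the caller has the necessary IAM permissions. The key alias and the secret are placeholders.

```python
# Minimal sketch: encrypt and decrypt a small secret with a KMS master key.
# The key alias is a placeholder; IAM permissions for kms:Encrypt/Decrypt are
# assumed, and every call shows up in CloudTrail for auditing as noted above.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

secret = b"database-password"

# Encrypt against an existing customer master key (alias is a placeholder).
encrypted = kms.encrypt(KeyId="alias/demo-key", Plaintext=secret)
blob = encrypted["CiphertextBlob"]
print("Ciphertext bytes:", len(blob))

# Decrypt; KMS resolves the key from metadata embedded in the ciphertext.
decrypted = kms.decrypt(CiphertextBlob=blob)
assert decrypted["Plaintext"] == secret
```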

Config

  • Inventory Existing AWS Resources/Services
  • Track Changes and Associations of Resources
  • Pull Config data into 3rd party tools (Logging, Auditing, Compliance, Config-Mgmt, etc.)
  • Stores triggers (and snapshots of triggers) in S3; uses SNS to distribute updates – additional costs for those services on top of Config charges – interesting that it isn’t bundled like resources in Aurora.
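
Config was only just announced, so the sketch below is a best-effort assumption about how pulling its data into your own tooling might look from Python with boto3: list the EC2 instances it has discovered and fetch the change history for each. Treat the client name and calls as illustrative against the preview API.

```python
# Minimal sketch: list the EC2 instances Config has discovered and pull the
# change history for each one. Config was newly announced, so the calls below
# are best-effort assumptions against the preview API.
import boto3

cfg = boto3.client("config", region_name="us-east-1")

discovered = cfg.list_discovered_resources(resourceType="AWS::EC2::Instance")
for item in discovered.get("resourceIdentifiers", []):
    print("Tracked resource:", item["resourceId"])

    # Each configuration item is a point-in-time snapshot of the resource.
    history = cfg.get_resource_config_history(
        resourceType="AWS::EC2::Instance",
        resourceId=item["resourceId"],
    )
    print("  configuration items:", len(history.get("configurationItems", [])))
```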

EC2 Container Service (ECS) – Container Management for AWS – [still in preview] Continued »


November 7, 2014  8:05 PM

Can Enterprise Vendors Grok Open-Source?

Brian Gracely
Android, Cisco, Cloud Computing, Docker, EMC, Hadoop, HP, IOS, Open source, Pivotal, RedHat

It has been said, “Beware of Geeks Bearing Gifts“, especially when those gifts come packaged as open-source software. It can easily be argued that the growth of the modern Internet and many of its largest players (Amazon, Google, Facebook, Twitter, etc.) has been built upon open-source technology. In some cases, open-source can be used as a competitive weapon (see: Android vs iOS). In other cases it reduces supply-chain costs for producing digital goods.

But just because it works for the consumer Internet, does that mean it will work for the Enterprise and Mid-Market segments of business that leverage IT for productivity and profit? And does it have to, considering the parallel rise of public cloud computing resources, which are fiercely competing for those same IT budgets?

It’s a challenging business model, but some vendors have made open-source the core of their business – such as RedHat, Canonical, Docker, Puppet, Chef, and many of the OpenStack distributions. Others have open-source as an option, but target their sales primarily towards commercial offerings – Pivotal, Cloudera, Hortonworks. And still others are beginning to more actively contribute to open-source projects – Cisco, HP, EMC, VMware, etc. To augment this, they are also adding programs focused on developers, providing code and resources around various technologies.

Still other companies decide to open-source their previously commercial technology midway through its life-cycle. Joyent, Midokura and Big Switch are recent examples of this.

The other side of this equation is the Enterprise and Mid-Market companies that may choose to use these open-source technologies. While the economics of open-source can seem attractive on the surface (FREE??), the reality is that open-source is different from commercial software – support, documentation, integrations – and it requires changes to existing skills and processes. These companies also need to track the projects themselves if they aren’t buying directly from a vendor. Hence why so many open-source-centric companies also have healthy consulting and training practices to support the software distributions.  Continued »


November 7, 2014  4:46 PM

Is OpenStack at a Crossroads?

Brian Gracely
AWS, Docker, OpenStack, VMware

The Paris OpenStack Summit completed this week (Day 1, Day 2, Day 3) and while there were many new vendor and project announcements, there was also an underlying buzz that left me wondering if OpenStack is reaching a crossroads.

That buzz was people questioning how, or if, containers should fit into the framework. A project (Magnum) was kicked off, but the question around Twitter and at the show was whether this overly complicates OpenStack, or whether it creates too much overlap with other (existing) schedulers such as Google’s Kubernetes and Apache Mesos.

As we’re seeing on a day-to-day basis, the growth and interest around Docker (container management) is accelerating rapidly. Google added to that acceleration by announcing a number of new container-centric services for Google Cloud Platform (@googlecloud). Many are expecting AWS to follow-up with a Docker announcement next week at AWS re:Invent.

Four years in, and the OpenStack crowd still seems to be figuring out what problems they are focused on solving – AWS competitor, VMware competitor, Hosting Services competitor? Maybe it’ll be all of them, but there isn’t massive momentum in any of those areas yet. And now Docker is the cool new kid on the block. And Docker doesn’t seem to be confused about what area it’s focused on – modern applications.

Modern applications were supposed to be the focus area of OpenStack. But there are still too many customers hoping that it will evolve into “free VMware” – and struggling with the lack of “VMotion” and other so-called Enterprise features. OpenStack pundits don’t want to go down that path, because that’s just automated virtualization and not cloud computing.  Continued »


November 2, 2014  3:57 PM

The Evolving World of OpenStack

Brian Gracely
AWS, Cisco InterCloud, EMC, Google, Microsoft, OpenStack, Rackspace, RedHat

via @RyanFloyd from Storm Ventures – http://ryanfloyd.org/openstack-startup-ecosystem-its-real/

It’s that time again, OpenStack Summit. The semi-annual gathering of the OpenStack masses to engineer the next set of projects (and “Kilo” release), and marketeers to tell us how OpenStack is real, has lots of Enterprise customers and will overtake the world soon…eventually…or already has.

And if we step back and look at how OpenStack has evolved over just the last four years, it’s been an “interesting” journey.

It started with Rackspace and NASA deciding to collaborate on Compute (Nova) and Storage (Swift) projects. And they would open-source the work. And while open-source projects were by no means new, the fact that this was called “OpenStack” threw lots of people into a tizzy – especially those that sold competing, non-open-source products.

For at least the first 18 months of OpenStack’s public existence (there was “secret” stuff happening behind the scenes well before it went public), you couldn’t attend an event or meetup without having to hear the Rackspace/NASA history. And of course people tried to explain how this new model, which used commodity hardware, was best aligned to these magical applications that handled failures in a new way.

Over time, more programs were added, and opinions varied about how OpenStack would survive in the real world. Should it innovate or clone? Should it create a compatible ecosystem of commodity providers, or work to create unique business opportunities?

And of course there was the question of who should drive the ship. Would OpenStack be driven by a community (opinion, opinion), or should there be a benevolent dictator (e.g. Linus Torvalds for Linux)?  What occurred was the creation of the OpenStack Foundation, to mute some of the influence that Rackspace had over the programs and bring new levels of transparency and governance into the community. And then all the major vendors jumped on the bandwagon to be sponsors and attempt to influence the project in various ways.

There was some initial tension, but that seems to have died down somewhat.

And as 2014 comes to a close, the landscape is far different than it was just a couple of years ago. The small companies are being acquired (Metacloud/Cisco, Cloudscaling/EMC, Inktank/RedHat, eNovance/RedHat) by the large vendors. Some of the early evangelists (Joshua McKenty, Jesse Andrews, Randy Bias, etc.) now work at other companies or are out of the OpenStack ecosystem. And while VCs are still investing in the space (see Infographic), the exits have mostly been sub-$100M.

Eucalyptus has been acquired by HP to power their OpenStack strategy. Apache CloudStack still has a strong community, but recent changes at Citrix have led some to question its future. The talk of open-source cloud wars has most definitely died down as Microsoft Azure, Google Cloud Platform and AWS continue to grow and add new functionality without being powered by OpenStack. Cisco is pushing the OpenStack-for-SPs agenda with their InterCloud strategy, and Mirantis seems to be behind many of the successful projects (in all markets).

People weren’t very happy when I asked if Paris would be the last major OpenStack Summit. While events in Vancouver and Tokyo have been planned for 2015, I still think it’s a valid question in the context of broad community involvement vs. vendor-specific efforts and activities. I believe there is still quite a bit of consolidation to happen in 2015.

OpenStack has come a long way since 2010. We no longer talk about the Rackspace/NASA history and the grandeur of disruptive movements. Now we talk about vendor strategies and if wide-scale deployments will happen. The market has changed and the largest players (both clouds and vendors) are placing heavy bets on the future. OpenStack will be part of those bets, but whether it’s a direct factor or indirect in deciding $$ winners and losers is still TBD.


November 2, 2014  2:26 PM

Can the Cloud know your IT-Ops better than you?

Brian Gracely
API, AWS, Cloud Computing, DevOps, EMC, OpenStack, SaaS, Uncategorized, VMware

 


Infographic by CloudPhysics

About six months ago, we decided to switch the focus of The Cloudcast podcast from being about “cloud computing” to being more about DevOps, SaaS (the AWS ecosystem) and trends for developers. In particular, the SaaS ecosystem that enables services around AWS has been very interesting to watch evolve. These services have broken up the mindset that Ops needs a “single-pane-of-glass” approach to tools. They allow customers to create the Ops model that works best for them by providing tons of native API-based integrations with other services.

The consumption model of these SaaS applications is different from what you’re used to in traditional IT environments. They charge based on usage, whether it’s in hours or in the amount of data analyzed, thus eliminating the huge bills for management software that often becomes shelfware. And they allow the Ops environment to be flexible and “customized” because most integrate with a massive number of other 3rd-party SaaS services via APIs (example).
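
To make that concrete, here’s a hypothetical sketch of the kind of API-level glue these services encourage – forwarding an alert from one tool into another over a webhook. The URL, token and payload shape are made up for illustration and don’t correspond to any particular vendor’s real API.

```python
# Hypothetical sketch of SaaS-to-SaaS glue: forward an alert payload from one
# service into another via a webhook. The URL, token and payload shape are
# placeholders, not any vendor's actual API.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.example-ops-tool.com/v1/events"   # placeholder
API_TOKEN = "replace-with-your-token"                          # placeholder

alert = {
    "source": "billing-analyzer",
    "severity": "warning",
    "message": "Monthly AWS spend is trending 20% over budget",
}

req = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(alert).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + API_TOKEN,
    },
)
with urllib.request.urlopen(req) as resp:
    print("Webhook accepted:", resp.status)
```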

Some companies such as Cloudability (podcast), New Relic, DataDog (podcast), Loggly (podcast), PagerDuty (podcast), Evident (podcast), StackStorm (podcast) and BigPanda (podcast) focus primarily on AWS environments. Others such as CloudPhysics, Platform9 and BlueBox take a broader view of the clouds and applications they support. But in every case, they are collecting tons of information about customer usage and gaining insight and experience about building out massively scalable infrastructures. In other words, they are creating learning curves that are orders of magnitude faster than any individual IT/DevOps group could achieve by themselves.

As a customer, being able to take advantage of that learning curve is incredibly powerful. It’s analogous to being able to hire rockstar engineers, except that it doesn’t matter where your business is based or what industry you’re in – since not everyone lives in Silicon Valley or wants to pay those rents/mortgages. You’re renting outstanding talent, and only paying for the software for as long as you want to use it. And when companies like Spanning (recently acquired by EMC) offer the ability to back up data from SaaS applications into your data center or another cloud, you begin to have data recovery or portability if a service goes out of business or you find another one you like better.

To me, the next step is figuring out how to gain insight and knowledge into those learning curves. As I’ve spoken with these SaaS vendors, the feedback has been mixed. Some host events and discussions with select customers to share insight. Others publish learnings to their blogs, or speak about their experience at meetups/events. Still others are taking a page out of AWS’ playbook and turning those customer trends and actions into new features or guidance to customers – such as what Cloudability does to help customers spend less or more intelligently.

As these SaaS services begin to offer links into more cloud environments (Azure, GCE, VMware, OpenStack, etc.), the possibilities to integrate them into your cloud environment will only expand in the near-term future. I believe they are worth exploring, especially if you have challenging areas where it’s complex to acquire talent, or your current management software isn’t giving you the insight that you need. You can only benefit from the learning curves of these SaaS providers.


October 16, 2014  3:00 PM

EMC {code} – The Next 5-10 Year Journey

Brian Gracely
Automation, community, Developer, DevOps, EMC, EMC ViPR, EMC World, Open source, Sandbox, Software

At least a couple times a week, colleagues or people within the industry will ask for career advice. What should I do next? Should I work for this company? Where do you think the industry is going next? What’s the next cool technology to learn? I’ve written about this a couple times before. It’s never a one answer fits-all conversation. There are always critical factors to take into consideration – What’s the opportunity? What skills do you have today? What skills are you learning? Where do you live, and does this matter? What’s the next step going to be after this one?

Before I get into the discussion I’ve been having with myself lately, I thought I’d share a story from many years ago. I went to college to study finance and marketing. When I graduated, the jobs in technology were more interesting than cold-calling for stock brokers, so I threw away an education (or so I thought) and jumped into technology. That was scary. I didn’t know the 7-layer OSI model from Maslow’s Hierarchy of Needs, but I studied like crazy and loved the pace of change and competition. After a couple of years of doing sales and consulting, my boss came into my office on a Friday afternoon. He said that I had three choices: [1] move to Massachusetts for a corporate job (brrr…cold!!), [2] be fired, or [3] as a long-shot, take a couple of engineering classes and become a field engineer installing networking equipment. I had 15 minutes to decide. Sometimes life is funny and complicated. I chose option #3. That was scary. For the next 6 months I flew almost every day of every week, reading manuals on the flights and learning by fire about the technology. It was painful, but I learned how to learn. It was the greatest experience I’ve ever had, and I’m grateful to have stumbled into it. It was 20 years ago and I had no idea it was coming.

Fast forward 20 years and quite a lot has changed. I’ve been lucky to be able to use that “ability to learn” to transition back and forth between technical, marketing and “other” jobs, across multiple technology companies. During that time, life changed and priorities changed. Learning became easier, but planning became more complicated. 20 years ago, technology transitions happened over 10-20 years. Mainframe to Mini to PC to Web. I now believe that similar transitions happen 2x as fast, taking 5-10 years. The economics and supply chains have been radically impacted by things like Open-Source Software (OSS) and Public Cloud Computing. [Tip: Download a copy of “The New Kingmakers” by Stephen O’Grady from Redmonk to get a better appreciation of that change.] Continued »


