From Silos to Services: Cloud Computing for the Enterprise


January 26, 2014  1:00 PM

Five Trends to Track in 2014



Posted by: Brian Gracely
Cisco, Cloud Computing, Converged Infrastructure, Enterprise, Hybrid Cloud, IaaS, Microsoft, Multi-Cloud, OpenStack, PaaS, Private Cloud, Public Cloud, SaaS, SDN, Software-Defined, Software-Defined Networking, Start-up, Virtualization, VMware

Will an OpenStack Leader Emerge?

We’ve all watched the evolution of OpenStack for the past 3+ years, from incubation to massive open-source trend. Sometimes it’s been good and other times less so. Lots of commentary on how much code has been contributed and which vendors are part of the OpenStack Foundation. But will a true leader emerge in 2014, like Red Hat did from the Linux wars of the late 1990s? Will we see strategic consolidation (e.g. companies like Piston Cloud or Cloudscaling being acquired)? Will someone buy the experience of Mirantis? Will an OpenStack-powered cloud begin to challenge AWS in services, scale and pricing? Will Oracle or VMware be the catalysts to bring OpenStack to the mainstream Enterprise, or mostly just make announcements?

What Segment of SDN will Emerge?

“SDN” has been through so many evolutions over the last 3-4 years that sometimes it’s hard to even know what the terminology means anymore. Recent articles from various vendors highlight this (here, here). But just like the networking industry, SDN will have applicability in many segments of the network, and many layers of a network stack. Beyond the L2 “virtual switch” in servers, virtual networking still hasn’t established a strong foothold. Does it begin to emerge in the L4-L7 services space, or in L2-L3 overlays in the datacenter? Do the NFV-driven business motivations of Service Providers move faster than Enterprise IT changes its operational models? Or will Cisco’s market dominance prevent SDN from gaining significant market traction?

What has a Bigger Impact on Storage – HyperConvergence, Flash or Software-Defined?

Pure Storage gets $150M, Nutanix gets $100M, Nimble has a successful IPO, and EMC/VMware launch ViPR, ScaleIO, XtremIO and VSAN. That’s a lot of VC funding for emerging start-ups, balanced against market leaders driving models that disrupt both the industry and their own portfolios.

Do Clouds become the new IT Silos?

In the past, Enterprise data centers were filled with application-centric silos. They were consolidated into a small number of locations, but they were still often unique, segmented and isolated. And then IaaS offerings like AWS began showing the market how to build environments that could host many types of applications on shared infrastructure. Within the last year, Adobe, Apple, Cisco UC, VMware, Microsoft, SAP and Oracle have all built their own Cloud offerings. Do Enterprise customers continue to build Private Clouds and leverage AWS, or do they flock to the vendor-specific public clouds? Or does better technology emerge to help IT organizations manage the hybrid offerings across all these clouds?

How do we measure PaaS Success?

Only a few years into its existence, some people are already calling it “dead” (or not dead), or misdefined. And yet we’re still seeing large companies publicly put their support behind PaaS initiatives, both in the public cloud and within the Enterprise space. Or maybe PaaS is really old and we should be doing a better job of keeping a scorecard of its progress. Will we see PaaS become a trackable market segment in 2014?

January 25, 2014  10:20 AM

The Real Reason Private Cloud is Dead



Posted by: Brian Gracely
Cloud Computing, Hybrid Cloud, Private Cloud, Public Cloud

Another year, another set of predictions and opinions about Private Cloud. The list below is just a small sample of how people believe the market is going to vote with its actions. The reality is that this was never really a debate. Even before the term “Private Cloud” was introduced in 2009-2010, that game was over. IT organizations were already using a broad mix of internal and external resources to deliver technology to their business. It was SaaS applications and packaged applications and homegrown applications. Inside the firewall and outside the firewall.


And it’s true that lots of IT organizations either have built or will build a variation on what they believe a Private Cloud should be. For some it’s about efficiency and cost-reductions. For others it’s about agility and ever-growing business demands. But the reality is that no IT organization is going to have only a Private Cloud strategy. They never did before, and the competitive world won’t let them going forward.

The reality is we live in a hybrid world. When the CIO sits down at C-suite meetings and looks around the table, they find that every other organization has already adopted a hybrid model. Finance, Marketing, HR, Sales, Engineering. Every one of those organizations has already optimized its segment of the business to leverage the best resources for each departmental requirement, whether they are internal resources or external resources. In essence, they have already built a Hybrid Cloud organization, just not with servers, storage and networks.


The real challenge for IT organizations going forward is how to best manage the Hybrid Clouds they have created.

  • How to continue to have visibility across clouds?
  • How to continue to reduce friction between business ideas and execution?
  • How to create flexibility so that they are at least marginally prepared for the next technology shift (what comes after mobile?)


January 25, 2014  9:45 AM

Developers don’t Want to Take a Bath



Posted by: Brian Gracely
Cloud Computing, DevOps, Enterprise

You hear that?

It’s the sound of frustrated developers and SysAdmins. The groups that are on the shop floors of 21st century bit factories, trying to keep things running more quickly and more smoothly. The groups that have been in the trenches of the evolution known as DevOps. And they are frustrated. Not at the tools that are available to help them do their jobs. Those are plentiful and improving all the time (GitHub, Jenkins, Vagrant, Docker, Puppet, Chef, Ansible, Zookeeper, etc.). Not at the people that have been in the trenches sharing their experiences (Jez Humble, Patrick Debois, Gene Kim, etc.). They are frustrated with the marketers that are trying to manipulate DevOps and turn it into another round of Cloudwashing.


It wasn’t great when the Cloudwashing era of 2009-2012 washed over us. Cloud-in-a-Box, Personal Clouds and all the other variations of “whatever we sell is now Cloud”. But at least IT Admins could point to basic elements of virtualization or automation and prove that some things in IT cost less and could be accomplished faster than before. It wasn’t really Cloud, but it was an improvement.

The same thing won’t happen for those trying to push Big DevOps or New DevOps or DevOps is the new Orange. Developers don’t suffer fools. They will ignore you. They will shun you and shame you.


Leave DevOps alone. The developers and sysadmins are making progress, learning on the fly and continuously improving. They don’t need your DevOps training wheels.


January 5, 2014  12:51 PM

Does VMware need a vOpenStack offering?



Posted by: Brian Gracely
Cloud Computing, Cloud Foundry, Cloud Management, Data Center, Enterprise, IaaS, Mission-Critical Applications, OpenStack, PaaS, Pivotal, Private Cloud, SDN, Software-Defined Data Center, Virtualization, VMware

Three or four years ago, I had a unique opportunity to spend two weeks touring manufacturing facilities across China as part of a program to study complex systems, international business and patent law (or lack of it). We spent time with many well-known companies looking at their operations processes as well as discussing how they compete with low-cost labor and lack of Intellectual Property (IP) laws in China. While many of the stories can’t be repeated due to NDAs, there was one example that has stuck with me as a unique example of how to compete in a world where the competition plays by very different rules than your company.

[Side note 1: I have an on-going fascination with the parallels between the shifts in manufacturing and the shifts in the data-center / cloud segment of our technology industry - here, here]

[Side note 2: We talked about some of these shifting rules for technology companies on our recent podcast]

The story goes like this. A large (global) automotive company wants to sell products throughout China. To compete on price, they have set up manufacturing facilities locally in several Chinese cities. Chinese workers are hired, trained and build automotive products. After a short while, near-replicas of those products start showing up on the market under various Chinese brands, at 40-60% of the cost of the global brand.

So what should they do? Do they follow The Innovator’s Dilemma model and get out of those markets because some automotive products are becoming commoditized and let the low-cost competitors take that market? Do they continue to build their existing products, but double-down on marketing to educate the market on why their “value-added” capabilities are worth the price premium?  [btw - does this sound familiar in a technology context?]

Actually, they did none of the above. What they did was embrace this new market demand – similar capabilities, but acceptable lower quality – and built a level of expertise around this new approach. The first thing they did was segment a line in the plant to build lower-cost “knock-offs” of their own products. They used lower-cost paint, single-pane instead of double-pane glass, shorter-lifetime tires, etc. They leveraged the experience they already had in manufacturing and willingly adjusted the product/material quality to satisfy a market that had lower expectations. The margins on these products were lower (I don’t recall the difference), but it was viewed as getting $0.40-0.50 on the dollar vs. $0 on the dollar for lost business.

But it didn’t stop there. They expanded the number of manufacturing lines for these lower-cost products within the same facilities. They also took the customer feedback and considered it within the higher-end product lines. Ultimately they were able to leverage the expertise they already had (manufacturing, sales, etc.) and augment it with a line of thinking (and products) that satisfied an under-served segment of the market that would pay less for fewer capabilities, but still wanted some capabilities. Continued »


December 27, 2013  9:57 AM

Three Improvements Needed for Cloud Computing in 2014



Posted by: Brian Gracely
AWS, Budgeting, Cloud Computing, Cloud Management, Consumerization of IT, Costs, DBaaS, DevOps, Enterprise, IaaS, PaaS, Private Cloud, Public Cloud, SaaS, Transformation

On-Demand, Self-Service access to IT resources. That’s the promise of Cloud Computing. It’s a reality that has been coming together for the past 5-7 years. At times we marvel at the pace of change that cloud computing has had on the entire IT industry, and at other times we grow frustrated that broad adoption and availability of new services aren’t happening fast enough. So as I think about the year ahead, these are some areas where I believe we need to see improvements in 2014 to keep this industry shift moving forward at the pace we’re all expecting.

Move from Building to Buying Private Cloud

We all lived through the “Cloud in a Box” era of HW+SW packaging (sometimes just HW) that was targeting Private Cloud. While this market has continued to grow rapidly (and be renamed “Converged Infrastructure”), it’s only a piece of the puzzle. Those are building blocks. What we need to see is an evolution to “buying blocks”, where the entire solution can be purchased in a way that better aligns to the services that will be consuming those underlying resources. This improvement will require some basic assumptions (or economic changes):

  • We may need to have consumption-based pricing of on-premise HW+SW
  • We will need to see on-premise resources priced in a way that makes comparisons with public cloud services fairly straightforward (hourly, daily, monthly); a rough sketch of such a comparison follows this list
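To make that comparison concrete, here is a minimal sketch (in Python) of how a consumption-priced on-premise block could be normalized into an hourly per-VM rate and set next to a public cloud instance price. All of the rates and capacity figures are hypothetical placeholders, not real vendor pricing.

```python
# Minimal sketch: normalize a consumption-priced on-premise block into an
# hourly per-VM rate so it can be compared with a public cloud instance.
# Every number below is a hypothetical placeholder, not real pricing.

HOURS_PER_MONTH = 730  # average hours in a month

def on_prem_hourly_rate(monthly_capacity_cost: float, vm_capacity: int,
                        expected_utilization: float) -> float:
    """Effective hourly cost per VM for a consumption-priced on-premise block."""
    usable_vms = vm_capacity * expected_utilization
    return monthly_capacity_cost / (usable_vms * HOURS_PER_MONTH)

# Hypothetical comparison: a $3,000/month converged block hosting 100 VMs at
# 60% average utilization vs. a $0.12/hour public cloud instance.
private_rate = on_prem_hourly_rate(3_000, 100, 0.60)
public_rate = 0.12

print(f"on-premise: ${private_rate:.3f}/hr  public cloud: ${public_rate:.3f}/hr")
```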

Enable the IT Product Manager by Default  

While there is still some debate about the appropriate (or inappropriate) HW+SW to use to build a cloud computing environment, most people agree that making changes to the associated people and process is the more difficult side of the equation. One way to improve this is to enable the IT-as-a-Service “Product Management” function by default. The service catalogs and portals need to be on by default. The basics of the menu need to be delivered as templates, inclusive of the service offering, the pricing models and the basic policies for usage/security/etc. Not everyone can become a master chef, but many people can successfully learn to make acceptable (and often delicious) meals if you give them a good cookbook filled with recipes. This is also needed for the groups that will be operating the new cloud services. Don’t just give them the tools, but also give them the recipes to enable successful consumption and usage of the services. Continued »
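As an illustration of the “recipe” idea, here is a hypothetical sketch of a service catalog entry expressed as data. The field names, policies and prices are invented for the example and are not taken from any particular cloud management product.

```python
# Illustrative sketch only: a service catalog "recipe" expressed as data.
# Field names, policies and prices are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CatalogItem:
    name: str            # what the consumer sees in the self-service portal
    offering: str        # the underlying service being delivered
    pricing_model: str   # e.g. hourly, monthly, per-request
    unit_price: float    # chargeback/showback rate per unit
    policies: dict = field(default_factory=dict)  # default usage/security policies

small_web_vm = CatalogItem(
    name="Small Web Server",
    offering="2 vCPU / 4 GB VM with hardened OS image",
    pricing_model="hourly",
    unit_price=0.08,
    policies={"backup": "nightly", "network_zone": "dmz", "max_per_user": 5},
)

print(small_web_vm)
```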


December 26, 2013  3:14 PM

Cloud Computing and Lock-In – Understanding and Managing Costs



Posted by: Brian Gracely
API, Budgeting, Cloud Computing, Consumerization of IT, Enterprise, Licensing, Lock-In, Open Source, Opportunity Cost, Private Cloud, Public Cloud, Shadow IT, Transformation

Until a few years ago, you could almost guarantee that every technology vendor pitch would include a claim about how they could help a customer avoid lock-in. It could be through open standards or open APIs or an SDK that would allow customization. And then the industry changed, and the avenues to acquire technology expanded to include public cloud computing and the mainstream use of open-source technologies. These new “vendors” weren’t the same names that had dominated previous generations of IT, so they could use vendor lock-in as FUD to highlight why their new capabilities were worth a look from the IT department (or directly by the lines of business).

Back in the day, technologists would wait until the committees at IETF, IEEE, SNIA, W3C or some other standards body had decided what was officially a standard. Standards were intended as a way to manage interoperability, but also to control costs so that no single vendor could dictate a technology and hence hold a market hostage on prices and profit margins. Interoperability was the customers’ leverage against lock-in.

Somewhere along the way, many people seem to have forgotten what lock-in really means. They have forgotten that it happens every time any new technology is downloaded, deployed or implemented. Lock-in is the cost of making an initial decision + the cost of learning that new technology + the cost of maintaining that technology + some (potential) future cost of changing that technology. There are other “opportunity costs” that can also be factored in, such as CAPEX vs. OPEX (e.g. public cloud) or the agility cost of being able to modify the technology directly (e.g. open-source), but those are really just variations on maintenance costs. Continued »
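To make that framing concrete, here is a minimal sketch of the lock-in equation above in Python. The figures are illustrative placeholders only; the point is that every option, including the “open” ones, carries some combination of these costs.

```python
# Minimal sketch of the lock-in framing above: total lock-in is the sum of the
# decision, learning, maintenance and (potential) switching costs.
# All figures below are illustrative placeholders.

def lock_in_cost(decision: float, learning: float,
                 annual_maintenance: float, years: int,
                 future_switching: float) -> float:
    """Total cost of adopting a technology over its expected lifetime."""
    return decision + learning + (annual_maintenance * years) + future_switching

# Hypothetical comparison: an open-source option is not free of lock-in,
# it just shifts where the costs land (more learning, less switching).
proprietary = lock_in_cost(decision=10_000, learning=20_000,
                           annual_maintenance=15_000, years=3,
                           future_switching=50_000)
open_source = lock_in_cost(decision=5_000, learning=40_000,
                           annual_maintenance=25_000, years=3,
                           future_switching=20_000)

print(f"proprietary: ${proprietary:,.0f}  open source: ${open_source:,.0f}")
```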


December 18, 2013  12:40 AM

Time for 2014 Predictions. I’ll take RSTLNE.



Posted by: Brian Gracely
Uncategorized

It’s that time of year. Time for that time-honored technology tradition of pontificating bloggers making their brilliant 2014 predictions. The time when we dazzle you with our prognostication skills and crystal clear view of the future.

But the big challenge is we’ve all heard these before. This same problem happened to everyone’s favorite game show, Wheel of Fortune, as people got tired of contestants guessing the same letters in the Final Bonus Round. So to spice things up, they automatically gave the contestants the letters R-S-T-L-N-E, allowing the viewers to bypass the monotony of repetition.

For you, my loyal readers, I’m going to give you the R-S-T-L-N-E of 2014 predictions, so you can bypass all those other blogger predictions. You can thank me later…

R - 2014 will be the Year of VDI. Write it down. Tattoo it on your back. The technology has finally come around and it’s ready to move the ball to the finish line.

S – <Insert_Blogger’s_Company_Focus> will lead the industry in a radical technology shift. In the process, the legacy competitor will have no way to keep up with their world-class technology and overall company nimbleness.

T – Software-Defined OpenAwesomesauce will signal the death of 2005’s leading technology. It’s finally dead. Carve the tombstone.

L – Open Source will win. It’s free, like beer, or puppies, or other free stuff like lollipops at the dry cleaners, and free always wins. And if your company doesn’t open-source all of its technology then you will lose.

N – There’s a 50/50 chance that your company will abandon ITIL and embrace DevOps, while converting all of your infrastructure to whitebox hardware, and you’ll only hire people in IT that can write production code on Day 1, just like they do at Facebook. All of your internal company meetings will be replaced by hackathons.

E – Adrian Cockcroft (@adrianco, Netflix) will be highlighted (or keynote) at many Cloud Computing events. This will be balanced against Oracle announcing that they are now supporting a technology or partnership that they condemned just 2 years ago. Continued »


December 8, 2013  6:11 PM

The Ebbs and Flows of SDN



Posted by: Brian Gracely
AWS, Cisco, Cloud Computing, Data Center, Enterprise, Lock-In, Multi-Tenancy, Open Source, OpenDaylight, OpenFlow, Overlay Network, Private Cloud, SDN, Software-Defined, Software-Defined Data Center, Software-Defined Networking, VLAN, VMware

After 10+ years of limited change in the networking industry, we entered 2013 with a great amount of fanfare and promise around a new set of networking technologies to address the challenges brought about by server virtualization and web-scale data centers. The $1.2B acquisition by VMware of Nicira, considered one of the leading pioneers of SDN, got people thinking that maybe there was a chance that leading architectures and market share could change over the next few years. It also signaled that major companies believed in this new networking paradigm and were willing to put significant dollars behind the SDN trend.

Then things got interesting…

Big Switch Networks had attempted to capture early market share by open-sourcing its Floodlight controller, but this was muted when the OpenDaylight project was launched by the Linux Foundation. Major vendors including Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Red Hat and VMware joined as founding members. Both the Floodlight and OpenDaylight controllers were options, but Big Switch has since dropped out of the project. Juniper eventually filled the gap for an alternative technology when it contributed the OpenContrail project.

Along the way, we also saw the launch of interesting start-ups like Plumgrid, Nuage Networks, and Embrane, and Arista Networks continues to march towards an IPO.

And then things got even more interesting… Continued »


November 30, 2013  11:36 AM

The Ebbs and Flows of OpenStack



Posted by: Brian Gracely
Cloud Computing, DevOps, Multi-Cloud, Open Source, OpenStack, Platform

Nothing creates more opinions in the Cloud Computing industry than OpenStack. More than predictions about the “Year of VDI” or “What is SDN?” or “Cloud vs. Cloudwashing”. Nothing. In my 20+ years in the IT industry, I’ve never seen anything like this before.

During the OpenStack Summit, the talk is almost unanimously positive. # of developers, # of committers, # of new projects, training projects, documentation projects and a growing list of customers from companies that you’ve actually heard of. On the surface, the OpenStack community appears to be well organized and making large strides in delivering a massive set of projects via a community model.

Then we start seeing commentary like this from experienced OpenStack engineers. Soon after, we see commentary from Gartner (here, here) about several major challenges that the OpenStack community and OpenStack-centric companies face in 2014. Those two led to several well-written responses, from both vendors and analysts.

The difference in tone between the week of OpenStack Summit and the following weeks was significant. It would have been one thing if the negativity had come from the CloudStack community, or AWS directly. It would have been understandable. But this was coming from within the OpenStack community, or analysts with deep understanding of Enterprise customers and the overall Cloud Management space.

Regardless of your opinion on OpenStack, it’s hard not to wonder where the truth really resides.

And then Solum happens…

Just when you thought the OpenStack community had begun to hit its stride in terms of hype vs. projects vs. execution, somebody swapped the “I” for a “P” and introduced a PaaS-like project (“Solum”) into the mix. OpenStack already had commitments and projects to work with both Cloud Foundry and OpenShift, well-defined PaaS projects. And then for some reason, a faction within the OpenStack community decided that it needed its own variation on a PaaS. Confused? I don’t blame you. You wouldn’t be alone. Continued »


November 30, 2013  10:51 AM

Can Private Cloud Management be Hybrid?



Posted by: Brian Gracely
AWS, Budgeting, Cloud Computing, Cloud Management, Data Center, Hybrid Cloud, Private Cloud, SaaS, SLA, Transformation

One of the interesting slides from the keynote at AWS re:Invent showed the pace of new features/capabilities over the past few years. When building on a platform, the ability to rapidly move up the learning curve is tremendous, and AWS is obviously increasing the pace at which it can innovate: collectively seeing customer problems, sharing best practices for service delivery across its teams, and building a culture of services-led delivery for each new service.

We’re seeing this ability to rapidly innovate in the SaaS Management companies as well. They simplify the ability to consume their services, and they take their day-to-day learnings and use them to constantly improve their products. From a software development perspective they are leveraging Agile principles to increase their pace of delivery, but they are also increasing their operational learning curve, which is equally important.

I’ve explored before whether more Cloud Management Platform companies should be moving to a SaaS model. At the time, I was thinking that it made quite a bit of sense from a financial perspective: allow companies to grow their cloud environment at a pace that makes sense for them. Moving to a cloud operations model (e.g. on-demand, self-service) can often be challenging, and it involves more process change than technology change.

The more I think about it, the more I think the learning curve is the most important part of the equation for both the cloud management vendors and their customers. As Bernard Golden wrote earlier this week, the core software platform that controls your cloud is “Magic”. When systems are complex, the ability to capture learnings about the system is critical.

Is there a need for a Hybrid/Private Cloud?

If a company wants to keep its data on-premise, wants network response times that allow it to create known SLAs, and is still concerned about public cloud security – then it tends to look at Private Cloud solutions. But then there’s the challenge of getting its people and process aligned to properly deliver cloud services. The operations team would typically need to get trained on new tools. They might need to be re-organized so they can focus on “service delivery” instead of just running technology silos (compute, storage, networking). They may even need assistance in how to think about “building services” (e.g. creating “products” that cloud users can get from self-service portals).

We’ve always talked about Hybrid Cloud as being a mix of Private Cloud resources and Public Cloud resources, potentially with a unified management framework that interconnects them. But what about the IT organization that wants Private Cloud characteristics for performance, security and data retention, but has been struggling to get re-organized or have a functioning Cloud Management Platform?

Isn’t this an opportunity that a CIO might consider in order to bring agility to their IT organization and their business users? Isn’t it a win-win situation for both customers and vendors, each increasing their learning curve?

We’re still seeing survey data that says most CIOs want to build Private Clouds (with or without additional Hybrid/Public Cloud resources), but the significant success stories have been rare – lots of virtualization, not as much self-service. Is 2014 the year where we begin to see vendors bring new options to fill these opportunities?

 


