From Silos to Services: Cloud Computing for the Enterprise


March 23, 2013  3:43 PM

Big Data Thoughts from Structure:Data

Brian Gracely

This past week I had the opportunity to attend the GigaOm Structure:Data conference in NYC. Unlike many industry conferences, which are sponsored by a vendor or have an agenda dictated by a specific technology, this show did an excellent job of bringing together a broad mix of technologies, vendors, customers and thought leaders. While the conference hype was all about “Big Data”, the technology and its deployability are still in the early stages for all but the top 1-2% of the industry. There is a summary from GigaOm here, as well as broad media coverage. Going back through my notes, I found the following thoughts most worthy of follow-up.

  1. Big Data is Difficult.
  2. Data Huggers are the New Server Huggers - Company after company I spoke with highlighted that existing organizational structures are their #1 challenge to Big Data strategy success. Organizations love their data. Organizations don’t love sharing their data with other groups, even within the same business.
  3. Forget the Economy, Big Data is the 1% Club - While Business Intelligence and Data Warehousing have been around for quite a while and are deployed at many companies, the number of companies able to leverage the newer technologies (Hadoop, NoSQL databases, R, etc.) to unlock business insight in real time is still extremely small.
  4. Big Data != Fast Data - It is becoming clear that there is a big difference between Big Data and Fast Data, both in technologies and in use-cases.
  5. Hadoop is the Foundation, but beyond that… - While the Hadoop market is competitive, with Apache Hadoop, Cloudera, Hortonworks, IBM, MapR, Oracle, Pivotal and SAP all trying to sell a Hadoop-centric product, the real wars will be fought over the tools, frameworks and extensions that are layered on top of Hadoop.
  6. “Telemetry” will make its way into your vocabulary – Whether it’s called “Internet of Everything” or “Sensor Data” or something else, you will begin to hear a massive push about how telemetry data will be attached to people and machines to drive real-time fast data and unlock new markets.
  7. Connecting to the legacy is key – Many companies are focused not only on integrating legacy datastores into Hadoop-based “Data Lakes” or “Data Reservoirs”, but also on integrating existing SQL tools and skills into a Hadoop environment. The SQL aspect attempts to overcome the shortage of Data Scientists and extend Big Data out to more generalist business users.
  8. Data Scientists are in massive demand – This has been highlighted before, but the shortage in our industry is still massive. Not only is there demand for people to analyze the data, but also for people who can set up and run Hadoop environments and integrate legacy systems with Hadoop.
  9. Huge Opportunities for Big Data On-Demand – While many Cloud Service Providers offer various types of on-demand IaaS resources or on-demand Database services, the opportunity to experiment with Big Data or Fast Data use-cases is massive. With setup still being complicated, there are huge opportunities for Cloud SPs to expand their offerings to be turn-key, at various sizes, to accelerate the time to analysis and action.
  10. Bandwidth is Still a Problem – While Big Data might be a big deal, it still hasn’t overcome that pesky little physics issue – the speed of light. It will be interesting to watch how the location of data (on-premise vs. in public clouds) shapes the industry over the next 3-5 years.
  11. Get familiar with Open-Source Frameworks - Whether you’re deploying with Puppet or Chef, coordinating resources with Zookeeper, or developing tools that leverage Pig or Hive, it’s time to start familiarizing yourself with open-source frameworks and community-based knowledge sharing (see the sketch after this list). Big Data (or Fast Data) is attempting to solve challenges that are beyond any single organization, so using the tools and frameworks of the community will help accelerate your chance at success.
  12. Your Data is Your Next Product/Market – It was interesting to hear how many side conversations involved companies that currently possess massive amounts of industry-specific data that are now looking to unlock (and sell) this to external industries. For example, intelligent weather data could be extremely valuable to dozens of companies (finance, insurance, farming, transportation, grocery stores, airlines, etc.) that may be able to make better decisions from data that was never previously available to them.
  13. Big Brother Knows About You – You’re welcome to keep fooling yourself into believing that you have some level of privacy or information security. Think again. Every device you interact with, every transaction you make and every location you visit is being tracked, correlated, analyzed and acted upon by someone.
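To make item 11 a bit more concrete, here is a minimal sketch of the kind of hands-on work those open-source frameworks expect: a word-count-style Hadoop Streaming job written in Python. The input format, the field being counted and the file paths are all hypothetical; the point is simply that the mapper and reducer are ordinary scripts that read stdin and write tab-separated key/value pairs to stdout.

```python
#!/usr/bin/env python
"""Minimal Hadoop Streaming job (hypothetical): count records per key.

Illustrative invocation (paths and jar location are placeholders):
    hadoop jar hadoop-streaming.jar \
        -mapper "countjob.py map" -reducer "countjob.py reduce" \
        -input /logs/raw -output /logs/counts
"""
import sys


def map_stdin():
    # Emit "key<TAB>1" for every input line; assume the key is the first
    # comma-separated field (e.g. a device or sensor id).
    for line in sys.stdin:
        fields = line.strip().split(",")
        if fields and fields[0]:
            print("%s\t1" % fields[0])


def reduce_stdin():
    # Hadoop Streaming sorts mapper output by key, so identical keys arrive
    # contiguously; sum the counts for each run of the same key.
    current_key, count = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current_key:
            if current_key is not None:
                print("%s\t%d" % (current_key, count))
            current_key, count = key, 0
        count += int(value or 0)
    if current_key is not None:
        print("%s\t%d" % (current_key, count))


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        reduce_stdin()
    else:
        map_stdin()
```

The same aggregation could be expressed in a few lines of Pig or Hive, which is exactly the generalist-friendly path item 7 points to.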

March 23, 2013  12:12 PM

Understanding Cloud Computing Forecasts

Brian Gracely

As the market for Cloud Computing products and services evolves, the stakes for success or failure (for companies, vendors, integrators, etc.) continue to rise. With that in mind, the amount of research coming to market will continue to grow. For anyone analyzing this data, or using it to help make future strategic or tactical decisions, it’s important to keep several factors in mind. Being able to read between the lines and understand what might be below the surface can make the difference between leading, spotting trends early, or simply following the crowd.

  1. Audience - Who is the target audience of the survey? Are they IT professionals who currently work in IT operations, IT architecture or application development? It’s especially important to understand whether they come from IT, or from the groups trying to work around IT.
  2. Area of Focus - Do the survey results come from people focused on existing IT systems or on future-looking systems (e.g. Mobile, Big Data, SDN, Automation, Open-Source, etc.)? IT silos can create unique viewpoints about what problems exist and how they can be solved.
  3. Decision-Making / Budget-Owner – Which group(s) within the organization have responsibility for IT budget? Which groups are able to obtain funding for IT services outside the existing IT organization?
  4. Length and Scope of Projects – Is the research focused on the length or scope of projects? Long-term projects have a completely different framework (planning, strategic alignment, project management, budgeting, etc.) than short-term projects, which are primarily driven by immediate needs. Continued »


March 17, 2013  10:01 AM

Bringing Big Data to Big Projects

Brian Gracely

Every day we get bombarded by technical acronyms (BYOD, CoIT, MDM, APIs, IaaS, etc.) and vendor-speak about new ways that IT can bring agility to the business. IT organizations need to Mobile-enable their workforce to harness the power of Big Data to uncover new insights that will unlock differentiation and agility. And after a while, the market begins to tune out because the noise-to-signal ratio gets overwhelming.

Too often we hear technology vendors say that if all IT organizations would just operate like Google or Facebook or Twitter, then IT costs would be reduced and business productivity increased. Except this leaves many companies saying that they don’t have a “deliver digital ads” problem, so how does that approach make sense for them?

Two years ago, I was introduced to Christian Reilly (@reillyusa), who is part of the IT organization at construction leader Bechtel. Bechtel had been looking at how to solve some massive business challenges (global workforce, complex projects, internal and external employees, etc.) by better leveraging their technology investment. It required them to transform how they thought about technology, as well as implement a new set of technologies to enable new applications. As I quickly learned from Reilly, this set of changes wasn’t something they could buy shrink-wrapped in a box, but rather a multi-year transformation that involved people, process and technology changes.

It had been a while since I last caught up with Reilly, but this past week I saw a very interesting video that Bechtel jointly created with Apple about their iPad rollout. While the video is produced in typical high-production-value Apple manner, under the covers it highlights the implementation of a ton of very interesting technology. Their solution is not being used to serve ads or update a social network, but instead is focused on things that aren’t sexy but are critical for Bechtel to solve their business challenges and bring value to their customers. Let’s take a look at some of the things behind the scenes. Continued »


March 11, 2013  6:11 PM

Cloud Computing – Platforms vs. APIs vs. Tools vs. Features

Brian Gracely

One of the more interesting aspects of public Cloud Computing, beyond all the elements of on-demand (pricing, scaling, etc.), is the number of add-on services that have emerged from the ecosystem to add value around core platforms like Amazon AWS, Rackspace, Azure, Google Compute Engine, etc. Some of these services include Boundary, New Relic, enStratius, Rightscale, Cloudability, ShopForCloud, Cloud Checkr, Newvem, Cloudyn, CloudPassage and many others. These services allow customers not only to fill gaps in the service offerings of those platforms, but also to consume the add-ons in the same on-demand manner as the underlying IaaS, PaaS or SaaS platforms.
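As a rough illustration of the gap these add-on services fill, here is a minimal sketch, assuming the boto 2.x Python library and already-configured AWS credentials, of the kind of raw inventory pull a third-party cost or monitoring dashboard builds on. The region and the “Owner” tag are hypothetical choices, not anything from a specific product.

```python
# Sketch: pull a basic EC2 inventory with boto 2.x. Credentials are assumed
# to be configured in the environment or ~/.boto; the region and tag name
# below are illustrative.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

running = []
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        if instance.state == "running":
            running.append({
                "id": instance.id,
                "type": instance.instance_type,
                # "Owner" is a hypothetical tag a chargeback tool might key on.
                "owner": instance.tags.get("Owner", "untagged"),
            })

# An add-on service would enrich this with metrics, pricing and alerting;
# here we just report the raw count.
print("%d running instances" % len(running))
```

The value the ecosystem adds is everything around this call: history, cost allocation, anomaly detection and a consumable UI, all delivered on-demand like the platform itself.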

But an interesting thing tends to happen with software platforms, both on-premise and in the cloud. Over time, they tend to eat their ecosystems. We’ve all experienced it with platforms such as Windows, where things like TCP/IP stacks, web browsers, media players and all sorts of other functionality used to require 3rd-party add-ons. And now we’re beginning to experience it with Cloud Computing platforms. We saw it over the past couple of weeks with announcements from Amazon AWS – the OpsWorks and Trusted Advisor services. It’s a classic case of the platform provider wanting to deliver an end-to-end experience to the customer, as well as adding stickiness to the platform. For the 3rd-party tools vendors, it becomes an inflection point where they have to decide whether they now want to compete on price, features, unique technology, or just fold up shop. We discussed some of this on The Cloudcast Eps.77 (starting at the 19:30 mark).

So if you’re a customer of any of these services, what should you do? Continued »


February 26, 2013  11:11 PM

10 Questions for the 2013 OpenStack Summit in Portland

Brian Gracely

With the Spring version of the OpenStack Summit coming up in just a few weeks, I’ve been thinking about the key indicators or questions that I have about OpenStack as 2013 continues.

1. Who are the major OpenStack customers?

While each OpenStack Summit highlights a new set of users or use-cases, the majority of them are either small-scale or only using a limited number of OpenStack services. This aligns with the modular nature of the projects, and to some extent with their competitive goal vs. AWS, but it doesn’t align with a complete “stack” solution. When is it realistic to see Enterprise customers that were previously VMware-centric move to a complete OpenStack environment?

2. Are there already too many distributions? Should they be considered competitive, similar to Linux distributions in the 1990s and 2000s?

For a project that is three years old, what is a reasonable number of distributions to have appeared on the market? How are customers supposed to keep track of all the variations? Does the OpenStack community expect this number to grow (whether modestly or significantly) before it begins to pare down?

  • Rackspace (2 versions – Private, Public)
  • HP Cloud (public cloud)
  • Piston Cloud
  • Nebula (shipping details TBD)
  • Cloudscaling
  • Cisco
  • Dell
  • Red Hat
  • IBM (shipping details TBD)
  • Various Linux distributions

3. What is the “Open” goal for OpenStack these days? (open-source, multi-cloud)

One of the main goals of OpenStack is to allow open interoperability between clouds and (potentially) facilitate the open movement of applications or data. We’re already seeing the early Service Providers (Rackspace, HP Cloud) shipping incompatible versions. Is open cloud still a goal, or have market priorities made that almost impossible? Continued »
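One way to frame the interoperability question is at the API level: in principle, the same client code should work against any OpenStack cloud by changing only the endpoint and credentials. Below is a minimal sketch assuming the python-novaclient v1.1 interface of that era; the endpoint URLs, usernames and tenant names are placeholders, and in practice the flavors, images and API extensions each provider returns are where incompatibilities show up.

```python
# Sketch: the "open" promise is that only the auth endpoint and credentials
# change between OpenStack clouds, while the client code stays the same.
# Endpoints and credentials below are placeholders.
from novaclient.v1_1 import client

CLOUDS = {
    "provider-a": "https://identity.provider-a.example.com/v2.0/",
    "provider-b": "https://identity.provider-b.example.com/v2.0/",
}

for name, auth_url in CLOUDS.items():
    nova = client.Client("demo-user", "demo-password", "demo-tenant", auth_url)
    # List the flavors each cloud exposes; differing results (or missing API
    # extensions) are where the "incompatible versions" problem bites.
    print(name, sorted(flavor.name for flavor in nova.flavors.list()))
```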


February 17, 2013  3:57 PM

Is a New Journey Needed for Business-Critical Applications?

Brian Gracely

For the past 3-4 years, we’ve seen tremendous growth in the level of virtualization adopted within Enterprise and Mid-Market data centers. Statistics show that we reached the tipping point for Virtual Machines vs. Physical Machines in 2009, with that lead expected to grow to nearly 2x by the end of this year.

And as VMware’s CEO told us during his VMworld 2012 keynote, virtualized workloads now account for 60% of all workloads in the data center.

So we have lots and lots of VMs being created, yet we seem to be somewhat stuck in terms of which applications are getting virtualized. And in case it’s not clear which applications make up the “other 40%”, it’s the business-critical ones: ERP, CRM, HCM, Exchange, and a bunch of other nasty applications that cost a lot of money to operate and which don’t immediately save money when they get consolidated.

VMware has been going after this market for the last couple years, by adding advancements to their ESX hypervisor to handle larger VMs (more RAM, more vCPUs, new clustering and HA mechanisms) and more granular I/O capabilities (Storage I/O Control, Network I/O Control, QoS). It would appear, on the surface, that the pieces should be in place to virtualize those next 40% of applications. So what’s holding this back from gaining mainstream adoption?

Here’s a list of considerations: Continued »


February 17, 2013  2:30 PM

Unlocking the 3rd Option – Hybrid Cloud

Brian Gracely

Almost every aspect of both our personal and professional lives has evolved to the point where a variety of choice is the expected norm. We buy things how we want; we work where it makes the most sense; we personalize how we appear and communicate; and we’re partnering with a greater number of organizations than ever before. Just look at how many apps are on your smartphone or open tabs in your browser, and it doesn’t take long to realize that we have internalized how to find the right fit for each challenge.

When it comes to IT organizations, we haven’t been nearly as flexible. While SaaS adoption has grown for many non-differentiated services, the adoption of Cloud Computing is often considered the 3rd option after internal data-center resources or outsourcing contracts. But this way of thinking is beginning to change. We’re starting to see large organizations become frustrated with their outsourcing contracts (here, here). We’re quickly seeing a significant change in the companies identified as leaders and visionaries (2010, 2011, 2012) in the cloud service provider market, especially towards those that offer differentiated services. Throw in the emergence of several viable PaaS platforms (Heroku, CloudFoundry, Apprenda, etc.) and we’re on the cusp of the 3rd option, variations of Hybrid Cloud, becoming more and more mainstream for IT organizations.

So when is the right time to consider either migrating existing applications, or beginning a journey with new application models? Here are some triggers to consider:

  • The end of existing outsourcing contracts that haven’t kept up with technology trends, especially those longer than 3 years.
  • Uncertainty about the longevity of existing/legacy hardware platforms, such as Itanium or RISC-based servers.
  • Uncertainty about the longevity of existing/legacy hardware providers, such as Dell or HP.
  • The opportunity to truly change the economics of business-critical applications by moving to both a virtualized environment and OPEX-based cloud deployment model.
  • Shifting business environments, driven by mergers, globalization, or evolving industry regulation (HIPAA, FedRAMP, PCI-DSS, etc.).

Continued »


February 10, 2013  11:43 PM

Shadow IT is not Good Guys vs. Bad Guys

Brian Gracely

For the past few years, there has been greater recognition that a few major trends are invading the IT landscape – smarter business users, challenging IT budgets (here, here), and greater availability of Cloud Computing services (especially IaaS and SaaS). Unfortunately, in parallel to those realizations, there is a growing desire by some to classify this as “Shadow IT”, as if this new desire to drive productivity were the equivalent of an illegal black market.

As analyst Ben Kepes points out, there is quite a bit of demand from end-users to leverage new services to help them drive productivity and better compete in their markets.

So who are the good guys in the Shadow IT discussion?

Who are the bad guys?

And does it do anyone any good to draw a definitive line between productivity and risk?

Does it do IT organizations any good not to consider leveraging every potential resource that can help give their business an advantage, the same way every other direct report to the CEO does? Does it do the line-of-business owners any good not to consult their technology experts?

If we didn’t work in IT and one of our employees came to us with a great idea about how to drive productivity, would we call it “Shadow Worker Productivity”? I doubt it.

I completely understand that this evolution of IT service, delivered in-house or via Cloud Service Providers, introduces a whole new set of technology, process and cultural changes. But they are being driven by productivity. They are being driven by risk management (time-to-market vs. following existing rules). And they are being driven by excess DEMAND for the use of technology to solve business problems.

In all reality, “Shadow IT” has very little to do with traditional IT. It’s Economics 101 – supply and demand. Traditional IT isn’t structured or funded to keep up with today’s new demand models. But that demand isn’t a black market. It’s not illegal goods and services. It is an opportunity. Actually, it’s many opportunities.

But if our industry keeps calling it “Shadow IT”, keeps trying to make it about Good Guys vs. Bad Guys, then we’ll miss the opportunity to actually define how impactful technology can be in accelerating the cycle from great idea to great execution.


January 31, 2013  10:48 AM

Most Cloud SLAs don’t Cover Your *aaS

Brian Gracely

Every few months (or weeks), the Cloud Computing industry seems to pick a topic and beat it to death from a technology or religious point of view. The concept of “Cloud SLAs” has been doing the rounds lately. Conveniently, these particular discussions came up after a few well publicized Public Cloud outages.

Lydia Leong (Gartner, @cloudpundit) recently got the pot stirring with her piece about HP and Amazon AWS SLAs. Lydia is very well respected in the industry and she does a nice job of digging into the details of various vendors’ SLAs. She obviously has a deep understanding of this space, especially as it relates to Enterprise customers, as she leads the Gartner IaaS Magic Quadrant program.

There is some interesting back and forth in the comments about what is a proper definition of an SLA. That would be all well and good if Cloud Computing used lawyers or auditors to solve business problems. But it doesn’t. It uses technology. And quite honestly, the business leaders that are paying for various Cloud Computing services don’t care about the legalese or the underlying technology. They care about the business. They care about moving the business forward and managing business risks. Cloud SLAs, in their current form (in most cases), don’t align the business risk and the technology risk very well.
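To see why the numbers rarely line up with business risk, here is a quick back-of-the-envelope sketch; the SLA percentages, service-credit size and revenue figure are illustrative assumptions, not taken from any provider’s agreement.

```python
# Rough arithmetic: what an availability percentage actually permits in
# downtime, versus what a typical service credit returns. All figures are
# illustrative assumptions.
HOURS_PER_MONTH = 730.0  # ~ 365.25 days * 24 hours / 12 months


def allowed_downtime_hours(sla_percent):
    """Monthly downtime that still technically meets the SLA."""
    return HOURS_PER_MONTH * (1.0 - sla_percent / 100.0)


for sla in (99.0, 99.9, 99.95, 99.99):
    print("%.2f%% uptime still allows %.2f hours of downtime per month"
          % (sla, allowed_downtime_hours(sla)))

# Even when the target is missed, the remedy is usually a partial credit of
# the monthly bill -- not compensation for the business impact.
monthly_bill = 10000.0        # hypothetical cloud spend
service_credit = 0.10 * monthly_bill
revenue_per_hour = 50000.0    # hypothetical cost of an hour of downtime
print("A 10%% credit returns $%.0f; four hours of downtime may cost $%.0f"
      % (service_credit, 4 * revenue_per_hour))
```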

Let’s step back a second and look at this in a slightly different context…

Continued »


January 22, 2013  9:50 AM

Extending the Concept of “Application Tiering” to Cloud

Brian Gracely

While the discussion about Cloud Computing has evolved over the past few years, far too often it still devolves into a semi-religious debate about Private Cloud vs. Public Cloud. Traditional IT viewpoints say that security and reliability should rule the day, while more progressive viewpoints argue that this old thinking is slowing innovation and the pace of business growth. Not surprisingly, these viewpoints tend to align to either a Private or Public slant.

What is somewhat surprising is that IT organizations have not followed a strategy that has been proven over many years and against various ROI calculations: the practice of “tiering” their applications. In the past this meant applying various levels of resources (typically faster CPUs, more RAM, faster networks, various levels of redundancy, etc.) to different classes of applications. While some will say that dragging any legacy concepts into the new cloud world is a disaster waiting to happen, the reality is that most Enterprises have a huge variety of application needs and application types. Expecting them all to run in a similar manner, with similar SLAs or costs, is not realistic. It would be like saying that because everyone can wear a suit and tie to the office, they should all be paid executive compensation. Continued »
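One lightweight way to extend tiering into the cloud discussion is to make the tiers explicit as data and let placement follow from the tier, rather than re-arguing Private vs. Public for every application. A minimal sketch, in which the tier definitions, placements and example portfolio are all hypothetical:

```python
# Sketch: express application tiers as data, then derive placement from the
# tier. Tier definitions, placements and the application list are
# hypothetical examples, not a prescription.
TIERS = {
    "tier-1": {  # business-critical: ERP, core databases
        "availability": "99.99%", "redundancy": "active/active",
        "placement": "private cloud, dedicated hosts",
    },
    "tier-2": {  # important, but tolerant of brief outages
        "availability": "99.9%", "redundancy": "active/passive",
        "placement": "private or virtual private cloud",
    },
    "tier-3": {  # dev/test, batch, bursty analytics
        "availability": "best effort", "redundancy": "rebuild on failure",
        "placement": "public cloud, on-demand capacity",
    },
}

APPLICATIONS = {
    "erp": "tier-1",
    "order-reporting": "tier-2",
    "intranet-wiki": "tier-3",
}

for app, tier in sorted(APPLICATIONS.items()):
    spec = TIERS[tier]
    print("%-16s -> %s (%s, %s)"
          % (app, spec["placement"], spec["availability"], spec["redundancy"]))
```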


