Ask the IT Consultant


February 4, 2012  10:30 AM

Cloud Use Cases – Making the cloud work for you!

Beth Cohen

Question:  My company is exploring building a private cloud.  What uses will best leverage my cloud infrastructure investment?

The magic of the cloud is that it can do anything; it is both robust and flexible, the best of both worlds.  OK, I admit that I have been spending far too much time reading cloud marketing materials lately.  Now back to reality.  Yes, the cloud is highly flexible and can do almost anything, but if you want to get the most out of your private cloud investment, you need to pay attention to the underlying hardware, as I discussed previously, and you need to define what you plan to use it for by creating and testing use cases.

Use case planning might seem counter-intuitive.  After all, you can sign up for a web server with Amazon in about five minutes, so it would appear that Amazon does not know or care what you are planning to do with it.  Wrong.  Amazon's product management department spends plenty of time figuring out exactly what would be attractive to its typical customer and honing the service to deliver it.  For the enterprise, the planning process is no different, except that instead of planning for an external paying customer, the use could be for internal application developers or a web portal, for example.

To give you an idea of how this works, let us say you are planning to use the cloud for the company's e-commerce website.  This means you will need to plan for applications that support thousands of sessions per second, variable workloads, and lots of complex and changing data.  By identifying the key metrics, such as the number of concurrent transactions per second and the size of the database, you can then build a method for testing your assumptions.
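
To make that testable, here is a minimal sketch in Python of the kind of harness you might start with: it fires concurrent requests at a test endpoint and reports throughput and latency.  The URL, concurrency level and request count are stand-in parameters, not recommendations.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical test parameters -- substitute your own endpoint and targets.
TEST_URL = "http://test-shop.example.com/catalog"
CONCURRENCY = 50        # simulated concurrent sessions
TOTAL_REQUESTS = 1000   # total requests in the test run

def fetch(_):
    """Issue one request and return its latency in seconds."""
    start = time.time()
    with urllib.request.urlopen(TEST_URL, timeout=10) as resp:
        resp.read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    t0 = time.time()
    latencies = list(pool.map(fetch, range(TOTAL_REQUESTS)))
    elapsed = time.time() - t0

print(f"Throughput: {TOTAL_REQUESTS / elapsed:.1f} requests/sec")
print(f"Average latency: {sum(latencies) / len(latencies):.3f} sec")
```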

To get the conversation started, here is a short list of possible use cases for a private cloud.  Over the next few weeks I will be digging deeper into how to leverage the cloud model in the enterprise.

Archive storage — Many companies have moved to keeping their archives online instead of on backup tape, for many excellent reasons, but using SAN or near-line storage is still expensive.  Cloud object or block storage is an attractive alternative because of its optimized approach to high availability, and it scales nicely as archives grow over time.
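
As a minimal sketch of what this can look like in practice, the Python snippet below pushes an archive into OpenStack Swift object storage using the python-swiftclient library; the auth URL, credentials, container and file names are all placeholder values.

```python
from swiftclient import client  # python-swiftclient

# Placeholder endpoint and credentials -- substitute your own.
conn = client.Connection(
    authurl="http://swift.example.com/auth/v1.0",
    user="archive_user",
    key="secret_key",
)

conn.put_container("archives")  # idempotent; creates the container if missing
with open("finance-2011.tar.gz", "rb") as archive:
    # Swift stores the object redundantly across zones for us.
    conn.put_object("archives", "finance-2011.tar.gz", contents=archive)
```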

Federated hypervisor/VM management – This is one of the main reasons the enterprise is interested in the cloud in the first place: any server, any service, any time.  Adding self-service, chargeback and transparent delivery of the right resources from a federated pool can be very cost effective.  Look for a cloud that provides cross-platform hypervisor support and robust VM management tools.

Development and test – One of the best use cases for an enterprise cloud is a shared development and test environment.  Self-service is essential, but the private version allows much more control over resource use by applying a rules-based delivery model to optimize IT investments.  Creating an enterprise PaaS environment is also desirable because it allows better integration across applications and more standardized application development.

Application spaghetti rationalization – An enterprise cloud delivers better application portfolio management and more efficient deployment by leveraging self-service features and rules-based deployment tied to types of use.

Web services, portals and e-commerce – Web services of all sorts are a natural fit for the enterprise cloud.  They are well suited to take advantage of its inherent elasticity and automated workload-based provisioning and deployment capabilities.

VDI Support – VDI is another natural fit for an enterprise cloud.  VDI is often used to maintain better control over workers' compute environments, but the workloads are inherently highly variable, which is an excellent reason for implementing such systems on the cloud.  An obvious extension is mobile application support, which is a growing part of the enterprise service portfolio.

Disaster Recovery/Business Continuity — Again, the cheap storage and VM management make a good case for using the cloud as a secondary site.  The public cloud is already heavily used for these purposes, but moving the function in-house could be cost effective for a very large enterprise.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Transforming Businesses with Cloud Solutions

January 15, 2012  3:00 PM

Cloud Redundancy – A different approach to component failure

Beth Cohen

Question:  What is the best way to manage the thousands of components in a typical cloud?  How does managing “at scale” change my systems administration practices?

People have been managing data centers for 30-40 years now, so there should be a good set of standard best practices for building highly available, resilient components.  That is true for the old-style data center, but the old best practices are expensive and do not scale well for cloud architectures.  Duplicating hardware to protect against failure works well when you have hundreds of components, but the costs are linear, so it does not scale.  Unlike traditional IT operations, over-design to protect against obsolescence is not desirable when scaling to thousands of nodes.  For example, spending an extra $6,000 per rack for 10 GbE switches might seem a sensible way to protect against hardware obsolescence if you have 10 racks, but that extra cost is much harder to justify when you are provisioning 100 racks and it has turned into an extra $600,000!

The principle of "replacement management" takes on great importance when managing the thousands of physical devices required for a cloud deployment.  The advantage of the cloud is that you do not need to build expensive, highly redundant systems, because the assumption that components will fail is built into the architecture.  By leveraging the huge pools of cloud resources, the level of hardware redundancy can be considerably reduced: if a component fails, the system continues to work until someone replaces it.  Since low-priced commodity devices typically have a high failure rate, the whole architecture needs to be based on "availability" and "partial failure".

In a cloud environment, it makes much more sense simply to replace a component than to worry about what caused the failure and try to troubleshoot it.  The most common components to fail are disks, since they have mechanical moving parts; a typical disk failure rate in a cloud data center is about 10-15%.  Fans, power supplies and memory also fail, though less frequently.  The OpenStack Swift architecture, for example, assumes that disks, systems and entire zones can and will disappear (fail) at any time, yet it keeps only three copies of every file and builds no additional redundancy into the hardware.

This approach to failure at scale can be very cost effective, but it takes a different mindset from traditional operations.  Every cloud operations engineer should learn what is in the service, where the critical parts are located, and how to replace a failed component, then incorporate that knowledge into standard operations processes.  Automated tools need to be written to help identify the location of failed disks and other components so they can quickly be isolated from the environment and replaced.  To maintain a high level of robustness without sacrificing cost efficiency, the system needs to be designed to replicate data at the application/software level, not the disk or network level.
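
As a rough sketch of what such a tool might look like, the Python script below walks a node's block devices and flags any disk whose SMART health check fails.  It assumes smartctl (from the standard smartmontools package) is installed; the device pattern and the alerting step are placeholders.

```python
import glob
import subprocess

def failed_disks():
    """Return block devices whose SMART overall-health check does not pass."""
    bad = []
    for dev in sorted(glob.glob("/dev/sd?")):
        # 'smartctl -H' reports the drive's overall health assessment;
        # a non-zero exit status or a missing PASSED verdict means trouble.
        result = subprocess.run(["smartctl", "-H", dev],
                                capture_output=True, text=True)
        if result.returncode != 0 or "PASSED" not in result.stdout:
            bad.append(dev)
    return bad

if __name__ == "__main__":
    for dev in failed_disks():
        # Placeholder for real alerting: ticket, queue entry, dashboard, etc.
        print(f"Disk {dev} failed its health check -- schedule replacement")
```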

In conclusion, the biggest paradigm shift is that development and operations groups need to work together to optimize the systems and drive down costs.  Tests and metrics need to be created to determine the optimum system configurations.  Understanding how changes in the components affect the system as a whole lets you flexibly configure the systems to meet application requirements as they change.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Transforming Businesses with Cloud Solutions


January 3, 2012  1:30 PM

Cloud Hardware – Sacrificing system efficiency for low cost

Beth Cohen

Question:  Is the cloud really hardware agnostic?

The wonderful thing about cloud architectures is that they are designed to be cost effective at massive scale.  The major cloud providers are profitable not only because they can aggregate customers and use the available equipment more efficiently, but because they can leverage their considerable market muscle to purchase truckloads of components at steep discounts.  As Google discovered and published in Failure Trends in a Large Disk Drive Population, the brand and cost of a hard drive had little to do with its reliability.  Another paper delivered at the same 2007 Usenix conference, Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?, came to similar conclusions.  The key to building reliability in the cloud is not quality components; it is building a hardware architecture that assumes the components will fail and plans for that failure.  Since the individual components are essentially interchangeable, it stands to reason that a good cloud architecture should be completely hardware agnostic.

The fallacy in that kind of thinking is treating failure rates as the only criterion for choosing a given component.  As you know, hardware is a moving target; new and better hardware is always coming around the next corner.  Any good storage engineer knows that enterprise customers do not pay the EMC or NetApp premium just because they feel more comfortable buying from a known brand.  They are typically paying for the better tools, faster performance or bigger capacity that they need for their high-performance applications.

It turns out that this applies to cloud hardware architectures as well.  Hardware does in fact matter if a cloud is going to run at peak efficiency, and which components are chosen can make a significant difference under stress conditions.  When the objective is to optimize the environment, the ideal cloud should be running at close to peak capacity (essentially under some stress) most of the time.  For example, in a storage array the two constraints are always going to be system network bandwidth and disk I/O, i.e., how fast the disks can push the data around.  By specifying a faster disk controller and tweaking the configuration, for example by eliminating disk write caching, you can boost throughput so the entire system runs that much more efficiently.  Yes, in this case you will be reducing disk reliability, but since you already have a mechanism that provides disk failure resiliency in other ways, that risk can be tolerated in exchange for the faster throughput.
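
As a hedged illustration of that kind of tuning knob, the Python sketch below shells out to the standard hdparm utility to turn off the on-disk write cache on a node's data disks.  The device pattern is a placeholder, root privileges are assumed, and whether this trade is worthwhile depends entirely on your controller and workload, so treat it as an example of the knob, not a recommendation.

```python
import glob
import subprocess

# Placeholder device pattern -- match this to your actual data disks.
DATA_DISKS = sorted(glob.glob("/dev/sd?"))

for dev in DATA_DISKS:
    # 'hdparm -W 0' disables the drive's on-board write cache;
    # 'hdparm -W 1' would re-enable it. Requires root privileges.
    subprocess.run(["hdparm", "-W", "0", dev], check=True)
    print(f"Write caching disabled on {dev}")
```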

In conclusion, at the proof-of-concept and small-system level, cloud hardware agnosticism works just fine.  For massive cloud installations that want to run at peak efficiency, however, paying attention to specifying the right hardware components to eliminate throughput bottlenecks has the potential to boost overall performance significantly.  The trick is determining whether the hardware cost differential is worth the increased performance.  Of course, at truly Amazonian scales that cost differential essentially disappears, but at more modest enterprise scales, in my opinion, the TCO business case for the better hardware will prevail in most cases.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


December 27, 2011  10:00 AM

May the Best Cloud Reference Architecture Win!

Beth Cohen

Question:  How can the cloud standardize on a reference architecture when there are so many different ones?

In our work, clients often ask for a cloud reference architecture.  They see it as a holy grail that will provide clear vision and help define their cloud strategies.  In response to a recent client inquiry about a cloud hardware reference architecture that could be used to define a joint venture with another company, I realized that defining a cloud reference architecture as a common set of hardware platforms was the wrong way to look at the question.  While I do agree a reference architecture is certainly a good starting point, it is not something I would build an entire corporate cloud strategy around.

Reference architectures are, by their nature, somewhat theoretical.  Like the OSI model and the cloud stack model, they are artificial constructs or frameworks that can be used to create real systems.  By themselves they tend to be so generic and broad that building an enterprise cloud based on a reference architecture, without considerable work defining business objectives and system requirements, would only be an exercise in frustration.  For example, an enterprise cloud designed for internal use as a more flexible development platform is a far different animal from an enterprise cloud planned to support massively scalable customer-facing applications.

After looking at a few of the available cloud reference models, it is clear that a reference architecture means very different things depending on the agenda of its creator.  A useful way to compare the perspectives of the various cloud reference architectures is to map them to the older and simpler cloud service stack model: IaaS, PaaS and SaaS.  For example, both the NIST architecture and the closely related IBM architecture are relatively generic and high level, but both have an operational focus that closely matches the IaaS layer.  The recently published Rackspace Private Cloud Reference Architecture, while specific to OpenStack, also has a primarily operational/IaaS bias.  Microsoft, not surprisingly, has defined the cloud more from a development platform/PaaS view, though it also has an IaaS-flavored version based on Hyper-V.  The HP and VMware versions are more appropriate for companies building end-to-end applications, SaaS or otherwise.

In the end, a good cloud reference architecture should be robust enough to be used from a variety of perspectives: business, operations, development and consumer.  However, any company foolish enough to try to build its strategy on a cloud reference architecture, without applying a rigorous amount of common sense and basic good business practice, is likely to be disappointed with the results.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


December 18, 2011  1:30 PM

Recovering from Big Bangs, Glops and Legacy Linguini

Beth Cohen

Question:  How can cloud migrations cut through the mess of legacy applications and failed IT initiatives?

The growing IT best-practices consensus is that creating an enterprise cloud is smart money for corporations.  Private cloud infrastructures empower users with self-service portals, stable application building platforms, and flexible service delivery models.  In addition, warming the hearts of corporate bean counters around the world, enterprise clouds can save significant amounts of money by maximizing the utilization rates of expensive data center systems.  Cloud nirvana is achievable; there are plenty of successful case studies around proving the value.  However, for many senior IT executives just a review of the corporate IT portfolio is a daunting task, let alone systematically determining which applications should move to the cloud and then figuring out the best way to achieve that goal.

The sorry reality is that no matter how well disciplined a company is about its IT governance, the average enterprise has 20-plus years of legacy applications in its portfolio that it needs to manage.  Over time, even the best-run company will acquire what one of my co-workers affectionately refers to as glops and legacy linguini.  We all know the recipe: start with a couple of semi-successful enterprise application implementations (the kind where the company is using both the old and the new systems and cannot shut down either), throw in a few mergers and acquisitions (include a handful of management regime changes and reorganizations just to spice things up a bit), then stir in thousands of undocumented patches, temporary fixes that turn permanent, and shadow IT applications.  Bake until solidified into an impenetrable morass of siloed functions and tangled application inter-dependencies.

Fortunately, there are some emerging tools, methodologies and best practices that can cut through the layers of ossified applications and offer real, actionable guidance on the right approach to moving an enterprise application portfolio to the cloud.  If you are seriously considering adding a private cloud to your IT portfolio, here are just a few questions to ask about your portfolio to get started (a simple scoring sketch follows the list):

  • How ready are the applications for migration to the cloud? – There are tools available that can drill down within each application to determine the exact amount of effort it will take to make it cloud ready.
  • Would moving the applications be operationally disruptive? – Again, an organizational cloud readiness skills assessment provides guidance on how the IT organization needs to change to meet the different requirements of managing an enterprise cloud.
  • Are they built on x86 based Windows or Linux platforms or are they Solaris, HP-UX or some other more exotic platform? — Only a few years ago I ran into an ERP system that was still running on a Data General platform. A rewrite of the application was a foregone conclusion in that case.
  • How much of the existing portfolio is already virtualized? – Virtualization is not cloud (a common mistake), but being virtualized makes moving to the cloud that much easier. The average enterprise is typically only 30% virtualized.
  • Can the application be replaced by an existing SaaS application that offers similar functionality at a fraction of the cost? — In all but the most unique circumstances, cloud email, document management and supply chain applications are far superior to the legacy systems they are replacing.
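
To make that kind of portfolio triage concrete, here is a deliberately simplified Python sketch that scores applications against the questions above.  Every application name, attribute and weight is invented for illustration; a real readiness assessment tool would drill much deeper.

```python
# Hypothetical cloud-readiness triage -- names and weights are illustrative.
portfolio = [
    {"name": "ERP",      "x86": False, "virtualized": False, "saas_alternative": False},
    {"name": "Email",    "x86": True,  "virtualized": True,  "saas_alternative": True},
    {"name": "Web shop", "x86": True,  "virtualized": True,  "saas_alternative": False},
]

def readiness_score(app):
    """Crude score: higher means an easier move to the cloud."""
    score = 0
    score += 2 if app["x86"] else 0               # exotic platforms mean rewrites
    score += 2 if app["virtualized"] else 0       # virtualized is a head start
    score += 1 if app["saas_alternative"] else 0  # replacing may beat migrating
    return score

for app in sorted(portfolio, key=readiness_score, reverse=True):
    print(f"{app['name']}: readiness score {readiness_score(app)} / 5")
```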

From the inside, undertaking an enterprise cloud project might seem hopeless, but these are exactly the situations where bringing in outside experts with real expertise in cloud infrastructures and application transformations can pay off.  I can tell you from hard-earned personal experience: we have seen it before, and whatever mess your portfolio is in, others were far worse and they still successfully moved their portfolios to the cloud.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


December 4, 2011  10:00 AM

Cloud in a Box – Shortcut to the Cloud?

Beth Cohen

Question:  Building a private cloud can be an overwhelming project involving a holistic understanding of operations and development.  Is it really possible to shortcut the process?

Not surprisingly, according to just about every research study out there, including the Open Data Center Alliance's recent survey of its members, enterprise cloud is the next BIG THING.  That survey predicts an adoption rate five times faster than the predictions of other research firms such as IDC and Gartner; of course, the numbers could be skewed by the self-selected nature of the participants.  I do agree that enterprises need to take the cloud seriously and move their IT portfolios to a cloud model, and every indication is that most enterprises are mapping their cloud strategies and seriously investigating how to do just that.  However, anyone who has tried to build an enterprise cloud knows how hard it actually is.  So, for enterprises anxious to move to a private cloud without the fifteen years and thousands of engineers it took Amazon, a plethora of companies are jumping on the enterprise cloud bandwagon with simplified "cloud in a box" offerings.  Can these premium-priced products deliver on their promise to get an enterprise onto a cloud platform that much faster, or are they just marketing rebranding?

Most of the cloud-in-a-box products come from the usual big IT services shops.  Many of them, such as BizCloud from CSC and VirtualSystem from HP, are based on VMware with some combination of standardized network, storage and systems hardware.  vFabric from VMware seems to be more a kitchen sink of VMware-owned "cloudy" products than a vertically integrated service such as Azure.  Cloudburst from IBM and VCE could, of course, both be considered cloud-in-a-box products, but they are hardware-only integrations, and both come with big price tags that in my mind defeat the purpose of moving to the cloud in the first place, which is to save money.

That VMware is the underlying hypervisor of choice is not overly surprising, since VMware already owns more than 90% of the enterprise data center virtualization market.  The enterprise is comfortable with its reliable and mature virtualization tools, so that approach would seem a natural fit for risk-averse organizations that want to move to the cloud as painlessly as possible.  For enterprises that do not want to pay the VMware or big-vendor premiums, there are some interesting tools from StackOPS, Dell (Crowbar) and Rackspace based on the open source OpenStack project.  So far these tools are more recipes and automation utilities designed for enterprises that have access to good DevOps resources and are willing to live on the enterprise cloud leading edge.

The fallacy is thinking that cloud architectures are just virtualization with some added bells and whistles.  The cloud is a different way of thinking that requires enormous changes to the enterprise IT organization, because it requires a different understanding of how IT services are delivered at scale.  The only way to realistically support tens or hundreds of thousands of systems is to automate everything, and the only way to achieve the level of automation required for a successful enterprise cloud deployment is for development, test, integration and operations to become part of a continuous process, not the separate functions they are in traditional organizations.  In the end, no matter how big a check you are willing to write, the current crop of cloud-in-a-box software and hardware offerings does not address the fundamental organizational differences between cloud infrastructure operations and IT infrastructure business as usual.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


October 17, 2011  4:00 PM

Emerging Cloud Security Products

Beth Cohen

Question: What are some emerging cloud security products and approaches that assuage IT and Business executive concerns?

New technologies require new approaches; the cloud is a new technology that has already had a profound effect on IT service delivery.  As I discussed in a previous blog on the current state of cloud security, building private clouds is just one way to at least temporarily duck security concerns.  Ultimately however, the best solutions will take an entirely fresh approach.

Overall, the cloud security market is just starting to address this need.  While some new cloud security products are just variations on previous themes or relabeled products with a cloud spin, others are taking a radically new approach to the problem.  Many startups are just emerging or still in stealth mode; at the other end of the spectrum, established companies are announcing security add-ons and products.  Since this is such a nascent market, there has been little industry consolidation yet.  Now that the cloud has the attention of the venture community, hopefully any gaps in the market will be quickly addressed by cloud security startups.

While the Cloud Security Alliance has been working diligently on developing new standards in anticipation of its upcoming second annual conference in Orlando in November, there is still little consensus on what is needed and what constitutes the best approach to take.  The cloud is far more diffuse than any previous technology; think of it as the ultimate intangible portfolio of IT services.  That abstraction makes it very difficult to identify which entity owns the responsibility for securing the systems, or even the best place to apply security.  When your network is the Internet, a separate hardware firewall makes little sense, even if it were possible to decide where to place it.  Another major issue is the need for proper separation of applications, data and users to support multi-tenancy both inside and outside the enterprise.

Here is just a small sampling of emerging companies that are taking cloud security seriously and should deliver workable new solutions that benefit everyone.  They are organized into rough buckets of security concerns mapped onto the now-standard cloud layer model.

Application layer

At the application layer, security often targets identity and user access management.  This might be a combination of VPN and application authentication services, or security built into the application itself.  Federated authentication and authorization is always a complex issue in any company, so adding enterprise cloud applications and SaaS to the corporate portfolio only adds to the headache.  ID management/SSO solutions include products from AEP Networks, Citrix and Symplified, to name just a few.  AEP Networks (www.aepnetworks.com) has a mix of VPN and authentication solutions for a globally distributed workforce, including a SaaS-based authentication service.  For the more traditional enterprise, Citrix Open Cloud Access adds the ability to authenticate a portfolio of SaaS applications to its unified ID management products.  Symplified (www.symplified.com) and Ping Identity (www.pingidentity.com) offer comprehensive SSO that extends authentication out to mobile interfaces for complete enterprise solutions.
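
To make the application-layer idea concrete, here is a generic sketch (not tied to any vendor above) of application-level session authentication using an HMAC-signed token, built only on the Python standard library.  The secret handling and token fields are simplified placeholders; a real deployment would layer this under federated SSO.

```python
import hashlib
import hmac
import time

# Placeholder secret -- in practice this comes from a key management system.
SECRET_KEY = b"replace-with-managed-secret"

def issue_token(user_id, ttl_seconds=3600):
    """Create a signed session token: payload plus HMAC signature."""
    payload = f"{user_id}:{int(time.time()) + ttl_seconds}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token):
    """Check the signature and expiry of a token issued above."""
    try:
        user_id, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    payload = f"{user_id}:{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expires)

token = issue_token("alice")
print(verify_token(token))  # True until the token expires
```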

Platform layer

Depending on your definition of the platform layer, platform-layer security could include secure development tools or tools to secure the instances.  An example of the former is MLstate (www.mlstate.com), a French company with OPA, a secured programming language and development platform.  For instance security, High Cloud Security (www.highcloudsecurity.com) uses a key management system to encrypt the images of highly sensitive applications in the cloud.

Infrastructure/Network layer

I deliberately added the network to the definition because so many of the cloud security solutions available now touch the network in some way.  There are several cloud-based firewalls, such as CloudFlare (www.cloudflare.com), a community-based firewall that works a bit like Postini and other cloud-based anti-spam services.  Blue Coat Systems (www.bluecoat.com) offers a similar cloud-based solution derived from its firewall appliance.

Vyatta (www.vyatta.com) takes a somewhat different approach and offers a virtual firewall with network-shaping capabilities.  Bromium secures the hypervisor, so it is more appropriate for a private cloud or a service provider.  Of course, access to cloud applications through VPN sessions is one of the most common methods, so systems that manage sessions and authentication often touch the network as well.

Not all products or services fall into neat buckets.  Some, like AEP, offer services that cross boundaries, while others, such as AFORE Solutions (www.aforesolutions.com) and HyTrust (www.hytrust.com), approach the problem from a compliance perspective by limiting access to the hypervisor layer and creating auditable logs.

No matter what a security product does, as with anything in IT security, no single solution will address all the vulnerabilities, so it is best to use a mix of products to secure your public, private, hybrid or community cloud.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


October 11, 2011  12:00 PM

OpenStack – An emerging cloud IaaS standard

Beth Cohen

Question: When will cloud computing have standards so that more companies will feel comfortable implementing cloud architectures and using cloud services without feeling locked in?

Cloud computing is finally maturing as a technology.  After many false starts, and spurred by the proven success of proprietary products and services from the likes of Amazon and VMware, the open source community is finally coalescing to devote its considerable resources and energy to the long-recognized need for cloud computing standards.  OpenStack, with close to 6000 contributors and over 100 companies backing the project, including giants such as Dell, Rackspace, Citrix, HP and NetApp, is one of the fastest growing open source initiatives ever undertaken.  The project just celebrated its first anniversary (well, actually 14 months, but who's counting).  At the recent OpenStack Conference it was announced that the initiative has been spun out of Rackspace and officially organized as an industry standards organization similar to the Linux Foundation.  What that means is still open to definition; see Andy Oram's recent blog, OpenStack Foundation requires further definition, for more on that.  However, it is a positive sign that the OpenStack community is serious about keeping the technology available to any and all who want to create software and tools that support real cross-platform IaaS cloud integration.

For a project that is just over a year old, there is already plenty of exciting work to show for the effort.  Following the Diablo release, the Developers Summit for the Essex release, held in Boston October 3-7, 2011, was where 250 hardcore developers, product managers, and movers and shakers in the OpenStack community met to brainstorm the roadmap for the Essex release scheduled for April 2012.  There are plenty of opportunities to create the architecture and the tools to make OpenStack enterprise ready, but it will not be an easy or trivial task.  There are already some shiny new companies that have the attention of the venture community.  Piston Cloud Computing, founded by several luminaries from the OpenStack project, including former NASA Nebula Chief Technical Architect Joshua McKenty and former Rackspace technologist Christopher MacGown, was launched with $4.5M in funding and no customers or product yet.  While these folks have solid credentials, can we say there might be more than a hint of another Dotcom bubble here?

From the technology perspective, there is considerably less hype and more meat to the project.  Led by Rackspace, with considerable support from plenty of others, development of the two main OpenStack components, the compute engine code-named Nova and the object store code-named Swift, is moving along quickly.  Integration of the Keystone authentication engine, the Glance image library management tool, and other necessary elements, such as a more professional GUI dashboard, is planned for the upcoming release.  There seems to be general agreement in the community that Essex development will primarily focus on making the components robust enough for enterprise production environments.
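
For a taste of what driving Nova looks like from code, here is a minimal sketch that boots a compute instance with the python-novaclient library.  The credentials, endpoint, image and flavor names are placeholders, and the client API has shifted between releases, so treat this as a sketch against the Essex-era v1.1 interface rather than a definitive recipe.

```python
from novaclient.v1_1 import client  # python-novaclient, Essex-era interface

# Placeholder credentials -- substitute your own Keystone endpoint and tenant.
nova = client.Client(
    "demo_user", "demo_password", "demo_tenant",
    "http://keystone.example.com:5000/v2.0/",
)

# Look up a base image and an instance size, then request a server.
image = nova.images.find(name="ubuntu-11.10")
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="test-node", image=image, flavor=flavor)
print(f"Requested instance {server.id}; status: {server.status}")
```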

Speaking of production deployments, sitting in on the OpenStack deployment panel at the conference was interesting.  Yes, there are already some OpenStack deployments in full production, most notably at Rackspace.  However, for companies without a deep bench of very senior Linux systems administrators and DevOps engineers who are happy mucking with syslogs, OpenStack does not yet have the tools and ecosystem in place to support a deployment managed by mere mortals.  That does not mean companies should not be looking at it seriously now, because the tools are coming very fast; there are already offerings from Rightscale, Cloudscaling and StackOPS, to name just a few.  With the Swift and Nova components already solid and at least partially built, the foundation is there for entrepreneurs to build the tools needed to make the technology production ready by early next year.  The time to start planning for a production deployment is now.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


September 18, 2011  4:30 PM

Blue Skies Ahead – A New Paradigm for Cloud Security

Beth Cohen

Question: Cloud security is still a major concern for IT and Business executives.  Are there emerging security products that address the unique challenges of the cloud?

Progress has been made: business executives no longer give security as the primary reason for not implementing cloud architectures in the enterprise.  Cloud security is definitely still a concern, but many enterprises are taking it in-house and deploying private clouds.  That does not solve the fundamental problem, but it at least ducks the pressing security issues of the public cloud, for a while anyway.  Most currently available enterprise security tools are retrofits of existing approaches that assume a private, controllable network of some sort that needs to be protected.  Too many of the older products essentially treat security as the protection of a soft inside with a hardened outside.  However, plenty of new vendors are taking a fresh look at cloud security and working on exciting new solutions that assume that system exposure is a fact of life.

To really take cloud security to the next level, the cloud has to be looked at from a different perspective.  Cloud infrastructure and applications all live on the ubiquitous network, so the relationship between users and the systems they are using is highly abstracted; users are not accessing applications from inside protected networks on locked-down clients.  Applying this metaphor to security means that any solution (or set of solutions) needs to take the cloud on its own terms and address the problem at all levels of the cloud stack.  The Cloud Security Alliance (CSA) has identified the following threats to cloud security:

  • Abuse and Nefarious Use of Cloud Computing – I would add “Hacking as a Service” which leverages the relative anonymity of the cloud to prey on users.
  • Insecure Application Programming Interfaces – This is probably the most easily understood problem, but the hardest to fix; API developers are not highly motivated to make security a priority.
  • Malicious Insiders – The temptation to leave backdoors and get back at employers never goes away.
  • Shared Technology Vulnerabilities – The abstraction and complexity of cloud architectures makes this very difficult to identify and fix.
  • Data Loss/Leakage – Now that data is anywhere and everywhere, this will only get worse.
  • Account, Service & Traffic Hijacking – A growing problem.
  • Unknown Risk Profile – We do not know what we do not know, but the hackers will figure it out for us!

Addressing the entire cloud stack sounds great on paper, but due to the diffuse nature of the cloud, comprehensive integrated solutions are not going to work.  That does not mean that security cannot be applied to the applications, platform, hypervisor and networks up and down the stack.  To address the new reality, the emphasis has shifted toward a portfolio of application-based security, ID management, VPN sessions and end-to-end data protection, and away from more traditional monolithic security approaches such as firewalls and port-based security.  Some of the newer security solutions take existing comprehensive ID management systems and extend them to support mobile apps, for example.

Another major issue, important to public and community cloud customers, is the separation of data and security for multi-tenancy.  It is not enough to look at it only from the customer perspective; the provider needs to address it from the hypervisor, storage and instance perspectives as well.  SaaS is a particularly difficult nut to crack because the responsibility for securing the systems is shared all along the supply chain.  If any of the constituents cut corners or are not as savvy about security as they should be, they expose all of their downstream partners to potential embarrassment.  The string of successful attacks on Sony, Heartland, TJX and others demonstrates that the hackers are well ahead in their understanding of the weak points in these systems.
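
One concrete pattern that follows from treating exposure as a given is encrypting sensitive data on the client side before it ever reaches a shared cloud.  The sketch below illustrates the idea with the third-party Python cryptography library (my choice for illustration, not a vendor solution from this article); in practice the key would live in an enterprise key management system, which is the genuinely hard part.

```python
from cryptography.fernet import Fernet  # third-party 'cryptography' package

# In practice the key comes from a key management system and is never
# stored alongside the ciphertext it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer account data destined for a multi-tenant cloud"
ciphertext = cipher.encrypt(record)    # safe to hand to a shared provider
restored = cipher.decrypt(ciphertext)  # only the key holder can do this
assert restored == record
```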

Many of the emerging technologies are still focused on securing the infrastructure: virtualized firewalls, secured hosts and other methods of securing the cloud that are not fundamentally different from previous thinking about security.  But there are some companies taking a totally new approach to cloud security, and those are the ones I am excited about.  Next time I will discuss some emerging companies that are taking cloud security seriously and should deliver workable new solutions that benefit everyone.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!


September 6, 2011  4:15 PM

Patchy Clouds with a Chance of Rain

Beth Cohen

Question: As the cloud model of IT service delivery matures, have the security standards and technologies kept up?

Cloud computing has already fundamentally changed the way consumers and small businesses use the Internet.  However, as with any new technology model, there are going to be some hurdles to overcome before universal acceptance.  According to a 2010 Kelton Research survey of 537 IT and business executives, security concerns were the top reason cited for not adopting cloud technology.  Two recent survey articles on cloud security offer some insights into the differences of opinion about cloud security within the cloud technology community.  While the two articles cover much of the same material, Blumenthal's Is Security Lost in the Clouds? takes a considerably more pessimistic view of the ability of existing technology to address the problem than Bisong and Rahman's Overview of the Security Concerns in Enterprise Cloud Computing.

Bisong and Rahman suggest that if a cloud implementation properly follows IT industry best practices, securing the cloud is primarily a technical problem that can be easily addressed.  Their overall message is that cloud security is nothing to worry about and that existing technology and services are more than adequate for the task of protecting enterprise data in the cloud.  They spend relatively little time discussing how to quantify the many complexities of the legal, operational, business and technical risks of a cloud computing implementation, and they barely mention the problem of cloud ownership and who is responsible for maintaining the integrity and privacy of data in the cloud, concerns I have discussed extensively in the past.  While there have been improvements in cloud security (the work of the Cloud Security Alliance is particularly noteworthy), there is still plenty of room for more innovation.  There must be a fundamental shift in thinking about cloud security before IT executive fears can be permanently assuaged.

At the other end of the spectrum, Blumenthal is clearly more paranoid.  She postulates additional threats unique to the cloud environment, such as clouds acting as hacker fronts (what she terms "hacking as a service") and clouds serving as havens for illegal activities.  She digs into not only the technical security issues but also the potential business risks, discussing the cloud strategy tradeoff of giving up autonomy in return for lower costs and elasticity.  While she agrees that there are great advantages to moving enterprise applications to the cloud, she cautions the reader that once all the proper safeguards are implemented, the "apparent economic advantages of the public cloud" might well be eroded.  She advises any enterprise considering moving its IT applications into the cloud to fully analyze the risks and move carefully.

Figure 1: Diagram of Cloud Security Risks

In conclusion, network security people tend to be a paranoid group, and both articles clearly spell out the many dangers inherent in moving the enterprise to public cloud architectures.  In comparing the two articles, however, it is clear that Blumenthal is far more knowledgeable about not only the technical issues but also the overall complexities of delivering secure enterprise cloud services that meet the business requirements for risk mitigation.  I would trust her conclusion that the inherent insecurity of cloud services has not yet been properly addressed by the community or the vendors.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!

