Ask the IT Consultant


July 26, 2011  12:00 PM

SDaaS – New tools for a new architecture

Beth Cohen

Question: Does the cloud require a new approach to software development? What tools are available to build new applications in the cloud?

Previously I broached the idea of adding a new Software Development as a Service (SDaaS) layer to the commonly understood three-tier cloud infrastructure architecture standard.  One of the transformational aspects of cloud computing is that it integrates the entire IT stack neatly while at the same time allowing vendors to slice it into very narrowly defined services.  That is both its great strength and its great weakness.  The advantage of this approach is that vendors are able to build deep expertise in a single knowledge area without having to worry about the underlying infrastructure.  On the other hand, one of the hidden pitfalls of the cloud is the difficulty of integrating across the plethora of vertical services that have developed, compounded by the lack of specialized cloud development tools.  Now that the cloud has become more widely accepted and more companies are interested in building customized applications in the cloud, it is time to define the SDaaS layer.  I envision SDaaS as a set of cloud-optimized development tools designed to allow companies to build new SaaS services and/or easily migrate existing applications into the cloud.

In my mind, SDaaS belongs solidly between the PaaS and SaaS cloud layers.  It needs to sit on top of the platform (PaaS) layer because modern software development tools and environments should be independent of the operating system environment.  There is still room for improvement.  It is true that most applications do not depend on the platform they run on, but that is often because they are thoroughly undemanding; not infrequently, anything that would depend on a sophisticated platform facility is simply done without.  Many applications that could make good use of a 64-bit word size where it exists, for example, do not, because doing so requires complicated conditional coding.  Certainly things have improved.  The development of large standardized libraries that transparently support and extend the facilities available on all platforms has helped a lot.  Unicode, XML, XML-based standards and other standards have done much to make software platform independent, but it comes at a high cost.

It should be that a Java library is a Java library regardless of whether it runs on some flavor of Linux, Solaris or Windows.  Yes, underneath there is an operating system, but as any systems administrator can tell you, most developers have little awareness of the underlying platform, which is exactly how it should be!  We are not quite there yet, but abstracting the development tools will go a long way towards achieving that goal.

On the other hand, SDaaS clearly should be placed below the SaaS layer because it supports the development of SaaS applications.  SaaS services as they have evolved are defined as applications that are multi-tenant, configurable rather than customized, and priced on a subscription basis.  The operative phrase is configurable, user-facing applications, not development environments.  The SDaaS layer would need to include standards and libraries that address the particular needs of horizontally distributed applications and cloud infrastructure.
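
To make the configurable-versus-customized distinction concrete, here is a toy sketch in Python (the tenant names and settings are invented for illustration) of a single shared code path whose behavior is driven entirely by per-tenant configuration data rather than per-tenant code:

# Per-tenant settings live in data, not in forked code branches.
TENANT_CONFIG = {
    "acme":   {"currency": "USD", "invoice_day": 1,  "modules": ["crm", "billing"]},
    "globex": {"currency": "EUR", "invoice_day": 15, "modules": ["crm"]},
}

def invoice_header(tenant: str) -> str:
    # One shared implementation; each tenant only changes configuration values.
    cfg = TENANT_CONFIG[tenant]
    return "Invoice in " + cfg["currency"] + ", issued on day " + str(cfg["invoice_day"])

print(invoice_header("acme"))
print(invoice_header("globex"))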

Until recently, most SaaS applications have been developed using standard web services tools, but as the cloud application business grows and the technology becomes more sophisticated, there is a need for a set of development tools that address the particularities of the cloud.  With cloud elasticity and horizontal scalability in mind, SDaaS development tools need to have the following characteristics:

  • More tools to support asynchronous transactions
  • Transparent horizontal scaling capabilities
  • Mechanisms to allow on-demand platform neutral delivery of services
  • Baked in code security – we have been living with bolt-on application security for far too long.
  • Support of applications in the run-time state to allow the creation of end-to-end development/test/production environments

What I am talking about is developing languages that would include functionality specifically for developing cloud software — not just libraries that provide classes to use the cloud, and not XML schemas that specify a semi-readable format for describing data or configuration.  These work, but they are clumsy and frequently baroque.  In thinking about what such a language would include, the most obvious thing that occurred to me was direct language support for async calls.
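
To make that concrete, here is a minimal sketch using Python's async/await syntax (the service names are invented, and the sleep stands in for a real non-blocking cloud API call) of what first-class language support for asynchronous calls looks like, with no explicit thread or callback plumbing:

import asyncio

async def fetch_status(service: str) -> str:
    # Stand-in for a real non-blocking call to a cloud service API.
    await asyncio.sleep(0.1)  # simulates network latency without blocking the event loop
    return service + ": ok"

async def main() -> None:
    # The language itself expresses "start these calls and wait for all of them".
    results = await asyncio.gather(
        fetch_status("billing"),
        fetch_status("provisioning"),
        fetch_status("monitoring"),
    )
    for line in results:
        print(line)

asyncio.run(main())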

We are not starting completely from scratch.  Some services currently available meet my definition of an SDaaS system.  Microsoft Azure and other similar services, such as AppEngine by Google and Force.com by Salesforce.com, clearly provide services and tools for building new applications and supporting existing software, but unlike SaaS (Software as a Service), these services are designed specifically to be used by developers.  In future weeks I will be drilling down into the currently available SDaaS offerings in more detail.

About the Author

Beth Cohen, Cloud Technology Partners, Inc.  Moving companies’ IT services into the cloud the right way, the first time!

July 5, 2011  4:30 PM

SDaaS – Introducing a new layer to the cloud computing model

Beth Cohen

Question: I am confused about the Microsoft Azure cloud offering.  Is it a PaaS, as Microsoft says, or a set of cloud-based software development tools?

A recent presentation I saw about Microsoft's Azure cloud service claims that Azure is a PaaS (Platform as a Service), while offerings from Rackspace and Amazon are IaaS (Infrastructure as a Service).  Coming from the network administration world, I was left scratching my head.  You would think that by now cloud computing definitions would be settled.  After all, cloud technology as a concept has been around for at least five years.  As noted in my previous blog on cloud business and operations, NIST has published a viable definition, even with its somewhat operational bias.  To me and the majority of the IT community, the definition of PaaS is that it provides the operating system, but any tools or applications installed on that platform are beyond the scope of a PaaS service.

Drilling down further into the mystery, there seems to be a gap in the three layer model that requires an additional component.  Cloud computing follows the traditional IT model of infrastructure support and application development as separate functions.  IaaS and PaaS are both services that are designed, built and supported by people who are IT infrastructure engineers.  They have little knowledge or interest in the applications that sit on top of the systems they build.  SaaS products, on the other hand, are developed as applications designed to be used primarily by end users.  Modern SaaS products typically leverage IaaS and PaaS services for their infrastructure.

Microsoft Azure and other similar services, such as AppEngine by Google and Force.com by Salesforce.com, clearly provide services that do not fit into that classic three-layer cloud computing model.  They deliver tools for building new applications and supporting existing software, but unlike SaaS (Software as a Service), these services are designed specifically to be used by developers to create new applications.  Azure does have some built-in runtime support tools, but unlike VMware's vFabric, the tools are designed from the developer's perspective, not the operational one.

So I would argue that Microsoft's definition of Azure as a PaaS is misleading.  Clearly Microsoft is co-opting the term PaaS with its own unique definition for marketing purposes, but I think that just muddies the waters unnecessarily.  Microsoft Azure as a comprehensive development platform in the cloud built from familiar components has few real competitors, and it offers a valuable service for Microsoft-centric shops that have no other viable way to easily migrate their applications to the cloud.  A more accurate view is that Azure is an application development environment as a service.  Maybe the best term for Azure and other such tool kits should be Software Development as a Service, or SDaaS.  Labeling these offerings separately clears up the confusion between the IT operations and development functions.  The four-layer cloud model of IaaS, PaaS, SDaaS, and SaaS more closely maps to the required staff skills and matches the IT functional model that exists in most organizations today.

As their web portal says, “Windows Azure and SQL Azure enable you to build, host and scale applications in Microsoft datacenters.  They require no up-front expenses, no long term commitment, and enable you to pay only for the resources you use.”  Sure reads like a cloud Software Development as a Service offering to me.



June 20, 2011  9:00 PM

Culture Eats Strategy for Breakfast – IT innovation in the real world

Beth Cohen

Question: How can established companies create a culture that encourages and rewards innovation?

Years ago, Fred Tuffile, my entrepreneurship professor at Bentley University, said that the biggest advantage new start-up companies have over established businesses is a blank piece of paper.  They might not have any money or customers, but they do have ideas and a capacity for innovation.  Typically, as companies grow and mature over time, they develop processes, bureaucracies and the dreaded “that’s not the way we do it here” attitude.  Things do not always need to end up this way.  The companies that do maintain an innovative culture consistently outperform their more conservative counterparts.  With the right sponsorship and support from management, it is possible to create a culture that nurtures and encourages innovation in IT and all areas of the business.

At the recent MHT – New England CIO Innovation Summit, unlike the buttoned-down MIT CIO Symposium a few months ago, the panelists were all sending a clear message that businesses need to embrace innovation throughout the organization.  Bill Oakes, the CIO of the City of Boston, found that even the most traditional cultures are accepting of innovation when the benefits are clear to the rank-and-file city workers.  Tsvi Gal, the keynote speaker, noted that 85% of IT services are the same across all organizations, but it is the last 15% that are the critical differentiators.  Think of the cloud as a way to make that 85% of the IT infrastructure completely transparent, so that the corporate IT resources that really know the business can concentrate on the 15% that really delivers business value.

No matter where you are on the corporate cultural spectrum, it is possible to drive innovative thinking.  The key is to work at different levels of the organization simultaneously.  If executive management is actively encouraging an innovative culture, even the most hidebound staff will catch the excitement.  At the other end of the spectrum, the skunk-works projects that bubble up from groups of smart engineers continually generate 80% of the new ideas in a company.  Those groups are creating the future products.  If they are not nurtured within the corporate structure, they will eventually take their good ideas someplace else or strike out on their own.  You need corporate executives to support smart staff so they can be creative and innovative within the enterprise ecosystem, and the smart innovators need to know they are supported.  Together you will take over the world.



June 13, 2011  5:00 PM

Vaulting into the Cloud – Creating your Cloud Portfolio

Beth Cohen

Question: Are there any good evaluation criteria for determining which applications in a corporate portfolio are good candidates to migrate into the Cloud?

Now that businesses are convinced that they should be moving at least some of their applications and systems to the Cloud, the next hurdle is determining not only how to migrate applications into the cloud but, more importantly, what types of functions to move.  There are some obvious answers, such as test/development, but best practices and standards are still emerging as companies continue to struggle with the definitions and the complexities of available Cloud services.  That process will continue for another few years.  In the meantime, some basic principles can be applied to any IT portfolio Cloud readiness assessment.  By focusing first on a review of the internal applications portfolio, rather than on the available Cloud technology, you can gain valuable insights into the right Cloud migration approach for your business.

At the 50,000-foot level, any organization needs to evaluate its existing IT portfolio and its suitability for the Cloud using the following four criteria:

  • Business – TCO (Total Cost of Ownership), ROI (Return on Investment) value to the organization
  • Operations – Business processes and efficiency
  • Technical – Maturity and ease of adoption
  • Security – Access control, regulations, legal, risks and exposure

Business value and objectives should be the primary factor to apply, followed by the others as appropriate.  The relative priority of the other components will be driven by specific industry requirements and the organizational business model.  For example, a defense contractor is going to be far more concerned with security than a manufacturing company, while a logistics company will focus on operational efficiency and partner integration.

These criteria can be further broken down into sub-categories and developed into standardized evaluation matrices and checklists for the Cloud decision.  To obtain a more meaningful outcome, evaluate a spectrum of applications rather than looking at each application independently.  This common mistake often results in disjointed Cloud policies and SaaS island hotspots in the organization.  Even in a large enterprise with thousands of applications, looking at a sampling of just a few hundred applications can supply valuable governance and perspective to any transformational Cloud project.  Determining the internal IT portfolio priorities sets the direction for how Cloud architectures fit into the overall IT strategy for the company.
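
As a back-of-the-envelope illustration only (the weights, scores and applications below are invented, not a recommended rubric), such an evaluation matrix can be as simple as a weighted score per application, here sketched in Python:

# Hypothetical weights for an organization that puts business value first.
WEIGHTS = {"business": 0.4, "operations": 0.2, "technical": 0.2, "security": 0.2}

# Scores from 1 (poor cloud fit) to 5 (excellent cloud fit) for each criterion.
APPLICATIONS = {
    "test/dev environments": {"business": 4, "operations": 5, "technical": 5, "security": 4},
    "corporate CRM":         {"business": 4, "operations": 3, "technical": 3, "security": 2},
    "legacy ERP":            {"business": 2, "operations": 2, "technical": 1, "security": 2},
}

def cloud_readiness(scores):
    # Weighted sum across the four evaluation criteria.
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

for app, scores in sorted(APPLICATIONS.items(), key=lambda item: -cloud_readiness(item[1])):
    print(app, round(cloud_readiness(scores), 1))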

Once the applications have been mapped against these criteria and a prioritized list of projects created, only then is it time to delve into a discussion about appropriate types of Cloud architectures.  As with anything in the business world, there are quite a few checklists available on the web.  Unfortunately, the ones I have seen so far have been far too complex and hard to customize.  Smart IT executives would benefit from using the services of a Cloud consulting company to help sort through the hype and create customized criteria.



June 6, 2011  2:00 AM

Taking Business to the Clouds – Getting out of the Operations Ghetto

Beth Cohen

Question: Recently there has been a big push to develop private cloud computing services for the enterprise.  How can IT management really take advantage of the benefits and avoid the hype?

It is official.  Cloud computing is finally mainstream enough to earn its own NIST definition.  On the surface it is simple enough; by NIST standards the definition is pretty short – only 2 pages.  It has five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity and measured service.  The three service models are well-known: IaaS, PaaS and SaaS, and the four deployment models are public, private, community and hybrid.

None of this should be news to anyone who has been following the cloud for a few years.  What I find interesting is how IT operations-centric this perspective seems.  From the vendor messages at VMWorld last summer, the MIT CIO Symposium in May, and the most recent Boston CloudCamp, you would think that Cloud Computing is primarily a new way of building IT infrastructure to run IT operations more efficiently.  Certainly, if you listen to the big consulting company of your choice, all you need to do to achieve your IT operations cost reduction goals is sign a multi-million dollar deal and they will sell you a shiny new private cloud data center.  I will grant that it does achieve that goal (with a few asterisks, of course), but if it were just a way to manage IT operations better, faster and cheaper, Cloud computing would not be all that exciting.

In my mind, Cloud computing and its companion technologies, mobile access and social networking tools, are really a new paradigm for delivering IT tools and services to the enterprise.  Cloud shifts the focus away from essentially boring operations and systems by turning them into utilities.  Once the operations layer is abstracted away, users can focus on services, the essence of what we use computers for, and build better tools.  Yes, we have all heard that before.  Some of the older folks might even say Cloud is the new timeshare, but it is far more than that if we can push past the old way of thinking about the enterprise.

To really take advantage of the Cloud, take a look at what is happening in the consumer cloud space.  While some corporations are still dipping their toes in the water, some really exciting mass-market tools and services are emerging.  Gmail, Skype and Flickr have become the new de facto standards for social communications, while Facebook and LinkedIn are giant real-time experiments in the power of social networking writ very large.  Love it or hate it, Groupon and its ilk are bringing sophisticated mass-customization marketing techniques and tools to small businesses.  Groupon would not even have been able to exist if it were not for the Cloud.  It leverages the Cloud infrastructure to build out its systems, taps into its markets using the techniques of the social media pioneers, and delivers its services through mobile devices.  Clearly it is working in the consumer space.

These new architectures can translate back into the more cautious enterprise by using a combination of risk analysis and Agile methodologies.  You can use them to rapidly create test cases within the enterprise setting to quickly determine if a feature is useful or desirable.  Some very successful companies have taken this new approach to heart.  Not surprisingly, Google uses it to build new tools, but long before the Internet, Capital One successfully used these methods to bring credit services to the masses, reaping huge profits along the way.  The good news is that while the techniques are sophisticated, they are not out of reach of any enterprise willing to shake off the status quo, build a culture that encourages lateral thinking, and truly reward bold achievement.



May 30, 2011  2:00 AM

Taking Back IT – SaaS Portfolio Management

Beth Cohen

Question: New SaaS applications seem to be popping up everywhere in my organization.  How can IT address the pressure from the business units to support these ad hoc applications and integrate them into the overall enterprise IT portfolio?

Forrester forecasts that the global SaaS market will grow from $25.5 billion in 2011 to $159.3 billion in 2020.  With those numbers, there is no question that SaaS applications in the enterprise are here to stay.  The plethora of new SaaS (Software as a Service) offerings can be extremely tempting to business managers who are told they can have a functioning system in just a few days, not the three months that IT is promising.  Of course the reality is quite different, but the biggest challenge is that since business unit decision makers, not technology leaders, are bringing these applications into the enterprise, they do not have a clear understanding of where the applications fit in the enterprise big picture.  It is not uncommon to find that an enterprise that already has a well-established and functioning corporate CRM system also has two, three or more rogue SaaS CRM applications lurking in different marketing or sales departments throughout the organization.

Handled correctly within the IT framework, SaaS applications can significantly increase agility and competitiveness.  However, like any powerful tool, handled incorrectly they can be a ticking time bomb.  While there is nothing inherently wrong with having SaaS applications out at the edges of the organization, having multiple stealth applications outside of the control of IT is at the very least inefficient and at worst represents a serious threat to the security of the organization as a whole.  The main issues boil down to the usual suspects: business, technical, operational and security (which includes data integrity).  The following are some of the common problems encountered with “ad hoc” SaaS implementations:

  • Technical Issues
    • “One-off” data models that are inconsistent with enterprise standards
    • Poor or missing integration with existing enterprise applications
  • Business Issues
    • Contract issues related to SaaS applications that didn’t deliver the promised functionality
    • Poor performance and SLA issues related to up-time, data loss, or service requests
    • Vendor lock in, application, data and/or contract
  • Operational Issues
    • Lack of appropriate DR/BC functionality
    • Unclear ownership of support or lack of internal, third party and vendor support services
  • Security Issues
    • Privacy, Security problems, access management, information security, and compliance
    • Unclear or missing Identity Access Management (IAM) – No SSO strategy or federated ID management system in place

The headaches caused by these rogue applications can be endless, but the trick is first to learn the scope of the problem.  Once it is clear how many SaaS applications are out there, the next step is to spend some time understanding what is driving the business units to install them in the first place.  In some ways that is half the battle, because once that has been determined, the final step is to develop a SaaS governance process that allows business units to leverage the value of the SaaS approach while maintaining the integration and oversight that staying under the IT organizational umbrella brings.
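
As a starting point for scoping the problem, even a minimal inventory of the discovered SaaS applications, flagged against the issue categories above, is useful; the sketch below (in Python, with invented entries) shows the kind of record worth keeping, whether in code or in a spreadsheet:

from dataclasses import dataclass, field
from typing import List

@dataclass
class SaaSApp:
    name: str
    business_owner: str
    it_approved: bool
    issues: List[str] = field(default_factory=list)  # technical / business / operational / security flags

inventory = [
    SaaSApp("rogue CRM #1", "regional sales", it_approved=False,
            issues=["no SSO", "duplicates corporate CRM data"]),
    SaaSApp("marketing email tool", "marketing", it_approved=False,
            issues=["no DR/BC plan", "unclear support ownership"]),
]

for app in inventory:
    if not app.it_approved:
        print(app.name, "(owner: " + app.business_owner + ") ->", ", ".join(app.issues))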



May 21, 2011  5:30 PM

The Big Enterprise Technology Disconnect

Beth Cohen

Question:  How can we reconcile the rapid uptake of cloud, social networking and mobile communications in the consumer market with the cautious approach of the risk-averse enterprise?  Are big enterprises missing an opportunity or just being prudent?

“Everything is mobile, everything is in the cloud.”

At the recent MIT CIO Symposium the theme was Beyond the Crossroads, the intersection of cloud computing, social media and mobile applications.  On the surface, the connection between these three themes would seem obvious to a technology-savvy audience.  As proven over the past 150 years, first-mover advantage can offer lasting benefits for a company that is smart about taking educated risks.  You would think that the enterprise would be embracing cloud and mobile technologies that have proven to be so successful in the consumer market.

However, after spending a day listening to industry-leading CIOs, primarily in the technology sector, what struck me was how much these supposedly leading lights in technology were stuck in the old-school “not invented here” mentality.  The most exciting technology I saw all day was at the Innovation Showcase, held long after most of the 900 attendees had left for the day.  Too bad, because these companies, such as Hadapt, a company creating the next generation of distributed database tools, and Apperian, a company building tools for managing enterprise mobile applications in the cloud, are demonstrating that there is plenty of room for enterprise innovation beyond the crossroads.

Meanwhile, back at the main program, the discussion in the keynote panel on Opportunities and Strategies in the Digital Business World revolved around how the CIO needs to be thinking about becoming the CEO of the company.  In the companies I have seen, you do not get to the top spot by being bold and innovative; rather, a demonstrated ability to cut costs and produce short-term gains for investors wins every time.

According to Brian Halligan, CEO and co-founder of Hubspot, “Cloud and mobile is not the future, it is a couple of years ago.  IT needs to be deflationary, destructive and disruptive.”  The CIO should be leading the cultural change to the new, modern, flat organization by leveraging cloud and mobile applications in new and interesting ways.  Instead, the reality is that real innovation happens at the edges, in the marketing department and the like, or more often completely outside of the typical enterprise, where conformity and process thinking are encouraged over creativity and originality.  The Cloud moves IT from a capital-intensive function to an operational expense activity.  The venture folks understand that paradigm; no right-minded startup today is building an internal IT infrastructure.  At the same time, the typical corporate CIO is more challenged by the business manager with a credit card and a grudge against the poor service they have been getting from the IT function.

The dilemma is how you can pull innovation into the core IT functionality without disrupting the flow of new ideas, when the modern understanding of enterprise IT is focused on operational excellence and cost control.  Innovation is the marriage of technology and organizational change.  You do not see innovation in IT about 98% of the time because there has been zero input into how the IT tools are actually going to be used.  The trick is creating ways for people to quickly test and validate their ideas so they can implement the ones that work within the enterprise framework.  The new tools can be used to deliver on that promise, but is the enterprise up to that challenge?  What do you think?



May 1, 2011  1:00 PM

Uncovering the Truth about the IT Job Market…

Beth Cohen

Question:  I am hearing conflicting reports about the strength of the IT job market in the next few years.  Is IT a field that a new grad should seriously consider?

A recent article in the Boston Globe listed computer systems software engineers, computer applications software engineers, and network systems and data communications analysts as three of the top 30 fastest-growing jobs by 2018.  In a further boost to the rosy outlook, in an April 2011 interview with Computer World, David Foote states that the IT workforce is gaining jobs as employers demand IT skills across the board.  For many of those in the trenches who have seen the steady erosion of IT jobs because of automation and off-shoring, this might come as a bit of a surprise.  On the other side of the argument, recent information from the US Bureau of Labor Statistics indicates that tech unemployment remains higher than the white-collar average.  My personal experience, knowing many highly skilled and motivated software engineers and IT folks out of work for long periods of time, would indicate there is something amiss in all the statistics and forecasts on both sides of the fence.  The truth is going to be far more complex.

In response to Mr. Foote’s comments about IT job growth: while the prospects for workers with IT skills are good, these are for the most part not new IT positions, but rather positions outside of IT that require IT skills in addition to the many other skills demanded by the job.  As an illustration, the old secretary job no longer exists – they are all called administrative assistants now.  Same low pay, but they are now required to be skilled in all the office productivity software.  In addition, the role has shifted to what used to be called a Girl Friday, since most managers – another dying profession as organizations flatten out – are expected to type their own correspondence using email and the other software applications they are expected to be conversant in.

During the recession of the past few years, the US shed 9 million jobs across all sectors of the economy.  Even if the economy returned to the robust growth rate of 12 years ago (which I don’t see happening), I do not see how these jobs are coming back any time soon.  The sorry truth is that while the jobs disappeared from the US, they reappeared overseas as corporations shifted first manufacturing, then knowledge work, to cheaper labor markets.  For example, IBM no longer tracks its US-based employee population simply because it does not want to admit that 75% of its employees are now based overseas.  A few years ago it laid off 15,000 employees in the US at the same time it hired 17,000 new employees in India.  One can argue that IBM is a global company and that maybe 75% of its work is overseas as well, but anyone who has done business with IBM in the US knows that is not the case.  Labor arbitrage is a powerful profit motivator that will take decades to balance out.

Back to the original question: in the end I would not recommend that my students go into the IT field per se, but I would encourage them to take IT classes and learn valuable IT skills to complement the marketing, business, and finance skills they need to be successful.  I maintain that IT as a profession in the US is still losing jobs and will continue to shed them at a great rate for the foreseeable future, but IT skills have become essential for every other profession in today’s highly competitive market.



April 27, 2011  4:00 PM

Tangled Up in Clouds — Interdependency lessons from the AWS outage

Beth Cohen

Question:  Amazon’s recent AWS outage affected a surprisingly large number of sites.  What can we learn about cloud resiliency and how can we minimize these outages in the future?

AWS, Amazon’s hosted web services offering, suffered a major outage, with some data loss, at one of its data centers on April 21, 2011.  It was not the first such outage and I rather doubt it will be the last, but it was the one that exposed what I call the dirty secret of cloud computing: the illusion of low-cost high availability, systems backup and protection, and how quickly so many cloud services have become interdependent.

Ultimately, data protection and high availability boil down to having multiple copies of your data and IT systems in multiple locations with good, reliable bandwidth connecting them.  Traditionally, high availability (that is, five nines and up) has been expensive due to the cost of the bandwidth and hardware needed to deliver the level of service required.

On the surface, moving your IT infrastructure to the cloud looks and sounds very attractive.  In theory, the cloud offers a great solution.  By purchasing cloud services, anyone can leverage the investments of Amazon, Google, Rackspace and the other major cloud vendors in state-of-the-art data centers with full redundancy and big network pipes, for a tiny fraction of the cost of doing it in-house.  By moving IT infrastructure to the cloud you can take advantage of the redundancy and resiliency of using multiple vendors and multiple data centers and get enterprise-class data protection at rock-bottom prices.  Reading between the lines of the standard service level agreements for the low-cost cloud services paints a very different picture.  Amazon guarantees 98% uptime, hardly earth-shatteringly difficult to achieve.  Once you add in all those pesky asterisks and inter-dependencies, it is unlikely that anyone is going to be able to collect on this incident, or on any downtime at all.

Setting aside the issue of Amazon’s service level agreements, all of this assumes that you have control over most, if not all, of the systems and services in your IT stack.  What this outage highlighted for many companies is that even if they had built the best fail-over and high availability into their systems, they were still dependent on vendors and services that might not have been quite so diligent.  As more companies take advantage of the increasingly specialized cloud services built on top of the cloud utility vendors’ infrastructure, ensuring uptime is going to be increasingly difficult through the maze of inter-dependent services.

The bottom line for a business that wants to gain the advantage of high availability at low cost is that you need to make sure you have not only architected your own service with a full fail-over solution, but have also spent time doing due diligence on all of your vendors’ policies and architectures as well.  No matter how good the SLA is, if one of your upstream service providers does not have a good policy in place, your site will still be affected by their lack of planning.
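
As a minimal illustration of the fail-over idea (the endpoints below are hypothetical, and in production this logic usually lives in DNS or a load balancer rather than in application code), the basic pattern is simply to health-check the primary location and fall back to a secondary:

import urllib.request

# Hypothetical health-check endpoints for the same service deployed in two locations.
ENDPOINTS = [
    "https://us-east.example.com/health",
    "https://eu-west.example.com/health",
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check, or None if none do."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                if response.status == 200:
                    return url
        except OSError:
            continue  # unreachable, timed out, or returned an error; try the next location
    return None

active = first_healthy(ENDPOINTS)
print("routing traffic to:", active or "NO HEALTHY LOCATION - page the on-call engineer")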



April 8, 2011  8:00 AM

Virtualization – Not dead yet…

Beth Cohen

Question:  Virtualization has been the enabling technology for the entire cloud revolution.  Will there be any new development in virtualization tools and architectures?

Virtualization is no longer news.  Over the past five years, more than 90% of all enterprises have converted their data centers from racks of physical servers to virtual architectures, with an average project cost savings of over $800K.  These numbers are compelling, but over the past two years the virtualization market has shifted yet again.

While VMware still has close to a monopoly on the enterprise market, as the market shifts more toward the cloud, that hold is weakening.  The majority of cloud offerings, like Amazon, Cloud.com and Terremark (recently purchased by Verizon) to name just a few, are based not on VMware but rather on either the Open Source Xen architecture or a proprietary platform.  The purchase of XenSource by Citrix a few years ago was a very smart move indeed.  With the backing of Citrix, which has long been a major player in the enterprise market, the Xen platform is rapidly becoming more attractive to companies that have traditionally shied away from the Open Source software model.

VMware itself is jumping on the cloud bandwagon and has been rolling out new offerings that appeal to enterprises that want the flexibility and efficiency of the public cloud offerings in their private clouds.  The IT department is going to be hard pressed to win an argument with a business unit about why it cannot deliver a new server in less than six weeks, when the users can already get that service from Amazon in 20 minutes.  The vCloud offering includes cloud-friendly features such as simplified chargeback mechanisms, IT as a Service (ITaaS) architectures, and self-service portals.  As the server market becomes saturated, VMware is actively developing desktop virtualization products, thin application technologies, and enterprise-class identity management platforms to handle thousands of virtual enterprise desktops and applications.  The application, code-named Horizon, is basically an ID management proxy server that interprets a company’s AD infrastructure and presents it out to SaaS applications.  The drivers for this technology are security and control, not cost savings.  Ironically, companies have rediscovered the dumb terminal without realizing it.

The biggest news on the technology side is the development of para-virtualization and of new development platforms that are optimized for virtualized and cloud environments.  Para-virtualization, first developed in Xen and now available in VMware, is a way to optimize the virtual environment by sharing resources.  By having the virtual machines share memory, CPU and disk I/O, the virtual environment can be run more efficiently than with the traditional stove-piped guest OS architecture.  However, the cost is fewer supported platforms, because the para-virtualized kernels must be supported by the underlying hypervisor, and more risk that the security boundary between the guest servers can be breached.  The cloud environment is heavily dependent on para-virtualization.

The other recent development is the renewed interest in new tools that leverage the virtual and cloud environments.  VMware’s purchase of SpringSource in August 2009 is an indication that the development of cloud-enabled applications is going to be a huge market in the next few years.  VMware has continued to demonstrate its enthusiasm for this market with its March 2011 acquisition of WaveMaker, a company that has built tools that make it easier to use the Spring tools platform.

In the future, I expect to see a return to stripped-down operating systems that are optimized for delivering applications like web services and databases.  Think of it as a full-circle reassessment of the definition of an application and an operating system.  Who needs a Windows server VM with web services on it, when what you really want is just a web server?


