An unnamed client of Forrester Research received a bill for $1 million from a software vendor for violating licensing terms. The problem: the company was running the vendor's software in a virtual environment across any number of servers in its data center, rather than only on the servers for which it had originally licensed the technology.
This isn’t the first time I’ve heard of virtualization licensing terms being violated. A systems integrator told me that a customer had to pay Microsoft $300,000 after an audit of an application virtualization project. Apparently, the company was using Symantec’s Norton Ghost disk-cloning technology to create ghost images of four different desktop models. The company had licenses for four images, but they were being used by 800 users.
So how are vendors counting licenses under the virtualization model, and how can you avoid violating virtualization licensing terms?
Duncan Jones, a licensing expert and analyst with Forrester Research, gives some background in a recent report on counting virtual licenses:
For decades, many software vendors have licensed their products by hardware-based metrics such as server, processor, or device. The definitions they have used in their license agreements assume a permanent assignment of software to physical assets. The licenses are like labels that the operations manager can attach to a piece of hardware to say “this device is licensed to run Product A.” But the lawyers who wrote these agreements never envisioned today’s virtualized data centers. Increasingly, applications now run in software-controlled bubbles, called virtual machines (VMs), which usually cannot be permanently associated with the physical resources supporting them. This makes it hard for software vendor managers to ensure that their organization has sufficient license capacity — one can’t affix a license sticker on a virtual machine. If they’re not careful, these sourcing and vendor management teams may find themselves facing a large unexpected bill after a software audit.
Jones offers a few steps you can take to avoid violating virtualization licensing terms. These include:
- Choosing to license products based on named users rather than processors;
- Working with your vendor to retrofit your software licenses for a virtual environment;
- And, simply favoring vendors with more enlightened licensing policies.
Burton Group’s Chris Wolf believes it is time for those serious about virtualization to get a third-party licensing management tool. IBM offers such tools, as does ManageSoft.
ManageSoft, for example, allows you to audit the software you have in a virtual infrastructure and maintains an online database that will validate compliance for the applications and operating systems running in a virtual environment.
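The reconciliation such a tool performs boils down to comparing discovered installs against purchased entitlements. A minimal sketch (the product names and counts here are hypothetical, not any vendor's actual data model):

```python
# Minimal sketch of the check a license-compliance tool performs:
# compare installs discovered in the virtual infrastructure against
# purchased entitlements. Product names and counts are hypothetical.

entitlements = {"DesktopImage": 4, "DatabaseServer": 8}
discovered = {"DesktopImage": 800, "DatabaseServer": 6}

def compliance_report(entitlements, discovered):
    """Return {product: shortfall} for every product that is over-deployed."""
    report = {}
    for product, installed in discovered.items():
        licensed = entitlements.get(product, 0)
        if installed > licensed:
            report[product] = installed - licensed
    return report

print(compliance_report(entitlements, discovered))
# {'DesktopImage': 796}
```

The Norton Ghost anecdote above is exactly this case: four entitlements, 800 users, a 796-license shortfall surfaced only by an audit.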
License compliance is no joke, as those who’ve been fined can attest. The onus is on you to figure out what you need and to work with your vendors to get the right terms.
Let us know what you think. Email me at email@example.com.
Good morning! Last week, SearchCIO.com delved into private clouds, network access control in the enterprise, Lean thinking for IT and virtualization strategies. To learn more about any of these areas, check out the stories linked below.
Wary of public cloud, CIO builds private cloud and transition plan – CIOs reluctant to put cloud pilots into production are investing in private clouds with moves they can then take to the public cloud later. Read this story to learn how InterContinental Hotels Group managed the pilot.
Network access control: Security advice for enterprise CIOs – Network access control (NAC) is a method of improving the security of a proprietary network by restricting network resources to endpoint devices. But it’s not a one-size-fits-all solution. Read this guide and pick the best form of NAC for you.
Lean thinking for IT – Lean thinking is the process of incorporating Lean principles into an enterprise for better operational efficiency – and who doesn’t want that? This FAQ shows how Lean thinking works and how IT is benefiting from this improvement methodology.
Enterprises fill client virtualization gaps as client hypervisors bake – Learn how enterprises are mixing and matching virtualization technologies for the desktop as they await bare-metal client hypervisors from VMware and others.
Does outsourcing IT jobs mean that U.S. companies and employees can no longer compete on a global scale?
“Surprisingly, barely a fifth of companies state that jobs moving outside the country is an important inhibitor to outsourcing, which doesn’t reflect the media spotlight. It appears that while many people can voice protectionist views, when it comes to driving cost from their corporate bottom lines, it’s a different story.”
I asked AMR’s Phil Fersht, a co-author of the study, about this during our interview, and he confirmed that executives often say one thing when it comes to outsourcing IT jobs, then do quite the opposite:
“When we’ve talked to executives, some of them have said, while they have to be shown to play ball and protect U.S. resources, they’ve got to offshore,” Fersht said. “The cost of running a business in the U.S. these days is so unattractive compared to other locations.”
On his outsourcing blog, in a post titled “Who’s looking out for the U.S. business these days?,” Fersht elaborated on this stance, pointing to several factors that he said weigh against multinational companies conducting business in the U.S.:
– The cost of living in business centers such as New York City and Chicago is sky-high compared with many other countries’ business centers.
– Health care costs employers a great deal – and Fersht believes that President Barack Obama’s proposed health care reform will increase the tax burden on U.S. businesses.
– Other Western countries are more corporate-friendly – according to Fersht, the cost of hiring qualified graduates in London is half that of New York.
These factors caused Fersht to pose the question: “Why even consider setting up a global business in the U.S. these days in this virtual environment?”
This is obviously a tricky issue, and I’m torn looking at both sides of it. Particularly in this economy, it’s more difficult than ever to watch IT outsourcing jobs migrate offshore because workers in Asia or South America will do the work for less. It’s an example of the free market and cost-cutting at its finest, but it leaves U.S.-based companies and IT workers at a severe disadvantage.
However, don’t people living in Asia or South America have just as much a right to earn a livable wage by putting their IT education and skills to work for them? As we’ve all heard by now, “the world is flat,” and globalization means that, just because a business is U.S.-based, it doesn’t mean U.S. workers are inherently more entitled to first dibs on sought-after IT jobs. It might free up U.S. workers to complete more value-added work, such as positions that require not only technical knowledge but business sense as well. And if U.S. companies are more profitable as a result, that is within their rights, as well.
What are your thoughts on protecting IT jobs in the U.S. vs. outsourcing IT jobs? Are the cost advantages of IT offshoring too big to ignore, or are you committed to preserving these IT jobs in America?
Vendors are working on ways to create virtualization licensing terms better suited to multi-tenant environments, but there is clearly still a lot of confusion out there surrounding licensing terms for desktop virtualization.
Well, virtualization licensing in general, for that matter.
An IT person in charge of application management for a systems integrator told me that some of his customers can’t make heads or tails of how they should approach Microsoft if they want to do client virtualization.
Do they need two operating system licenses if two virtual machines run on one physical client? This question applies to the use of Microsoft’s server hypervisor Hyper-V as well, since many of the systems integrator’s customers are using Hyper-V to create virtual desktops. (My story on pricing out Windows Server 2008 for virtualization cost efficiency explains how many virtual machines, and in turn operating systems per physical device, you can get under Windows Server 2008 agreements.)
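The arithmetic behind that question can be sketched. As a rough illustration, the commonly cited virtual instance rights for Windows Server 2008 are: Standard covers one virtual instance, Enterprise covers four, and Datacenter (licensed per processor) covers an unlimited number. Always verify the figures against your own agreement; this is a sketch, not licensing advice:

```python
# Rough sketch of Windows Server 2008 virtual instance rights per license.
# Commonly cited rights: Standard = 1 VM, Enterprise = 4 VMs,
# Datacenter = unlimited VMs (licensed per processor, so "1" below
# stands for one license per CPU). Verify against your own agreement.
import math

VM_RIGHTS = {"standard": 1, "enterprise": 4, "datacenter": math.inf}

def licenses_needed(edition, vms_per_host):
    """Licenses needed on one host to cover the given number of VMs."""
    rights = VM_RIGHTS[edition.lower()]
    if math.isinf(rights):
        return 1
    return math.ceil(vms_per_host / rights)

print(licenses_needed("enterprise", 10))  # 3 licenses cover 10 VMs
```

So two VMs on one physical client under Standard edition would indeed mean two licenses, while one Enterprise license would cover both.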
Experts presenting at the recent Burton Group Catalyst conference said time and again that it was up to companies of all sizes to push vendors to change their licensing terms to better suit virtualization and cloud models.
Chris Wolf, virtualization analyst with Burton Group, asked point blank: When do you cut loose vendors that refuse to get on board? Meaning ones that will not support your virtual infrastructure model with fair licensing terms.
And what is fair licensing to Wolf? Licensing based on virtual CPUs, managed instances or installed instances, not licensing tied to physical hardware or compute resources.
“You need to push vendors for licensing terms that are physical system-agnostic,” Wolf said.
Let us know what you think here, or email me at firstname.lastname@example.org.
Welcome back from the weekend! Ease yourself into your Monday morning by checking out the latest content from SearchCIO.com on IT outsourcing, using project and portfolio management (PPM) software vs. SharePoint, cloud computing risks and our guide on PPM and IT governance.
Firms to turn to IT outsourcing for global growth in economic recovery – IT outsourcing is poised to resume growth later in 2009 as enterprises look to globalize and pursue expansion strategies. Is your firm anticipating a 2009 outsourcing turnaround? Learn more about it here.
PPM software vs. SharePoint: Myths and user-vendor disconnects – PPM software vendors are trying to address user concerns over pricing, resource allocation and calculating ROI. But have Microsoft SharePoint and Project already done the job? Which route has your organization taken?
Beware these risks of cloud computing, from no SLAs to vendor lock-in – In their rush to get services on the market, computing providers are leaving quite a few gaps when it comes to contracts and accountability. In this article, find out what to watch out for to protect your organization.
PPM and IT governance in a recession: A guide for enterprise CIOs – Project and portfolio management and IT governance are increasingly important in this economic recession as CIOs aim to prioritize projects and assess ROI. Learn more in this guide, the latest in our site’s CIO Briefing series.
Companies using the agile rapid development methodology are experiencing benefits including quicker return on investment, greater ability to embrace change, better quality and increased productivity. So why aren’t more enterprise organizations adopting the agile process in today’s economy?
According to a recent SearchCIO.com PPM and IT governance survey of 300 IT professionals, only 30% of enterprise users are deploying agile or other rapid deployment methodologies.
One reason for the low adoption rate could be lack of executive support. The agile development methodology is not the type of initiative that is supported from the top down at most companies because executives see it as costly and complex. Instead, it usually starts with the application development team, which has become more service- and client-oriented in this economy and knows that the agile process allows them to better collaborate and more quickly meet the needs of the business. With its success comes recognition and support from the top.
Another possible reason for agile’s low adoption rate could be the economic recession. CIOs and company executives see the agile development methodology as a huge investment in money, training and resources.
However, some companies, like IBM, have used the recession as a perfect time to adopt agile. During this global recession, IBM transitioned 25,000 of its developers to the agile development methodology as a way to increase productivity and cut costs.
So what’s your reason for not using the agile process? Is it cost, executive support or something else? Whatever your excuse, there’s no denying the benefits of adopting an agile process — even in this poor economy.
There were a lot of messages that came out of the recent Burton Group Catalyst conference in San Diego surrounding the public cloud.
But one resonated more than others: You need to get a grip on your own assets, knowing what data is stored on which servers and what it really costs to build, deploy and maintain an application, before you can figure out whether cloud computing is the more cost-effective route.
Burton analyst Chris Howard compared the state of enterprise IT to that of Rome: Are we just building and building upon an old architecture? When is it time to start getting rid of some of the old stuff? And how do we decide what should stay and what should go?
Bill Peer, chief enterprise architect at InterContinental Hotels Group, who presented at the show, talked about building an internal cloud. In the process he is moving data from two mainframes dating back to the 1960s to new servers on a private cloud.
This is a multibillion dollar company making the move to get rid of old systems, and there are probably other enterprises out there sick of maintaining mainframes and code created by people who are no longer with their company.
The list of cloud computing benefits and risks is long and varied depending on who you talk to, but one benefit is clear: It could force CIOs to assess what they need and can do without, and, if anything, build more efficient data centers on their own.
There is a test for figuring out what can go and what can stay, if you are not faint of heart. Howard shared a story of how Ken Anderson, former CIO of Novell, used to go into the company’s data centers at night and randomly turn systems off.
If no one noticed in three weeks, the system stayed off.
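The decision rule in that anecdote is simple enough to write down. A playful sketch, with hypothetical dates and a configurable grace period:

```python
# Playful sketch of the "turn it off and see who screams" test above:
# a system switched off stays off if nobody complains within the
# grace period (three weeks in the anecdote). Dates are hypothetical.
from datetime import date, timedelta

def can_decommission(turned_off_on, complaints, today=None, grace_days=21):
    """True if the system has been off for the full grace period
    with no complaints; False otherwise."""
    today = today or date.today()
    if complaints:
        return False
    return today - turned_off_on >= timedelta(days=grace_days)

print(can_decommission(date(2009, 7, 1), complaints=[], today=date(2009, 7, 25)))
# True: three weeks of silence, so the system stays off
```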
Good afternoon! This past week on SearchCIO.com, we highlighted tips for leveraging employee talent, looked at ways to avoid IT project failures using change management strategies, and examined whether end users are bypassing IT in pursuing their latest cloud computing initiatives. Read the stories linked below and share your thoughts.
Hit the ground running and make people your priority — In Hit the Ground Running, a book by Jason Jennings, learn how some company executives are leveraging the power of their people for economic success.
Avoiding IT project failures with a change management strategy – CIOs typically aren’t involved in IT project execution, but they do pave the way for success with change management strategies. Here’s how.
Latest cloud computing trend: End users buying IT as a Service – Users want to consume IT as a Service and will bypass IT, or nudge IT into the cloud if necessary, to get there. Plus: How companies are handling chargeback.
David Shacochis, vice president, research and development at IT provider Savvis Inc., reminded me the other day that there is a big difference between a disaster recovery (DR) plan and business continuity, even though many forget the distinction.
A business continuity plan is your company’s prescription for things that you can expect to go wrong: components will fail, servers will fail, network outages are going to happen, IT professionals make mistakes. Disaster recovery is your plan for the things you can’t anticipate. “If you’re doing this Calvin and Hobbes scenario, where planes are falling from the sky, that is when you’re talking a disaster recovery plan,” Shacochis said.
Savvis, with 29 data centers, likes to boast it has built the architecture required to give companies business continuity, which in turn gets rolled into its standardized services. “Virtually all our products have a high-availability option that can be added on for pennies on the dollar,” he said. And many customers use those services as their DR site.
But the premise of a DR-in-a-box solution — promised by various providers — is, in his view, untenable.
“We don’t really know what your requirements are, we don’t really know what the nightmare scenario will be, you’re not really implementing anything with us, but trust us, when you pick up the phone we’ll be there, and we’ll get you a data center in a heartbeat,” Shacochis said. “Those sorts of services are not that difficult to sell because there are a lot of people who want to believe that they exist. But they are very difficult to execute on.”
In fact, Savvis has not gone to market with a catchall DR solution yet. “We don’t really believe that the process maturity across so many different customers is there, or the standardization across so many different architectures is there that would allow us to do it,” Shacochis said.
Shacochis, however, believes there will eventually be an elegant solution that is both cheaper and better than the present multi-tiered DR offerings. His idea is that the kind of cloud computing platform that Savvis is building in its labs will enable it — eventually — to offer DR that is standardized, flexible and cost-attractive enough to customers to make it worthwhile. Shacochis envisions a platform that can function as a complete cloud data center.
“All the typical IT resources you get in a physical data center, we’re going to be building in a platform that will allow you to provision not just your compute resources on the network, but to provision actual data center topologies for routing, switching, security, load balancing and failover features, as well as computing, storage and storage lifecycle management resources that are all running in a software-based context that you can control over a portal, and eventually control via a software API,” he said.
The beauty of that model, he says, is that companies could provision their entire cloud application stack, get it up and implemented and then turn it off.
“That really would be a cloud DR model, where the customer is paying a small percentage of what they would ordinarily pay for production, and they would have a highly functional and easy-to-execute DR plan.”
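In code terms, the model Shacochis describes amounts to keeping a full data center topology on file and provisioning it only when disaster strikes. A hypothetical sketch (the `CloudAPI` class and topology fields here are invented for illustration; they are not Savvis’s actual interface):

```python
# Hypothetical sketch of the cloud DR model described above: keep the
# whole data center topology as a definition, provision it only when
# invoking DR, then park it again. CloudAPI is invented for
# illustration; it is not a real provider's interface.

dr_topology = {
    "compute": [{"name": "web", "count": 4}, {"name": "db", "count": 2}],
    "network": {"load_balancer": True, "firewall_rules": ["allow 443"]},
    "storage": {"db_volume_gb": 500},
}

class CloudAPI:
    """Stand-in for a provider's provisioning API."""
    def __init__(self):
        self.running = False

    def provision(self, topology):
        self.running = True          # spin up compute, network, storage
        return "environment up"

    def teardown(self):
        self.running = False         # stop paying production rates
        return "environment parked"

def invoke_dr(api, topology):
    """Disaster declared: stand up the entire stack from its definition."""
    return api.provision(topology)

api = CloudAPI()
print(api.teardown())               # normal state: definition on file, nothing running
print(invoke_dr(api, dr_topology))  # disaster: provision the whole topology
```

The economics follow from the code path: the customer pays for the stored definition and occasional tests, not for an always-on duplicate data center.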
When will we see this? He thinks it’s a year or two off. Sounds good to me.
No weather-related complaints from me this week – it’s definitely summer in New England. Though it’s steamy out, I couldn’t be happier!
Here’s the latest content from SearchCIO.com, where this past week we focused on enterprise risk management, evaluating network access control (NAC), cloud computing and IT outsourcing trends in 2009.
Enterprise risk management quiz for CIOs – ERM is getting increased attention due to concerns about data protection, NAC, cloud computing and compliance. Learn more about ERM and take our quiz.
Evaluating network access control: NAC policy enforcement matters — After thinking through your usage cases for NAC, select the enforcement approach that meets your security requirements, budget and complexity tolerance.
If cloud computing companies form ecosystems, users will benefit – A partner network hosted by one provider, like Amazon’s EC2, can mean cost and performance advantages for customers of those services. Here’s how.
IT outsourcing trends 2009: Latest deals for the recession and beyond – IT outsourcing trends in 2009 are evolving rapidly as the economic recession, IT offshoring scandals and the drive for cost savings change how IT outsourcing contracts are structured. Check out our quick guide to IT outsourcing for more information.