Would paying for corporate mobile data in a cloud-like format change your mobile phone management strategy? It’s something to consider — AT&T CEO Randall Stephenson thinks the wireless industry is moving toward a usage-based pricing model for mobile data.
“For the industry, we’ll progressively move towards more of what I call variable pricing, so the heavy-[use] consumers will pay more than the lower-[use] consumers,” Stephenson said at an analyst conference this week.
A lot of analysts have seen this coming for a while. More mobile customers are sharing information via text messages, e-mails and even Skype on their smartphones rather than by making traditional phone calls, driving down their need to purchase pricier phone plans. (I speak from experience: A combination of these factors — especially Skype — has allowed me to remain with AT&T’s 450-minute calling plan, where I otherwise would have needed to trade up.) This could be viewed as a reflection of the cloud computing model that’s gotten so much buzz in the IT sphere, where you pay for what you use, rather than paying a flat rate.
These changes could drive up costs for big-time corporate users, who — like most mobile users — not only download and edit documents for their work, but also are inclined to surf the Web on their work phone during idle time. And why not? It doesn’t cost anything extra. But should IT start paying for mobile data based on usage, corporate mobile phone management policies might have to change.
Would a data-usage-based payment approach for mobile devices change your organization’s mobile phone management strategy or your mobile procurement decisions? What sort of mobile phone policies would you put into place for corporate users who also want to surf the Web?
Kevin Vogl has overseen hundreds of desktop virtualization deployments as vice president of virtualization for systems integrator Champion Solutions Group Corp., out of Boca Raton, Fla.
And he’s seen his share of desktop virtualization design-stage mistakes, a common one being the creation of too many desktop images.
Eager to satisfy the diverse needs of the user base, some enterprise IT departments end up designing hundreds or even thousands of virtual desktop images — negating a major benefit of desktop virtualization: simplified desktop management. And it typically starts with one group of users and snowballs.
“I see enterprises that take a small group of users and instead of giving that group one or two images, they end up with six [images] because a few people in the group use an application that the rest of the group doesn’t,” Vogl said.
Instead, applications that are unique to a few users should be delivered separately through application virtualization, he said.
User boot storms also come up often as a storage-allocation and network-capacity nightmare that can be avoided during the design stage. Shared memory, available these days in most virtualization products, allows an image of Windows 7, for example, to be loaded only once. The next time a user boots up Windows 7, shared memory technology eliminates the need to load that image into memory again, Vogl said.
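The savings Vogl describes can be seen in a back-of-envelope calculation. The sketch below is purely illustrative — the function name, image sizes and sharing percentage are assumptions for the example, not figures from any vendor:

```python
def memory_needed_gb(num_desktops, image_gb, shared_fraction):
    """Estimate host memory for num_desktops identical virtual desktops.

    shared_fraction is the portion of each image (OS and common
    application pages) that the hypervisor can share across desktops;
    the remainder is private to each user. All figures here are
    hypothetical, chosen only to illustrate the idea.
    """
    shared = image_gb * shared_fraction          # loaded once, shared by all
    unique = image_gb * (1 - shared_fraction)    # per-desktop private pages
    return shared + unique * num_desktops

# Without sharing, 100 desktops with 2 GB images need the full 200 GB.
no_sharing = memory_needed_gb(100, 2.0, 0.0)

# If, say, 70% of each image can be shared, the total drops sharply --
# and the boot storm no longer re-reads the same OS pages 100 times.
with_sharing = memory_needed_gb(100, 2.0, 0.7)
```

The same logic explains why boot storms ease: the shared portion of the image is read from storage once rather than once per user.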
Read more about preparing your infrastructure for desktop virtualization in a tip written by Tom Nolle, president of consulting firm CIMI Corp. Or email your own tips to email@example.com.
I am reporting live from TechTarget’s brand new office in Newton, Mass., just up the highway from our longtime Needham offices. I have Internet access and my phone has a dial tone – so far, so good!
What’s everybody else thinking about? It sounds like Google and Microsoft are ready to go to war over antitrust issues, while a new study confirms what I think most of us already know: More Americans are getting their news from the Internet than from newspapers or radio, and social media sites like Facebook and Twitter are accelerating info-sharing opportunities.
As you catch up on these and other stories, be sure to check out the latest SearchCIO.com stories:
A dozen danger signs that your outsourcing contract is on the rocks — Outsourcing contract renegotiations were up sharply last year. Most of that activity focused on cost cuts, and that spells trouble. Experts offer advice, in the first of two articles.
What to watch for during negotiations with outsourcing vendors — And here’s part two! Experts offer advice on what to watch for when you head back to the negotiating table with your outsourcing vendor.
Storage virtualization helps rein in storage sprawl — Storage virtualization can help CIOs manage the ever-increasing storage requirements that server and desktop virtualization cause.
IT governance framework helps public agencies boost service, cut costs — An IT governance framework can help public agencies improve service levels and cut costs amid tight budgets. Read more about IT governance models in Massachusetts and California (my background in community journalism was a big help with this one!).
My background is in community journalism, so public sector challenges — strained municipal budgets and efforts to improve education, health care and other city services — are all too familiar to me. But rarely do I get to put that knowledge to work nowadays, as I did this week in my story on implementing an IT governance framework in the public sector.
In researching my piece, I reached out to the CIO of the commonwealth of Massachusetts and to the chief deputy director in the state’s Office of the CIO in California. I spoke with each of them about how they’ve worked with their state governors to structure IT so that it can best serve not only the governor and his staff, but state residents too.
That’s one of the differences between public sector and private sector challenges for CIOs. In the private sector, CIOs are focused mainly on finding and implementing solutions that will move the business’s vision forward while cutting costs and boosting revenues. In the public sector, money matters, of course — particularly in California, given the state’s well-publicized budget crisis — but the bottom line is far more intangible and difficult to measure with traditional ROI formulas.
A successful IT deployment might mean a better database of roads in need of repair, so the local Department of Public Works can fix a pothole before a kid on a bike tumbles into it. Or it might mean planning for the purchase of new desktops so that high school students without Internet access at home can work at the school library. On a much larger scale, the government push to electronic health records underscores the importance of technology in meeting citizens’ everyday needs.
Do I sound like the overzealous press person for a city mayor right now? Maybe a bit (I’m all too familiar with them as well). But one of the many things I’ve been impressed to learn through my reporting at SearchCIO.com is that IT — supported by a dynamite IT governance framework — can enable innovation and success at any level, from the multibillion-dollar corporation to the 5,000-person town.
Consultant Tom Young has been mulling over the good, the bad and all the ugliness of outsourcing contracts for 13 years, eight of them with the IT advisory firm TPI, where he is managing director of the CIO services and infrastructure group. Before that, he was a financial director at AT&T Labs.
“The people side of [outsourcing] is a tough, tough business,” Young said. This was at the tail end of a long interview for a piece I was doing on the danger signs of outsourcing contracts. Young was a wealth of useful information on that subject, and now we were talking shop about cutting jobs.
Young has seen his share of hatchet jobs. “I’ve been through this many times,” he said. At one company, where he was dispatched to cut IT costs, the employees called him and his partner “the Bobs,” after the two consultants in the 1999 cult film “Office Space” who were brought in to fire employees. (He had the modesty not to compare himself with the George Clooney character in “Up in the Air.”)
But Young said he “sleeps at night” because he’s convinced that the corporate IT guy is better off working for the very outsourcing provider that displaced him. His argument is that the IT professional working in a corporate IT organization is judged by his ability, of course, but limited by the opportunities to move up the value chain. In Young’s view, those opportunities typically pale beside the potential for career advancement at the big provider firms.
And unlike a corporate environment where you might be locked into a salary, “the more valuable you are to the provider, the more you make,” Young said. And he knows, because he’s kept in touch with people who have made the transition. “Nobody likes to be told, you’re not working here anymore,” he said, “but most people, except the slackers, will be better off.”
This sounds plausible, but didn’t I read not that long ago about IBM shipping U.S. jobs off to India?
When the audience at VMworld 2008 was asked if anyone had virtualized hundreds of desktops, I saw only one person raise his hand.
This year, anecdotally, VMware said at its partner show that the majority of its customers are evaluating, testing or rolling out its virtual desktop infrastructure (VDI) product.
Last month, Citrix Systems said that 1,000 new customers bought its desktop virtualization technology in Q4 2009, with several customers buying more than 10,000 seats.
In addition, a very large food manufacturer plans to virtualize 10,000 desktops this year, according to a system integrator I spoke with who had interviewed a job applicant working on the project.
There aren’t many CIOs putting themselves out there as desktop virtualization pioneers on a large scale, however. There are risks involved, after all: The technology is not mature, the ROI is questionable, new strategies have to be developed to account for the capacity needs of a virtual infrastructure, and software companies are still trying to figure out how to support their applications in a virtual environment.
Yet companies are moving forward with desktop virtualization. Independent Bank Corp., out of Ionia, Mich., for example, has 1,000 virtual desktops as part of its disaster recovery strategy.
Disaster recovery was not named as a top driver for desktop virtualization in an informal survey conducted by Forrester Research analyst Mark Bowker. What did top the list? Reduced capital expenses associated with traditional desktops, simplified application upgrades and deployments, and reduced operational expenses tied to supporting client devices.
Let us know what is moving desktop virtualization forward — or holding it back — at your company. Email firstname.lastname@example.org.
The New York Times caught my eye today with this piece on whether Windows Phone 7 will be better than the iPhone for the enterprise. The BlackBerry is still the undisputed leader and the iPhone isn’t doing too shabby, but as the article points out:
We really have not seen any bona fide use of mobile collaboration tools as of yet across any device. People are using smartphones to check messages and use applications. The applications they do use are services like Twitter.
Also in the Times, a primer on traveling smart – with your technology. Most of this you’ve probably heard before, but it never hurts to review.
Come back here and share your thoughts on these stories, as well as the most recent content from SearchCIO.com:
Desktop, server virtualization help CIO fix disaster recovery plans — A combination of desktop and server virtualization is helping one bank CIO sync up disaster recovery between the backup data center and production site.
Proving the value of a business continuity plan (before disaster hits) — It’s the bane of business continuity experts: proving the value of business continuity management — before the incident occurs. Are KRIs and KPIs the answer?
CIO and IT salaries: Do you know what you should be paid in 2010? — CIO and IT salaries, along with IT job security and optimism, all took a hit in this economic recession. Review our latest IT salary stories and find out whether you’re being paid enough in 2010.
It’s been a week of mea culpas. Following Google’s admission that it didn’t handle the launch of Google Buzz very well, on Thursday WordPress, a leading provider of blogging platforms to individuals and businesses alike, experienced an outage of approximately 110 minutes. The WordPress problems affected 10.2 million blogs, depriving those bloggers of about 5.5 million page views.
According to the official WordPress blog (which I assume was also unavailable during the downtime), the problems were likely the result of an “unscheduled change to a core router by one of our data center providers [that] messed up our network in a way we haven’t experienced before, and it broke the site.” Worse, the outage broke all of the company’s mechanisms for failover in San Antonio and Chicago.
I’ll be interested to hear about WordPress’ stopgap solutions in the case of blog failure. In the meantime, I’m left to ponder — and point out to our readers — that a disaster recovery and business continuity plan isn’t necessary only for catastrophes such as hurricanes, or a massive data breach that leaves you scrambling to explain to irate customers what went wrong. (I gather that WordPress did an admirable job updating users via Twitter during the outage.) But how do you prove this to the business so you can get the resources you need? Consider piggybacking disaster recovery efforts on other projects or mapping availability risk.
The WordPress problems also underscore the importance of properly vetting your providers, whose data center outage could negatively affect your business and its customers. For more information about assessing potential sourcing partners, check out our FAQ on getting started with IT outsourcing.
Ahh, the irony! Organizations that have been through some kind of a disaster certainly understand the value of business continuity plans. But for most everybody else?
“When you talk about having a plan that could cost $20,000, $50,000 or $100,000, and might sit on a shelf and gather dust, for most business leaders, it’s ‘Excuse me, I have a business to run,’” said Paul Kirvan, a business continuity consultant based in New Jersey.
I heard a lot of variations on that attitude in my reporting this week for a story on mapping key risk indicators and key performance indicators in order to prove the value of business continuity (BC) programs. BC plans are a tough sell and not only because of those business leaders who’d rather spend money on making money.
The field is young — only about 35 years old, Kirvan told me. And the tools of the trade are not all that sophisticated. Quantifying the impact of a business-disrupting event that hasn’t happened, in order to craft a sound plan for getting back to business, is a soft science.
So soft, in fact, that Ramon Krikken, an analyst with Burton Group, has found that anecdotes about the bad things that have happened to other companies — good, old-fashioned horror stories — continue to be among the more powerful tools continuity specialists possess for convincing upper management that business continuity plans hold value.
In the United States there’s another element at play that makes it hard to get funding for BC — what Kirvan calls the “cultural dimension.”
Business continuity is viewed differently in the U.K. and other European countries than it is here, said Kirvan, who has worked extensively in the United Kingdom and is also a board member of the Business Continuity Institute. Of course, Great Britain is the source of arguably the industry’s most accepted business continuity standard, BS 25999. But it’s not just a matter of standards or certification, he said.
“The culture over [in the U.K.] tends to be one of anticipating potential problems,” he said. Business continuity is taken seriously, he said, perhaps because of issues with the IRA over the years and, more recently, the 2005 London bombings.
“Our culture, by contrast, with the pioneer spirit, that can-do, ‘We can handle anything, just throw it at us’ attitude, doesn’t take that view,” Kirvan said. American businesses are focused on the present and tend to believe the future will always be brighter.
“The typical reaction I have seen over the years from American businesses is that, ‘Well, we have never had a major issue, why should we worry about it? We’ll deal with it when it occurs,’” he said.
What about at your company?
Many large companies try to maintain hot sites that are in lockstep with the production environment, but this disaster recovery plan isn’t always realistic.
Configurations drift, or IT staff simply don’t have the time — or the budget — to mirror every aspect of their production environment. That’s where virtualization comes in.
Applications that may not have made it onto the mission-critical DR list can now be put on a shared piece of hardware: a virtual server in the data center or hot site. The costs of maintaining hardware for both mission-critical and not-so-critical applications can drop considerably in this scenario.
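The arithmetic behind that cost drop is straightforward. The sketch below is a rough illustration with hypothetical figures — the server count, per-server cost and consolidation ratio are assumptions for the example, not numbers from the article:

```python
import math

def dr_hardware_cost(num_apps, cost_per_server, consolidation_ratio):
    """Standby-hardware cost for num_apps applications at a hot site.

    consolidation_ratio is how many virtual servers run per physical
    host; a ratio of 1 models the traditional mirrored hot site with
    one physical box per application. Figures are hypothetical.
    """
    servers = math.ceil(num_apps / consolidation_ratio)
    return servers * cost_per_server

# Traditional mirroring: one physical server per application.
physical = dr_hardware_cost(60, 5000, 1)   # 60 servers

# Virtualized DR: ten virtual servers per shared host.
virtual = dr_hardware_cost(60, 5000, 10)   # 6 servers

savings = 1 - virtual / physical           # 0.9, i.e. 90% less hardware
```

Real-world savings land lower once licensing, storage and the extra capacity headroom for failover are counted, but the direction of the math is why the floor space and hardware shrink so dramatically.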
The other day, Nelson Ruest, a principal with consulting firm Resolutions Enterprises Ltd., in Victoria, British Columbia, was telling me that one of his enterprise clients is projecting 60% to 70% savings, per year, across its infrastructure for disaster recovery. The company moved from mirroring the production environment — taking up three floors to do so in its backup data center — to using less floor space and hardware with a virtualized DR plan. It replaced many physical servers with virtual servers in its production environment as well.
Independent Bank, out of Ionia, Mich., saved $1 million by replacing underutilized hardware with virtual servers, as part of a desktop-to-data-center virtualization DR strategy.
The shift to virtual desktops and servers allowed the bank to eliminate configuration drift between its hot site and production environment as well, according to its CIO, Pete Graves.
Still, enterprises are hesitant to use virtual disaster recovery for mission-critical applications, and are definitely not throwing out tape backups any time soon, according to John Humphreys, senior director of product marketing for the virtualization and management division of Citrix Systems Inc.
Humphreys is seeing the spread of virtualization for DR take the same path that the technology did on the server front: around the edges of the enterprise, starting with non-mission-critical applications that haven’t made it into the “critical” DR budget.
Others believe that virtualization DR will continue to evolve as server virtualization does. As the staff at SearchServerVirtualization.com point out in an article on predictions for 2010, there is still the potential for one virtual server to take down hundreds of other virtual machines with it.
So, it would seem that enterprises are testing out and moving forward with virtualization disaster recovery, but with a note of caution.
Tell us what your disaster recovery plans are and if virtualization will be part of them. Email email@example.com.