There were a lot of messages that came out of the recent Burton Group Catalyst conference in San Diego surrounding the public cloud.
But one resonated more than others: You need a grip on your own assets, meaning what data is stored on what servers and what it really costs to build, deploy and maintain an application, before you can figure out whether cloud computing is a more cost-effective route.
Burton analyst Chris Howard compared the state of enterprise IT to that of Rome: Are we just building and building upon an old architecture? When is it time to start getting rid of some of the old stuff? And how do we decide what should stay and what should go?
Bill Peer, chief enterprise architect at InterContinental Hotels Group, who presented at the show, talked about building an internal cloud. In the process he is moving data from two mainframes dating back to the 1960s to new servers on a private cloud.
This is a multibillion-dollar company making the move to get rid of old systems, and there are probably other enterprises out there that are sick of maintaining mainframes and code written by people who are no longer with the company.
The list of cloud computing benefits and risks is long and varied depending on who you talk to, but one benefit is clear: It could force CIOs to assess what they need and can do without, and, if anything, build more efficient data centers on their own.
There is a test for figuring out what can stay and what can go, if you are not faint of heart. Howard shared a story of how Ken Anderson, former CIO of Novell, used to go into the company’s data centers at night and randomly turn systems off.
If no one noticed in three weeks, the system stayed off.
Good afternoon! This past week on SearchCIO.com, we highlighted tips for leveraging employee talent, looked at ways to avoid IT project failures using change management strategies, and examined whether end users are bypassing IT in pursuing their latest cloud computing initiatives. Read the stories linked below and share your thoughts.
Hit the ground running and make people your priority – In his book Hit the Ground Running, Jason Jennings shows how some company executives are leveraging the power of their people for economic success.
Avoiding IT project failures with a change management strategy – CIOs typically aren’t involved in IT project execution, but they do pave the way for success with change management strategies. Here’s how.
Latest cloud computing trend: End users buying IT as a Service – Users want to consume IT as a Service and will bypass IT, or nudge IT into the cloud if necessary, to get there. Plus: How companies are handling chargeback.
David Shacochis, vice president, research and development at IT provider Savvis Inc., reminded me the other day that there is a big difference between a disaster recovery (DR) plan and business continuity, even though many forget the distinction.
A business continuity plan is your company’s prescription for the things you can expect to go wrong: components fail, servers fail, network outages happen, IT professionals make mistakes. Disaster recovery is your plan for the things you can’t anticipate. “If you’re doing this Calvin and Hobbes scenario, where planes are falling from the sky, that is when you’re talking a disaster recovery plan,” Shacochis said.
Savvis, with 29 data centers, likes to boast that it has built the architecture required to give companies business continuity, which in turn gets rolled into its standardized services. “Virtually all our products have a high-availability option that can be added on for pennies on the dollar,” he said. And many customers use those services as their DR site.
But the premise of a DR-in-a-box solution — promised by various providers — is, in his view, untenable.
“We don’t really know what your requirements are, we don’t really know what the nightmare scenario will be, you’re not really implementing anything with us, but trust us, when you pick up the phone we’ll be there, and we’ll get you a data center in a heartbeat,” Shacochis said. “Those sorts of services are not that difficult to sell because there are a lot of people who want to believe that they exist. But they are very difficult to execute on.”
In fact, Savvis has not gone to market with a catchall DR solution yet. “We don’t really believe that the process maturity across so many different customers is there, or the standardization across so many different architectures is there that would allow us to do it,” Shacochis said.
Shacochis, however, believes a really elegant solution will emerge, one that is ultimately both better and cheaper than present multi-tiered DR solutions. His idea is that the kind of cloud computing platform Savvis is building in its labs will eventually enable it to offer DR that is standardized, flexible and cost-attractive enough to make it worthwhile for customers. Shacochis envisions a platform that can function as a complete cloud data center.
“All the typical IT resources you get in a physical data center, we’re going to be building in a platform that will allow you to provision not just your compute resources on the network, but to provision actual data center topologies for routing, switching, security, load balancing and failover features, as well as computing, storage and storage lifecycle management resources that are all running in a software-based context that you can control over a portal, and eventually control via a software API,” he said.
The beauty of that model, he says, is that companies could provision their entire cloud application stack, get it up and implemented and then turn it off.
“That really would be a cloud DR model, where the customer is paying a small percentage of what they would ordinarily pay for production, and they would have a highly functional and easy-to-execute DR plan.”
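The economics Shacochis describes, provision a full stack, switch it off, pay a standby fraction, can be sketched in a few lines. The `CloudPlatform` class and its hourly rates below are invented purely for illustration; no real Savvis API is implied.

```python
# Hypothetical sketch of the cloud DR model described above: the whole
# data center topology (network, compute, storage) is software-defined,
# so a DR stack can be provisioned, verified, then left dormant at a
# small fraction of production cost. All names and rates are made up.

class CloudPlatform:
    HOURLY_RATES = {"running": 1.00, "standby": 0.05}  # assumed pricing

    def __init__(self):
        self.resources = {}

    def provision(self, name, kind):
        # Routers, load balancers, servers and SAN volumes are all just
        # software objects on this kind of platform.
        self.resources[name] = {"kind": kind, "state": "running"}

    def power_down(self, name):
        # The topology stays defined but stops accruing full-rate cost.
        self.resources[name]["state"] = "standby"

    def hourly_cost(self):
        return sum(self.HOURLY_RATES[r["state"]]
                   for r in self.resources.values())


platform = CloudPlatform()
for name, kind in [("edge-router", "network"), ("web-tier", "compute"),
                   ("db-tier", "compute"), ("san-vol", "storage")]:
    platform.provision(name, kind)
active_cost = platform.hourly_cost()    # full production-like stack

for name in platform.resources:
    platform.power_down(name)
standby_cost = platform.hourly_cost()   # the dormant DR bill

print(f"active: ${active_cost:.2f}/hr, standby: ${standby_cost:.2f}/hr")
# → active: $4.00/hr, standby: $0.20/hr
```

The 20-to-1 gap between those two figures is the "small percentage" Shacochis is talking about.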
When will we see this? He thinks it’s a year or two off. Sounds good to me.
No weather-related complaints from me this week – it’s definitely summer in New England. Though it’s steamy out, I couldn’t be happier!
Here’s the latest content from SearchCIO.com, where this past week we focused on enterprise risk management, evaluating network access control (NAC), cloud computing and IT outsourcing trends in 2009.
Enterprise risk management quiz for CIOs – ERM is getting increased attention due to concerns about data protection, NAC, cloud computing and compliance. Learn more about ERM and take our quiz.
Evaluating network access control: NAC policy enforcement matters — After thinking through your usage cases for NAC, select the enforcement approach that meets your security requirements, budget and complexity tolerance.
If cloud computing companies form ecosystems, users will benefit – A partner network hosted by one provider, like Amazon’s EC2, can mean cost and performance advantages for customers of those services. Here’s how.
IT outsourcing trends 2009: Latest deals for the recession and beyond – IT outsourcing trends in 2009 are evolving rapidly as the economic recession, IT offshoring scandals and the drive for cost savings change how IT outsourcing contracts are structured. Check out our quick guide to IT outsourcing for more information.
As IT outsourcing is one of the core areas I cover for SearchCIO.com, I am, of course, interested in how both offshoring and onshoring will fare in the coming months.
According to some soon-to-be-published analyst research on where IT outsourcing is going in the second half of 2009, although most enterprises have put IT outsourcing more on the back burner recently, the tide is about to change. Among the analysts’ conclusions: Companies’ outsourcing engagements will be more global in nature in a post-recessionary environment and will not involve significant upfront capital expenditure.
Meanwhile, SearchCIO.com just published a quick guide on IT outsourcing trends in 2009. It covers trends since the end of last year, when the drop in contracts began, as well as the Satyam scandal and the rise of insourcing, and offers advice on how to structure your IT outsourcing contract for success in the recession.
So, as the second-half play gets under way, the stock market is surging (just today, the Dow Jones Industrial Average climbed past 9,000) and the number of jobless claims appears to be dropping. There is definitely chatter that maybe, just maybe, we’ve rounded the corner on the worst of this recession.
Would a turnaround now change your outsourcing strategy? Did the Satyam scandal have any impact on how you do business? Are you considering insourcing as an option? Budget season is approaching for many enterprises, so this is an ideal time to examine these strategic questions.
Welcome back from the weekend! This past week on SearchCIO.com, we looked at Lean thinking for IT, considered project management governance and addressed IT portfolio management strategies as part of a larger project and portfolio management program. Read the stories listed below and let us know if you have questions or comments.
FAQ: Lean thinking for IT – Lean thinking is the process of incorporating Lean principles into an enterprise. This FAQ shows how Lean thinking works and how IT is benefiting from this improvement methodology.
Project management governance: How much is enough? – What goes into successful project management governance? The best mix calls for just the right amount of process and a focus on improved project completion rates.
IT portfolio management: Strategy matters more than software, CIOs say – IT portfolio management features aren’t a top priority for users of PPM software, a SearchCIO.com survey shows, but PPM strategy is.
We’ve all heard about Obama’s promise for government transparency, but what about IT transparency?
First, what is transparency? It is generally defined as openness and accountability in all areas of business. Obama’s promise of an “unprecedented” level of government transparency has prompted many business organizations to look more closely at their systems and operations to determine how “transparent” they are, and ultimately to better protect themselves against potential legal ramifications.
Many IT organizations are also taking a closer look at transparency and accountability. By being transparent, IT can streamline processes, be more productive and improve customer service and support.
In a recent conversation with David Flesh, director of product marketing for ITSM at HP, we discussed how IT can drive efficiency and bring greater transparency to the business. One way of doing this is to leverage the power of the change advisory board. Change advisory boards are made up of IT and business stakeholders who meet regularly to review changes to systems and processes that will affect the business. With representation from all areas of the company, change advisory boards have the power to review and assess every change taking place and ultimately ensure transparency across them.
Asset management is another area where IT can be more transparent to the business. As companies spend more money on IT, they want to know where all the money is going. Using asset management and IT financial management solutions, IT can again ensure spending transparency to the business.
Project portfolio management (PPM) is another way IT is addressing transparency. For example, one business unit might not have known that security was looking at a project that would involve reviewing confidential employee information. This could lead to legal issues. PPM helps address these issues and provide more transparency by creating a system where all units are aware of projects in the queues and how they affect other business and IT systems.
As Obama has said numerous times, “transparency promotes accountability.” Through formal processes and governance, IT can take a leadership role in delivering transparency to the business.
Good afternoon! This past week on SearchCIO.com we ran the coverage gamut, from IT ROI strategies and network access control (NAC) to the real cost of cloud computing and email outsourcing approaches. Check out the stories listed below!
Proven IT ROI strategies in an economic downturn – CIOs who calculate ROI on IT projects are more likely to get executive approval, especially in an economic downturn. Learn how to effectively calculate ROI on new investments in business process management, IT Service Management and enterprise risk management.
Network access control now addresses multiple needs – Users have found multiple uses for NAC, including growing usage for guest networks. Understanding which of NAC’s four main use cases is yours is crucial to selecting the right system.
The real cost of cloud computing services – Cloud computing services reallocate costs and save you money in the short term, but companies should keep in mind what the usage-based model will cost long term.
Enterprises look beyond Gmail, cloud for email outsourcing services – Email outsourcing is taking off in enterprises as CIOs consider managed or hosted email services rather than Gmail or a full cloud-based email approach.
CIOs are pretty paranoid when it comes to data protection, and rightly so given they put their job on the line when they recommend a new infrastructure or service approach like cloud computing.
I’ve been told many times now by companies providing cloud computing services that their security measures are as good as, if not better than, the ones their customers have in place — end-to-end encryption being the primary pitch.
But does this satisfy SOX compliance, or really prevent someone, say an administrator for another customer or one inside the service provider, from finding a way to peek at your data?
These questions are nothing new for CIOs who have gone down the outsourcing path, but the implications of data running on a shared infrastructure do tend to make them squeamish.
Here’s a sampling of some questions Sam Gross, vice president of global IT outsourcing at Unisys, is getting since his company entered the cloud computing fray with the Unisys Secure Cloud and its Stealth data protection technology last week.
1. How can you absolutely, positively assure me that a cloud administrator [employed by you] will not in error grant some type of read or write access to the content, facility or service?
2. How can you assure me that none of your cloud administrators will have any visibility to that data and compromise our SOX controls?
3. How can you assure me that you can’t, will not and don’t have mechanisms to electronically transmit my data in the background to another third party?
Unisys Secure Cloud is backed by 800 consultants. Stealth is a technology Unisys developed for the Department of Defense, based on bit-splitting technology from Security First Corp. It splits data across multiple packets, disks, sectors and physical devices so that snoops can’t reconstruct a single byte or character of data from any single packet. On top of that, AES 256-bit encryption is used.
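To see why no single share yields anything readable, here is a toy XOR-based splitting sketch in Python. This is a textbook stand-in for the idea, not Security First's actual algorithm, and it omits the AES 256-bit encryption layer the real product adds on top.

```python
# Toy illustration of bit-splitting: data is divided into n shares
# (think packets, sectors or disks) such that every share by itself
# is indistinguishable from random noise; only the combination of
# all n shares reconstructs the original. Not the real Stealth scheme.
import os

def split(data: bytes, n: int = 3) -> list:
    # n-1 shares are pure random noise; the last share is the
    # plaintext XORed with all of them.
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = bytearray(data)
    for share in shares:
        for i, b in enumerate(share):
            last[i] ^= b
    return shares + [bytes(last)]

def combine(shares) -> bytes:
    # XOR of all shares cancels the noise and recovers the plaintext.
    out = bytearray(len(shares[0]))
    for share in shares:
        for i, b in enumerate(share):
            out[i] ^= b
    return bytes(out)

secret = b"cardholder data"
shares = split(secret)
assert combine(shares) == secret        # all three shares: recoverable
assert combine(shares[:-1]) != secret   # a subset is just noise
```

A snoop holding any one packet or disk sees only uniform random bytes, which is the property Unisys is claiming for Stealth, with encryption layered on top.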
In answer to those three questions: No. 1, the assignment of read or write access is handled by the client through the client’s own directory and authentication mechanisms and not by Unisys.
No. 2, Gross uses the analogy of the cloud looking like nothing more than a series of pipes with water running through them when the cloud infrastructure is Stealth-enabled. “The pipes have water running through them, but [people with malicious intent] can’t tell where the water came from, they don’t know where each drop of water is going and the water is transparent.” In other words, it’s SOX compliant because all that can be seen is a stream of ones and zeros.
No. 3, data that is in storage on the SAN is also Stealth protected so that even if you are a member of a client-defined community and transmit data, that transmitted data would be, again, a stream of unintelligible ones and zeros to the person on the receiving end.
“The end result is that if people put sniffers online, use deep packet inspection mechanisms or physically remove a disk from a SAN and try to recover data, it’s impossible to assemble that data,” said Richard Marcello, president of Unisys Systems and Technology. “They won’t be able to recognize the payload or data construct, so therefore it’s cloaked and unrecognizable to the mechanisms people use to steal data.”
Time will tell whether Stealth is truly the answer for a lot of users, but you have to admit it’s a much better explanation than, “Our security is good enough, if not better.”
What security measures do you want to see from cloud companies? Let me know at firstname.lastname@example.org.
Nothing fuels the tech press like wars. So I have no doubt that we will see an endless string of reports from the battlefield over the next many months on the struggle between Google and Microsoft for world dominance. Wake me up when it’s all over.
Google’s announcement of its intention to field a Linux-based, browser-centric operating system for netbooks and eventually most client machines comes at a strange time in the history of operating systems. Now, operating systems matter less than ever.
Once upon a time, your choice of operating system dictated your choice of processor, which often meant your choice of hardware vendor. It also dictated what applications were available to you and from whom. The only mainstream operating system that you can still say that about is the Mac OS.
Chrome OS will do none of those things. It will run on all the same hardware that Windows runs on, plus ARM-based systems. It will run browser-based applications, which will also run on any other browser. Undoubtedly, Google Apps will work really well on Chrome OS. Google is also suggesting it will deliver better security, easier configuration and quicker performance than the incumbents.
History (especially that of desktop Linux) suggests it will have to be a quantum leap better at all those things to make a dent. We can only hope Google achieves that, since those attributes are desirable regardless of who delivers them. But more to the point, Chrome OS comes at a moment when desktop operating systems are themselves in danger of becoming a big so-what.
By the time Chrome OS shows up, desktop chips will likely have hardware-assisted virtualization. Increasingly, hypervisors will determine the performance characteristics of the system – how well they manage memory, how well they interact with CPUs, etc. With storage and applications increasingly being network-driven, the operating system’s chief function may well become the user interface. How security functions will be parceled out between hypervisor and operating system is perhaps an open question, so that function may still be crucial.
Nonetheless, the world Chrome OS makes its debut in will be one where operating systems can be swapped willy-nilly, where applications don’t care what OS they run on, and where, frankly, users may not either.
Google’s main impact may well be nontechnical, forcing Microsoft to drop prices and (just) maybe to improve Windows. Call me a contrarian or curmudgeon, but I think this development is more important to shareholders of both combatants than IT managers. How about it, CIOs? Game changer or no biggie?