TotalCIO


July 9, 2009  6:37 PM

Data protection in the cloud: What’s good enough?

Christina Torode

CIOs are pretty paranoid when it comes to data protection, and rightly so, given that they put their jobs on the line when they recommend a new infrastructure or service approach like cloud computing.

I’ve been told many times now by companies providing cloud computing services that their security measures are at least as good as, if not better than, the ones their customers have in place; end-to-end encryption is the primary pitch.

But does this satisfy SOX compliance, or really prevent someone, say another customer’s admin or an insider at the service provider, from finding a way to peek at your data?

These questions are nothing new for CIOs who have gone down the outsourcing path, but the implications of data running on a shared infrastructure do tend to make them squeamish.

Here’s a sampling of the questions Sam Gross, vice president of global IT outsourcing at Unisys, has been getting since his company entered the cloud computing fray with the Unisys Secure Cloud and its Stealth data protection technology last week.

1. How can you absolutely, positively assure me that a cloud administrator [employed by you] will not in error grant some type of read or write access to the content, facility or service?

2. How can you assure me that none of your cloud administrators will have any visibility to that data and compromise our SOX controls?

3. How can you assure me that you can’t, will not and don’t have mechanisms to electronically transmit my data in the background to another third party?

Unisys Secure Cloud is backed by 800 consultants, and Stealth is a technology Unisys developed for the Department of Defense based on bit-splitting technology made by Security First Corp. The technology splits data across multiple packets, disks, sectors and physical devices so that snoops can’t reconstruct a single byte or character of data from any one packet. On top of that, AES 256-bit encryption is used.
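Security First’s bit-splitting algorithm is proprietary, so the details aren’t public, but a minimal sketch can show the general principle: split data into shares so that no single share, and therefore no single packet, reveals anything, while the complete set reconstructs the original. The XOR-based scheme below is only an illustration of that idea, not the actual Stealth implementation; a real deployment would also layer AES-256 encryption and physical dispersal across disks and network paths on top.

```python
# Illustrative only: a simple XOR-based data split, NOT the Unisys Stealth or
# Security First algorithm. Each share alone is indistinguishable from random
# noise; all shares together reconstruct the original bytes.
import secrets
from functools import reduce


def split(data: bytes, n_shares: int = 4) -> list:
    """Split data into n_shares; any n_shares - 1 of them reveal nothing."""
    random_shares = [secrets.token_bytes(len(data)) for _ in range(n_shares - 1)]
    # The final share is the XOR of the data with every random share.
    last = bytes(
        reduce(lambda a, b: a ^ b, column)
        for column in zip(data, *random_shares)
    )
    return random_shares + [last]


def reconstruct(shares) -> bytes:
    """XOR all shares back together to recover the original data."""
    return bytes(
        reduce(lambda a, b: a ^ b, column)
        for column in zip(*shares)
    )


if __name__ == "__main__":
    record = b"customer record 42: balance 1250.00"
    shares = split(record)
    print(reconstruct(shares) == record)       # True: the full set recovers the data
    print(reconstruct(shares[:-1]) == record)  # False: a partial set is just noise
```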

In answer to those three questions: No. 1, the assignment of read or write access is handled by the client through the client’s own directory and authentication mechanisms and not by Unisys.

No. 2, Gross uses the analogy of the cloud looking like nothing more than a series of pipes with water running through them when the cloud infrastructure is Stealth enabled. “The pipes have water running through them, but [people with malicious intent] can’t tell where the water came from, they don’t know where each drop of water is going and the water is transparent.” In other words it’s SOX compliant because all that is seen is a stream of ones and zeros.

No. 3, data that is in storage on the SAN is also Stealth protected so that even if you are a member of a client-defined community and transmit data, that transmitted data would be, again, a stream of unintelligible ones and zeros to the person on the receiving end.

“The end result is that if people put sniffers online, use Deep Packet Inspection mechanisms or physically remove a disk from a SAN and try to recover data, it’s impossible to assemble that data,” said Richard Marcello, president, Unisys systems and technology. “They won’t be able to recognize the payload or data construct, so therefore it’s cloaked and unrecognizable to the mechanism people use to steal data.”

Time will tell whether Stealth is truly the answer for a lot of users, but you have to admit they have a much better explanation than, “Our security is good enough if not better.”

What security measures do you want to see from cloud companies? Let me know at ctorode@techtarget.com.

July 8, 2009  2:36 PM

Is Google Chrome OS a turning point or yawner?

Mark Schlack

Nothing fuels the tech press like wars. So I have no doubt that we will see an endless string of reports from the battlefield over the next many months on the struggle between Google and Microsoft for world dominance. Wake me up when it’s all over.

Google’s announcement of its intention to field a Linux-based, browser-centric operating system for netbooks and eventually most client machines comes at a strange time in the history of operating systems. Now, operating systems matter less than ever.

Once upon a time, your choice of operating system dictated your choice of processor, which often meant your choice of hardware vendor. It also dictated what applications were available to you and from whom. The only mainstream operating system that you can still say that about is the Mac OS.

Chrome OS will do none of those things. It will run on all the same hardware that Windows runs on, plus ARM-based systems. It will run browser-based applications, which will also run on any other browser. Undoubtedly, Google Apps will work really well on Chrome OS. Google is also suggesting it will deliver better security, easier configuration and quicker performance than the incumbents.

History (especially of desktop Linux) suggests that it will have to be a quantum leap better at all those things to make a dent. We can only hope Google pulls that off, since those attributes are desirable regardless of who delivers them. But more to the point, Chrome OS comes at a moment when desktop operating systems are themselves in danger of becoming a big so what.

By the time Chrome OS shows up, desktop chips will likely have hardware-assisted virtualization. Increasingly, hypervisors will determine the performance characteristics of the system – how well they manage memory, how well they interact with CPUs, etc. With storage and applications increasingly being network-driven, the operating system’s chief function may well become the user interface. How security functions will be parceled out between hypervisor and operating system is perhaps an open question, so that function may still be crucial.

Nonetheless, the world Chrome OS makes its debut in will be one where operating systems can be swapped willy-nilly, where applications don’t care what OS they run on, and where, frankly, users may not either.

Google’s main impact may well be nontechnical, forcing Microsoft to drop prices and (just) maybe to improve Windows. Call me a contrarian or curmudgeon, but I think this development is more important to shareholders of both combatants than IT managers. How about it, CIOs? Game changer or no biggie?


July 6, 2009  3:06 PM

CIO weekly wrap-up: IT security jobs, lean BPM and risk management

Rachel Lebeaux

I’m happy to report that the sun finally shone in New England over the holiday weekend! I hope all of our American readers enjoyed July Fourth as much as I did. Now it’s back to work – join me in reviewing last week’s content from SearchCIO.com on IT governance and outsourcing contract success, lean BPM, enterprise risk management and more.

Solid governance model key to IT outsourcing contract success – A solid governance model and a relationship-minded manager are crucial to the success of an IT outsourcing contract. Here’s what to look for.

Gartner: Future IT security jobs to focus on risk management strategy – Gartner predicts that by 2016, maturing technologies will supplant many security experts. The jobs that survive will be all about risk management.

Get the most out of your lean BPM solution – The demand for BPM solutions is up and budgets are down. Learn how specific lean BPM tools can help streamline your processes and reduce costs.

Enterprise risk management solutions for CIOs – Enterprise risk management programs buffer organizations from risky business practices. In this guide, learn how to employ enterprise risk management solutions in an organization.


June 29, 2009  7:21 PM

Internet traffic overload: What does it mean for cloud computing services?

Christina Torode

Michael Jackson’s passing, and the outpouring of grief and curiosity that followed, hit the Internet full force last week, slowing down webmail, news sites and search engines and grinding Twitter to a halt. It made me wonder whether customers of cloud computing services are going to start asking what cloud providers have in place to account for such events.

Or maybe providers of cloud computing services have such a deep reserve of resources that it’s a nonissue.

Still, many companies can’t afford to have their Web-based applications slow down for the day, or longer, so just how deep are those reserves, and does the responsibility to ensure acceptable application delivery times rest on cloud providers’ shoulders or those of the ISP?

Or even on the customer’s, when it comes down to it? Should customers have signed up for bandwidth overdraft protection in the first place?

Sure, spikes like those of last week are not that common, but it is not unthinkable that such occurrences may become more frequent, given that many people are turning to blogs, Twitter and YouTube in place of their local news station or paper to get up-to-the-minute information on something like the Iran protests. And then you have to factor in the spamming that follows close behind a big news event.

And I’m not talking just about Gen X and Y flocking to the Internet when something happens, but people like my 66-year-old father, who stopped reading newspapers several years ago and now sustains himself on a steady Internet news diet instead.

What do you think? Is Internet traffic cause for concern when it comes to cloud computing services, or is it a nonissue? Let me know, ctorode@techtarget.com.


June 29, 2009  3:16 PM

CIO weekly wrap-up: PPM software, customer service tips, Six Sigma FAQ

Rachel Lebeaux

I’ve held off on mentioning this until now … but is the rain in New England ever going to stop? The bright side of this weather (pun fully intended) is that I’m probably more productive at work when I’m not staring longingly out the window at a sunny summer day … at least, that’s what I’m telling myself to get through the gray days.

If you’re trapped indoors like I am (or even if you’re not), check out the latest SearchCIO.com content this week on business intelligence strategy, PPM software, customer service satisfaction and our new FAQ on Six Sigma methodology and let us know what you think!

Putting your business intelligence strategy to the test – Review our latest coverage of business intelligence and corporate performance management, then test your knowledge with this quiz.

How PPM software usage changes as firms grasp IT portfolio management – Nearly four in 10 large organizations use PPM software, but not all instances are created equal. New survey data shows what IT values most and how deployments mature.

Key to customer service satisfaction: Less complexity – Customer service satisfaction can be improved by simplifying complexity, according to business executives interviewed in this chapter of James Champy’s newest book, Inspire! Why Customers Come Back.

How does the Six Sigma methodology benefit IT? – The Six Sigma methodology has helped companies improve customer service and eliminate errors for years. Learn how IT can reap the benefits of this service-driven methodology.


June 25, 2009  8:17 PM

Healthcare IT standards still not clear

Mark Schlack

Healthcare actually pushed the Iranian elections out of the top news slot this week. Most of the attention has been on the administration’s efforts to establish a government insurance plan of last resort. But in the background, the health information technology (HIT) effort continues to boil along, with a lot of action and not much clarity emerging on standards for electronic health record (EHR) software.

Dr. John Halamka, noted CIO of CareGroup Healthcare System, reported on the second meeting of the HIT Standards Committee, of which he is a member. The committee is currently engaged in a four-dimensional exercise: drilling deep into the information space that healthcare inhabits to understand what data in what format has to be interchangeable, all the while trying to understand how these standards will develop over time. It’s a Herculean task, even at the 50,000-ft. level, and I’ll be very interested to see where they are able to make progress and where they stall. Halamka points out that people are already beginning to see that EHR won’t progress unless complementary initiatives like lab results data standardization also proceed apace.

Meanwhile, the standardization picture is far from clear. Neil Versel is hearing rumors that CCHIT, the presumptive favorite to certify EHR software, may be sidelined or augmented by the Office of the National Coordinator for Health Information Technology, the federal agency overseeing the Recovery Act initiatives in electronic healthcare. One hospital IT director I’ve spoken with says he expects the Joint Commission, which accredits hospitals, to step into the fray. That might be a culture shock: the Joint Commission has a reputation for rigor in its clinical inspections that could be a rude awakening for software vendors.

The next few months will hopefully clarify just what IT needs to do to demonstrate meaningful use of healthcare IT. Meanwhile, IT organizations are not standing still. Alex Barrett wrote recently about Boston-based Beth Israel Deaconess Medical Center’s EHR project to get private physicians integrated into its systems. BI-Deaconess is making creative use of server virtualization to build an infrastructure that can grow and adapt as it gains acceptance and use. This is not an insignificant problem for architects who face uncertain usage targets and unknown ramp-up times.

Chris Griffin documents an interesting culture shift in healthcare IT, which he describes as a “culture that puts a big emphasis on software applications, rather than on hardware and a holistic view of the computing environment.” That’s probably a dysfunctional approach for IT departments, which will face increasing pressure to store more patient data from imaging and other diagnostic procedures, and to retain it for longer and longer periods to meet regulatory compliance requirements.


June 25, 2009  8:15 PM

Satyam scandal: Has it affected your IT outsourcing and offshoring?

Rachel Lebeaux

For those keeping an eye on IT outsourcing and offshoring, there were a couple of noteworthy pieces of news this week regarding the artist formerly known as Satyam Computer Services Ltd.

First, Tech Mahindra Ltd., which purchased the troubled IT outsourcing company two months ago, following the Satyam scandal, has officially rebranded it as Mahindra Satyam.

Secondly – and, I think, more importantly – Satyam is looking to cut jobs if orders coming into the company continue to languish. According to this article, 8,500 employees placed in a so-called “virtual pool” might see their positions eliminated in six months if the company fails to find them work.

Satyam’s staffing troubles aren’t surprising — it must be difficult to woo new Satyam customers or retain those with expiring contracts, given the past transgressions of company leaders. Considering all that, I’m a little surprised that the new owners are leaving Satyam in the company’s name at all. As a point of comparison, after ValuJet Airlines experienced a series of safety problems and the fatal crash of ValuJet Flight 592 into the Florida Everglades, it changed its name and is now operating as AirTran Airways.

Since the Satyam scandal broke early this year, IT outsourcing and offshoring clients have struggled to parse through fact and fiction, protect existing contracts and wise up when pursuing new IT outsourcing deals. As the recession deepened, we began to hear that companies were seeking cheaper rates, sometimes in exchange for more flexibility on the part of the outsourcer in how work is completed. More recently, it seems that insourcing – bringing previously outsourced IT work back in-house – is on the rise.

So what role has the Satyam scandal played in these trends? I recently asked Ben Trowbridge, CEO of Alsbridge Inc., a U.S.-based IT outsourcing and business process optimization consulting firm, whether the scandal was sticking in his clients’ minds.

“Yes – but it’s amazing how short a memory clients have for bad news,” Trowbridge replied. “Within a month of that being brought to a head, it was like everybody had forgotten about it.”

This wasn’t the answer I was expecting. Google’s AdWords tool tells me that the term Satyam is still being searched quite a bit. So I’m putting the question out to enterprise CIOs: Has the Satyam scandal had any effect upon your company’s IT outsourcing and offshoring activities in the past six months? I’d love to hear your stories.


June 23, 2009  9:02 PM

SaaS BI vendor LucidEra’s demise harks back to ASP downfalls

Christina Torode

When my inbox began filling up with all the theories of why BI SaaS vendor LucidEra is expected to close down by month’s end, I couldn’t help thinking that the more things change (in name, at least), the more they stay the same.

LucidEra is in part a victim of a down economy, just as application service providers (ASPs) were in the late ’90s/early 2000s when the dot-com bust happened and VC funding started to dry up.

Like ASPs USinternetworking and Corio, LucidEra was one of the first to the SaaS BI parade. It had to lay new ground in many ways: The Web technologies that today’s SaaS vendors tap into weren’t around when LucidEra got started, so the company faced a steeper learning curve and had to do a lot of the development itself.

LucidEra told ThinkStrategies’ Jeff Kaplan that newer kids on the block learned from LucidEra’s mistakes and could skip many of the development cycles and bumps in the road that the company had to go through.

Back when next-generation ASPs such as Salesforce.com were getting started, they certainly didn’t try to go out and buy large data centers to essentially foot the infrastructure bill for enterprise customers, or try to retrofit Oracle’s or SAP’s licensing model to fit a multi-tenant one like first-generation ASPs had.

No, they, and other ASPs — now called SaaS vendors — learned from the mistakes of first-to-market ASPs like USinternetworking (USi), now part of IBM, and Corio, also now part of IBM.

USi and Corio came out the other side, but there are others that simply disappeared. Like ASP FutureLink, a company that many, including Microsoft — which sank $10 million into it — had high hopes for.

But all the buzz around so many of these players didn’t bring in enough customers to support them all.

Similarly, today there is a lot of interest in business intelligence and in the SaaS model. But is there enough interest to support all of the SaaS BI vendors?

Economy and customer adoption aside, LucidEra had a unique set of circumstances, including hitching its wagon to Salesforce.com. It is always risky to ride the coattails of another company, as USinternetworking and Corio found out by relying so heavily on Oracle and SAP.

And LucidEra did choose a niche in sales analytics. “One problem that [LucidEra] ran into was that not a lot of Salesforce.com customers saw the value-add of what they had to offer,” Kaplan said. “And to a greater extent, a lot of folks today think having analytics is a luxury they can do without.”

Some competitors believe that LucidEra’s downfall was its older code, developed in the late 1990s by Broadbase Software, the argument being that such code was not designed for the SaaS model. “I believe that it is difficult to retrofit a SaaS approach to an existing architecture and, unless designed as a SaaS application – multi-tenant, SOA, layered architecture that can scale horizontally – cost-effectively scaling the solution is incredibly hard,” said Wayne Morris, CEO of SaaS business intelligence vendor myDials. Morris expands on what went wrong at LucidEra in a post on his company’s blog.
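Morris’s prescription of multi-tenant, service-oriented, horizontally scalable layers is easier to picture with a small example. The sketch below, with hypothetical table and column names, shows the shared-schema style of multi-tenancy: every row carries a tenant ID, and the data layer, not the caller, applies the tenant filter. It illustrates the general pattern only, not LucidEra’s or myDials’ actual architecture.

```python
# Minimal sketch of shared-schema multi-tenancy (hypothetical names throughout).
# One database and one horizontally scalable app tier serve every customer;
# isolation comes from tagging each row with a tenant_id and scoping every
# query to it inside the data layer.
import sqlite3


def get_connection() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE sales_metrics ("
        " tenant_id TEXT NOT NULL,"
        " metric TEXT NOT NULL,"
        " value REAL NOT NULL)"
    )
    return conn


def insert_metric(conn: sqlite3.Connection, tenant_id: str, metric: str, value: float) -> None:
    conn.execute(
        "INSERT INTO sales_metrics (tenant_id, metric, value) VALUES (?, ?, ?)",
        (tenant_id, metric, value),
    )


def metrics_for_tenant(conn: sqlite3.Connection, tenant_id: str) -> list:
    # The tenant filter is applied here, never left to the caller, so one
    # customer's queries can never return another customer's rows.
    return conn.execute(
        "SELECT metric, value FROM sales_metrics WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()


if __name__ == "__main__":
    conn = get_connection()
    insert_metric(conn, "acme", "pipeline_total", 125000.0)
    insert_metric(conn, "globex", "pipeline_total", 90000.0)
    print(metrics_for_tenant(conn, "acme"))  # only acme's rows come back
```

Retrofitting a single-tenant code base to this model means threading that tenant ID through every table, query and cache, which is a big part of why Morris argues it is so hard to do after the fact.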

Meanwhile, Brad Peters, CEO of SaaS business intelligence vendor Birst, chalks up LucidEra’s expected shutdown to its standalone analytic software approach: most companies need to analyze data from multiple sources, something LucidEra’s software wasn’t set up to do, he said.

All in all, the comments I’ve seen on blogs say this is not a sign of the on-demand model’s going away — not by a long shot — but a demise that happens naturally when a lot of companies crop up in one space. There are bound to be some that just don’t cross the chasm, as Geoffrey Moore would say.


June 22, 2009  3:33 PM

CIO weekly wrap-up: Business continuity plans, BI apps, IT insourcing

Rachel Lebeaux

It’s a dreary June Monday here in New England — I hope the weather is better in your neck of the woods! This past week at SearchCIO.com, we examined methods for a successful business continuity plan, business intelligence applications and strategy and IT insourcing of previously outsourced IT jobs. Get links to the full stories below:

Business continuity plan needs the right leader, metrics to succeed – A successful business continuity plan requires business leadership, whose role includes setting the metrics that will drive disaster recovery spending.

CIOs take business intelligence applications, strategy to next level – CIOs are advancing the capabilities of their business intelligence applications in various ways, including tackling self-service, real-time data and predictive analytics. Here’s how.

IT insourcing can bring jobs, cost savings back in-house, experts say – IT insourcing is on the rise as companies terminate IT outsourcing contracts or let them expire. Here’s why, and whether it might work for you.


June 19, 2009  2:10 PM

Lean methodologies for lean times

Karen Guglielmo

In lean times, companies should consider lean methodologies and tools to cut costs and improve processes. Lean BPM and Lean Sigma are two lean methodologies that allow companies to identify discrepancies and quickly improve business processes.

Lean BPM, according to Clay Richardson, senior analyst for Forrester Research, is the practice of “trimming the fat off of bloated BPM initiatives.” In a recent survey of 95 IT decision makers, Richardson found that companies are being asked to implement more BPM initiatives even as their project budgets and resources are cut. More than half of the respondents said their BPM budgets were being reduced while demand was going up.

For Lean BPM to work, you have to think lean. Richardson suggests that companies get the most out of their Lean BPM plans by adding nothing but value to projects, focusing only on the people who add value, and auditing staff to make sure the right skill sets are in place among process analysts. All of these steps, and possibly adding a formal BPM Center of Excellence, will help ensure Lean BPM success in the enterprise.

Another way companies can improve processes in these lean times is through Lean Sigma. Unlike Six Sigma, a customer-focused methodology applied to longer-term projects, Lean Sigma focuses on short-term gains by identifying defects and eliminating waste from processes. Today’s business leaders are looking for these short-term gains and often don’t have the time, money or resources to invest in longer-term projects like Six Sigma.

In an economy where companies are constantly struggling to do more with less, lean methodologies like Lean BPM and Lean Sigma are just two examples of how some companies are successfully leveraging limited money and resources for quick gains. How many other ways can companies trim the fat, be lean and remain competitive in today’s economy? What other “lean processes” or “lean tools” have you found effective?

