I’ve held off on mentioning this until now … but is the rain in New England ever going to stop? The bright side of this weather (pun fully intended) is that I’m probably more productive at work when I’m not staring longingly out the window at a sunny summer day … at least, that’s what I’m telling myself to get through the gray days.
If you’re trapped indoors like I am (or even if you’re not), check out the latest SearchCIO.com content this week on business intelligence strategy, PPM software, customer service satisfaction and our new FAQ on Six Sigma methodology, and let us know what you think!
Putting your business intelligence strategy to the test – Review our latest coverage of business intelligence and corporate performance management, then test your knowledge with this quiz.
How PPM software usage changes as firms grasp IT portfolio management – Nearly four in 10 large organizations use PPM software, but not all instances are created equal. New survey data shows what IT values most and how deployments mature.
Key to customer service satisfaction: Less complexity – Customer service satisfaction can be improved by simplifying complexity, according to business executives interviewed in this chapter of James Champy’s newest book, Inspire! Why Customers Come Back.
How does the Six Sigma methodology benefit IT? – The Six Sigma methodology has helped companies improve customer service and eliminate errors for years. Learn how IT can reap the benefits of this service-driven methodology.
Healthcare actually pushed the Iranian elections out of the top news slot this week. Most of the attention has been on the administration’s efforts to establish a government insurance plan of last resort. But in the background, the health information technology (HIT) effort continues to boil along, with a lot of action and not much clarity emerging on standards for electronic health record (EHR) software.
Dr. John Halamka, noted CIO of CareGroup Healthcare System, reported on the second meeting of the HIT Standards Committee, of which he is a member. The committee is currently engaged in a four-dimensional exercise: drilling deep into the information space that healthcare inhabits to understand what data in what format has to be interchangeable, all the while trying to understand how these standards will develop over time. It’s a Herculean task, even at the 50,000-ft. level, and I’ll be very interested to see where they are able to make progress and where they stall. Halamka points out that already people are beginning to see that EHR won’t progress unless complementary initiatives like lab results data standardization also proceed apace.
Meanwhile, the standardization picture is far from clear. Neil Versel is hearing rumors that CCHIT, the presumptive favorite certification body for EHR software, may be sidelined or augmented by the Office of the National Coordinator for Health Information Technology, the federal agency overseeing the Recovery Act initiatives in electronic healthcare. One hospital IT director I’ve spoken with says he expects the Joint Commission, which accredits hospitals, to step into the fray. That might be a culture shock: The Joint Commission has a reputation for rigor in its clinical inspections that could be a rude awakening for software vendors.
The next few months will hopefully clarify just what IT needs to do to demonstrate meaningful use of healthcare IT. Meanwhile, IT organizations are not standing still. Alex Barrett wrote recently about Boston-based Beth Israel Deaconess Medical Center’s EHR project to get private physicians integrated into its systems. BI-Deaconess is making creative use of server virtualization to build an infrastructure that can grow and adapt as it gains acceptance and use. This is no small problem for architects who face uncertain usage targets and unknown ramp-up times.
Chris Griffin documents an interesting culture shift in healthcare IT, which he describes as a “culture that puts a big emphasis on software applications, rather than on hardware and a holistic view of the computing environment.” That’s probably a dysfunctional approach for IT departments that will face increasing pressure to store more patient data from imaging and other diagnostic procedures, and to retain it for longer and longer periods to meet regulatory compliance requirements.
For those keeping an eye on IT outsourcing and offshoring, there were a couple of noteworthy pieces of news this week regarding the artist formerly known as Satyam Computer Services Ltd.
First, Tech Mahindra Ltd., which purchased the troubled IT outsourcing company two months ago, following the Satyam scandal, has officially rebranded it as Mahindra Satyam.
Secondly – and, I think, more importantly – Satyam is looking to cut jobs if orders coming into the company continue to languish. According to this article, 8,500 employees placed in a so-called “virtual pool” might see their positions eliminated in six months if the company fails to find them work.
Satyam’s staffing troubles aren’t surprising — it must be difficult to woo new Satyam customers or retain those with expiring contracts, given the past transgressions of company leaders. Considering all that, I’m a little surprised that the new owners are leaving Satyam in the company’s name at all. As a point of comparison, after ValuJet Airlines experienced a series of safety problems and the fatal crash of ValuJet Flight 592 into the Florida Everglades, it changed its name and is now operating as AirTran Airways.
Since the Satyam scandal broke early this year, IT outsourcing and offshoring clients have struggled to parse through fact and fiction, protect existing contracts and wise up when pursuing new IT outsourcing deals. As the recession deepened, we began to hear that companies were seeking cheaper rates, sometimes in exchange for more flexibility on the part of the outsourcer in how work is completed. More recently, it seems that insourcing – bringing previously outsourced IT work back in-house – is on the rise.
So what role has the Satyam scandal played in these trends? I recently asked Ben Trowbridge, CEO of Alsbridge Inc., a U.S.-based IT outsourcing and business process optimization consulting firm, whether the scandal was sticking in his clients’ minds.
“Yes – but it’s amazing how short a memory clients have for bad news,” Trowbridge replied. “Within a month of that being brought to a head, it was like everybody had forgotten about it.”
This wasn’t the answer I was expecting. Google’s AdWords tool tells me that the term Satyam is still being searched quite a bit. So I’m putting the question out to enterprise CIOs: Has the Satyam scandal had any effect upon your company’s IT outsourcing and offshoring activities in the past six months? I’d love to hear your stories.
When my inbox began filling up with all the theories of why BI SaaS vendor LucidEra is expected to close down by month’s end, I couldn’t help thinking that the more things change (in name, at least), the more they stay the same.
LucidEra is in part a victim of a down economy, just as application service providers (ASPs) were in the late ’90s/early 2000s when the dot-com bust happened and VC funding started to dry up.
Like the ASPs USinternetworking and Corio, LucidEra was one of the first to the SaaS BI parade. It had to lay new ground in many ways: The Web technologies that today’s SaaS vendors tap into weren’t around when LucidEra got started, so the company had a bigger learning curve and had to do a lot of the development itself.
LucidEra told ThinkStrategies’ Jeff Kaplan that newer kids on the block learned from LucidEra’s mistakes and could skip many of the development cycles and bumps in the road that the company had to go through.
Back when next-generation ASPs such as Salesforce.com were getting started, they certainly didn’t go out and buy large data centers to essentially foot the infrastructure bill for enterprise customers, or try to retrofit Oracle’s or SAP’s licensing model to fit a multi-tenant one, as first-generation ASPs had.
No, they, and other ASPs — now called SaaS vendors — learned from the mistakes of first-to-market ASPs like USinternetworking (USi), now part of IBM, and Corio, also now part of IBM.
USi and Corio came out the other side, but others simply disappeared. One was the ASP FutureLink, a company many had high hopes for, including Microsoft, which sank $10 million into it.
But all the buzz around so many of these players didn’t bring in enough customers to support them all.
Similarly, today there is a lot of interest in business intelligence and in the SaaS model. But is there enough interest to support all of the SaaS BI vendors?
Economy and customer adoption aside, LucidEra had a unique set of circumstances, including hitching its wagon to Salesforce.com. It is always risky to ride the coattails of another company, as USinternetworking and Corio found out by relying so heavily on Oracle and SAP.
And LucidEra did choose a niche in sales analytics. “One problem that [LucidEra] ran into was that not a lot of Salesforce.com customers saw the value-add of what they had to offer,” Kaplan said. “And to a greater extent, a lot of folks today think having analytics is a luxury they can do without.”
Some competitors believe that LucidEra’s downfall was its older code, developed in the late 1990s by Broadbase Software, the argument being that such code was not designed for the SaaS model. “I believe that it is difficult to retrofit a SaaS approach to an existing architecture and, unless designed as a SaaS application – multi-tenant, SOA, layered architecture that can scale horizontally – cost-effectively scaling the solution is incredibly hard,” said Wayne Morris, CEO of SaaS business intelligence vendor myDials. Morris expands on what went wrong at LucidEra in a post on his company’s blog.
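The multi-tenant design Morris describes, one shared deployment with every customer’s data partitioned by a tenant identifier, can be sketched minimally in a few lines. All names below are illustrative assumptions, not code from any of the vendors mentioned:

```python
# Minimal sketch of row-level multi-tenancy: one shared data store,
# every row tagged with a tenant_id, every query scoped to one tenant.
# All names here are illustrative, not taken from any vendor's code.

class MultiTenantStore:
    def __init__(self):
        self.rows = []  # shared storage serving all tenants

    def insert(self, tenant_id, record):
        self.rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Scoping every read by tenant_id is what lets a single
        # deployment serve many customers without leaking data.
        return [r for r in self.rows if r["tenant_id"] == tenant_id]

store = MultiTenantStore()
store.insert("acme", {"metric": "pipeline", "value": 120})
store.insert("globex", {"metric": "pipeline", "value": 75})
print(store.query("acme"))  # only Acme's rows come back
```

Scoping every read and write to a tenant identifier is what lets one deployment, and one operations bill, serve many customers; retrofitting that partitioning onto a codebase built for a single customer is the difficulty Morris is pointing to.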
Meanwhile, Brad Peters, CEO of SaaS business intelligence vendor Birst, chalks up LucidEra’s expected shutdown to its standalone analytic software approach: Most companies need to analyze data from multiple sources, he said, something LucidEra’s software wasn’t set up to do.
All in all, the comments I’ve seen on blogs say this is not a sign of the on-demand model’s going away — not by a long shot — but a demise that happens naturally when a lot of companies crop up in one space. There are bound to be some that just don’t cross the chasm, as Geoffrey Moore would say.
It’s a dreary June Monday here in New England — I hope the weather is better in your neck of the woods! This past week at SearchCIO.com, we examined methods for a successful business continuity plan, business intelligence applications and strategy and IT insourcing of previously outsourced IT jobs. Get links to the full stories below:
Business continuity plan needs the right leader, metrics to succeed – A successful business continuity plan requires business leadership, whose role includes setting the metrics that will drive disaster recovery spending.
CIOs take business intelligence applications, strategy to next level – CIOs are advancing the capabilities of their business intelligence applications in various ways, including tackling self-service, real-time data and predictive analytics. Here’s how.
IT insourcing can bring jobs, cost savings back in-house, experts say – IT insourcing is on the rise as companies terminate IT outsourcing contracts or let them expire. Here’s why, and whether it might work for you.
In lean times, companies should consider lean methodologies and tools to cut costs and improve processes. Lean BPM and Lean Sigma are two lean methodologies that allow companies to identify discrepancies and quickly improve business processes.
Lean BPM, according to Clay Richardson, senior analyst for Forrester Research, is the practice of “trimming the fat off of bloated BPM initiatives.” In a recent survey of 95 IT decision makers, Richardson found that companies are being asked to implement more BPM initiatives, while having their project budgets and resources cut at the same time. More than 50% of the 95 IT decision makers said that their BPM budgets were being reduced, while the demand was going up.
For Lean BPM to work, you have to think lean. Richardson suggests that companies get the most out of their Lean BPM plans by adding nothing but value to projects, focusing only on the people who are adding value, and auditing staff to make sure the right skill sets are in place for process analysts. All of these steps, along with possibly adding a formal BPM Center of Excellence, will help ensure Lean BPM success in the enterprise.
Another way companies can improve processes in these lean times is through Lean Sigma. Unlike Six Sigma, a customer-focused methodology applied to longer-term projects, Lean Sigma focuses on short-term gains by identifying defects and eliminating waste from processes. Today’s business leaders are looking for these short-term gains and often don’t have the time, money or resources to invest in longer-term projects like Six Sigma.
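The defect focus that Lean Sigma inherits from Six Sigma is usually quantified as defects per million opportunities (DPMO). As a quick worked example, with hypothetical numbers:

```python
# DPMO (defects per million opportunities), the standard Six Sigma
# defect-rate metric. The numbers below are hypothetical.

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 25 defects across 1,000 orders, with 5 defect opportunities
# per order (wrong item, wrong address, late shipment, etc.)
print(dpmo(25, 1_000, 5))  # 5000.0 DPMO
```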
In an economy where companies are constantly struggling to do more with less, lean methodologies like Lean BPM and Lean Sigma are just two examples of how some companies are successfully leveraging limited money and resources for quick gains. How many other ways can companies trim the fat, be lean and remain competitive in today’s economy? What other “lean processes” or “lean tools” have you found effective?
CIOs looking for info on the how-tos of business continuity (BC) and disaster recovery (DR) have a wealth of literature at their Googling fingertips. On a mission yesterday to learn more about industry benchmarks for a recovery point objective (RPO) and recovery time objective (RTO), the metrics that in principle help determine a company’s recovery strategy, I found business continuity guidebooks galore. The U.K.-based Business Continuity Institute and the U.S.-based DRI International peddle tomes on BC. The industry bible, Business Continuity Planning Methodology by brothers Akhtar and Afsar Syed, can be had for $145 with 1-Click ordering.
Benchmarks for state-of-the-art RTO and RPO are one thing if your enterprise is already up to date with business continuity and disaster recovery. But these metrics — and affordable DR — can seem awfully abstract in the real world, I am discovering from CIOs, especially if the real world is the globe.
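For readers new to the two metrics: RPO bounds how much data you can afford to lose, so it constrains how often you back up; RTO bounds how long you can afford to be down, so it constrains the restore process itself. A minimal sanity-check sketch, with hypothetical numbers:

```python
# Hypothetical sanity check of a backup schedule against RPO/RTO targets.
# RPO: maximum tolerable data loss, measured as time since the last backup.
# RTO: maximum tolerable downtime before service is restored.

def meets_rpo(backup_interval_hours, rpo_hours):
    # Worst case, a disaster strikes just before the next backup runs,
    # losing everything written since the previous one.
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_time_hours, rto_hours):
    return restore_time_hours <= rto_hours

# Nightly backups and a 4-hour restore, against 24h RPO / 8h RTO targets
print(meets_rpo(24, 24))  # True: nightly backups just satisfy a 24-hour RPO
print(meets_rto(4, 8))    # True
```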
I recently got an email from Jiten Patel, CIO of the Foundation for International Community Assistance (FINCA International), an amazing not-for-profit that provides microfinance services to the world’s lowest-income entrepreneurs. You’d think with its 25-year successful history of village banking, not to mention high-profile support from uber-connected celebrities like actress Natalie Portman, FINCA would not be at a loss for good DR. And, indeed, setting an RPO and RTO benchmark for FINCA’s headquarters in Washington, D.C., “is an easy conversation.”
But there was Patel, on business in Baku, Azerbaijan, explaining the difficulty of providing affordable DR at its microfinance operations in 21 countries around the globe:
“DR options in the developing countries are fairly limited and expensive propositions, and not something we can afford to have in place in every country. And unlike in the US where we have the likes of Sungard and IBM who offer such services at a reasonable cost, this is not the case in countries where we operate — one has to buy a box, which may sit there ‘cobwebbed’ — which is not a viable option and it adds to the financial burden.
“In light of this my strategy has been to centralize our infrastructure on a region-by-region basis, to use 3rd-party hosting providers who can provide more robust redundancy options, and to try and negotiate more affordable DR pricing, which where we can makes it much more palatable for the likes of us.
“It would be a boon for all of us NGOs who operate globally, if someone like Sungard or IBM offered such services at very affordable rates around the globe: Offering it just in the US for far-flung global operations is not viable.”
The world may be getting flatter, but the playing field is far from level when it comes to DR. And it is just a hunch, but disasters that shred data must be common in countries where electricity on any given day is not a given.
This past week, SearchCIO.com looked at developing a business intelligence (BI) strategy, implementing project and portfolio management software, practicing innovation in a time of economic crisis, and addressing compliance requirements in cloud computing contracts. We also rolled out a brand new guide on BI and corporate performance management (CPM). Check out the stories below and let us know what you think!
Integrated business intelligence strategy spans app, BI developers – CIOs need to marry business application and BI development efforts to create a cohesive BI strategy, especially when adding analytical capabilities to enterprise applications.
PPM software helps university prioritize wide-ranging portfolios – Project and portfolio management (PPM) software helped the University of Utah centralize its IT initiatives, prioritize projects and establish useful portfolios. Learn how here.
James Champy: IT innovation in a time of economic crisis – IT innovation is critical in a time of economic crisis, says management guru James Champy. Find out how IT can enable the business to be successful and remain competitive in this video.
Addressing compliance requirements in cloud computing contracts – As CIOs look to cloud computing for data backup and storage, compliance requirements must be spelled out and met — or the data brought back down to earth.
BI services and solutions for enterprise CIOs – BI services and solutions and CPM software are growing quickly as effective means to gather and analyze data. Find out how to implement effective BI and CPM programs.
It would be suicide for a CIO to go to the CFO or CEO and say there’s no real return on our technology investments. But according to one industry expert, it’s the truth.
“If you just make a technology investment and don’t change the way you’re doing work, there’s no return on it,” said James Champy, author and chairman of consulting for Perot Systems Corp., during a recent interview at the MIT Sloan CIO Symposium. “The ROI doesn’t come from the investment in pure technology, but from the change in the nature of the work.”
According to Champy, the only way to measure the success of a technology investment is not through ROI, but through the realized improvements in business performance. And in the end, if you have a dramatic improvement in business performance, you usually have a significant ROI from your technology investments.
So what’s the best way for measuring business performance and communicating the role technology plays in its success?
Many companies use BI scorecards and dashboards as a formal means for measuring business performance in the enterprise. These types of tools allow companies to use data in a more productive way and better align technology goals with the needs of the business.
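At its simplest, a scorecard just compares each business metric against its target and flags the result. A hypothetical sketch, with metric names and numbers invented for illustration:

```python
# Hypothetical BI scorecard: compare actuals against targets and flag status.
# Lower is better for both metrics in this example.

targets = {"time_to_market_days": 90, "cost_per_order": 12.0}
actuals = {"time_to_market_days": 75, "cost_per_order": 14.5}

def scorecard(targets, actuals):
    # "green" when at or better than target, "red" otherwise
    return {
        metric: "green" if actuals[metric] <= target else "red"
        for metric, target in targets.items()
    }

print(scorecard(targets, actuals))
# {'time_to_market_days': 'green', 'cost_per_order': 'red'}
```

A real dashboard adds trend lines, drill-downs and data feeds, but the core of aligning technology goals with business needs is this comparison of measured performance against agreed targets.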
As far as communication goes, you should “go to your company executives and tell them ‘here’s a way we’ve used IT to get a product to market, or respond to a customer call the day it comes in, or reduce the cost of a process by 50%,'” advised Champy. These types of “wins” are great examples to show the business executives how work has been significantly improved by technology investments.
And that’s where the real ROI comes in – in the improvement technology investments make to business performance.
My story this week on the University of Utah’s project and portfolio management (PPM) program stood out from other PPM pieces I’ve reported because, in this case, the portfolio piece of the program got top billing.
The University of Utah categorized its IT initiatives across the university into 11 portfolios, in areas ranging from “architecture and security” to “instructional” and “user experience.” The entire program (which followed a campus-wide IT centralization effort) was completed in 10 months. Now, projects must go through the strategic portfolio process to be approved; it’s no longer a case of “whoever yells the loudest gets their money first.”
As we noted in the story, new SearchCIO.com reader research on PPM found that only 37% of 304 enterprise organizations define PPM as an enterprise-wide discipline used to select and prioritize investments in different parts of the company, including IT. The others use PPM only in IT (31%); don’t have a PPM practice at all (28%); or have one only outside of IT (5%).
It was also striking how that second “P” in PPM played such a huge role for the Salt Lake City-based university. A lot of organizations use PPM software mainly for project management and prioritization; the “portfolio” aspect doesn’t tend to play a big role. In our survey, 42% used PPM software, and of those, only 30% deemed portfolio management as a “very important” feature in PPM software system selection.
It sounds like this “portfolio first” approach at the University of Utah was a huge success. I’m told that an educational facility doesn’t necessarily view ROI in the kind of dollar terms other organizations do, but that they’re seeing results nonetheless. Resources (i.e., people) are being better utilized, and the right projects – those benefiting the university as a whole – are being completed in a timelier manner.
So why don’t more organizations focus on portfolio creation and management in PPM software purchases and planning? I’d be interested in hearing your organization’s rationale in the comments section below, or e-mail me.