August 6, 2010  11:56 AM

What’s data fungibility got to do with delivering business insight?

Linda Tucci

What does data fungibility have to do with delivering business insight? No, really, I’m asking.

According to Burton Group analyst Lyn Robison, one reason CIOs are struggling to deliver business insight to the business — as opposed to information — is technology’s misguided relationship with data. IT professionals of a certain age, he said, tend to view data as “sawdust,” a byproduct of the processes that information systems so brilliantly automate.

“Many IT professionals still haven’t realized that we actually store this data and can do useful things with it,” said Robison, who presented his views at last week’s Catalyst conference in San Diego.

For process-oriented IT pros, data is an interchangeable commodity, to be shoveled into databases just as oil is pumped into steel barrels — or at best, organized by type like cut lumber in a warehouse, one plank as good as another.

“The real world is filled with unique things that we must uniquely identify, if we are going to capture those aspects of reality that are important to us,” Robison said. To be useful, data needs to be a snapshot of reality. Nonfungible assets, unlike fungible commodities, need to be identified individually. And the IT department needs to manage those identifiers so the business can zero in on the data that matters. Fungibility matters.

So, what’s fungible? Currency, for example, usually is considered fungible. One $5 bill is as good as another. Buildings are nonfungible. Transactions are nonfungible. Customers are nonfungible. When nonfungible assets are treated like fungible commodities, the consequence is “distortion and incomplete information,” Robison said.

A large university Robison worked with recently discovered it was paying costly insurance premiums for five buildings it no longer owned, because its information systems managed the university’s buildings as interchangeable, he said. A Florida utility company paid out millions of dollars to the families of a couple tragically killed by a downed pole’s power line — only to discover afterwards that another entity owned the pole. “The liable entity got off, because the utility poles around that metro area were not uniquely identified,” he said.

It turns out, however, that discerning the difference between fungible commodities and nonfungible assets is not as clear-cut a task as it might appear, Robison conceded. “Defining fungibility is something of an art,” he said. Just like in life, context is everything.

However, the bigger problem in managing data to deliver business insight, according to Robison, is that today’s enterprise systems do not identify nonfungible data assets “beyond silo boundaries.”

“Primary keys are used as identifiers, but are not meant to be used beyond the boundaries of any particular database silo,” he said.
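Robison’s point can be sketched in a few lines of Python (the tables, keys and asset names below are all invented for illustration): an auto-increment primary key is meaningful only inside its own silo, while an identifier minted once per real-world asset, such as a UUID, survives across silo boundaries.

```python
import uuid

# Invented example: two silos each track a "building 42" with their own
# auto-increment primary keys. The integer 42 means something different
# in each silo, so it cannot identify an asset beyond silo boundaries.
facilities_silo = {42: {"name": "Chemistry Hall"}}
insurance_silo = {42: {"name": "Old Gymnasium"}}  # same key, different building

# An identifier minted once per real-world asset works across silos.
chemistry_hall_id = uuid.uuid4()
facilities = {chemistry_hall_id: {"name": "Chemistry Hall", "owned": True}}
insurance = {chemistry_hall_id: {"premium_usd": 12_000}}

# A cross-silo question ("are we insuring anything we no longer own?")
# now has an unambiguous answer.
insured_and_owned = [
    asset_id for asset_id in insurance
    if facilities.get(asset_id, {}).get("owned", False)
]
```

The university’s phantom insurance premiums are exactly the failure mode of the first two dictionaries: the same local key pointing at different real-world things.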

After his presentation, I learned that Robison has developed something he calls the methodology for overcoming data silos (MODS), “a groundbreaking project structure for bridging data silos and delivering integrated information from decentralized systems,” according to his recent paper on the topic. You can hear Robison talk about using MODS here. Let me know what you think.

Oh, and how you distinguish between the fungible and the nonfungible.

August 5, 2010  9:41 PM

Enterprise adoption of the public cloud hinges on liability policies

Laura Smith

Of all the potential showstoppers to enterprise adoption of the public cloud — including such well-touted concerns as security, interoperability and portability — liability policies have emerged as the one most likely to derail progress. It doesn’t take an actuarial degree to predict that at some point, the cloud is going to go down — whether for routine service or by malicious intent. The question is, who is responsible for damages?

Because they are designed to serve the masses, large clouds like Amazon’s Elastic Compute Cloud, or EC2, have standard service-level agreements that may refund businesses for time lost, but that’s pennies compared with the business that could be lost during an outage. Enterprises want to shift some of the financial risk to public cloud providers, but with interest in cloud services growing, providers have little incentive to change their business models, according to Drue Reeves, director of research for the Burton Group in Midvale, Utah. The issue was brought home by Eli Lilly’s decision last week to walk away from Amazon Web Services (AWS) after its negotiations failed to push onto AWS some accountability for network outages, security breaches and other risks inherent in the cloud. In the article reporting the move, an AWS spokesperson denied that Eli Lilly was no longer a customer.

At the moment, there isn’t enough jurisprudence to decide who pays for what, Reeves said, so he gathered a panel of lawyers and cyber insurers to comment on what has been deemed the Wild West of computing at the Burton Group’s Catalyst conference in San Diego last week. Heck, Rich Mogull, analyst and CEO of Securosis LLC, a consultancy in Phoenix, even called the public cloud a seedy bar.

“We don’t really have cloud law,” said Tanya Forsheit, founding partner of the Information Law Group in Los Angeles. “It’s going to happen. . . .[S]ome big breach involving a large provider will result in a lawsuit, and we might see principles coming out of that,” she said. Until then, negotiation is the order of the day around liability policies, she added.

Indeed, there have been 1,400 “cyber events” since 2005, according to Drew Bartkiewicz, vice president of cyber and new media liability at The Hartford Financial Services Group, a financial services and insurance company in New York. “If you had an event in 2005, you’re lucky,” he said. “The severity over the last two years is starting to spike. This is an exponentially growing risk.” With so much information flowing around the clouds, supply chains become liability chains, he added. “The question is, who is responsible for information that’s flowing from one cloud to another when a cloud goes down?”

The answer comes down to contracts, and what should be considered a reasonable standard of care, Forsheit said. “Have we reached a point where encryption is the standard?” she asked.

But enterprises aren’t the only ones at risk in the cloud: If the large providers are forced to indemnify businesses, the game will be over, Reeves predicted. The industry needs to figure out how to share the risk in order for the cloud market to mature. “Otherwise, the cloud becomes this limited place where we put noncritical applications and data,” he said. “If we don’t address this issue of liability, we’re stuck.”

We’ll be following the issue of liability policies in the cloud. Do you have a story that needs to be told? Contact me.

July 30, 2010  12:52 PM

Catalyst Conference: Is the new BI about less automation — or more?

Linda Tucci

I’m here on the business intelligence track at the Burton Group’s Catalyst Conference, trying to sort out the old BI from the new. As you might expect, there is a lot of talk about predictive analytics and complex event processing. The data warehousing of the past is done! Accessing “data on the fly” so the business can nimbly navigate the new normal is in. Yesterday’s theme was what IT needs to do to deliver business insight, not just business intelligence.

At the first session I learned that the old BI — BI at the dawn of computers — was there to help companies automate and become more efficient by taking the human factor out. The side benefit was that it reduced the tasks that humans had to think about (presumably so they could think about even harder questions). The approach was highly successful, but it did not anticipate the massive amount of data that businesses accumulate. The automation paradigm has run its course. The focus of the new BI should not be on removing the human to gain efficiency — those efficiencies have been realized — but getting the human back in the game. And not by handing the business another static (yawn) report that itemizes or narrowly analyzes data. The business doesn’t want to wait for answers from IT. The new BI is not about delivering answers at all, but about building architectures and tools that allow individuals to discover the salient pieces of the data. IT should focus on finding more powerful ways to assemble data to help discover why something happened, not just what happened. That was one track I heard.

By the next session, I was hearing that BI needs to automate more, by using complex event processing (CEP) tools to correlate tons of information that will allow businesses to take real-time automated action. Instead of getting out of the way of the business, IT needs “to lead the way” on complex event processing, according to the analyst. Some industries are already deep in CEP. Casinos do it well. The airlines are getting better at it. Of course, the financial services industry nearly brought the world economy to an end, partly by doing this. But I didn’t hear much about risk on the BI track. Or about privacy issues related to the stores of personal data required to turn complex event processing into something that helps a business improve customer service.

When I asked about the danger of taking automated action based on potentially bad data — on the kind of scale, mind you, that we saw in the financial services industry — I heard about “feedback” loops that adjust actions according to mistakes. Consider, I heard, how complex event processing can reduce risk by correlating data to instantly alert a theater of a fire and activate safety mechanisms, thus minimizing the loss of life. Unless the data is wrong, I was thinking, and the automated response causes a needless stampede to the exits.
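The feedback-loop answer I heard can be made concrete with a minimal sketch of a complex-event-processing rule (sensor names, the window length and the threshold are all invented): rather than acting on a single reading, the rule takes automated action only when independent sensors corroborate each other within a short time window, which is one simple guard against the bad-data stampede scenario.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical CEP rule: fire response triggers only when at least
# `min_sources` distinct sensors report within a sliding time window,
# so one faulty reading alone cannot empty the theater.
class FireAlarmRule:
    def __init__(self, window_seconds=30, min_sources=2):
        self.window = timedelta(seconds=window_seconds)
        self.min_sources = min_sources
        self.events = deque()  # (timestamp, sensor_id), oldest first

    def observe(self, timestamp, sensor_id):
        self.events.append((timestamp, sensor_id))
        # Drop events that have fallen out of the correlation window.
        while self.events and timestamp - self.events[0][0] > self.window:
            self.events.popleft()
        # Correlate: how many distinct sensors reported within the window?
        sources = {sid for _, sid in self.events}
        return len(sources) >= self.min_sources  # True -> take automated action

rule = FireAlarmRule()
t0 = datetime(2010, 7, 30, 12, 0, 0)
alarm1 = rule.observe(t0, "smoke-lobby")                         # one sensor: hold off
alarm2 = rule.observe(t0 + timedelta(seconds=5), "heat-stage")   # corroborated
```

Of course, a corroboration threshold only reduces the odds of acting on bad data; it doesn’t eliminate them, which is the risk the session left unexamined.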

One aspect of BI that just about everybody seemed to agree on: Data is precious. Or, as I heard at today’s session about fungible and nonfungible data, “Data is not the sawdust of processes.” More tomorrow, about the difference between fungible and nonfungible data. (P.S.: It’s not as clear-cut as you might think.)

Let us know what you think about this post; email Linda Tucci, Senior News Writer.

July 29, 2010  9:41 PM

How mega data center construction is tied to taxes

Laura Smith

Massive data center construction is happening in places where power is cheap and taxes are low, like Dublin, Ireland. That’s where Microsoft built a 300,000-square-foot data center to support European cloud services on the Windows Azure platform. Mega data centers are becoming the trend — Intel says a quarter of the chips it sells will go into them by the end of 2012.

People can wax poetic about the cloud, but the services flying over the Web touch down on a piece of physical equipment somewhere. Consider Digital Realty Trust, a provider of data centers (move-in or custom) with more than 15 million square feet of space in 70 locations worldwide. Its data center facility in Chicago is the city’s second-largest consumer of power, behind O’Hare International Airport.

What’s scary is the prospect of a bomb being able to wipe out a mega data center and all the information in it. Or a hack. Granted, these data center behemoths are paired — mirrored to a secondary site that’s close enough to avoid latency, depending on the application and connectivity — so that if a disaster occurred at one site, the company could recover data from the other. Still, that’s a far cry from the distributed nature of the Internet, which was designed with ubiquitous connectivity so that no single (or multiple) node failure could disrupt operations. Of course, high-quality connectivity is still very expensive, so a distributed network of bandwidth-hungry mega data centers may not be the best way to go.

Physical security is just one issue; another concern is the threat of taxes that may be imposed after a mega data center is complete. When Washington state ruled last year that data centers were no longer covered by a sales tax break for manufacturers and imposed a 7.9% tax on new construction, Microsoft migrated its Windows Azure cloud computing infrastructure from its data center in Quincy, Wash., to its 475,000-square-foot facility in San Antonio before opening a 700,000-square-foot mega data center in Chicago.

Google is thinking of moving out of North Carolina for similar reasons, according to Mike Manos, Microsoft’s former director of data center services, who is now senior vice president of Digital Realty Trust. In his blog, Loose Bolts, Manos writes, “While most people can guess the need for adequate power and communications infrastructure, many are surprised that tax and regulation play such a significant role in even the initial siting of a facility.”

And when other parts of the country — or world — begin to offer tax incentives for building mega data centers in their backyards, being able to move workloads from one data center to another would make good economic sense. However, this requires a software layer that Google and others are still working on. “Something this cool is powerful,” Manos writes. “Something this cool will never escape the watchful eyes of the world governments.”

Reading Manos’ post, I thought of the PODs (point-of-distribution data centers) being marketed by the likes of IBM and Hewlett-Packard: shipping containers retrofitted with all the CPU power, networking and cabling, and water and air cooling within. I imagined them stacked on barges, anchored in the world’s cheapest ports. But Manos had already thought of that: “Whether boats, containers or spaceships, all of these solutions require large or large-ish capital solutions. Solve the problem in software once, then you solve it forever.”

Let us know what you think about this post; email Laura Smith, Features Writer.

July 22, 2010  7:59 PM

Readers respond to practice and philosophy of IT chargeback

Laura Smith

My thanks go out to readers who were charged up enough to write about the IT chargeback series. Whether you do showback, partial chargeback or chargeback for a profit; charge by usage or subscription for internal or external services; or wouldn’t go near the model with a 10-foot pole, it’s clear that the issue isn’t cut and dried.

While several vendors wrote with words of thanks for illuminating a tense topic, other notable correspondents reflected on subjects as diverse as accounting principles, the philosophy of chargeback and the evolution of the IT department.

Chris Puttick, CIO of Oxford Archaeology (OA), a U.K.-based consultancy of archeologists working in Europe and Africa, questioned the perception of IT within a chargeback environment. “I’m surprised you didn’t have more people picking up on the negative side of chargebacks, i.e., you are presenting IT as a cost rather than as something of value,” he wrote. (To learn how the thriving business of archaeology influences OA’s IT strategy, watch for my profile in August.)

Puttick specifically commented on chargeback by per-unit pricing, which is not as simple as it seems because it fails to acknowledge the savings present in larger deployments. “CFOs do not make accounting principles simple, they make them accurate,” he wrote.

Another letter came from a veteran of the IT chargeback debate, whose message was salient, sympathetic and solution-oriented.

“I’ve been working in the chargeback field since 1985, and your article is one of the few that relates the problems to something everyone can understand,” wrote Sandra Mischianti, director of research and development at Nicus Software Inc. in Salem, Va. “I usually use the restaurant analogy. (Set the prices for the entrees and do not attempt to measure the grains of salt. But you need to know how much salt you use each month and track what it’s costing you.) Some of our clients get it, and some definitely do not.

“I also agree that most companies are not doing a great job,” Mischianti added. “As one client recently said at our conference, ‘IT chargeback is like teen sex: Everyone is talking about it, but no one is doing it well!’”

Mischianti noted that external cloud providers have created a great deal of pressure on IT departments, which are on tactical defense. “The real problem for the CIO is to demonstrate the value IT has over the cloud providers,” she wrote, listing issues such as integration with internal change control to anticipate changes, risk management, disaster recovery, etc.

“When individual departments move their applications into an external cloud, those systems don’t normally interface very well, and you get a bunch of disparate systems, like we had in the early 1990s,” Mischianti added. “Anyone who was in IT then can attest to why that didn’t work, but it seems we are headed in that direction again.”

IT can and should set the technology direction and ensure that applications can evolve and interface with every other application the CIO might decide to implement. But it’s very hard to put a dollar amount on that — and even harder to get someone else to pay for it.

What’s your value proposition? Look for chargeback best practices next week, and send me your thoughts.

July 22, 2010  3:02 PM

Chief data officers: Bringing data management strategy to the C-suite

Linda Tucci

John Bottega admits he’s a bit of a clotheshorse. The guy likes a quality suit. Actually, he is a connoisseur of fine suits, their fit, their style, their durability. The sleeve on a quality suit, for example, is cut to show a glimpse of shirt cuff. Crumple the pant leg of a quality suit, and it should spring back into shape, pretty much wrinkle-free. In fact, it’s the raw materials used and the workmanship employed that define the quality of a suit, or lack thereof, Bottega explains. The best materials plus superb workmanship, combined with a disciplined manufacturing process, make for a high-class suit.

Bottega is not in the garment business. But he’s a suit CIOs might just want to pay attention to.

A keynote speaker at the MIT 2010 Information Quality Industry Symposium, Bottega is vice president and the chief data officer (CDO) for the markets group at the Federal Reserve Bank of New York. Before that, he was CDO at Citigroup, the first person in the financial services industry to hold that position, according to his bio.

His disquisition on suits was just one of several analogies he used in his talk on “Information Quality and the Financial Crisis.” Quality raw material is data captured at the source. Quality workmanship is determined by the skill set of the data stewards. A quality manufacturing process needs to follow best practices for collecting and maintaining data. A high-class data supply chain is all about getting the right information to the right people, at the right place, at the right time.

The talk was interesting — he’s a skilled speaker. Bottega also has some strong ideas about data quality, as reported in my story today on data governance programs.

But what really perked up my ears was his job description. As CDO at the New York Fed, Bottega is responsible for the bank’s data management strategy, which, again quoting the official bio, “encompasses business, governance and technology in order to establish a sustainable business data discipline and technology infrastructure.”

Whoa, Nelly. Ain’t that the CIO’s job?

“Completely different role,” Bottega said when I caught up with him after his talk. “The genesis of the chief data officer was to bring 100% focus on a content and business issue, coupled with technology. Technology has been focused for years and years and years on the pipes and the engine. Banks and businesses are realizing there is a whole business component to data.”

The data supply chain includes technology, acquisitions, procurements, compliance, legal. “If no one person were focusing on it, it would be kind of a patchwork,” Bottega said. “No one owned the whole end-to-end data supply chain.”

The thinking behind establishing a data management office is that data is a separate and standalone discipline supported by technology, Bottega said, and “can stand alone as a corporate function.”

Of course, CIOs are chief information officers, I felt compelled to point out. And as businesses move from an analog to a digital world, why are CIOs not equipped to take data management strategy on?

“If you go back to the origination of the role, the CIO or the CTO was focused on the machines. I heard someone describe it as the engine room versus being on the deck,” Bottega said. He quickly added that having a chief data management officer does not minimize the importance of technology, nor is it meant as an indictment of the CIO or CTO.

“But think about it: CIOs and CTOs have to focus on so many pieces. This is just taking a chunk of this discipline and saying that data has grown so relevant to efficient operations that, gee, we need somebody focusing 100% of their time on it.”

July 16, 2010  3:03 PM

Chargeback process: An old model heralds changes in IT

Laura Smith

The old chargeback process, revived by cloud computing, could have major ramifications for IT organizations as they revamp to become centralized service providers, experts say.

Traditional chargeback models divide the IT budget for a business unit, for example, by the number of users in it. There are numerous ways to do this, from “showback,” where the business units see a bill that they don’t have to pay, to partial, full and for-profit chargeback models.

The chargeback process for cloud services is more complicated, because IT must measure consumption for workloads in a shared environment. And yet the technical challenges — such as applying metrics to IT services and billing the correct parties — pale in comparison to the cultural barriers IT organizations will face as they reorganize to remain relevant, according to Craig Symons, vice president and principal analyst at Forrester Research Inc. in Cambridge, Mass.

With outsourcing, offshoring and hosted Software as a Service, or SaaS, there are plenty of opportunities for business units to compare external service offerings to an internal IT bill. Enterprise IT departments need to be willing to contract with external providers and promote their own internal strategy, or risk “being marginalized as business units go elsewhere,” Symons said.

The first order of business for enterprise IT is to develop a reusable catalog of products and services around which metrics can be placed to charge back business units. Some IT organizations are hiring product managers to package products and services to do this. Symons suggests assigning an account manager to each business unit to go over the bills, so more intelligent discussions can be held. Other new roles will include cloud services procurement and vendor management, as IT becomes a centralized provider of technology services — or what has been termed by some as “IT as a solutions broker.”
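A minimal sketch of what such a catalog-plus-metrics arrangement could look like in practice (all the service names, unit prices and usage figures below are invented for illustration):

```python
# Hypothetical service catalog with unit prices, and metered consumption
# per business unit for one month.
CATALOG = {
    "vm_hour": 0.12,           # $ per virtual-machine hour
    "storage_gb_month": 0.05,  # $ per GB-month of storage
    "helpdesk_ticket": 15.00,  # $ per resolved ticket
}

usage_by_unit = {
    "marketing": {"vm_hour": 2400, "storage_gb_month": 500, "helpdesk_ticket": 12},
    "finance": {"vm_hour": 900, "storage_gb_month": 2000, "helpdesk_ticket": 3},
}

def monthly_bill(usage):
    """Itemized bill a unit would see (showback) or actually pay (chargeback)."""
    lines = {svc: qty * CATALOG[svc] for svc, qty in usage.items()}
    return lines, round(sum(lines.values()), 2)

bills = {unit: monthly_bill(usage) for unit, usage in usage_by_unit.items()}
```

The itemized lines are what gives the account manager something concrete to walk through with each business unit, rather than a single opaque allocation.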

Let me know if your IT department is undergoing a role change due to the shift to external service providers, and how you are dealing with it; email me.

July 15, 2010  4:08 PM

Does IT chargeback pave the way to outsourcing?

Linda Tucci

If your organization is talking about moving to an IT chargeback or a supply and demand model for IT, could that be its first step on the road to outsourcing?

That was an interesting sidelight raised by a story I did this week on a company that cut millions of dollars in technology spending by splitting its IT organization into a demand side and a supply side.

To back up a bit, the management experts I interviewed about this “supply demand” model agreed it can be an effective way to go. Demand-side IT works with the business on an IT roadmap and negotiates with supply-side IT on getting it done. Done right, the model recognizes that requests for IT services must come from the business, even as it imposes fiscal and strategic discipline on those requests. In turn, supply-side IT people become expert in their areas. Relieved of having to respond to a chaotic barrage of business demands, supply-side IT staff can focus on efficiency and competitive advantage — the stuff that makes internal IT departments relevant. That’s the ideal, anyway.

In practice, the model has the potential to disenfranchise the supply side of IT, a former CIO in the financial services industry told me. “I’ve been in situations where the supply side is viewed as a commodity,” said Jack Santos, who’s now a research vice president at Gartner Inc.’s Burton Group. “The thinking [from the business] was that this was the first step to outsourcing.”

Mark McDonald, Santos’ colleague at Gartner, saw it a bit differently, arguing that IT chargeback, per se, can pave the way to outsourcing IT, because the focus is strictly on price: “When I do IT chargeback, I make my pricing visible, which automatically means it will be compared to external providers,” he said. On price alone, the internal IT organization is almost guaranteed to be noncompetitive, Santos and McDonald said, especially when the IBMs and CSCs of the world typically underbid for the first couple years to get the business.

No matter, said Bruce Barnes, another former financial services CIO, who now runs a consulting practice out of Ohio. The wave of the future is that IT organizations are getting smaller, not bigger, and the piece that survives is business literate, he said. Being a consultant, he naturally offered up a four-box matrix to explain. “One axis is running from simple to complex, and another from generic to highly proprietary,” he said. In the past, internal IT was in the lower left-hand quadrant — simple and generic — and the business hired consultants for the high-level stuff. The upper right-hand quadrant — complex and highly proprietary — is where internal IT people need to be. The rest can be given away if the business finds somebody good enough to take it.

July 9, 2010  2:32 PM

IT chargeback: A political hot potato is tossed up by cloud computing

Laura Smith

A question has been nagging me since I attended Cloud Expo in New York: What metrics can IT departments use to charge back business units for cloud services? Measured service is fundamental to the National Institute of Standards and Technology definition of cloud computing, and enterprises building private clouds presumably will bill business units for their consumption of computing resources.

When I called CIOs and analysts about IT chargeback models, I didn’t expect to unearth such passionate arguments for and against chargebacks, a political hot potato at the heart of the IT-business relationship. Some say IT chargeback is not only inevitable but mandatory for achieving true efficiency; others say it sets up a charged relationship in which business units naturally second-guess IT departments’ pricing for services. With such public clouds as Amazon’s Elastic Compute Cloud, or EC2, a credit-card click away, an IT staff risks losing an opportunity to guide IT strategy.

In this economy, IT departments need to prove their investments support the strategic imperatives of the business. Therefore, IT chargeback metrics need to reflect the desired business outcomes, often in business terms. In a Software as a Service model, for example, the metrics wouldn’t refer to the disk and memory usage so much as to the number of customer requests responded to within a given period of time. For chargeback to be effective, both IT and business strategists need to collaborate on appropriate metrics; that can be challenging because they often don’t speak the same language. In fact, one source said the growing role of a CIO is that of a translator between IT managers and the business.
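One way to picture the difference between resource metrics and business-outcome metrics is a sketch like the following (the target window, price and request log are all invented): the unit is billed for customer requests answered within a target time, not for the disk or memory that work consumed.

```python
from datetime import timedelta

# Hypothetical business-outcome chargeback metric for a SaaS model:
# bill per customer request handled within the target response window.
TARGET = timedelta(minutes=5)
PRICE_PER_MET_REQUEST = 0.02  # $ per request answered within target

# (business_unit, response_time) pairs from a request log
requests = [
    ("sales", timedelta(minutes=2)),
    ("sales", timedelta(minutes=9)),   # missed the target: not billed
    ("support", timedelta(minutes=4)),
]

def bill(unit):
    met = sum(1 for u, rt in requests if u == unit and rt <= TARGET)
    return round(met * PRICE_PER_MET_REQUEST, 2)
```

A metric framed this way is legible to both sides, which is exactly why defining it requires the IT-business collaboration (and translation) the sources described.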

IT chargeback isn’t as simple as it sounds. My reporting blossomed into a series of stories beginning this week that included IT chargeback metrics, the pros and cons of chargebacks, the contentious relationship between IT and the business on the subject, and important takeaways. I learned that chargeback isn’t the be-all and end-all, but just one part of an IT cost management structure.

Do you or don’t you charge back? Send me an email; I want to hear all about it.

July 8, 2010  3:39 PM

Bumpy patch ahead for MDM software

Linda Tucci

Master data management software is about to enter a rough patch, according to analyst Andrew White, agenda manager for MDM and analytics at Gartner Inc. White explained to me in an email recently what the near future for MDM adoption looks like, and it is not necessarily pretty. The large software vendors, such as SAP, Oracle and IBM, are just now incorporating MDM into their product lines. They are doing this because the “early adopter” market is beginning to transition to the “fast followers.” That means that pretty soon MDM, in White’s words, will be “crossing the chasm” to become mainstream.

But White sees a big problem: Many IT shops will assume that MDM software is mature and ready for easy adoption — “and yet it is not,” he said. Because of this disconnect between the robustness of the MDM software and the expectations of CIO adopters, “failure rates will increase,” giving MDM a black eye. Moreover, true MDM will not occur for most users, who instead will end up with just another half-baked “data integration effort.”

“As such, MDM will get a bad rap and name in the coming year — perhaps because the big vendors have invested in it,” White wrote. Or, to put it another way, because they are selling it to you before its time.

CIOs who just must have MDM now, be forewarned, White said: you will need to do a lot of the work on your own.
