Storage Soup


June 7, 2013  11:58 AM

Panzura’s $25 million may be its last funding round

Sonia Lelii

Cloud storage gateway startup Panzura pocketed $25 million in a Series D funding round this week, and CEO Randy Chou said he expects it will be the last funding the company needs before becoming profitable.

Chou said he expects to double the company’s headcount to 200 people with the funding. He also plans to expand Panzura’s presence in Asia-Pacific and increase research and development for its products.

Panzura is one of several startups that popped up in the past few years selling gateways to move data to public clouds. The main function of these controllers is to translate cloud object storage to work with applications written to communicate with file and block storage. Others include Ctera Networks Ltd., TwinStrata, Nasuni Corp. and StorSimple Inc., which was acquired by Microsoft in late 2012.

Chou said Panzura’s business has accelerated since Microsoft purchased StorSimple last year. “The market picked up as a whole in the third quarter last year,” he said.

He said 75 percent of Panzura’s leads come from partners such as EMC, Hewlett-Packard, Hitachi Data Systems, Nirvanix, Dell, Google and Amazon.

In December, Panzura clinched a multi-million dollar deal with the Executive Office for U.S. Attorneys, a huge win that Chou said he hopes will open doors for Panzura in other areas of government. He attributed the deal largely to Panzura’s Storage Controller receiving Federal Information Processing Standard (FIPS) 140-2 security validation. The product also supports Advanced Encryption Standard (AES) encryption. The company also has a deal with the Department of Justice.

Founded in July 2008, Panzura raised $6 million in Series A funding in September 2008 and another $12 million in October 2010. Venture capital firm Meritech Capital Partners led the latest round, with participation from previous investors Matrix Ventures, Khosla Ventures, Opus Capital and Chevron Technology Ventures.

June 5, 2013  8:35 AM

Beware software-defined lock-in

Randy Kerns

“Avoid vendor lock-in” has long been a mantra in vendor marketing and competitive promotions. Vendor lock-in is equated with a lack of choices or an impediment to making a change in the future. The lack of choices results in:

• Paying more for the next product or solution
• Failure to keep up with and benefit from new technology
• Reduced support or concern from the vendor in solving a problem

These claims may be more fear-mongering from competing vendors than reality, although some companies have demonstrated the kind of reprehensible behavior generally associated with having a customer “locked in.”

Recent marketing hype in the information systems and management industry has focused on “software-defined something” as a means to avoid vendor lock-in. In this case, the lock-in being avoided is to hardware. Other valuable attributes are added to the software-defined message, but the most basic argument is the flexibility of running software on generic (general purpose) hardware.

In the case of software-defined storage (which has a wide range of meanings depending on which vendor is talking), the software seeks to take the value out of storage systems. The message is that removing the value from the storage system and using generic hardware and devices removes lock-in to a particular vendor’s storage system. Combined with the message that vendor lock-in is bad and costs more, the software-defined argument builds an appealing value proposition.

But the real question is: did the lock-in just get moved somewhere else? Rather than a storage system that is replaceable, albeit with effort to migrate data and changes to operational procedures, the lock-in may move to the software. In this case, the software determines where to place data. The software has control of all the fragments that are distributed across physical devices. The software in the storage system (embedded software, or firmware in an earlier-generation lexicon) and software-defined storage are doing essentially the same thing at one level.

If lock-in (as defined earlier) is being moved from a vendor storage system to software, the impacts of that lock-in need to be evaluated. One consideration is the long-term financial impact. Software has a support cost – either from a vendor or from the IT staff in the case of open source. Additionally, some software is licensed based on capacity. These charges continue for as long as the software is in use. Storage systems are typically purchased with a warranty that is often negotiated as part of the sale. It is common to get a four-year or five-year warranty. After that time, there is a maintenance charge. Some of the value-added features of the storage system are separately licensed, which may be annualized or capacity-based. This is a competitive area, however, and some vendors include the value-add software for their systems in the base price.
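To make the long-term comparison concrete, here is a minimal sketch in Python that tallies cumulative costs for a purchased storage system (warranty, then maintenance) against capacity-licensed software running on generic hardware. Every price, capacity, growth rate and warranty term in it is a hypothetical assumption chosen for illustration, not a vendor figure.

```python
# Minimal sketch of the lock-in cost comparison discussed above.
# All prices, capacities, growth rates and terms are hypothetical
# assumptions for illustration only -- not vendor figures.

YEARS = 7
START_TB = 100                # starting usable capacity
GROWTH = 0.30                 # assumed annual capacity growth

SYSTEM_PER_TB = 1000          # purchased storage system, price per TB
WARRANTY_YEARS = 5            # negotiated warranty period
MAINT_RATE = 0.15             # annual maintenance after the warranty expires

GENERIC_HW_PER_TB = 400       # generic hardware under software-defined storage
LICENSE_PER_TB_YEAR = 300     # capacity-based software license, billed yearly


def compare():
    cap = START_TB
    system_total = cap * SYSTEM_PER_TB    # up-front array purchase
    sds_total = cap * GENERIC_HW_PER_TB   # up-front generic hardware
    for year in range(1, YEARS + 1):
        # The capacity-based license is billed every year on current capacity.
        sds_total += cap * LICENSE_PER_TB_YEAR
        # The array is covered by its warranty at first, then pays
        # annual maintenance on the installed capacity.
        if year > WARRANTY_YEARS:
            system_total += cap * SYSTEM_PER_TB * MAINT_RATE
        # Both options buy more hardware for this year's capacity growth.
        added = cap * GROWTH
        system_total += added * SYSTEM_PER_TB
        sds_total += added * GENERIC_HW_PER_TB
        cap += added
        print(f"year {year}: storage system ${system_total:,.0f}  "
              f"software-defined ${sds_total:,.0f}")


if __name__ == "__main__":
    compare()
```

Plugging real quotes into these placeholders is what makes the comparison meaningful; the point is simply that the recurring license line compounds with capacity, while the purchased system front-loads its cost.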

Storage systems have had a consistent price decline over the years, transferring the economics of improving technology and the effect of competition to customers. Software typically does not see commensurate price reductions. It is seen as an annuity for the vendor in return for maintenance and updates.

The vendor lock-in message triggers emotion and rapid conclusions that may not represent reality. Deeper analysis of specific situations is required. The value of “compartmentalizing” information handling to allow technology transitions or transformations, rather than massive infrastructure changes that become inhibitors, cannot be discarded from consideration. The vendor lock-in message is not really that simple, and assuming the next new thing is the answer is not well thought out.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 24, 2013  8:09 AM

Scale-out NAS becoming an enterprise fixture

Randy Kerns

Enterprise storage systems with scale-out capability have been making an impact in IT environments and are a consideration in almost every evaluation of client storage strategies.

Although there are scale-out implementations of block and object storage, NAS has been the primary focus for enterprise scale-out storage deployments. Scale-out products range from the enterprise down to the SMB market. Some high-end scale-out NAS systems such as EMC Isilon and Hitachi Data Systems HNAS have made a transition from high performance computing (HPC) to enterprise IT.

Benefits of using scale-out NAS include:

• Performance scales in parallel with capacity so increases in capacity do not cause performance impacts requiring additional administrative effort to diagnose and correct.
• The continued increase in unstructured data can be addressed within a single administrative system, without increasing administrative effort and cost.
• New technology elements can be introduced and older ones retired without having to offload and reload data.

Not all NAS systems offered today are scale-out. Traditional dual-node controller NAS systems still fit many customer needs, and are usually kept as separate platforms from scale-out systems. It is easier to design a new scale-out NAS system than to adapt an existing design while maintaining its high-value features, although NetApp has shown that existing technology can be adapted with its Clustered Data ONTAP systems.

A common approach to building scale-out NAS is to adapt a distributed file system originally used in HPC and research environments. Considering the success vendors are having with their scale-out NAS offerings, it seems inevitable that a majority of enterprise NAS systems will be multi-node, scale-out systems.

Vendors use several terms for scale-out NAS and scale-out storage in general. A look at vendor product offerings turns up the terms clustered NAS, federated systems and distributed systems. These are mostly vendor marketing aimed at creating a unique identity for their products. They are more likely to create confusion.

While scale-out block storage may be more difficult to implement because of the host interface connection and greater latency demands than NAS, the implementations provide the same value to IT customers. The measure is the number of nodes in the system and how the nodes are organized such as in pairs or an N+1 protection arrangement.

Scale-out NAS, and scale-out storage in general, is becoming prevalent because of the value it delivers. Vendors will continue to develop products that meet customer needs, and more scale-out systems should be expected.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 23, 2013  9:46 AM

NetApp CEO: We invented software-defined storage

Dave Raffo

Software-defined storage is gaining a lot of attention these days, especially after EMC revealed plans for ViPR at EMC World earlier this month. Now EMC rival NetApp is taking credit for being a “pioneer” of the technology long before anybody from EMC or any other storage vendor used the term.

During NetApp’s earnings call with analysts Tuesday, CEO Tom Georgens cited the storage virtualization capability of the Data ONTAP operating system as a prime example of software-defined storage. NetApp V-Series gateways can virtualize storage arrays from other major vendors.

“NetApp pioneered this value proposition with our Data ONTAP operating system,” Georgens said. “For the last decade, we’ve been able to run ONTAP on our hardware and other people’s hardware through V-Series.”

Georgens listed the ability to run ONTAP in private clouds with Amazon Web Services and as a virtual machine in ONTAP Edge as other examples of software-defined storage.

“This concept has been coined software-defined storage … only NetApp can deliver on the promise of software-defined storage today,” he said.

NetApp representatives have been making similar claims since EMC announced its ViPR software-defined storage offering at EMC World earlier this month. Because the definition of software-defined storage varies according to who’s defining it, Georgens offered his own: “flexible storage resources that can be deployed on a wide range of hardware and provisioned and consumed based on policy directly by the application and development teams.”

Clustered ONTAP, more than a decade in development after NetApp acquired clustering technology from Spinnaker, is part of NetApp’s software-defined storage story. Georgens said NetApp has almost 1,000 clustered customers.

Georgens also proclaimed NetApp a flash leader. He said NetApp has shipped 44 petabytes of flash in its arrays. However, its FlashRay all-flash array remains a roadmap item while others have had all-flash systems on the market for at least a year.

NetApp has to convince two sets of people that it is a technology innovator – customers and investors.

As customers go, NetApp’s revenue of $1.72 billion last quarter increased one percent from the same quarter last year and five percent over the previous quarter. That’s not bad, considering several large competitors said their sales slipped from last year, but not exactly a home run.

NetApp has struggled to keep investors happy. To make amends, NetApp announced a 900-person layoff, a quarterly dividend of 15 cents a share and a $1.6 billion increase in its stock repurchase program that brings the total to $3 billion. The moves came after a Bloomberg story claimed Elliott Management Corp. – which owns 16 million NetApp shares – called for NetApp to change its board and take steps to boost shareholder value. One of Elliott’s concerns was that NetApp’s technology hasn’t kept up with its rivals.

That puts Georgens in the position of announcing layoffs while pledging to be a tech leader.

“Last week was a difficult one for employees,” he said of the layoffs. “We are faced with the challenge of continuing to execute against our growth strategy while achieving our business and financial objectives in the context of a low-growth IT spending environment.”

His juggling act will be interesting to watch in the coming months.


May 17, 2013  7:28 AM

Brocade still bullish on Fibre Channel, despite sales plunge

Dave Raffo

Brocade executives said they expect the Fibre Channel (FC) SAN slump to be temporary and predict a rebound in the second half of the year.

As Brocade announced earlier this month, its FC SAN switching revenue last quarter was lower than its previous forecast. It came in at $319 million, down seven percent from last year and down 12 percent from the previous quarter. Brocade executives said on their earnings call last night the slowdown was due to lower than expected storage sales from its OEM partners, such as EMC, IBM and Dell.

“We were disappointed in our SAN sales due to short-term slowing in the storage market,” Brocade CEO Lloyd Carney said.

The SAN business doesn’t look much better this quarter, as Brocade expects it to decline eight percent to 11 percent from last quarter. “Demand signals for storage remain soft,” Brocade CFO Dan Fairfax said.

Carney added that Brocade’s storage partners expect sales to rebound in the second half of the year, and the company remains optimistic about Fibre Channel.

“We fully believe at the end of the year, the SAN business will rebound,” he said.

Carney said Brocade plans to cut $100 million in spending over the next year, and some projects will be discontinued. But he said Brocade will remain focused on FC SANs, Ethernet networking and the emerging software-defined networking markets.

“We believe that the fundamentals of the SAN market are strong, including storage growth related to virtualization, cloud and unstructured data,” Carney said.

Jason Nolet, VP of Brocade’s data center networking group, said the factors favoring FC include the rise of flash in network storage.

“Flash needs a network that is very low latency, very high IOPS, very high bandwidth, and Fibre Channel is the perfect match for that,” he said. “That’s why you see all the flash vendors, whether they are startups or established vendors, with Fibre Channel connectivity on their flash arrays. So we think the fundamentals in the storage industry and the basic requirements for customers continue to favor Fibre Channel. So we’re bullish for that reason.”

Nolet said he was certain that slow FC sales were not due to customers converting to converged Fibre Channel over Ethernet (FCoE) networks in place of FC.

“The customer base has largely spoken, and end-to-end convergence on a technology like FCoE is not on their agenda,” he said. “We see a little bit of convergence from the server to the first hop in the network, and then Fibre Channel gets broken out natively and Ethernet natively as well. But this [revenue decline] is not a function of FCoE growth.”

Brocade reported most of its SAN revenue last quarter came from 16-Gbps FC, which it has been selling for a year.

Nolet said Cisco’s recent rollout of 16-Gbps FC products underscores the importance of FC, but he dismissed the Cisco devices as “largely focused on speeds and feeds and lacking the innovation that we delivered” when Brocade first went to 16-gig.


May 16, 2013  2:38 PM

Atlantis gains $20 million to spread more ILIO

Dave Raffo

Atlantis Computing closed a $20 million funding round this week, and will put a chunk of that money into bringing out a new storage management application for virtual infrastructures.

Atlantis’ previous funding round was $10 million in 2010. In the three years since, the company has had enough profitable quarters and made enough revenue from its Atlantis ILIO applications to survive, CEO Bernard Harguindeguy said.

“This round is about building up our cash reserves to build out our product line,” he said.

Atlantis claims it has more than 250 customers and more than 300,000 licenses sold for its software, which includes ILIO Diskless VDI, ILIO Persistent VDI and ILIO XenApp. The VDI products enable virtual desktops to run in-memory without storage, and the XenApp product does the same for virtual servers.

Atlantis has already announced but not yet delivered ILIO FlexCloud, which is designed to enable applications to run in the cloud with little or no storage.

Harguindeguy said the next Atlantis application will do similar things for virtualized databases such as SQL Server and SharePoint. “They consume enormous amounts of storage,” he said. “We’re using the same foundation we have out there already.”

That product is expected in the fourth quarter of 2013.

New investor Adams Street Partners led Atlantis’ new funding round, with previous investors Cisco Systems, El Dorado Ventures and Partech International participating.


May 13, 2013  1:30 PM

New Cleversafe CEO aims at holes in storage status quo

Dave Raffo

Object storage startup Cleversafe switched CEOs today. Founder Chris Gladwin gives up the CEO post and becomes vice chairman while continuing to set Cleversafe’s technical vision. John Morris moves into the CEO job after spending four years at Juniper Networks as EVP of field operations and strategic alliances, three years with Pay By Touch as COO and then CEO, and 23 years with IBM.

Chicago-based Cleversafe is among a group of object-based storage startups – a group that also includes Scality, Amplidata, Caringo and Exablox – looking to crack the market while the major vendors rev up their own object platforms to handle petabyte and perhaps exabyte data stores that push the limits of RAID.

Morris spoke with Storage Soup today to outline his plans for Cleversafe.

What brought you to Cleversafe?

Morris: As I left Juniper, I wanted a chance to lead a company. One of the spaces I was looking at in technology was storage, where I think there is a lot of status quo ready to be challenged. I had heard of Cleversafe and followed it a little bit in the local Chicago media. I’m a Chicago guy anyway, and as I got more into learning about the company from Chris Gladwin and [chairman] Chris Galvin and the other board members, it seems to me it’s a great combination of technology that’s ready and accepted by customers, and momentum that is building dramatically.

It’s a great time for me to come in and join the company and bring a depth of experience in scaling businesses to match up with the company’s great technology.

Chris Gladwin is staying with the company, so it seems like Cleversafe won’t be going in a completely new direction with the change. Is this a signal that Cleversafe is moving into a new business phase, and why was the change made now?

Morris: I have to be careful not to disturb the phase we’re in now. We brought in a million dollar order last week. We have a couple of other strategic big orders that we expect to bring in this week. The company is on a tear with customer momentum building, but we also want to make sure that we’re growing in a way that is sustainable and allows us to scale to the heights we want. And that’s where I’m going to be helping to bring in approaches we take to not just have a great few quarters but to have many great years.

How many customers does Cleversafe have?

Morris: We have dozens of customers, that’s as specific as we’ll get. But we’re growing every week.

What’s your biggest challenge in this job?

Morris: We’re attacking an entrenched status quo of big entrenched competitors. In reality, there’s a lot of square peg for a round hole because the kind of data that we’re great at storing is not very well attacked by the status quo technology, in particular the RAID technology out there. So the biggest challenge is making sure the customers understand there’s an alternative out there that gives them better reliability and scalability at a lower cost. As a small company, it’s hard to make yourself known out there so that’s probably the biggest challenge I have now.

What’s the headcount at Cleversafe?

Morris: A little over 100. We’re definitely growing.

Who do you consider the major competitors in the object storage market?

Morris: The dominant players out there – EMC, HP, IBM, Hitachi Data Systems – those are the guys dominating the marketplace and the competitors we think about. They’re throwing the wrong tool at the problem. We think we have a much better tool. Those are the guys I wake up thinking about.

We think we have a much better alternative for the fastest growing part of their market, which is the unstructured data around video and photos and audio and image files, and that sort of thing.

What about some of your fellow startups who sell object storage?

Morris: We’ve shipped more object oriented storage than anybody, by a long shot. Certainly more than the smaller players out there.

When was your last funding round?

Morris: In 2010 we raised $31 million in our C round. We’re in a fortunate position, we have investors who like what we’re doing and are anxious to help us do more. So fundraising as a big challenge is not something I come in having to face.

Does that mean you’re close to profitability?

Morris: One of the fun things for me about moving into a private company is [laughs], we have an easy answer to that question, which is we’re not releasing any type of financial data around revenue.

Object storage seems to be a big piece of the ViPR software EMC announced last week. Do you expect to see more focus on object storage from the big vendors?

Morris: One of the hardest things that we had to try to do when I was at IBM for a couple of decades was eat our own children. And I’m counting on our large competitors having the same sort of trepidation. While they have offerings in this space, I think they continue to lead with what drives a lot of profit in their current business and that’s an opportunity for us.


May 10, 2013  4:04 PM

EMC World wrap-up: Isilon, VNX, Syncplicity future directions

Dave Raffo

LAS VEGAS – EMC World was short on product upgrades this year with the exception of the new ViPR platform, but the vendor did enhance a few products while previewing features expected soon in others:

Isilon

Isilon’s OneFS operating system added post-process block-level deduplication, native Hadoop Distributed File System (HDFS) 2.0 support, a REST Object Access to Namespace interface for data access and support for OpenStack Swift and Cinder integration. The dedupe will be included in the next version of OneFS due later this year, and the other features are available now.

During a session on Isilon during the show, Isilon director of product management Nick Kirsch laid out strategic initiatives for the clustered NAS platform.

Isilon is working on using flash to increase performance in several ways, including putting file system metadata on flash and using flash as a read cache first and eventually as a write cache. Kirsch also said Isilon will add support for commodity consumer drives as a low-cost tier.

“If you’re going to deploy an exabyte of data, there has to be a step change in price,” he said.

Kirsch said Isilon is working on a software-only version of OneFS, and will support moving data to the cloud and using the cloud as a “peering mechanism” to connect to multiple clouds.

No timetable was given for availability of these future features.

VNX

Rich Napolitano, president of EMC’s unified storage division, previewed future features for VNX arrays. These included a flash-optimized controller; a VNX app store that would allow customers to run applications, such as a virtual RecoverPoint appliance, directly on a VNX array; and a virtual VNX array that can run on commodity hardware or in the cloud.

Syncplicity

A year after buying file sharing vendor Syncplicity, EMC added a policy-based hybrid cloud capability that lets customers use private and public clouds simultaneously.

Customers can set policies by folders or by users to determine where content will reside. For example, legal documents from users can stay on on-premises storage while less sensitive data can go out to a public cloud. Files that require heavy collaboration, such as engineering documents, can be spread across multiple sites with geo-replication so users can always access them locally.
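For readers wondering what such placement policies look like in practice, here is a generic sketch of folder- and user-based policy evaluation. The rule format, tier names and examples are hypothetical illustrations of the concept, not Syncplicity’s actual policy engine or API.

```python
# Generic sketch of folder/user placement policies like those described
# above. Rule format, tier names and examples are hypothetical -- an
# illustration of the concept, not Syncplicity's actual policy engine.

# Rules are evaluated in order; the first match decides where content lives.
POLICIES = [
    {"folder_prefix": "/legal", "placement": "on-premises"},
    {"user_group": "engineering", "placement": "geo-replicated"},
    {"folder_prefix": "/", "placement": "public-cloud"},  # catch-all default
]


def place(folder: str, user_group: str) -> str:
    """Return the storage placement for a file, given its folder and owner group."""
    for rule in POLICIES:
        if "folder_prefix" in rule and folder.startswith(rule["folder_prefix"]):
            return rule["placement"]
        if "user_group" in rule and user_group == rule["user_group"]:
            return rule["placement"]
    return "public-cloud"


print(place("/legal/contracts/nda.docx", "sales"))        # -> on-premises
print(place("/projects/cad/turbine.dwg", "engineering"))  # -> geo-replicated
print(place("/marketing/photos/booth.jpg", "sales"))      # -> public-cloud
```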

EMC also added Syncplicity support for its VNX unified storage arrays, following on the support it gave EMC’s Isilon and Atmos storage platforms in January. Syncplicity will also support EMC’s ViPR software-defined storage platform when that becomes available later this year.

“Our strategy is to provide ultimate choice for storage backends,” said Jeetu Patel, VP of the EMC Syncplicity business unit. “So you can expect to be able to run Syncplicity on non-EMC platforms over time.”

Data Protection Suite

EMC’s backup and recovery group launched the Data Protection Suite, which consists of no new products but is a new way to package products such as Data Domain, Avamar, NetWorker, Data Protection Advisor and SourceOne. Customers can purchase backup and archiving products together with licenses based on consumption and deployment models.


May 9, 2013  4:53 PM

Fusion-io’s new CEO addresses ‘misconceptions’

Dave Raffo

One day after a surprising CEO shakeup, the new boss of PCIe flash leader Fusion-io denied the change came because of a failure of the outgoing chief, problems at the company or because it is looking for a buyer.

Shane Robison, a former long-time Hewlett-Packard executive who replaced Fusion-io CEO David Flynn Wednesday, made a statement and took questions during a technology webcast hosted by the vendor.

Fusion-io said that founders Flynn and chief marketing officer Rick White resigned their positions to pursue investing opportunities. The company said Flynn and White will remain on the board and serve in advisory roles for the next year.

The news was poorly received by investors, as Fusion-io’s stock price fell 19% to $14.60 by the end of Wednesday. It dropped slightly to $14.23 today.

There has been speculation that Flynn was pushed out because Fusion-io’s revenue declined last quarter following a pause in spending by its two largest customers, Apple and Facebook. It didn’t help that Fusion-io’s stock had already dropped to $17.47 before Wednesday, from a high of $39.60 in late 2011. While at HP, Robison worked on its acquisitions of Compaq, Mercury Interactive, Opsware, EDS, 3Com and Autonomy – leading to speculation that he was brought in to sell Fusion-io.

Robison began his webcast today by saying he wanted to “hopefully clear up some misconceptions.”

He said the move was discussed for “a long time,” even if there was no public indication that Flynn would leave. He said the previous leadership team did a great job taking Fusion-io from startup to successful public company, but the company’s focus has shifted. The goals now are to move into international markets and to make sure it releases quality products on time.

“It’s not unusual as startups evolve to medium-size companies that you need different skill sets as you go through these cycles,” he said.

Robison said Fusion-io did not reveal the CEO change when it reported earnings two weeks ago because the decision was not completed yet.

“Unfortunately, it was a surprise,” Robison said. “And nobody – especially the street – likes surprises. This caused a lot of speculation. A lot of times when these changes happen it’s because there is a problem with the company. I can tell you there is not a problem with the company. The company’s doing very well.

“The company has built a lead, and we need to maintain that lead and invest in R&D and in some cases, M&A.”

Robison said another misconception was that the board “brought me in to dress the company up and sell it. We’re not working on that. This was a decision that was about how we get experienced management in place to take the company to the next level.”

Robison has never been a CEO in his more than 30 years in the IT business. Besides serving as HP’s executive vice president and chief strategy officer from 2002 to 2011, he also worked for AT&T and Apple. He was blamed by other members of the HP board for not realizing that Autonomy did not have as much revenue as it claimed (a charge that Autonomy leaders have denied) before HP agreed to pay $11.3 billion to acquire it in 2011.

Robison did not discuss the Autonomy deal today. He defended his qualifications by saying that some of the business units that he has run inside of large companies were as big as Fusion-io.

He said his strength is his operational experience and Fusion-io needs to balance good operations with innovative technology.

The move comes as Fusion-io faces greater competition after having the PCIe flash market mostly to itself over its first few years. Intel, Micron, Virident, LSI, STEC, OCZ and Violin Memory have PCIe cards.

Storage giant EMC sells cards from Virident and Micron under OEM deals as its XtremSF brand, and its marketing concentrates on claims that those cards are superior to Fusion-io’s. EMC executives at EMC World this week also revealed plans to bring out MCx flash-optimized controllers for hybrid storage arrays, and EMC’s XtremIO flash array competes with the NexGen storage systems that Fusion-io acquired last month.

Robison said he spent time Wednesday with key large customers, and their reaction to the news was positive.

Others are wondering if the deal will lead to more changes.

“Surprise management changes usually portend more news in the following days and weeks,” Storage Strategies Now analyst James Bagley wrote in a note today. “As we have reported over the last year, we felt that Fusion-io had a tough future ahead with increasing competitors in its core market. Its recent acquisition of NexGen, a storage array manufacturer and Fusion-io customer, is a good move into a broader market where Fusion’s deep software expertise and larger resources should help revenue expansion.”

Objective Analysis analyst Jim Handy also published a note on the change, maintaining “Fusion-io is in an enviable position” because the company was the first to introduce a PCIe SSD, and early with caching software and the ability to make SSDs appear in memory in virtualized systems.

“This resulted in the company’s competitors always remaining one or two steps behind in their efforts to compete,” Handy added. “It would appear that the two key architects of this strategy have now moved on, so outsiders should carefully watch to see if the underlying strategy, the one that has served the company so well in the past, will continue to be followed, or if a new path will be tried.”


May 9, 2013  12:09 PM

Keeping all data is a dangerous policy

Randy Kerns

There is a prevalent problem in Information Technology today – too much data.

Most of the data is in the form of files and is called unstructured data. Unstructured data continues to increase at rates averaging around 60% per year, according to most of our IT clients.

Structured data is generally thought of as information in databases, and this type of data is growing much more slowly than unstructured data. Unstructured data is produced both inside IT and from external sources. The external sources include sensor data, video, and social media data. This growth is alarming because there are so many sources and because the information is used in data analytics initiatives that typically originate outside of IT.

The big issue is what to do with all the data that is being created. The data is stored while it is needed – during processing for applications or analytics, and while it may be required for reference, further processing, or the inevitable “re-run” in some cases. But what is to be done with the data later? Later, in this case, means when the probability of access drops to the point that the data is unlikely to be accessed again. There are also cases when the processing (or project) is complete and the data is to be “put on the shelf,” much as we would in closing the books on some operation. Does the data still have value as new applications or potential usages develop? Will there be a potential legal case that will require the data to be produced?

The default decision for most operations is to save everything forever. This decision is usually made because there is no policy around the data. IT operations do not set the policies for data deletion. Because the different types of data have different value and the value changes over time, the business owners or data owners must set the policy. IT professionals generally understand the value but usually are not empowered to make those policy decisions. Sometimes the legal staff sets the policy, which absolves IT of the responsibility, but that may not be the best option. In a few companies, a blanket policy is used to delete data after a specific amount of time. This may not withstand a legal challenge in some liability cases.

Saving all the data has compounding cost issues. It requires buying more storage, adding products to migrate data to less expensive storage, and increasing operational expenses for managing the information, power, cooling, and space. Moving the data to a cloud storage location has some economic benefit, but that may be short-sighted. The charges for data that does not go away continue to compound. Storing data outside the immediate concern of IT staff takes away from the imperative to make a decision about what to do with it.
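A quick back-of-the-envelope sketch shows how the compounding works when nothing is deleted, using the roughly 60% annual unstructured data growth cited earlier in this post. The starting capacity and per-TB price are hypothetical assumptions for illustration only.

```python
# Sketch of how retain-everything storage charges compound, using the
# ~60% annual unstructured data growth cited earlier in this post.
# Starting capacity and the per-TB price are hypothetical assumptions.

GROWTH = 0.60            # annual unstructured data growth
PRICE_PER_TB_MONTH = 25  # assumed cloud (or internal chargeback) $/TB-month
START_TB = 100
YEARS = 5

capacity = START_TB
for year in range(1, YEARS + 1):
    capacity *= 1 + GROWTH                       # nothing is ever deleted
    annual_bill = capacity * PRICE_PER_TB_MONTH * 12
    print(f"year {year}: {capacity:,.0f} TB retained, "
          f"~${annual_bill:,.0f} per year in storage charges alone")
```

At that growth rate, the retained capacity grows roughly tenfold in five years and the bill follows it, which is why a deletion policy matters even when per-TB prices fall.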

Besides the costs of storing and managing the data, the danger is that there may be legal liability in keeping data for a long time. The potential for an adverse settlement based on old data is real and has proven extremely costly. An even bigger impact on IT operations comes from the discovery and legal hold requirements. Discovery requires searching through all the data, including backups, for requested information, and legal hold means almost nothing can be deleted – no recycling of backups. This causes even more operational expense.

Not establishing a deletion policy that can pass a legal challenge is a failing of the company and results in additional expense and liability. IT may be the first responder on the retain-forever policy, but it is a company issue.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

