When I meet with IT personnel or speak at events, I always try to find out what people are doing in their positions in IT. Finding out what their daily work entails gives a good barometer on what is happening in IT and helps to identify where the problems are. One trend I notice is there are fewer storage specialists these days. A storage specialist is someone who understands how the storage systems work, how data flows through IT operations, and how to manage the information.
There is a great deal of tribal knowledge that the storage specialist learns, especially around Fibre Channel. This knowledge may be critical to maintaining operations, but this is changing as storage specialists give way to a growing number of IT generalists.
There are several reasons why IT generalists are replacing storage specialists. A generalist may have more flexibility because he or she can work in many areas. The generalist probably is not as well paid as a storage specialist would have been on average (and the generalists appear to be much younger than the storage specialists I’ve known), so IT managers have consciously developed the generalist role to provide resources that can be applied where needed.
But the movement to server virtualization has done more to develop the IT generalist than probably any overt IT management plan. The management of the virtual machine operating system – specifically VMware – is closely linked to storage. VMware has included storage management functions and has continued to improve those capabilities. This has led to the task of storage administration increasingly being included with the server virtualization function. Evaluator Group has an article on simplifying VMware storage management and the emergence of storage and server management for virtual environments.
A question to ask is whether the trend to IT generalists at the expense of storage specialists is a detriment to IT or not. The answer is not simple. Storage system vendors have recognized this trend and reacted by greatly simplifying the configuration and administration of storage systems. They have intentionally targeted the IT generalist with the system management software. Movement to the IT generalist may be considered a good idea, but that really doesn’t matter. It is happening anyway.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Hewlett-Packard has scrapped the StorageWorks SAN Virtualization Services Platform (SVSP) that it sold through an OEM deal with LSI, as well as the EVA Cluster SAN system that used the SVSP to cluster two EVAs.
According to an email statement from director of HP storage marketing Craig Nunes, HP notified customers last November that it would discontinue development of the SVSP and it stopped selling the EVA Cluster at the end of last year. Nunes said HP would support SVSP customers’ service contracts for five years.
HP was LSI’s only major customer for the SVSP, and last week LSI closed the Israel office where SVSP was developed (LSI acquired the technology from StoreAge Networking in 2006). As EMC has found with Invista, no market ever developed for switch-based storage virtualization software.
HP launched EVA Cluster last June, months before it acquired 3PAR for $2.35 billion. Nunes said 3PAR’s storage platform will replace EVA Cluster on the market, but he insisted that the standard EVA platform is alive and well.
“The EVA Cluster was a stand-alone product and its discontinuation has no bearing on future EVA investment and roadmap,” Nunes said in the statement. “The EVA Cluster previously filled a need in our portfolio for Fibre Channel clustered storage arrays that we now address with our newly-acquired 3PAR Utility Arrays.”
According to a story in the Wall Street Journal over the weekend, storage is one area – along with software and networking – that new HP CEO Leo Apotheker wants to focus on. There have also been whispers in the industry that HP will cut all of its storage OEM deals and concentrate on developing all of its storage in-house or through acquisitions. Other OEM partners include Hitachi for HP’s StorageWorks P9500 Disk Array enterprise storage system and Sepaton for its virtual tape libraries (VTLs).
Nunes would not comment on HP’s other OEM deals.
EMC veteran William “BJ” Jenkins will replace Slootman. Jenkins has spent 13 years at EMC, and was chief of staff of BRS under Slootman. EMC said Slootman will retain a formal role with the company as an adviser.
Slootman’s tenure at EMC was relatively short but loud, as he took the lead in EMC’s backup division after it acquired Data Domain for $2.1 billion in July of 2009. Slootman remained outspoken after the deal and EMC’s backup revenue grew substantially during his tenure, but it was his time as Data Domain CEO when he really made a mark on the backup world.
When Slootman joined Data Domain in 2003, disk-based backup was in its infancy and nobody was talking about data deduplication. Now disk has pushed tape mostly into an archiving role, deduplication is a mainstream technology considered a must-have for all new backup products, and EMC’s Data Domain platform is the market leader. Slootman also led Data Domain through a successful IPO in 2007 that led to a 2009 bidding war between EMC and NetApp, a precursor of the Hewlett-Packard and Dell competition for 3PAR last year.
In a blog posted on Greylock’s website today called “Why I’m joining Greylock Partners,” Slootman said he left EMC because he needed a new challenge. “I was tempted and flattered by interest to run other companies, but I could not easily see topping the experience of Data Domain,” he wrote.
Greylock was an early investor in Data Domain. Another Greylock partner, Aneel Bhusri, was Data Domain’s chairman at the time of the sale to EMC. Greylock remains active in storage, and is an investor in Actifio, Data Robotics, Silver Peak Systems, Xsigo and stealth flash storage startup Pure Storage. Its real gems, however, are Internet companies Facebook, Groupon and LinkedIn.
According to Greylock’s press release today, “Slootman will invest in data center infrastructure start-ups, particularly in the virtualization, networking, storage, cloud and enterprise application sectors. He will also coach and mentor up-and-coming entrepreneurs and executives.”
In his blog, Slootman also compared his new job to going from player to coach. “I’d like to think I have learned enough lessons about building companies to provide valuable coaching and guidance to up-and-coming entrepreneurs,” he wrote.
Of course, a lot of players can’t stay on the sidelines and a lot of entrepreneurs fall into the category of serial entrepreneurs. Will Slootman get the itch to run another company?
“Never say never,” he replied in an email today when I asked him that.
After all the acquisitions of 2010, which is the largest private storage vendor still standing? According to IDC, it’s DataDirect Networks.
DDN CEO Alex Bouzari said he expects his company to remain private through 2011. Seven months ago as rumors swirled about DDN getting acquired, Bouzari told me the company was not for sale and he says that is still the case. Bouzari said DDN increased revenue 40% last year, driven by a growth in unstructured data across its core markets — media and entertainment, life sciences, high performance computing (HPC) and other verticals that require high bandwidth. He’s happy to stay on that growth track.
“We’re not pursuing M&A,” he said. “Our best opportunity is to scale up as an independent company.”
(Translation: it will take at least a 10-figure deal to buy DDN).
Bouzari doesn’t discount going public if the IPO market comes back as expected, but he doesn’t plan an IPO this year. Still, Bouzari is presenting at a Needham Growth Conference for investors today in New York, which isn’t something private companies usually do. Certainly not if they intend to stay private for long.
But to continue to grow, DDN needs to become more of an enterprise play, especially if it is to compete with EMC’s new “Big Data” tag team of Isilon and Atmos. DDN has its Web Object Scaler (WOS) object storage system to compete with Atmos, and Bouzari said it will add an enterprise NAS product late this year to challenge Isilon. The roadmap also calls for upgrades to two existing hardware platforms this year and the addition of enterprise software such as snapshots, deduplication, thin provisioning and replication to follow in late 2011 or 2012.
DDN has high-speed parallel file system storage for HPC and supercomputing, but no mainstream NAS. Bouzari said the NAS product will be a clustered offering that can scale up and scale out for customers who expect to have to add capacity and performance.
He’s also planning a refresh of DDN’s S2A6620 midrange and SFA10000 high-end product in 2011. He said the SFA10000 upgrade will boost that product’s IOPS and virtualization capabilities.
Still, it’s likely that when people talk about DataDirect Networks this year, it will be less about product upgrades than about M&A speculation.
According to published reports and other leaks, EMC is preparing to launch a VNX family of products that includes code from the Clariion Flare and Celerra Dart operating systems on a common hardware platform. The new systems include several midrange models – the VNX 5100, VNX 5300, VNX 5500 and VNX 7500 – as well as VNXe 3100 and VNXe 3300 SMB systems. The systems will be available as block storage, file storage, or unified systems. The midrange systems were code-named Culham and the SMB systems went by the code-name Neo.
The midrange systems will support Fibre Channel, iSCSI and Fibre Channel over Ethernet (FCoE) block storage and CIFS, NFS, MPFS and pNFS on the file side. The SMB systems support iSCSI, CIFS and NFS.
The midrange systems will include the management features EMC rolled out for the Clariion earlier this year – including FAST automated tiering and block compression – as well as Clariion’s data protection software.
On the hardware side, there is one notable departure from existing EMC systems. The new systems will not support Fibre Channel hard drives. The midrange family supports Flash solid state drives (SSDs) and SAS for performance and nearline (NL)-SAS for capacity. The SMB systems support SAS and NL-SAS.
All the new systems will support VMware integration with the vStorage API for Array Integration (VAAI) and Virtual Storage Integrator (VSI).
None of the new systems are surprises. EMC executives admitted last April that they would converge the Clariion SAN and Celerra NAS platforms, after denying a consolidation move was under way when talk first began circulating. EMC added a Unisphere unified management console for the two platforms last year. And EMC CEO Joe Tucci began teasing the new SMB system last July.
The big question now about Tuesday’s launch is whether it will include other products. EMC is believed to be close to rolling out new Data Domain, VPlex and Isilon systems, but the timing of those releases is unclear.
Reldata has a new CEO, a little bit of new funding, and plans to expand its product platform and go-to-market channel.
Former Sun VP Victor Walker is the new CEO. Walker worked on Sun’s open storage and 7000 Unified Storage line. He left Sun in 2009 to become COO of file system startup ClusterStor, which was acquired by Xyratex last year. Walker replaces Steve Murphy, who left the CEO post after barely a year on the job to join a private equity firm but remains on the Reldata board.
Walker actually took over as CEO in October, but Reldata kept the changes secret until now. The vendor also got $4 million in funding from Grazia Equity, a German VC that has put up all of Reldata’s more than $18 million in funding. Walker said he plans to raise more money from U.S. VCs later this year.
But Walker’s immediate job is to make the Parsippany, NJ-based vendor better known in the storage world. He compares Reldata’s 9240i unified (iSCSI and NAS) storage systems to Sun’s 7000 platform (now the Oracle ZFS Storage Appliance) and wonders why more people don’t know Reldata.
“I can’t figure out why Reldata isn’t a bigger company,” Walker said. “It’s technically accomplished the same thing Sun accomplished with open storage and its 7000 Unified Storage systems. The difference is, Sun built it on Solaris, Reldata built it on Linux.”
Walker said Reldata will launch a new platform around March to complement its current 9240i systems. He said the new systems will target the high availability market. He also said Reldata’s roadmap includes data deduplication for primary data to go with the compression it offers now for remote replication.
Walker said about 60 customers purchased Reldata systems last year, and the vendor’s revenue increased 135%. He said Reldata added customers in healthcare, higher education, legal, cloud-based application services and energy.
Walker added that Reldata customers average between 20 TB and 40 TB, but one customer last year bought 1 PB of storage in a single site. He wouldn’t name the customer but said it deals in e-discovery and uses the Reldata storage in a private cloud.
Walker said the funding will be used to grow Reldata’s sales force and to attract new VARs and distribution partners – a task his predecessor also said he would focus on. Walker is going after channel partners looking to replace vendors that have been acquired in recent years, particularly Compellent (in the process of getting bought by Dell) and even LeftHand (acquired by Hewlett-Packard in 2008).
“We want to get our name recognition out there and aggressively pursue channel partners who are disenfranchised with the recent consolidation,” he said.
I’m always interested in reading stories about expectations for storage spending. There are the recurring prediction stories reporting that next year’s spending will be x% up or down from last year’s. Those stories are interesting but not always illuminating. They remind me of an article I read about weather forecasting, which compared the past year’s forecasts against the actual weather, and against the accuracy of simply predicting that tomorrow’s weather will be like today’s. Obviously, the latter was statistically much more accurate.
For storage spending, the bigger-picture economic indicators are more interesting and probably much more helpful for truly understanding the market. I found the Jan. 3 Wall Street Journal article, “Big Firms Poised to Spend Again,” interesting. It reported that big companies (the Fortune 500, in my estimation) had cleaned up their balance sheets and conserved cash over the last year, and were ready to invest again in R&D, expand manufacturing and sales, and so on. Basically, the point of the article was that these companies were ready to spend money to make money.
The article cited several companies that had money to invest, and the numbers were in the billions of dollars. These expenditures will result in requirements in IT infrastructure and certainly in storage. Interestingly, the top three industries with the greatest amount of cash accumulated to invest were information technology, healthcare, and industrials.
From a storage standpoint, the question is “where will the primary areas of spending be?” Given the conservatism that occurred over the last few years when many companies postponed purchases, you can probably figure out where spending will go. It’s likely that companies will look to replace aging or obsolete systems, add new systems to meet expansion and growth demands, and carry out postponed or new projects to improve operations and reduce operational expenditures.
These indicators present opportunities for the storage industry, if the industry is ready to take advantage of them right now. The products to solve problems and meet customer needs must be there. There won’t be time to take requirements and develop or modify a product. By the time that is done, competitors will have captured the business. The near-term opportunity is for sales and marketing to deliver the products customers need when they are ready to buy. Vendors will continue to develop new shiny toys, but these major firms with money to spend to expand their businesses won’t wait for them.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Now that Isilon is part of EMC, the new management team at Panasas is looking to fill a gap in the market for an independent clustered NAS vendor.
Panasas has concentrated on high performance computing (HPC), but is looking to change that. Faye Pairman, who became the Panasas CEO last April and has since added new chiefs of sales, engineering and marketing, said the vendor is looking to capitalize on pNFS and perhaps add iSCSI capabilities to become more of a commercial or enterprise play.
“I don’t see us sitting behind an Exchange server, but the lines between the commercial market and HPC is getting blurrier,” she said.
Panasas has talked about moving to the enterprise for years, but Pairman said the Isilon deal provides its best opportunity while shining a light on parallel NAS.
“[The $2.25 billion EMC-Isilon] deal places a value on scale-out NAS, and indicates a future market,” she said. “It highlights the fact that there’s a ton of unstructured data and we need a new type of file system to support that. It also highlights how hard it is to make that type of file system that works.”
Pairman said there are parallels between Panasas and Isilon, which both began shipping products in 2003. She said the major difference between the two was the markets they pursued: Isilon went after the rich media market, while Panasas concentrated on the energy and simulation markets.
“We tend to be big on performance, scale and next-gen types of standards,” she said. “They tend to be more practical – more usability features, things like that. Their architecture limits them from scaling like we can.”
Along with NetApp and EMC, Panasas was a driving force behind pNFS and Pairman said pNFS will eventually help Panasas broaden its feature set. “But we’re not trying to look like NetApp,” she said.
One of Isilon’s last major enhancements before the EMC deal was to join the multiprotocol storage parade by adding iSCSI capability for block storage. Pairman said block storage hasn’t been a big priority for Panasas in HPC markets, but it’s certainly under consideration.
“The parallel nature of the Panasas ActiveStor architecture is ideally suited for delivering high performance, scalable iSCSI block storage,” she said. “As Panasas starts to address enterprise and commercial market needs, block storage services would be a natural growth path for us.”
The recent loss of mortgage records in Orleans Parish in New Orleans shows how important it is to have an efficient data protection process and efficient use of storage resources, and how the two are related.
Newspaper articles in the New Orleans area highlight a double-failure data loss. Successful backups stopped after an update to the backup software, despite a message saying the update was successful. No one checked for successful backups after the update. The second failure came when the servers with the original data crashed with significant data loss. The backups were cyclical – as most are – and the older backups were expired (read: deleted) according to a 90-day retention schedule. There were paper records for most of the transactions, but the index of the paper records, including the location of documents, was part of the lost data. This is like burying the key to the treasure chest along with the treasure.
The result was that primary digital data was lost and much of the backup data was either unrecoverable or does not exist. There are a number of process problems that can be pointed to here, but mainly this shows the need for a sound strategy for effectively and efficiently protecting records. Efficiency in this context is about both cost and the protection process.
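The first of those process problems – assuming backups succeeded without ever verifying them – is also the easiest to guard against. As a rough sketch (the catalog path and its format here are hypothetical stand-ins; any backup product’s job log or API could supply the same information), a daily check only has to alert when the last recorded successful backup is older than expected:

```python
"""Minimal backup-freshness check (sketch): alert when the most recent
successful backup is older than a threshold. The catalog path and its
ISO-timestamp format are hypothetical stand-ins for whatever success
record your backup software actually keeps."""

import datetime

CATALOG = "/var/log/backup/last_success"   # hypothetical: written by the backup job
MAX_AGE = datetime.timedelta(days=1)       # expect at least one backup per day

def check_backup_freshness(catalog_path=CATALOG, max_age=MAX_AGE, now=None):
    """Return (ok, message). A missing or unreadable record counts as a failure."""
    now = now or datetime.datetime.now()
    try:
        with open(catalog_path) as f:
            last = datetime.datetime.fromisoformat(f.read().strip())
    except (OSError, ValueError):
        return False, "no readable success record -- treat as a failed backup"
    age = now - last
    if age > max_age:
        return False, f"last successful backup was {age} ago"
    return True, f"last successful backup {age} ago"
```

Run from a scheduler and wired to an alert, a check along these lines turns a silent multi-month failure into a next-morning ticket.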
Mortgage records are static; they do not change. Additional records for affected property are created and added to the records collection, but each record is retained in perpetuity as an historical and legal document. These records should be kept in an online archive with infinite retention and protected with multiple copies at the time of archiving. Keeping the records in a volatile disk system where the protection depends on regular backups is extraordinarily inefficient.
Since the records don’t change, continual backups waste time and physical resources. The records are required when a real estate transaction affecting the property is done, so having them on a primary storage system may not be the most cost effective location. With the “forever” type of retention required, migrating the records every few years when the current storage system is replaced also would add extra operational costs.
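What “multiple copies at the time of archiving” means in practice can be sketched in a few lines. The archive roots below are hypothetical and a real archive product does far more, but the shape of the idea is this: checksum each static record once, write it to several independent locations at ingest, and from then on only verify – no backup cycle involved:

```python
"""Sketch of write-once archiving: each static record is checksummed and
written to several independent locations at ingest time, so its protection
never depends on later backup cycles. The archive root paths are
hypothetical examples."""

import hashlib
import os
import shutil

ARCHIVE_ROOTS = ["/archive/copy1", "/archive/copy2", "/archive/copy3"]

def archive_record(src_path, roots=ARCHIVE_ROOTS):
    """Copy a static record to every archive root, named by its content hash."""
    with open(src_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    for root in roots:
        dest = os.path.join(root, digest[:2], digest)
        if os.path.exists(dest):
            continue                      # records never change; never overwrite
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy2(src_path, dest)
    return digest                         # store this in the index

def verify_record(digest, roots=ARCHIVE_ROOTS):
    """Count how many archived copies still match their content hash."""
    intact = 0
    for root in roots:
        dest = os.path.join(root, digest[:2], digest)
        try:
            with open(dest, "rb") as f:
                if hashlib.sha256(f.read()).hexdigest() == digest:
                    intact += 1
        except OSError:
            pass
    return intact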
This is one example where looking at the strategy around protecting information and being efficient could have yielded great return – and avoided a disaster. The disaster will be costly in the heroic efforts required to recover lost data, and in the careers or reputations of those involved. The real failing is in not re-examining the strategy around protecting the valuable assets and how to do that efficiently and effectively. I’ve written more about data efficiency here.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
New Orleans is no stranger to natural disasters that emphasize the need for a good disaster recovery plan. But people in the city are struggling to deal with a business continuity situation stemming from crashed servers and a backup service gone bad.
Two servers that hold the Parish of Orleans Civil District Court’s conveyance and mortgage records going back to the 1980s crashed simultaneously on Oct. 25, and the court has been without critical digital real estate records since then. The documents stored on the servers’ database exist on paper, but the computerized index linking them to their physical location was on one of the failed servers.
Members of the court’s IT staff are blaming the problem on Seagate’s i365, according to a series of stories on the incident in The Times-Picayune newspaper. The court is under contract with i365 to use its EVault Remote Disaster Recovery Service, EVault SaaS and SaaS plus, and EVault Express Recovery Appliance. i365 had been backing up records, purging them after 30 days, since August 2009. Last July, i365 sent a software update to the court to install. The court’s IT staff said it installed the update and received a message saying it was installed correctly.
But no data had been backed up since July, and other records were purged after their 30-day expiration date. The court did recover digital conveyance records from the 1980s up to March 27, 2009, and mortgage data through Aug. 6, 2009. But that left more than 150,000 documents without digital records or indexes to them.
While i365 has refused to comment publicly, an email from one of its executives to the chairwoman of the court’s technology committee obtained by the Times-Picayune put at least some of the blame with the court:
“The continued exposure of this situation hurts all involved — i365, Orleans Parish and the Civil District Court,” Dave Hallmen, head of i365’s Worldwide Sales and Marketing division, wrote to [judge Piper] Griffin on Nov. 5. “We have instructed our staff to refrain from publicizing our service call records which support our position that Civil District Court IT personnel failed to properly maintain the on-site software and backup jobs.”
According to the Times-Picayune, the court’s IT staff also inadvertently lost or corrupted database information when trying to troubleshoot the failed Dell servers.
Earlier this month, the court contracted with a data management firm, The Windward Group, to restore 35,000 missing conveyance records and 119,000 lost mortgage records by Jan. 2 at a cost of hundreds of thousands of dollars, according to the Times-Picayune. About 30,000 records have been restored.
The snafu is taking its toll on real estate sales in the area. According to the Times-Picayune, title insurance companies are reluctant to underwrite home sales and some refinancing deals without up-to-date records.
Regardless of who’s at fault, i365 competitors will be sure to use the incident to push alternative DR solutions. Larry Lang, CEO of QuorumLabs, said the New Orleans incident highlights problems with backing up to the cloud. QuorumLabs sells onQ backup appliances that can provide DR when used in pairs.
“Sometimes your data goes up into the cloud and when you go to pull it back it’s not there anymore,” Lang said. “It’s like freeze drying stuff, you don’t know what will happen when you add water. When [the court’s IT staff] went back to add water, there was nothing there.”
Lang said the incident also shows the importance of DR testing. onQ appliances allow customers to automatically test their restore capabilities. “You should consistently run tests to make sure your snapshots are good,” he said.
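A restore test of the kind Lang describes doesn’t have to be elaborate. One sketch: sample a few protected files, pull them back through whatever restore path the backup product offers, and compare checksums. Here restore_file is a hypothetical stand-in for a real product’s restore API or CLI call:

```python
"""Sketch of a periodic restore test: pull a random sample of protected
files back from the backup and compare checksums against the originals.
restore_file() is a hypothetical stand-in for a real backup product's
restore call."""

import hashlib
import random

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_test(protected_files, restore_file, sample_size=5):
    """restore_file(original) -> path of restored copy, or None on failure.
    Returns the list of originals whose restored copy didn't match."""
    sample = random.sample(protected_files, min(sample_size, len(protected_files)))
    failures = []
    for original in sample:
        restored = restore_file(original)
        if restored is None or sha256_of(original) != sha256_of(restored):
            failures.append(original)
    return failures   # an empty list means the sampled restores are good
```

Scheduled regularly, a test like this catches the “nothing there when you add water” problem while there is still time to fix the backups.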