Storage Soup


March 21, 2014  9:43 AM

Will Symantec hire a CEO with a storage background?

Dave Raffo

Symantec’s stunning firing of CEO Steve Bennett leaves the company with its third CEO in less than two years.

The move came as a big surprise because Bennett spent about half of his 18 months as Symantec CEO plotting a turnaround plan, and implementation of that plan is far from complete. Following the departures late last year of president of products Francis deSouza and CFO James Beer, Symantec will have a vastly different leadership team after a replacement is found for Bennett. Board member Michael Brown is the interim CEO.

The statement Symantec released Thursday about the firing quoted chairman Daniel Schulman saying the decision “was the result of an ongoing deliberative process, and not precipitated by any event or impropriety.” Financial analysts point to lack of growth and loss of market share on the antivirus/security side as Symantec’s biggest problems, but there have been issues on the backup and storage side as well. The biggest problem was the Backup Exec 2012 fiasco, which still is not fixed more than two years after its initial release.

Perhaps the main overall problem with Symantec is it is two companies under one umbrella. Symantec has never made storage and data protection an equal partner with security after acquiring storage software vendor Veritas in 2005. None of Symantec’s CEOs since then – John Thompson, Enrique Salem or Bennett – were storage guys, and there have been intermittent rumors that the storage products would be spun off.  Veritas was a major storage market influencer as a standalone company, but Symantec is not seen as much of a force in the storage world.

Security and storage have mostly been separate divisions, and the two-pronged approach hasn’t worked. Symantec’s flagship backup application NetBackup is still doing well, and the decision to sell it as part of integrated appliances has worked out. But Backup Exec is a mess, Symantec’s storage management message is muddy, and it is also losing its iron grip on the anti-virus market.

It would help if the new CEO has experience in storage or data protection. Interim CEO Brown has that experience as a former CEO of Quantum and a Veritas board member before the merger. However, indications are that Brown will only hold the job until a permanent CEO is found. Let’s hope the search committee keeps storage in mind when screening candidates.

March 20, 2014  4:00 PM

Tintri moves into VMware’s vSphere

Dave Raffo

Tintri, which designs its VMstore storage appliances to be virtual machine-friendly, is releasing a plug-in to let customers manage VMstore inside of VMware’s vSphere.

The plug-in lets Tintri customers manage their VMstore appliances from the vSphere vCenter management tool. It makes VMstore dashboards visible from the vCenter server, where customers can also see alerts and monitoring information. They can also set snapshot, clone and replication policies in vCenter.

“The end users care about the ESX application or the virtual desktop or the SQL Server application, not so much the storage system,” said Saradhi Sreegiriraju, Tintri senior director of product management. “We’ve exposed all the information from our VMstore dashboard into vCenter. Anything you can do from the VMstore UI – snapshots, clones, replication or monitoring – you can now do from the vCenter UI.”

The Tintri vSphere Web Client Plugin will be available next week as a download from Tintri.

Tintri’s selling point is that it lets customers provision storage at the VM level instead of having to deal with the LUNs and volumes associated with traditional storage arrays. Its greater integration with VMware comes as VMware moves deeper into storage with its Virtual SAN (VSAN) software, which turns hard drives, solid-state drives and compute on VMware-connected servers into networked storage. VSAN is seen mainly as a competitor to hyper-converged storage systems such as those from Nutanix, SimpliVity, Scale Computing and Maxta, but it can also hurt VM-aware storage vendors. After all, VSAN enables companies to do many of the same things Tintri does.

Sreegiriraju said Tintri doesn’t consider VSAN a competitor because VMstore has been on the market for three years and its hardware is tuned to work with VM-aware software. He said VSAN will compete more with traditional storage systems. “VSAN is validating the architectural underpinnings that we have,” he said. “We agree with VMware that you need a system that understands VMs at the VM level rather than at the LUN level.”


March 20, 2014  9:49 AM

Data Dynamics automates file migration

Sonia Lelii

Data Dynamics Inc., a new company selling a decade-old product, has enhanced its StorageX file management software to simplify storage migration planning and to automate the mapping of metadata characteristics between source and target file servers in data migration projects.

Data Dynamics came out of stealth last September to breathe new life into the StorageX file virtualization application originally developed by NuView Systems in 2002. Brocade acquired NuView in 2006, but killed StorageX in 2010. Data Dynamics positions StorageX as a file migration tool rather than a file virtualization product.

StorageX 7.1’s new Advanced Design Mode provides an exportable grid view of a file server that allows administrators to regulate volumes on block LUNs, define the SnapMirror source and destination, adjust deduplication ratios, set quota limits, and change volume sizing. The grid looks like a spreadsheet.

“It identifies the source infrastructure in a file server,” Data Dynamics CEO Piyush Mehta said. “As you populate the source information, it automatically asks pertinent questions on the target environment such as identifying the filer or NAS device, what is the volume name and what is the volume size.”

It also automatically creates policies that help trigger the data movers. Previously, the process was done manually: IT administrators had to understand the metadata from the source and create shares and exports by hand.

“Once they created those, they have to write scripts to move the data,” Mehta said. “It’s a fully manual process. It takes hundreds of hours and the risk of errors is high. (With our module) you save deployment time.”
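To make that concrete, here is a minimal sketch of the kind of hand-rolled copy script administrators end up writing, using hypothetical share paths. A real migration would also have to handle CIFS/NFS permissions, open files and incremental re-copies, which is the work StorageX aims to automate.

    # Minimal sketch of a hand-rolled file migration script (illustrative only).
    # Paths are hypothetical; ACLs, share/export definitions and incremental
    # passes still need separate handling.
    import os
    import shutil

    SOURCE_SHARE = r"\\old-filer\projects"   # hypothetical source share
    TARGET_SHARE = r"\\new-filer\projects"   # hypothetical target share

    def copy_share(src: str, dst: str) -> None:
        """Walk the source share and copy files, preserving timestamps."""
        for root, dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            dst_dir = os.path.join(dst, rel)
            os.makedirs(dst_dir, exist_ok=True)
            for name in files:
                # copy2 keeps modification times and basic metadata only.
                shutil.copy2(os.path.join(root, name), os.path.join(dst_dir, name))

    if __name__ == "__main__":
        copy_share(SOURCE_SHARE, TARGET_SHARE)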

StorageX supports NetApp Data ONTAP, EMC VNX and EMC Isilon application programming interfaces (APIs), so data migration can be done within and across those systems. Customers can use it to migrate from a NetApp filer to an EMC filer or vice versa.

The software resides on a VMware hypervisor and uses replication agents to pull data from the source and push it to the targets. It supports both CIFS shares and NFS exports within a single console, and it moves data while it is still in use, taking an initial copy and then copying subsequent changes. It also detects NFS and CIFS access controls and security permissions, and ensures file attributes are migrated correctly.

StorageX also has been upgraded with better reporting capabilities to export, sort and filter views and reports for selected storage resources. It can monitor agent utilization and trends, oversee device exports and shares, and audit migration policies and execution.

“The enhanced reporting provides utilization and trending of the entire migration process, the overall activity taking place across devices from the source to target side,” Mehta said. “It tells you the various states of the migration policies.”


March 18, 2014  10:35 AM

What do you do with old storage systems?

Randy Kerns

During a recent explanation I was giving on the lifespan of enterprise storage systems, I received an interesting question: what happens with systems taken out of service? There is a tendency to give a flip answer to that question, and it did bring laughter. But it is a legitimate issue, and I tried to explain what usually happens.

The most common option when replacing an enterprise disk-based system used as primary storage for critical applications is to demote that system to secondary storage. Secondary storage can mean less performance-critical application data, a backup disk target, or test/development data. The timing for replacing a system varies, but in larger enterprises the system is often used as primary storage for three years and replaced after five years. The cadence for replacement is usually dictated by maintenance costs, increasing failure and service rates, and technology change.

After that “useful life” ends, what happens to the old storage systems? For systems that are purchased, a depreciation schedule is applied by the accountants, and IT does not usually expend the time and effort to challenge accounting practices. If it is a leased storage system, the system “disappears.” The “disappears” statement is one of those flip answers. Decommissioning or demoting storage is a big effort for IT. Data has to be migrated and procedures have to be changed. There is potential for big problems, because these changes introduce risk. But the storage system is taken out of service, out of the data center, and out of the building.

A leasing company may sell the system to a firm that uses it for repair parts for other companies that want to hold on to their systems longer. An organization can save money if it is willing to use out-of-date technology. However, not only will new storage systems get faster in that time, newer technology also brings savings in space, power and cooling that could make the old systems more expensive to run than new models.

A purchased system can be sent to a recycling company, which will recover components that have value and make a profit from selling the extracted elements. It’s not always clear where these recycling companies are located and how they dispose of the systems.

Another way storage systems may be disposed of is by paying a company to haul them away. That company will sell the systems by the pound to a buyer that sees value in the metal pieces: racks, doors, slides, chassis, etc.

There aren’t many other options, although a few other ideas came up during our conversation:

  • Start more computing museums. When people have old cars they love but whose useful life has ended, they put them in car museums. Why not do more of this with technology systems?
  • Give them to art schools so they can create some modern art sculptures out of them.
  • Give them to universities for educational purposes.

There are probably some other clever and funny ideas. Maybe the best solution is to invest in systems with greater longevity or with architectures where technology updates can be applied independently.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm.)


March 14, 2014  8:44 AM

Avere upgrades its FXT high performance system

Sonia Lelii

Avere Systems Inc. has added a new appliance with higher capacity and better I/O performance to its FXT 4000 series of NAS optimization systems.

The FXT 4800 is an upgrade to the company’s all-flash FXT 4500 Edge filer, using 400 GB solid-state drives (SSDs) compared to the 4500’s 200 GB SSDs. The 4800 also comes with a higher performance CPU, which the company claims improves I/O performance by 40 percent.

“The typical customer that buys the 4000 series don’t want to deal with spinning disk,” said Jeff Tabor, director of product marketing at Avere. “We are seeing environments using the 4000 have a lot of legacy NAS and they need a performance boost. Rather than add flash to their storage systems, they find it more cost effective to add a performance tier.”

Avere’s FXT Edge filer appliances use a combination of DRAM, nonvolatile random access memory, SSDs and hard disk drives to accelerate the performance of other vendors’ NAS nodes. The FXT 4000 series is designed for sequential, high-capacity workloads, while the FXT 3000 series targets random I/O performance.

Avere’s FXT filers reside between client workstations and core filers such as EMC Isilon, NetApp and Hitachi NAS to optimize NAS via a global file system.

A single FXT 4800 appliance holds twelve 400 GB SSDs, and a cluster of 50 nodes can scale up to 240 TB. Two 4800 nodes can run at 140,000 I/Os per second, while 50 nodes can run at up to 3.5 million I/Os per second.

By comparison, one FXT 4500 node holds fifteen 200 GB SSDs, or 3.0 TB per node. Fifty clustered nodes scale up to 150 TB and run at up to 2.5 million I/Os per second.
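For anyone checking the capacity math, a quick back-of-the-envelope sketch (raw SSD capacity only, before any formatting overhead) reproduces the quoted cluster figures:

    # Sanity check on the capacity figures quoted above (raw SSD capacity only).
    GB_PER_TB = 1000

    fxt_4800_node_tb = 12 * 400 / GB_PER_TB    # 12 x 400 GB SSDs = 4.8 TB per node
    fxt_4500_node_tb = 15 * 200 / GB_PER_TB    # 15 x 200 GB SSDs = 3.0 TB per node

    print(f"FXT 4800, 50-node cluster: {50 * fxt_4800_node_tb:.0f} TB")  # 240 TB
    print(f"FXT 4500, 50-node cluster: {50 * fxt_4500_node_tb:.0f} TB")  # 150 TB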

The FXT 4800 is available now.


March 13, 2014  10:15 AM

NetApp plans more cuts as market share drops

Dave Raffo

NetApp is planning to cut about 600 employees over the next year, following a 900-headcount reduction in 2013.

The vendor disclosed its plans in an SEC filing Wednesday, estimating the cuts will cost about $35 million to $45 million for employee terminations and other charges. The 600-person reduction would amount to about five percent of NetApp’s total workforce, and follows NetApp’s disappointing earnings and even more disappointing forecast given last month. Like its larger rival EMC, NetApp is feeling the sting of cautious IT spending, as well as storage buying patterns that are changing due to the cloud, flash and other new technologies.

NetApp is not alone in feeling the heat. EMC in January said it would cut its staff by around 1,000. But NetApp sales have taken a bigger hit.

According to IDC’s latest storage tracker numbers, NetApp grew revenue by 1.5 percent from the fourth quarter of 2012 to the fourth quarter of 2013. That was below the external storage industry growth of 2.4 percent. NetApp’s market share dropped from 11.6 percent to 11.5 percent – placing it third behind leader EMC and IBM. EMC grew 9.9 percent in the fourth quarter and had 32.1 percent of the market. NetApp did outperform IBM and No. 5 Hitachi Data Systems, which both declined in revenue from the previous year. No. 4 Hewlett-Packard gained 6.5 percent but remains behind NetApp with 9.6 percent of the market.

In a report on NetApp today, Wunderlich Securities analyst Kaushik Roy wrote that the vendor’s biggest challenge comes from increased competition from cloud and flash vendors.

“The biggest risk to NetApp is the new technologies that are disruptive to its existing products and the emerging storage companies that are gaining traction,” Roy wrote. “A large part of the non-mission-critical storage market is moving to the cloud. SMBs are increasingly using cloud-based compute and storage infrastructure … provided by vendors such as Amazon, Google, Microsoft and others who are using commodity hardware to build the infrastructure as a platform.”

Roy also wrote that NetApp’s investment in new technology has lagged its rivals, pointing out that NetApp does not yet have an all-flash array built from the ground up. NetApp does sell E-Series all-flash arrays for the high performance market, but its general purpose FlashRay all-flash system is not yet generally available while all other major storage vendors and a bunch of startups are selling all-flash systems.


March 12, 2014  7:39 AM

Condusiv brings central control for I/O optimization

Sonia Lelii

Condusiv Technologies recently upgraded its V-locity caching software with a central management console to manage I/O performance in physical and virtual environments from the application layer down to the storage.

V-locity takes a different approach to caching. The software is designed for Microsoft environments and resides in the physical server or virtual machine, providing intelligence to the operating system so it can handle I/O more efficiently. The idea is to solve performance bottlenecks without adding more hardware, such as solid-state drives (SSDs) or PCIe flash cards.

“There is an I/O explosion that is being created with the data explosion. We sit close to the application so that performance problems are solved right away,” said Robert Woolery, Condusiv’s chief marketing officer. “Everything from the host, the hypervisor, the network and storage get the benefit of faster I/O.”

Woolery said the Microsoft operating system handles writes inefficiently. It breaks up a file during the write process, so each piece of that file is associated with its own I/O operation, which translates into numerous I/O operations hitting the storage. V-locity uses what the company calls a behavior analytics engine to gather data on the application and operating system, which is then used to help the application optimize its I/Os.

“Based on the behavior, we tell the application a better way to optimize the I/Os. It tells the operating system not to break up a file and just send it as one I/O,” said Woolery. “The I/Os are optimized to the way the application likes its blocks sized. Some applications are optimized for different block sizes.”
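A simplified, hypothetical illustration of that write-coalescing idea, not Condusiv’s actual implementation: buffer small writes and flush them as one I/O sized to the block size the application prefers.

    # Illustrative write-coalescing sketch: many tiny writes become a few
    # block-sized I/Os. Block size and callback are assumptions for the example.
    class CoalescingWriter:
        def __init__(self, flush_callback, preferred_block_size=64 * 1024):
            self.flush_callback = flush_callback          # issues the real I/O
            self.preferred_block_size = preferred_block_size
            self.buffer = bytearray()

        def write(self, data: bytes) -> None:
            """Accumulate small writes instead of issuing one I/O per fragment."""
            self.buffer.extend(data)
            if len(self.buffer) >= self.preferred_block_size:
                self.flush()

        def flush(self) -> None:
            if self.buffer:
                self.flush_callback(bytes(self.buffer))   # one large I/O
                self.buffer.clear()

    # Example: 1,000 tiny writes are issued as a handful of block-sized I/Os.
    issued = []
    writer = CoalescingWriter(issued.append)
    for _ in range(1000):
        writer.write(b"x" * 512)
    writer.flush()
    print(len(issued), "I/Os issued instead of 1000")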

V-locity also addresses inefficient read operations. Instead of reading the same file from storage over and over, data is kept in an intelligent cache that serves up files based on usage time and frequency. The company claims this eliminates unnecessary I/O at the storage layer and improves response time by at least 50 percent.
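And a minimal sketch of a read cache that keeps files based on how recently and how often they are used, again purely illustrative rather than V-locity’s algorithm:

    # Frequency/recency-aware read cache sketch (illustrative only).
    import time

    class FrequencyAwareCache:
        def __init__(self, max_items=1024):
            self.max_items = max_items
            self.items = {}          # path -> (data, hits, last_access)

        def get(self, path, loader):
            entry = self.items.get(path)
            if entry is not None:
                data, hits, _ = entry
                self.items[path] = (data, hits + 1, time.time())
                return data                      # served from cache: no storage I/O
            data = loader(path)                  # cache miss: read from storage
            self._evict_if_needed()
            self.items[path] = (data, 1, time.time())
            return data

        def _evict_if_needed(self):
            if len(self.items) >= self.max_items:
                # Evict the entry with the lowest frequency, then least recent use.
                victim = min(self.items, key=lambda p: (self.items[p][1], self.items[p][2]))
                del self.items[victim]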

Woolery said most server-side caching does predictive analysis based on the data just received.

“They don’t know,” said Woolery. “They are guessing at it. You could be wrong and you don’t care if that involves just one I/O. But if you guess wrong, you waste a lot of resources on I/Os that you don’t need.”

The latest version of V-locity now has a central management console so the software can be deployed and managed from a single product.



March 11, 2014  11:29 AM

Nimble adds greater insight to InfoSight

Dave Raffo

Nimble Storage, which is growing at a greater rate than its larger competitors, this week enhanced its analytics program, which is considered a leader in the storage world.

Nimble added correlation capabilities to its InfoSight analytics and monitoring program. InfoSight now tracks key performance data to find potential causes of problems inside the array and over the network. The program automatically notifies customers of potential problems so they can take action before the problems deepen.

InfoSight, launched in April 2013, collects performance, capacity, data protection and system health information for proactive maintenance. Customers can access the information on their systems through an InfoSight cloud portal. It can find problems such as bad NICs and cables, make cache and CPU sizing recommendations, and give customers an idea of what type of performance they can expect from specific application workloads.

“We used to point out things like the customer has challenges around latency on the volume level. Now if you have a latency spike, we tell you why you are experiencing that spike,” said Radhika Krishnan, Nimble’s VP of solutions and alliances. “We’re getting more granular. We can make precise recommendations on how to fix the problem and give customers a way to work around it – for example, turn off the cache for a particular volume or add cache controller or capacity to improve performance.”
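As an illustration of the correlation idea, the sketch below flags which metric moves most closely with a latency spike; the metric names and sample values are assumptions for the example, not Nimble’s actual telemetry schema.

    # Illustrative correlation sketch: find the metric that tracks a latency spike.
    from statistics import correlation  # available in Python 3.10+

    latency_ms = [1.2, 1.3, 1.1, 8.7, 9.1, 1.4, 1.2]          # sample window
    metrics = {
        "cache_miss_rate": [0.05, 0.06, 0.05, 0.61, 0.65, 0.07, 0.05],
        "cpu_utilization": [0.40, 0.42, 0.41, 0.44, 0.43, 0.40, 0.41],
        "queue_depth":     [4, 4, 3, 5, 5, 4, 4],
    }

    best = max(metrics, key=lambda name: abs(correlation(latency_ms, metrics[name])))
    print(f"Latency spike correlates most strongly with: {best}")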

Ed Rollinson, head of IT for British-based marketing services firm Brightsource, said he found InfoSight a pleasant surprise after installing two Nimble Storage arrays to replicate databases for disaster recovery.

“Those reports are incredibly powerful,” he said. “We can just click a few buttons and have a report showing us all the information we need to predict where you can go in the future for capacity planning. You need to be aware that your replication data sets are getting larger, therefore, if you want to meet RPOs [recovery point objectives], you need to increase your bandwidth. When I tell my directors I want to increase bandwidth, they will ask for all sorts of facts and figures about how and why. Of course, I can work that out, but it will take time. It helps that InfoSight can produce that information for me.”

Nimble is also extending InfoSight to resellers. If customers give their resellers permission, the reseller can access their information inside Nimble’s database and forecast potential problems.


March 7, 2014  6:13 PM

Slumping Violin Memory prepares a new tune

Dave Raffo

All-flash array vendor Violin Memory recorded less revenue and a greater loss than expected last quarter. Still, its earnings report was better than the previous quarter’s.

Violin’s first quarterly earnings report as a public company last November was a train wreck. Its poor revenue and guidance surprised investors and the storage industry, causing its stock price to plummet and a large investor to call for the company to put up a “for sale” sign. The board fired CEO Don Basile less than a month later and hired Kevin DeNuccio to replace him in early February.

Violin on Thursday reported revenue of $28 million for the fourth quarter of 2013 and $108 million for the year. Its losses were $56 million for the quarter and $150 million for the year. Quarterly revenue was up 22 percent from a year ago but down from $28.3 million in the previous quarter. For the year, revenue increased 46 percent over 2012, but the quarterly and yearly losses were greater than in the previous quarter and year.

DeNuccio outlined his plans for a turnaround on Thursday’s earnings call. Those plans consist mainly of reducing expenses by selling off its PCIe flash business and cutting staff related to that business. DeNuccio has revamped the Violin management team. He brought in Eric Herzog from EMC to head marketing and business development and Tim Mitchell from Avaya to take over global field operations.

DeNuccio said Violin will have new flash hardware and software products in the next few months. “We expect to make one of the most significant product announcements in our history,” he said.

He defended the decision to sell the PCIe business launched a year ago by saying “It was clear that we grew too much, too fast. Now it’s a matter of how do we get the company into a size that is manageable, and how do we focus on an area that we are successful in?”

Violin was the all-flash array revenue leader in 2012, according to Gartner, but new entries from large players such as EMC, NetApp, Hitachi Data Systems, IBM, Dell and Hewlett-Packard changed the market in 2013.

“We have formidable competitors,” DeNuccio said. “We’re at the top of the pyramid, and we compete with the big boys. But we’re confident that our technology is unique enough and we can establish ourselves running the critical applications for our customers to allow us to compete at that level.”


March 6, 2014  12:46 PM

Cloud-to-cloud backup vendor discloses what data it can’t protect

Dave Raffo

Cloud-to-cloud backup vendor Spanning wants you to know that there is information that its Backup for Google Apps cannot protect. And neither can its competitors protect those files.

The latest version of Spanning Backup for Google Apps, launched this week, includes a status reporting feature that shows customers problems with the most recent backup. The report includes data that cannot be backed up because of limitations in the Google API, which affect files such as Google Forms and scripts.

“Customers need to trust that data will be there when they restore,” said Mat Hamlin, Spanning’s director of product management. “We’re now providing granular insight into each user’s data so administrators can understand what data has been backed up and what data has not been backed up. Third-party files, Google Forms and scripts are not available for us to back up. Customers may not be aware of that. When they come to us to back up all the data, the expectation is we will back up all that data. We want them to know what we cannot back up.”

Google Apps and Salesforce.com are the chief software-as-a-service (SaaS) applications protected by cloud-to-cloud backup vendors.

Spanning’s new report also brings other problems to customers’ attention so they can take action. It flags zero-byte files that could indicate corruption, and points out temporary problems that are likely to be resolved within two or three days.

Hamlin said the data that cannot be backed up typically makes up a small percentage of the data in Google Apps. He said Spanning is coming clean to add transparency, both for itself and for competitors. Spanning often tells customers up front about the limitations, he said, but competitors will not admit to them.

Ben Thomas, VP of security at Spanning’s chief competitor Backupify, said Backupify for Google Apps runs into the same problems. However, he said there are ways to minimize these limitations.

“We do have similar things we run into,” Thomas said. “Some cloud systems, whether it’s Google or Salesforce or other apps, may not have API calls available to pieces of data. Some API calls may be throttled, so only so many API calls per hour or per day can be made. We’ve been smart over the years about the way we manage throttling. For example, Google will throttle the amount of data per day per e-mail. The limit is 1.5 gigabytes a day now. If we’re continually hitting that limit, we scale ourselves back to meet that. And we do that for every API.”
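A hedged sketch of that kind of throttle-aware scheduling, using the 1.5 GB-per-day figure quoted above; this is illustrative only, not Backupify’s code or Google’s actual API behavior.

    # Illustrative throttle tracker: defer backups that would exceed a daily cap.
    from collections import defaultdict

    DAILY_CAP_BYTES = int(1.5 * 1024**3)       # per-mailbox daily cap cited above

    class ThrottleTracker:
        def __init__(self, cap=DAILY_CAP_BYTES):
            self.cap = cap
            self.used = defaultdict(int)        # mailbox -> bytes backed up today

        def try_backup(self, mailbox: str, item_size: int, backup_fn) -> bool:
            """Run the backup if it fits under today's cap, else defer it."""
            if self.used[mailbox] + item_size > self.cap:
                return False                    # defer to the next day's run
            backup_fn(mailbox, item_size)       # a real system would call the provider API here
            self.used[mailbox] += item_size
            return True

    # Example: a 1 GB item fits; a second 1 GB item for the same mailbox is deferred.
    tracker = ThrottleTracker()
    print(tracker.try_backup("user@example.com", 1024**3, lambda m, s: None))  # True
    print(tracker.try_backup("user@example.com", 1024**3, lambda m, s: None))  # False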

