Storage Soup


November 15, 2011  2:17 PM

Fusion-io braces for competition with bigger, faster PCIe flash device

Dave Raffo

Fusion-io has been one of the early successes in solid-state storage, turning its head start in PCIe flash cards into an IPO this year after winning large deals with customers Facebook and Apple.

Now as competitors are popping up to challenge Fusion-io, the vendor is moving to make its products bigger and faster. Today Fusion-io doubled the capacity and improved performance of its ioDrive Octal flash-based accelerator card.

The 10 TB ioDrive Octal, aimed at data warehousing and supercomputing, includes eight 1.28 TB multi-level cell (MLC) ioMemory modules in a double-wide PCIe device. Four of the cards can pack 40 TB of flash capacity into a 4U server.

The 10 TB Octal can handle more than 1.3 million IOPS with 6.7 GBps of bandwidth, according to Fusion-io. The 5.12 TB ioDrive Octal that began shipping two years ago supported 800,000 IOPS.
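
For readers checking the vendor math, here is a quick back-of-the-envelope sketch; the four-cards-per-4U figure is an inference from the 40 TB claim rather than a number Fusion-io stated:

```python
# Back-of-the-envelope check of Fusion-io's ioDrive Octal claims.
modules_per_card = 8
module_capacity_tb = 1.28

card_capacity_tb = modules_per_card * module_capacity_tb
print(card_capacity_tb)            # 10.24 TB raw, marketed as 10 TB

cards_per_4u = 40 / 10             # inferred: four double-wide cards per 4U server
iops_gain = 1_300_000 / 800_000    # ~1.6x over the 5.12 TB Octal's 800,000 IOPS
print(cards_per_4u, round(iops_gain, 2))
```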

Fusion-io founder and chief marketing officer Rick White said customer feedback indicates buyers want as much capacity as possible in the Octal product. He said the Octal is used mainly for scale-up performance, while Fusion-io’s single-card ioDrive and ioDrive Duo cards are for scale-out implementations.

White said Fusion-io’s early success comes from the vendor taking a fresh approach to driving performance and reducing latency in storage systems.

“We founded this company as a software company,” he said. “We couldn’t convince major flash memory companies to build a memory card based on flash, so we had to do it ourselves. We say it’s not about PCI Express, it’s about not thinking about this as a disk. Don’t think about it as storage, think about it as a memory tier.

“We decouple storage from capacity. The old way of scaling performance was to add spindles, then you stripe them, short stroke them, and add a layer of cache. The problem was, you had network latency. It wasn’t just about IOPS, it was how fast can I get a response? And does this play with my application? We were about decoupling performance and putting it into the server.”

He said the idea was to enable hundred-thousand-dollar JBODs to perform as well as million-dollar SANs. Much of Fusion-io’s early traction came from convincing large companies such as Facebook and Apple to adopt its approach.

Traditional storage vendors moved into flash by putting solid-state drives (SSDs) in their storage arrays, but White said “all those SSDs had to go through a RAID controller on the PCIe bus. It’s about getting rid of SAS, SATA and all the storage protocols.”

The competition is paying attention. LSI Corp., STEC, Violin Memory, Texas Memory Systems, Micron, OCZ and Virident now have PCIe flash cards similar to Fusion-io’s, and EMC is preparing to ship its server-based PCIe flash Project Lightning product.

“The industry has followed us to PCIe, they’re following us to caching software, and the next step is to lose the storage protocols and think of us as a new tier of memory,” White said.

November 14, 2011  7:51 PM

Symantec speeds up failover for Windows apps

Sonia Lelii

Symantec Corp. today launched Veritas Storage Foundation High Availability 6.0 for Windows, designed for faster failover of Windows server applications and recovery of Windows Server Hyper-V virtual machines for disaster recovery.

Veritas SFHA 6.0 consists of Veritas Storage Foundation 6.0 for Windows and Veritas Cluster Server 6.0 for Windows. Symantec claims Veritas SFHA 6.0 cuts failover time for business-critical server applications from about 30 minutes to one minute because the software now uses an Intelligent Monitoring Framework with asynchronous monitoring instead of the traditional framework’s polling. Polling typically consumes most of the failover window because an alert is raised without any immediate action being taken; with asynchronous monitoring, a failure is detected almost instantly and action follows faster.
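
The difference is easiest to see in code. The sketch below is illustrative only (the Resource class and callback names are hypothetical, not Veritas APIs), but it shows why event-driven detection collapses the window that polling leaves open:

```python
import time
from typing import Callable

class Resource:
    """Minimal stand-in for a monitored application resource."""
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self._callbacks = []

    def is_healthy(self) -> bool:
        return self.healthy

    def on_failure(self, callback: Callable[["Resource"], None]) -> None:
        # Event-driven path: register interest instead of polling.
        self._callbacks.append(callback)

    def fail(self) -> None:
        self.healthy = False
        for cb in self._callbacks:
            cb(self)   # notification fires the instant the failure occurs

def trigger_failover(resource: Resource) -> None:
    print(f"failing over {resource.name}")

def polling_monitor(resource: Resource, poll_interval: float = 60.0) -> None:
    # Polling model: a failure just after a poll sits undetected until the
    # next cycle, so worst-case detection latency is the full interval,
    # multiplied across the many resources a cluster agent watches.
    while resource.is_healthy():
        time.sleep(poll_interval)
    trigger_failover(resource)

# Asynchronous model: detection latency is just event-delivery time.
sql_service = Resource("SQLServer")
sql_service.on_failure(trigger_failover)
sql_service.fail()   # failover fires immediately; no poll cycle to wait out
```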

“The biggest chunk of time was the polling process,” said Jennifer Ellard, a senior manager in Symantec’s Storage and High Availability Group. “We changed the paradigm from polling-based to asynchronous monitoring, which allows us to take action faster. We plug into Windows so we can get the information instantaneously.”

Ellard said failover is up to 30 times faster because many of the storage reconnection steps have been automated. Previously, the storage disk group was available only to the target server, not to the standby servers. When a failover occurred, all the data in the disk group had to be imported into the new nodes and the disks had to be re-scanned. Veritas SFHA 6.0 for Windows now allows multi-node access to a shared volume, eliminating the need to import disk groups to the new systems and re-scan disks.
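
The automation claim is easier to picture as a step list. The sequences below are a plausible reconstruction from Ellard’s description, not Symantec documentation:

```python
# Pre-6.0: the disk group is visible only to the active node, so failover
# must import the group and re-scan disks before the app can restart.
legacy_steps = [
    "detect failure",
    "deport disk group from failed node",
    "import disk group on standby node",  # slow: time grows with disk count
    "re-scan disks on standby node",      # slow: time grows with disk count
    "mount volumes",
    "restart application",
]

# 6.0: standby nodes already see the shared volume, so the storage
# reconnection work drops out of the failover critical path.
shared_volume_steps = [
    "detect failure",
    "mount volumes",
    "restart application",
]

print(f"storage steps eliminated: {len(legacy_steps) - len(shared_volume_steps)}")
```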

Veritas SFHA 6.0 for Windows now supports Hyper-V virtual machines with a feature that automates the end-to-end disaster recovery process. The 6.0 version helps coordinate the process for virtual machines with Windows Server 2008 R2 Failover Clustering, automating the detection of site-wide failures and recovery steps such as storage replication control, VM site localization and DNS updates.

Ellard said 6.0 allows for disaster recovery across any distance. “Typically, you can failover up to 100 kilometers in virtual environments,” Ellard said. “We enable customers to failover across any distance. We do this by asynchronous replication in Hyper-V environments.” Synchronous replication is distance-limited because every write must be acknowledged by the remote site before it completes; asynchronous replication takes that round trip out of the write path.

Veritas Storage Foundation 6.0 for Windows also allows live migrations, such as when administrators need to move VMs from server to server along with the associated storage. SF 6.0 supports Windows Server Hyper-V live migration, which handles the actual movement of the VMs while the Symantec software handles the associated storage. Ellard described this as a “big customer request.”

Veritas SFHA 6.0 for Windows also includes Virtual Business Services (VBS) for recovery of applications across multiple business units, operating systems and storage platforms. VBS is designed to give administrators and business groups a coordinated method for automated high availability and recovery of multiple, interdependent applications, plus all the supporting physical and virtual technologies.

Veritas SFHA 6.0 will be generally available on Dec. 5.


November 11, 2011  3:36 PM

DataDirect Networks re-architects HPC storage

Dave Raffo

DataDirect Networks has boosted the performance of its top-end high-performance computing (HPC) platform.

DDN this week launched its SFA12K series, which will replace the SFA10K product that the vendor has had success selling to HPC shops.

DDN CEO Alex Bouzari said the biggest improvements over the SFA10K are the internal network inside the appliance, the storage processing that lets customers embed file systems or applications inside the appliance, and greater density.

The SFA12K has 64 GB of memory, and DDN claims it scales to 1 TBps of bandwidth across 25 arrays using InfiniBand or Fibre Channel connectivity. It also runs up to 16 virtual machines inside an array. The SFA12K holds up to 84 2.5-inch or 3.5-inch SAS or SATA disks in one array – up from 60 drives in the SFA10K – and 840 disks in a rack. The SFA12K supports up to 600 TB of eMLC solid-state drives (SSDs).

The SFA12K platform consists of three products. The SFA12K-40 is the highest-performing model, hitting 40 GBps of bandwidth and 1.4 million flash IOPS. The SFA12K-20 handles 20 GBps and 700,000 flash IOPS, according to DDN. The SFA12K-20E is available with DDN’s ExaScaler or GridScaler parallel file systems running on the SFA12K-20 array. Customers can also embed applications natively within the SFA12K-20E.
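
The 1 TBps figure follows directly from the per-array numbers. A quick sanity check, assuming the 25-array configuration uses top-end SFA12K-40 units (the announcement implies this but does not spell it out):

```python
# Sanity check on DDN's aggregate bandwidth claim for the SFA12K.
arrays = 25
per_array_gbps = 40                       # SFA12K-40 peak bandwidth

aggregate_tbps = arrays * per_array_gbps / 1000
print(aggregate_tbps)                     # 1.0 TBps, matching DDN's claim
```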

The SFA10K could deliver 800,000 flash IOPS and 15 GBps of bandwidth. Bouzari said a new architecture was needed to keep up with larger data sets, cloud computing requirements and data center power and footprint constraints.

“In HPC, people are asking for levels of performance that just cannot be achieved by following the same old approaches,” he said. “Today you have large data centers being built and types of processing requirements deployed inside data centers that cannot be met with traditional architectures.”

Bouzari said IBM and Hewlett-Packard are among the DDN partners that will resell the new platform. The SFA12K won’t go GA until the second quarter of 2012, but DDN said it has more than 50 PB of orders, including a 15 PB purchase by the Leibniz Supercomputing Center (LRZ) in Munich. LRZ already uses DDN storage for its SuperMUC HPC supercomputer. DDN said Argonne National Laboratory has also purchased SFA12K technology for its IBM BlueGene/Q-based Mira supercomputer.


November 10, 2011  1:39 PM

Virident ready to make flash move with MLC card, funding

Dave Raffo

PCIe flash startup Virident Systems released its first multi-level cell (MLC) solid-state card today and hauled in $21 million in funding.

Virident, whose main competitor is Fusion-io, picked up an influx of cash from its previous investors Globespan Capital Partners, Sequoia Capital and Artiman Ventures. Strategic investors Intel Capital, Cisco and a storage company that did not want to be identified also pitched in.

The funding round brings Virident’s total to $57 million, and it will need all the help it can get to compete with Fusion-io and other competitors coming into the market. Fusion-io raised $237 million when it went public this year and is hoping to raise another $100 million in a secondary offering of shares. LSI and STEC also sell PCIe flash cards, and EMC’s Project Lightning server-based PCIe flash product is in beta.

Virident’s new FlashMax card comes in single-level cell (SLC) and MLC versions. Virident’s first product, tachIOn, was SLC only.

The FlashMax MLC card is available in 1 TB and 1.4 TB configurations. The vendor claims the MLC card has a 19 microsecond write latency and 62 microsecond read latency, with write performance of 600 MBps and read performance of one million IOPS. The more expensive SLC version comes in 300 GB, 400 GB and 800 GB configurations, with 16 microsecond write latency and 47 microsecond read latency. It writes at 1.1 GBps and reads at 1.4 million IOPS.

Virident VP of marketing Shridar Subramanian said the MLC card costs less than half of the SLC card, with the MLC starting price at around $13,000.

Subramanian said Virident is working on OEM deals with larger server partners. He said Virident competes well against Fusion-io because of its superior IOPS performance, but admitted his competitor is better known and was first to market.

“Many customers bought us because of capacity and performance,” he said. “Where [Fusion-io is] ahead of us is that they’ve been in the market longer, and people know more about Fusion-io than Virident.”

Like Fusion-io, Virident is counting on EMC’s Project Lightning product having limited appeal beyond high-end EMC customers. “Our customers are mostly in the direct attached storage market where EMC does not have a big presence,” he said. “EMC Project Lightning is cache, it’s an augmentation of EMC’s current infrastructure.”


November 8, 2011  9:42 PM

Nexsan adds petabyte NAS

Sonia Lelii

Nexsan today launched its E5510 network-attached storage (NAS) system, which can scale to just over a petabyte. That makes it the highest-capacity member of the vendor’s E5000 Flexible Storage Platform introduced in early August. The E5000 series was Nexsan’s first home-grown NAS array, and the company plans to add iSCSI support in January.

The 3U E5510 can scale up to 1,080 TB by adding three Nexsan E60 and three E60X expansion chassis to hold a total of 360 3 TB SATA drives. The system can also be populated with 15,000 RPM SAS or single-level cell (SLC) solid-state drives (SSDs). The E5510 has two active/standby NAS controllers, so if one fails, the other picks up all operations. The E60 disk arrays have dual active/active RAID controllers. Each NAS controller has two six-core Xeon processors and a maximum of 96 GB of RAM.

The E5510 out-scales Nexsan’s E5310, which supports 720 TB with four E-Series expansion chassis holding 240 3 TB SATA drives. That system has 48 GB of memory and two quad-core Xeon processors per controller. “We are not moving up the market,” Nexsan CTO Gary Watson said. “We have a number of customers that are doing multi-petabyte deployments.”

Nexsan also added asynchronous replication support to the E5000 to go with the platform’s synchronous replication, snapshots, thin provisioning and FASTier, a high-performance SSD-based cache that boosts heavy workloads in applications such as databases and in VMware, Xen and Hyper-V environments.

The company also made enhancements to its E-Series block storage systems. It added a smaller-capacity E18X expansion shelf that holds up to 18 SATA, SAS or solid-state drives. The E-Series also now has a SAS-to-host interface option along with support for 8 Gbps Fibre Channel and 10-Gigabit iSCSI. The Nexsan E-Series is made up of the E60 storage system, containing 60 drives in a 4U form factor, the 2U E18 system with 18 drives, and the E60X expansion unit with 60 drives in a 4U.


November 7, 2011  10:42 PM

Atkinson steps down as X-IO CEO

Dave Raffo

Less than three months after changing its name, X-IO is changing its leader.

Alan Atkinson is leaving as CEO after two years on the job to become chairman, and Oak Investment Partners general partner John Beletic will take over as CEO. Oak is X-IO’s major investor. X-IO will officially announce the CEO change Tuesday.

Atkinson told StorageSoup.com today that he felt he had completed a transition of the company during his tenure as CEO, and it is time for a more operational leader. He said as chairman he will focus largely on X-IO’s international sales.

“I view this as the end of our transition,” he said. “We have a new name, new headquarters, new executive team, a more effective cost structure, and a clear focus. I feel like it makes sense for me to still be involved as chairman, and have someone come in and do operations.”

X-IO moved its headquarters from Eden Prairie, Minn., to Colorado Springs early this year. It changed its name from Xiotech in August, at the same time it changed the name of its main product from Hybrid ISE to Hyper ISE. Hyper ISE mixes solid-state and hard drives inside self-contained ISE enclosures that use auto-healing technology Xiotech acquired from Seagate in 2007.

Atkinson said X-IO’s product focus will remain on performance-driven storage.

He said Beletic will work out of Colorado, which should help improve operations. Atkinson lives in Florida and spends a lot of time on the road meeting with customers and partners.

VP of worldwide sales Mark Glasgow has also left the company, and X-IO will promote Shawn Kinnear (Western U.S.) and Dave Ornstein (Eastern U.S.) to sales VPs.

Atkinson said he doesn’t expect any other major changes in the short term.

“The idea is that we did all that work before I transitioned so John can just run things,” Atkinson said.

People in the industry may wonder if Atkinson did enough. Other tier-two storage vendors such as 3PAR, Isilon, Compellent, BlueArc and LSI Engenio have been acquired for prices ranging from hundreds of millions to billions of dollars in the time Atkinson ran X-IO. There’s no evidence of profitability for X-IO, although Atkinson said the Hybrid ISE launch was the most successful in the vendor’s history, and he claims the vendor is shooting for an IPO.

“We’re doing all things that companies do when they try to go public,” he said. “John and I share that goal.”

But analyst Arun Taneja of the Taneja Group said X-IO may be running out of time because the IPO market is cold.

“I’m surprised at the patience Oak has shown,” he said. “They don’t want to see the money they’ve put in go to waste, but they have to figure out how much more they want to put in.”

Taneja said X-IO’s technology is valuable, but he doesn’t see any obvious candidates to buy the company whole.

“If they don’t make an IPO, they probably can get scooped up in an asset sale,” he said. “If they’re waiting for big money, it could be a long time coming.”


November 7, 2011  6:51 PM

CommVault considers backup on the edge the next frontier

Dave Raffo

Among the deals last quarter that helped CommVault surpass its revenue expectations was a sale to a large organization that became the first production customer for CommVault’s mobile backup technology.

CommVault launched its Simpana Edge Protection in April, but CommVault CEO Bob Hammer said the release was limited to one large customer that implemented it on 20,000 devices. He predicts this is the beginning of big things to come from the remote backup market.

Hammer said the technology used for remote backup is trickier than backing up servers alone. The software needs to be network-aware, adjusting its behavior depending on whether it is connecting over Wi-Fi, a WAN or a LAN. It also needs to work with firewalls and the other types of security organizations put in front of edge devices.
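
As a rough illustration of what network-aware means in practice, the client might map the detected link type to a transfer policy. This is a hypothetical sketch, not CommVault’s implementation, and the policy values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TransferPolicy:
    chunk_mb: int         # smaller chunks survive flaky links better
    throttle_mbps: int    # cap so backup doesn't saturate the link (0 = none)
    dedupe_at_source: bool

# Hypothetical mapping from detected link type to backup behavior.
POLICIES = {
    "lan":  TransferPolicy(chunk_mb=64, throttle_mbps=0,  dedupe_at_source=False),
    "wifi": TransferPolicy(chunk_mb=8,  throttle_mbps=50, dedupe_at_source=True),
    "wan":  TransferPolicy(chunk_mb=4,  throttle_mbps=10, dedupe_at_source=True),
}

def policy_for(link_type: str) -> TransferPolicy:
    # Fall back to the most conservative policy on unknown links.
    return POLICIES.get(link_type, POLICIES["wan"])

print(policy_for("wifi"))
```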

“There’s a lot of complicated technology here, and we wanted to work with this one large customer and get all the issues buttoned up,” Hammer said. “We’ve enabled a user to restore information directly without burdening IT. It’s not only about backing up the device, but getting information into an archive for compliance and search.

“It’s at the beginning of its lifecycle, but it will enable us to move that technology into other devices in the edge like tablets and smartphones.”

Hammer said backing up virtual machines, archiving and cloud backup were the major drivers that helped CommVault report $97.5 million in revenue last quarter, a 30% year-over-year increase.

“We see the cloud going mainstream now for backup,” he said.

He said customers want to make backup and archiving one process, and that will be one of the focuses of the next version of Simpana.

“That’s a key fundamental technology going forward,” he said. “You have to do backup and archive as one single process. You move data one time into backup, and create one copy of the data for backup and archive. That’s not a trivial move, but we believe that’s the way it’s going to go. We think it’s the only way to manage data – you have to move it off the front end quickly and store it in a low-cost index, or what we call a strategic archive.”


November 6, 2011  6:46 PM

Think business when measuring storage efficiency

Brein Matturro

By Francesca Sales and Rachel Kossman, Assistant Site Editors

The best way to approach storage efficiency is to measure storage from a business perspective, Jon Toigo, CEO and managing principal of Toigo Partners International consultancy, told attendees last week at a Storage Decisions seminar in Newton, Mass., on building an efficient storage operation.

Toigo defined storage efficiency from engineering, business and operational perspectives, but stressed the business perspective as the most effective gauge when measuring storage efficiency. “How are our investments in storage going to increase our productivity and our competitiveness?” he asked. “That’s the bottom line. What is it doing for us? Are we just hosting a bunch of data … that doesn’t really deliver any value and recovery?”

Toigo told attendees that Gartner recently predicted that the popularity of server virtualization technology would add to the problem — increasing storage capacity needs by 600%. The way for storage pros to counteract intimidating numbers like that, he said, was to use a broad definition when considering storage efficiency — but pay close attention to specific metrics.

From the engineering standpoint, he said, efficiency is the ratio of the output to the input of any system; from the business perspective, it’s a comparison of what is produced with what can be achieved with the same consumption of resources; and from the operational perspective, efficiency is defined as the skillfulness in avoiding wasted time and effort.

It’s also important to collect baseline data about storage, he said, but that’s a challenge for storage managers because of the wide range of storage systems running in data centers. “There are lots of different configurations for storage, a lot of different storage products, a lot of different standards for storage,” Toigo said.

Toigo advises storage admins to measure efficiency using five metrics: capacity allocation, capacity utilization, storage performance (I/O throughput), data protection (downtime avoidance) and storage energy.
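
As a minimal sketch of what tracking the first two metrics might look like in practice: the formulas used here (allocated over raw, and used over allocated) are common interpretations of allocation and utilization, not definitions Toigo gave.

```python
from dataclasses import dataclass

@dataclass
class ArrayStats:
    raw_tb: float        # installed capacity
    allocated_tb: float  # capacity provisioned to hosts
    used_tb: float       # capacity actually holding data

    def allocation_pct(self) -> float:
        # Share of purchased capacity that has been carved out for hosts.
        return 100 * self.allocated_tb / self.raw_tb

    def utilization_pct(self) -> float:
        # Share of provisioned capacity actually in use; a low number
        # here is a classic sign of over-provisioning.
        return 100 * self.used_tb / self.allocated_tb

san = ArrayStats(raw_tb=300.0, allocated_tb=210.0, used_tb=95.0)
print(f"allocated: {san.allocation_pct():.0f}%, utilized: {san.utilization_pct():.0f}%")
```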

Collecting these metrics is becoming increasingly vital as the “non-trivial” challenges to storage efficiency continue to pile up. Toigo listed factors such as the neglect of data management, the narrow interpretation of storage management as capacity management, yielding to vendor “marketecture” over architecture, and storage administrators’ tendency to address problems by buying more hardware instead of addressing the source of the problem.

Instead of succumbing to these “tactics from the trenches,” Toigo advises developing a strategic storage plan, which involves a three-part, measurable process. First comes an analysis of the current state of company requirements, as well as current and future market and technology trends. Then, Toigo said, it’s important to assess the options to meet these requirements, in terms of time, budget and other business parameters. Finally, implement the plan in a manner that allows for ongoing testing. This strategy-building process, Toigo contended, will ultimately enhance storage efficiency.

Several IT administrators at the seminar said they are evaluating ways to improve their storage efficiency. Keith Price, system administrator at Johnson & Wales University in Providence, R.I., said his IT team is looking to buy a new SAN to replace a system coming off support.

“We’re just trying to figure out how to figure out what we want,” he explained. “We’re doing that by doing what Jon said, benchmarking item by item.” Price’s department manages an extensive collection of databases on its SAN – Exchange and customer relationship management (CRM), for instance – as well as file systems.

System programmers Edith Allison and Michael Orcutt make up the enterprise storage team for the University of Connecticut, and are seeking ways to improve their storage from a price/performance standpoint as the university centralizes its IT operation.

“UConn is at a crossroads,” Allison said. “We have central IT, and the university has lots of little pockets of IT, and we’ve all just come together under one IT leader for the first time.”

The university has a Fibre Channel SAN, and the team manages 300 TB of data across all the academic units of the university. “We’re looking at how we are going to become a more efficient organization, how we’re going to save money. We’re a state agency, we have no money,” Allison said, laughing. “We’re a state and a public university, so it’s a double whammy.”


November 4, 2011  6:56 PM

Retrospect backup looks ahead to new era

Dave Raffo

After going through two ownership changes since mid-2010, the Retrospect SMB backup software team re-launched this week as an independent company.

Most of the team for the newly private Retrospect Inc. goes back to when Dantz Development Corp. owned the software before EMC acquired Dantz in 2004. EMC made Retrospect part of its Insignia SMB brand but eventually lost interest in that market and sold Retrospect to Sonic Solutions in May 2010. Sonic carried Retrospect software as part of its Roxio brand until digital entertainment company Rovi acquired Sonic for $720 million last December. Rovi wasn’t interested in the backup software market, so the Retrospect team spun itself off.

Retrospect made its coming-out announcement Thursday, when it also launched Retrospect 9 for the Mac with cloud support.

Eric Ullman, one of the Retrospect founders, said there are fewer than 50 people at the new company, with about two-thirds of them developers. He said the parting with Rovi was amicable because both sides realized Retrospect was not a good fit with its new parent.

“Backup software is not even close to what Rovi’s business is, and their management quickly realized that Retrospect was not going to be a long-term product at the company,” he said.

Ullman said that throughout the ownership changes, Retrospect has kept most of the same channel partners, and he hopes to hit the ground running as an independent company.

“We’ve maintained our channel amazingly well over the years,” said Ullman, who leads Retrospect’s product development. “The feedback we’ve received has been almost 100 percent positive from our partners.

“Our sales peaked during the first year at EMC. After EMC dropped focus, sales dropped. They have not changed significantly since we left EMC, but now that we’re focused on just one thing, we expect to grow sales back up.”

Retrospect 9 for Mac supports WebDAV (Web-based Distributed Authoring and Versioning), which makes it easy to integrate with cloud backup services. Retrospect also added a 64-bit network backup client for Intel-based Macs that uses optional AES-256 encryption and lets users initiate backups and restores from their desktops.
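
Because WebDAV is ordinary HTTP underneath, any WebDAV-capable backup target is scriptable with a generic HTTP client. Here is a minimal sketch of pushing a file to a WebDAV share; the URL, credentials and file names are placeholders, and this illustrates the protocol rather than anything about Retrospect’s internals:

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical WebDAV endpoint standing in for a cloud backup target.
DAV_URL = "https://dav.example.com/backups/"
AUTH = ("backup-user", "secret")

def ensure_collection(name: str) -> None:
    # MKCOL is WebDAV's "create directory" verb.
    resp = requests.request("MKCOL", DAV_URL + name + "/", auth=AUTH)
    if resp.status_code not in (201, 405):  # 405 typically means it exists
        resp.raise_for_status()

def put_member(local_path: str, remote_name: str) -> None:
    # A WebDAV file upload is an ordinary HTTP PUT to the collection URL.
    with open(local_path, "rb") as f:
        resp = requests.put(DAV_URL + remote_name, data=f, auth=AUTH)
    resp.raise_for_status()

ensure_collection("backup-set-a")
put_member("/tmp/segment.rdb", "backup-set-a/segment.rdb")
```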

Although Retrospect is known largely for its Mac backup, Ullman said Retrospect for Windows accounts for most of its sales. It also has more competition for Windows SMB customers, going head-to-head with Symantec Corp.’s Backup Exec, CA ARCserve and Acronis software.

“We’re about 60 percent to two-thirds Windows now, but we expect that to change with the Mac upgrade,” Ullman said. “There’s really not a lot else in the Mac market.”


November 3, 2011  4:26 PM

Fusion-io CEO: EMC’s Project Lightning will cost too much

Dave Raffo

Fusion-io showed there is a hunger for server-based PCIe solid-state drive (SSD) accelerator cards by beating Wall Street estimates for sales last quarter, and CEO David Flynn said he’s not worried about EMC cutting into his success when the storage vendor comes out with its server-based flash product.

EMC has been touting its Project Lightning product since May. The product is in beta and expected to become generally available next month. But Fusion-io’s Flynn maintains putting flash in the server alongside high performance storage arrays is too expensive for widespread adoption.

“EMC is trying to make it additive to its existing business,” Flynn said during the vendor’s earnings conference call Wednesday night. “It’s relegated only to customers willing to pay an additional premium for performance on top of the premiums they already pay [for storage arrays].”

Flynn said using Fusion-io cards in servers allows customers to boost performance without using high-end storage arrays, keeping costs down. EMC obviously wants to continue selling storage arrays alongside servers with PCIe flash. Flynn questioned how many EMC storage system customers will want to pay for another flash device.

“We believe this isn’t about higher performance storage at yet a higher cost,” Flynn said. “This is about bringing cost way down. We believe customers will not pay twice, especially if the performance is solved out front. Instead, they will gravitate to lower cost solutions.”

He said Fusion-io’s IO Turbine software will let organizations get many of the storage management benefits of Project Lightning without EMC arrays.

Fusion-io, which became a public company with an IPO earlier this year, reported revenue of $74.4 million and net income of $7.2 million for last quarter. That compares with $27 million in revenue and a $5.8 million loss a year earlier, when it was a private company. Fusion-io’s forecast for last quarter was in the $60 million to $65 million range.

