Storage Soup


November 25, 2014  8:06 AM

Panasas brings high-speed parallel replication to its scale-out storage

Sonia Lelii
Panasas, Storage

Since adding support for 6 TB helium drives, hybrid NAS vendor Panasas has changed the way it protects and moves all of that extra data on its systems. The vendor introduced a new triple-parity RAID scheme months ago, and this month added a high-speed parallel replication and file transfer application that runs on its ActiveStor storage appliances.

Panasas SiteSync also runs on other Linux-based systems for disaster recovery, disk-to-disk backup and remote archive applications.

Geoffrey Noer, Panasas’ vice president of product management, said SiteSync is designed to move large amounts of unstructured data across the wire. Customers can use up to 64 compute clients, with each client replicating a portion of the data so the workload is moved in parallel. The software supports Panasas appliances and heterogeneous third-party storage systems. Panasas claims SiteSync can move data 10 times as fast as other file transfer utilities.

Noer called SiteSync “scale-out replication designed for scale-out storage. It gives us a scale-out engine for data replication. You can migrate data from third-party storage systems, from Panasas or to Panasas, or between Panasas systems. The parallel design is really the primary difference. It supports data migration and replication.”

Noer said SiteSync, which performs a full data replication followed by incremental delta updates, can be used for both local area network (LAN) and wide area network (WAN) data movement. But the short-term use case is LAN-based data transfers and replication — mostly to help customers migrate to new Panasas storage.
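
Panasas has not published SiteSync’s internals, but the general pattern the description implies (shard the file set across many worker clients, copy everything on the first pass, then copy only what has changed) can be sketched in a few lines of Python. The sharding scheme, the mtime/size delta check and the helper names below are illustrative assumptions, not the SiteSync implementation.

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

def assign_shards(paths, num_clients):
    """Deal files round-robin across worker clients (illustrative sharding)."""
    shards = [[] for _ in range(num_clients)]
    for i, path in enumerate(paths):
        shards[i % num_clients].append(path)
    return shards

def needs_copy(src, dst):
    """Delta check: copy only if the destination is missing, older or a different size."""
    if not os.path.exists(dst):
        return True
    s, d = os.stat(src), os.stat(dst)
    return s.st_mtime > d.st_mtime or s.st_size != d.st_size

def replicate_shard(shard, src_root, dst_root):
    copied = 0
    for src in shard:
        dst = os.path.join(dst_root, os.path.relpath(src, src_root))
        if needs_copy(src, dst):
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)   # full copy on the first run, deltas afterwards
            copied += 1
    return copied

def parallel_replicate(src_root, dst_root, num_clients=64):
    files = [os.path.join(r, f) for r, _, fs in os.walk(src_root) for f in fs]
    shards = assign_shards(files, num_clients)
    with ThreadPoolExecutor(max_workers=num_clients) as pool:
        results = pool.map(lambda s: replicate_shard(s, src_root, dst_root), shards)
    print(f"copied {sum(results)} of {len(files)} files")
```

A production tool would also need checksumming, retries and bandwidth throttling, which is presumably where much of the engineering effort in a product like SiteSync goes.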

Panasas announced ActiveStor 16 with PanFS 6.0 and RAID 6+ data protection in June, when it added support for HGST Ultrastar He6 6 TB helium drives. To handle the larger-capacity drives, Panasas developed per-file triple-parity RAID that it said becomes more reliable and rebuilds drives faster as the system scales out.
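
Panasas has not published the math behind its reliability claims, but the intuition for triple parity is straightforward: dual parity survives any two simultaneous drive failures in a group, while triple parity survives any three. A rough binomial model shows how much the exposure shrinks with each extra parity; the group size and per-drive failure probability below are made-up illustrative numbers, not Panasas figures.

```python
from math import comb

def p_data_loss(drives, parity, p_fail):
    """Probability that more drives fail than the parity can cover
    (independent-failure binomial model; illustrative only)."""
    return sum(comb(drives, k) * p_fail**k * (1 - p_fail)**(drives - k)
               for k in range(parity + 1, drives + 1))

# Hypothetical numbers: a 12-drive group and a 1% chance that any given
# drive fails during a rebuild window.
for parity in (2, 3):
    print(f"{parity}-parity loss probability: {p_data_loss(12, parity, 0.01):.2e}")
```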

“The short-term use case is to help customers upgrade existing file storage to PanFS 6.0 with RAID 6+ that offers customers 150 times better data protection than (other RAID),” Noer said. “This addresses a general-purpose need for accelerating LAN-based data movement and customers can use it to take advantage of RAID 6+.”

November 21, 2014  4:32 PM

Intel set to go big on 3D NAND

Dave Raffo
Intel, Samsung, Storage

Intel claims it will leapfrog early 3D NAND leader Samsung with a 10 TB 3D NAND solid-state drive (SSD) for enterprises by late 2015.

Intel laid out its 3D NAND flash plans Thursday during its investor day presentations. Rob Crooke, VP of Intel’s Non-Volatile Memory Solutions Group (NSG), said the vendor will be able to fit 1 TB of data on a two-millimeter thick NAND chip for mobile devices. Intel is working with Micron on its 3D NAND devices.

3D NAND stacks memory cells vertically as well as horizontally in a cube model. Like Samsung’s, Intel’s flash has 32 layers. But Intel claims its 3D NAND products can hold 256 Gb on a die with multi-level cell (MLC) flash – more than twice as many bits as Samsung puts on its SSDs. Bill Leszinske, director of strategic planning for Intel NSG, said the vendor will put 384 Gb on a die with its triple-level cell (TLC) 3D NAND. Samsung’s 32-layer 3D NAND holds 86 Gb per die for MLC and 128 Gb for TLC.
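
Those per-die figures translate directly into die counts for a drive such as the promised 10 TB SSD. Here is a quick back-of-the-envelope calculation; it ignores over-provisioning, ECC overhead and packaging, so the numbers are illustrative rather than Intel's.

```python
def dies_needed(drive_tb, die_gbit):
    """Roughly how many NAND dies a drive needs at a given per-die density."""
    drive_bits = drive_tb * 1e12 * 8   # decimal terabytes to bits
    die_bits = die_gbit * 1e9          # gigabits to bits
    return drive_bits / die_bits

for label, die_gbit in (("Intel MLC (256 Gb/die)", 256),
                        ("Intel TLC (384 Gb/die)", 384),
                        ("Samsung MLC (86 Gb/die)", 86)):
    print(f"{label}: ~{dies_needed(10, die_gbit):.0f} dies for a 10 TB drive")
```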

Samsung already has two generations of 3D NAND – which it calls Vertical NAND (V-NAND) – on the market. It sells V-NAND SSDs and last month added a 3.2 TB V-NAND PCIe card.

During a conference call today to discuss the technology, Intel’s Leszinske said the capacity gains delivered by 3D NAND could be as important for enterprises as its performance gains.

“There are applications where you don’t need all 11 million IOPS from these devices, but you would like to have 10-plus terabyte drives putting a tremendous amount of data close to the CPU,” he said.

Leszinske also said the cost of SSDs is decreasing to the point where racks of high-density and low-power drives make economic sense.

“Today, mostly cloud service providers are using SSDs in the enterprise,” he said. “Traditional enterprises are only now starting to use SSDs. The evolution, or revolution, has only just begun.”


November 21, 2014  4:31 PM

Like politics, storage campaigns take negative turn

Randy Kerns
Storage

Now that we have just finished an election in which special interest groups seeking to control government through their proxies spent billions of dollars on marketing campaigns making negative claims about competitors, we should think about the lesson this has taught us.

First, most agree that the majority of advertisements, mostly delivered on television, were negative attacks that contained exaggerations, half-truths, and outright lies. Most also agree that these are not what we want to hear. Now that the election is over, the people who produce them can take the millions paid by big-money special interests and crawl back under their rocks. However, we all know they will crawl out again every two years.

So what does this have to do with storage? I hear many of the approaches and tactics salespeople use to sell storage products, and I cannot help but think many of them are copying the approach used by the political campaigners. Or maybe the political campaigners are copying the storage salespeople.

Many storage sales tactics involve pointing out potential shortcomings of competing products. Indeed, many pitches even start with how bad the competition is. Eventually they get around to the product they are pushing, and how it does not have the same problems as the competitor they just bashed. The vendor is expecting the customer to buy a solution because of the competitor’s shortcomings rather than the product’s own strengths.

Some competitor bashing is hard to sit through. Much of it is inaccurate, and other parts are speculation about the future or issues that cannot be quantified. So this maps closely to the political approach of “exaggerations, half-truths, and outright lies.” The valuable attributes of the product they are representing are lost in the negative claims about the competition.

Considering how tired people get of hearing negative political campaigns and interviews, you would think sales and marketing pros would understand that the same holds true for negative storage selling. It seems they do not believe their customers are discerning enough to see through it. Maybe they believe they have to discredit the competition to have any chance of selling their product. Most IT decision makers I know are smarter than that.

It is probably time for IT decision makers to call out salespeople who take a competition-bashing approach and tell them to present only the positive aspects of their product and its economic value. This means stopping the negative sale in progress and either dismissing the salesperson or letting them restart without the negativity.

It would be nice if we could do this with the big money, paid-for political advertising as well.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


November 20, 2014  2:38 PM

Pure Storage expands maintenance coverage, hopes to keep customers forever

Dave Raffo
Pure Storage, Storage

Pure Storage today expanded its Forever Flash maintenance and warranty programs, which the all-solid-state drive (SSD) array vendor hopes will translate into Forever Pure for its customers.

There are three pieces to Pure’s new program. It guarantees flat or lower per-TB maintenance and service pricing, promises free controller upgrades with every three-year maintenance contract renewal, and guarantees it will repair or replace any hardware device – including SSDs – with like or better parts.

Under the original Forever Flash program that Pure initiated last February, customers could either get new controllers or keep their maintenance pricing flat without getting new controllers. Now they can do both. Pure’s stated goal is to relieve customers of an out-year maintenance bill when their initial contract expires and to prevent them from having to do forklift upgrades to the latest controllers.

“We mean it when we say forever,” said Jason Nadeau, Pure’s director of business value marketing. “We’re creating a perpetual storage lifecycle.”

Pure is trying to create perpetual customers. Under this plan, a customer only has to pay for more capacity after an initial controller purchase. The maintenance cost goes up as capacity increases, but Pure pledges to charge the same price per TB for maintenance.

Pure is also betting that its customers will do more than add SSDs to their arrays and upgrade controllers every few years. The vendor’s long-term growth will depend on customers buying more Pure controllers as they expand their use of flash and eventually replace hard disk drive systems.

All Pure maintenance contracts will be for one or three years under the new program. Pure upgrades its controllers every 12 to 14 months, which means a three-year maintenance plan will always include at least one free upgrade.

The all-flash market is perhaps the most competitive in storage these days, with pioneering vendors who concentrate on all-SSD systems such as Pure competing with the large legacy hard disk drive vendors.

Some large vendors expanded their maintenance programs for all-flash systems after Pure first launched Forever Flash earlier this year. For instance, EMC offers seven years of warranty price protection, flash replacement for seven years and a three-year money-back warranty for its XtremIO all-flash arrays. Hewlett-Packard offers a five-year warranty on its 3PAR StorServ SSDs.

Pure VP of products Matt Kixmoeller called for the legacy vendors to expand their warranties to all storage. “We see some competitors starting to mimic us and add all-inclusive software licensing and longer warranties, but only for their all-flash products and not for hard disk array products,” he said. “We would like to see the industry adopt these broader practices.”


November 20, 2014  2:00 PM

BMC automates storage asset management

Sonia Lelii
Storage

BMC already could map servers and networks. Now the software vendor has discovered storage.

BMC’s new Atrium Discovery for Storage automates the discovery and mapping of storage resources and their relationship with servers and the network.

Raphael Chauvel, BMC’s senior director of product management, said Atrium Discovery for Storage works with the Atrium Discovery and Dependency Mapping (ADDM) product. It also has been integrated with the Atrium Configuration Management Database (CMDB), a centralized service that pulls data from multiple resources for automation and visualization so IT can plan and assign priorities for business services.

Chauvel said BMC customers had mostly used spreadsheets to map storage resources.

“Now we have automated the discovery process,” he said. “Before we didn’t have automation and discovery [of storage] to servers and applications.”

BMC’s Atrium Discovery for Storage shows cloud management software, logical partitions and links indicating which devices are consuming storage. The application works with storage systems and software from EMC, HP, Hitachi Data Systems, IBM and NetApp via the SMI-S, Web-based Enterprise Management (WBEM) and SNMP management protocols.
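
BMC has not published the data model behind Atrium Discovery for Storage, but the core idea is a dependency graph that links arrays and LUNs to the servers and applications consuming them, so an administrator can ask what depends on a given piece of storage. The class, node names and topology below are illustrative assumptions, not the CMDB schema.

```python
from collections import defaultdict

class DependencyMap:
    """Tiny illustrative dependency graph: storage -> server -> application."""
    def __init__(self):
        self.edges = defaultdict(set)          # resource -> consumers of that resource

    def link(self, consumer, resource):
        self.edges[resource].add(consumer)

    def consumers_of(self, resource):
        """Walk upward to find everything that ultimately depends on a resource."""
        seen, stack = set(), [resource]
        while stack:
            for consumer in self.edges[stack.pop()]:
                if consumer not in seen:
                    seen.add(consumer)
                    stack.append(consumer)
        return seen

# Hypothetical discovered topology
cmdb = DependencyMap()
cmdb.link("server-db01", "array-A/LUN-7")    # LUN mapped to a server
cmdb.link("app-payroll", "server-db01")      # application running on that server
print(cmdb.consumers_of("array-A/LUN-7"))    # {'server-db01', 'app-payroll'}
```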

The software is generally available now and the company plans to add more support in future updates.

Robert Young, IDC’s research manager for enterprise system management software, cloud and virtualization system software, said the value proposition for asset discovery and dependency mapping has changed now that IT is run like a business.

“It’s not so much about keeping systems running and discovery of assets,” Young said. “It’s about how they impact business services. IT needs to show that they have an understanding of the value and ROI behind new technology adoption. This is where IT is showing its value. The cloud is enabling this kind of thinking and it’s fostering it more than ever.”

Young said BMC’s offering previously supported servers and networks but now, along with the new storage piece, it gathers data faster and can scale across larger data centers.


November 19, 2014  5:09 PM

Fusion-io founders bring ‘Woz’ to Primary Data

Dave Raffo
Storage

Newcomer Primary Data appears to be a different kind of storage company than Fusion-io, but the management team looks a lot like the server-side flash pioneer that SanDisk bought out last June.

Primary Data founders CTO David Flynn and Rick White and CEO Lance Smith all came from Fusion-io. White was a Fusion-io founder, Flynn was a founder and Fusion-io CEO, and Smith was Fusion-io COO. And today, Primary Data introduced Apple co-founder Steve Wozniak as chief scientist, a role he also held at Fusion-io.

So what’s the old gang up to at their new company? We won’t know for sure until it starts shipping GA products – probably around mid-2015 – but the startup is demonstrating its technology at DEMO Fall 2014 this week in San Jose, California. That’s where Wozniak was introduced.

Unlike PCI flash storage vendor Fusion-io, Primary Data is a software company. Flynn says his new company is virtualizing data and separating it from physical storage. “This allows us to tap into new storage infrastructures, such as flash, server-side storage, object storage, the cloud,” he said. “We’re creating a single global space where objects can live without applications knowing the difference.

“When we were at Fusion-io, EMC pointed fingers and said, ‘The great stuff is in your servers, but data is trapped on an island on your server.’ Here we are fundamentally freeing data to reside on any storage system.”

Flynn and Smith said Primary Data’s platform consists of a data hypervisor, data director, policy engine and global dataspace. According to their descriptions, the data hypervisor decouples data’s access channel from the control channel, is protocol-agnostic and allows data to be placed across third-party storage under the global dataspace. The data director is the central management system and a metadata server for data hypervisor clients. Customers use the policy engine to set parameters for automated data movement across storage tiers based on price, performance and protection needs, and the global dataspace gives administrators visibility into all storage resources in the enterprise.
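
Primary Data has described the architecture but not shown code, so the following is only a sketch of the policy-engine idea: score each available tier against an objective's price, performance and protection targets and place data on the cheapest tier that qualifies. The tier attributes, thresholds and names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_gb: float   # dollars per GB
    latency_ms: float    # typical access latency
    copies: int          # rough stand-in for protection level

def pick_tier(tiers, max_cost, max_latency, min_copies):
    """Return the cheapest tier that satisfies the objective, or None if none qualifies."""
    eligible = [t for t in tiers
                if t.cost_per_gb <= max_cost
                and t.latency_ms <= max_latency
                and t.copies >= min_copies]
    return min(eligible, key=lambda t: t.cost_per_gb) if eligible else None

# Hypothetical tiers spanning server-side flash, scale-out NAS and cloud object storage
tiers = [Tier("server-flash", 2.00, 0.1, 1),
         Tier("scale-out-nas", 0.50, 5.0, 2),
         Tier("cloud-object", 0.03, 50.0, 3)]

choice = pick_tier(tiers, max_cost=1.00, max_latency=10.0, min_copies=2)
print(choice.name)   # -> scale-out-nas
```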

Smith said Primary Data will support NetApp arrays, EMC Isilon, Nexenta NexentaStor and some third-party arrays out of the gate, and there is one early customer already running the software on Isilon storage. “This product is real,” he said.

According to Primary Data’s press release on Wozniak, he will work with the startup on technology vision and architecture. He will also “share the Primary Data story” with technology innovators around the world.


November 19, 2014  1:05 PM

Veeam CEO says the company is chasing a broader market

Andrew Burton
Storage, Veeam, VM backup

Veeam’s core market is VMware administrators, but the company is looking to appeal to a broader audience – specifically storage admins.

In an interview with TechTarget editors this week, Veeam CEO Ratmir Timashev said that integration with array snapshots was the first step in that direction. He called support for NetApp snapshots the most important update in version 8 of Veeam’s Backup & Replication data protection software. “NetApp snapshots are the best,” he said. “And NetApp customers actually use snapshots – unlike some other array vendors’ customers.”

Veeam currently supports all NetApp snapshot technologies — Snapshot, SnapMirror and SnapVault — and Timashev said the company plans to add support for additional storage vendors. He said Veeam supports HP StoreServ and StoreVirtual arrays but declined to name other array vendors on the roadmap.

“Veeam’s core customer will always be VMware administrators. Addressing their needs always comes first, but we are adding functionality that will appeal to the more conservative storage guy,” Timashev said.

Timashev said Veeam will continue to play to its specialty of virtual backup rather than compete with legacy vendors for physical server backup. However, he said that its free Endpoint Backup software may appeal to customers that are nearly 100% virtualized, but have a small number of physical servers that need protection.

Hyper-V adoption

Veeam added support for Microsoft Hyper-V in 2011, and Timashev said that 7% of the company’s revenue and 14% of new customers could be attributed to Hyper-V in 2013. This year that increased to 11% of revenue and 22% of new customers. He also noted that most Hyper-V adoption was among companies with fewer than 250 employees, but some larger customers are using the software to protect Hyper-V servers at remote offices and in test/dev environments.

Replication for DR coming to Cloud Connect

Veeam will make a cloud disaster recovery play in 2015 through its Cloud Connect software.

Cloud Connect is designed to streamline the process of sending encrypted backups offsite to a service provider’s cloud infrastructure without the use of a VPN. Cloud Connect uses WAN acceleration and relies on a cloud gateway at the service provider’s site that places compute resources next to storage in the cloud. Timashev said that Veeam will add the ability to replicate VMs for cloud DR next year.


November 13, 2014  10:58 AM

NetApp expects new products to save the day

Dave Raffo
Cloud storage, NetApp, Storage

NetApp CEO Tom Georgens says the cure for stagnant revenue is an expanded product portfolio.

NetApp’s earnings report Wednesday night showed almost no revenue growth over last year, and its forecast called for more of the same this quarter.

NetApp’s revenue of $1.54 billion for last quarter was roughly the same as a year ago, and its $929 million in product revenue decreased three percent year-over-year. OEM revenue fell 22 percent – mainly because IBM ended its N Series partnership – to $119 million, and products sold under the NetApp brand grew only two percent to $1.42 billion.

For this quarter, NetApp forecasts revenue in the range of $1.56 billion to $1.66 billion. The midpoint of that range would be slightly down year over year. NetApp executives said their revenue this quarter would suffer from the impact of unfavorable foreign exchange, particularly the Euro.

Georgens said he expects recent product rollouts and the vendor’s cloud and flash strategies to kickstart sales.

“We have dramatically expanded the NetApp portfolio at a pace unprecedented in our history,” he said. “We have a lot more to sell today than we had six months ago.”

Georgens pointed to product rollouts over the last three months and claimed, “We have never had a stronger portfolio of innovative solutions.” The new rollouts were Data Ontap 8.3, Cloud Ontap, the FlashRay all-flash system (in limited release), and StorageGRID Webscale object storage, along with the acquisition of SteelStore cloud backup from Riverbed.

NetApp Wednesday made SteelStore generally available, and will add Amazon Machine Image (AMI) options for SteelStore in the coming months.

Georgens described NetApp’s cloud strategy as weaving “disparate data elements of the hybrid cloud into a single architecture” to give customers a consistent way of managing and protecting data regardless of where they store it. “All of these innovations support our vision of a fully operationalized hybrid cloud,” he said.

He said Cloud Ontap completes that strategy. Cloud Ontap is a software-only version of Data Ontap that runs in a public cloud. “We’re not viewing it as a point product,” Georgens said of Cloud Ontap. “It’s part of a much broader strategy to ultimately create seamless data management across the entire enterprise.”

Georgens said the addition of MetroCluster software for DR is a key feature of Ontap 8.3. The MetroCluster software allows synchronous replication across data centers for high availability. Georgens said lack of that feature had held back customer adoption of Clustered Ontap. “Certain segments of our market have used that to compete effectively,” he said.

Georgens did not give an update on when FlashRay would be generally available, but laid out NetApp’s positioning for its three all-flash arrays. “EF [E Series flash array] is all about performance, FlashRay is around performance with efficiency and all-flash FAS is around network storage for business applications using premium features available in Ontap but with the speed of flash.”

When asked about rival EMC recently buying out most of Cisco’s share from their VCE joint venture, Georgens said “the underlying relationship there has been problematic for some time.” He said NetApp’s relationship with Cisco is growing stronger. NetApp’s FlexPod is a reference architecture consisting of NetApp storage and Cisco servers and networking. That’s a slightly different model than the packaged Vblocks consisting of EMC storage and Cisco gear sold by VCE.

Georgens said FlexPod shipments last quarter were up 50 percent year over year.

“We’ve seen deeper and deeper engagement with Cisco around more and more strategic matters around products, co-development and co-marketing,” Georgens said. “We’re very, very very closely aligned with Cisco’s strategic initiatives going forward.”


November 11, 2014  12:45 PM

Sanbolic stays off hyper-converged bandwagon

Dave Raffo
Storage

Not all storage vendors see hyper-convergence as the cure for all storage ills.

Sanbolic CEO Momchil Michailov considers hyper-convergence a cure for some ills, but says it falls short for many use cases. Michailov says convergence is good, but hyper-convergence not so good for enterprise storage. That’s because the hyper-converged approach is tied to one hypervisor and a totally virtualized infrastructure.

Michailov claims hyper-convergence is fine for VDI and remote offices – two popular early use cases – but will never be able to scale into an enterprise storage system.

“There’s only so much you can stuff in one server, and only so many servers you can manage before it becomes a ludicrous proposition,” he said. “Hyper-convergence is 100 percent dependent on virtual workloads and requires that customers run 100 percent virtual shops. I don’t know anybody who’s 100 percent virtual. If you’re a homogenous hypervisor shop, providing customers with a locked down single hypervisor workload isn’t going to fly.”

Sanbolic sells software that can aggregate and manage storage and data services on a SAN, solid-state drives (SSDs), hard drives or server-side flash. That’s different than hyper-converged systems, which combine storage, networking and hypervisors in one box. Most hyper-converged systems are bought with the software and hardware in one package.

Michailov said Sanbolic has customers running multiple hypervisors, and he expects a lot more to go in that direction. “The customers we go after are going to have multiple types of hypervisors, and they are not 100 percent virtualized. They have physical infrastructure as well,” he said. “How do you create orchestration across that? We work with OpenStack and CloudStack. We use a share-all architecture, and that means we can have Hyper-V, Xen, KVM and VMware accessing the exact same data and exact same storage at the same time.”

Sanbolic in May revamped and renamed its host-based storage platform to support Linux, Xen, KVM and OpenStack along with its prior support for Microsoft Windows and VMware hypervisors. It changed the product name from Melio to Sanbolic Scale-Out Platform while making it a better fit for a wider array of enterprises.

Like Melio, Sanbolic Scale-Out Platform runs on physical, virtual or cloud server instances to turn heterogeneous hardware into shared storage. The software provides storage services such as dynamic provisioning, quality of service across RAID levels, snapshots, and cloning. It supports flash and spinning disk storage.

Sanbolic automatically detects storage and servers and builds clusters that can grow to 144 CPU cores, 2.3 TB of RAM and 2,048 nodes.

“Instead of buying an EMC or NetApp array, we give you that capability on internal hard drives,” Michailov said.

Sanbolic is priced per core, beginning at $1,200 and decreasing as customers scale cores.

David Floyer, chief technology officer at Marlborough, Mass.-based research and analysis firm Wikibon, said the additional platform support is critical for Sanbolic. “There was a very big hole in their ability to go to market anywhere other than the Microsoft ecosystem,” he said. “That was very limiting. If they want to compete in this market, it is essential that they expand the platform.”

Wikibon places Sanbolic in the Server SAN category, which it defines as a combination of “compute and pooled storage resources comprising more than one storage device directly attached to multiple servers.”

Floyer said Sanbolic has a mature product and more flexibility than VMware’s Virtual SAN (VSAN) hyper-converged software. “Some might want a broader range of physical server and hypervisor SAN support [than VSAN delivers],” he said.


November 7, 2014  2:26 PM

Dot Hill makes its hybrid storage management real-time

Sonia Lelii
Dot Hill, Storage

Dot Hill Systems Corp. this week upgraded its RealStor storage management software with enhanced caching, capacity pooling and real-time tiering across solid-state drives (SSDs) and hard-disk drives.

RealStor works with Dot Hill’s AssuredSAN arrays, and is a particularly good fit for the Ultra48 AssuredSAN hybrid arrays that started shipping earlier this year. Dot Hill sells most of its storage systems through OEM partners, which include Hewlett-Packard (HP’s MSA systems come from Dot Hill).

RealStor 2.0 includes enhanced RealTier and RealCache features to manage data across SSDs and hard drives. RealTier, which previously only supported hard disks, now does autonomic data tiering across SSDs and hard disks.

“If I want better performance, then I have to pay for it, but we change it to give you better performance without the expensive costs,” said Jason Odorizzi, Dot Hill’s strategic product director.

Odorizzi said RealTier scans and moves data within a five-second window, so there is a negligible impact on the CPU while response times are also boosted. Data is intelligently placed and moved across as many as three tiers, including flash, high-speed disk and near-line, large-capacity devices.
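
Dot Hill has not detailed RealTier's internals beyond the five-second scan interval, but a generic autonomic-tiering loop looks roughly like this: count accesses per extent during a window, promote hot extents toward flash, and demote cold ones toward near-line disk. The tier names, thresholds and extent granularity below are assumptions, not RealTier's actual parameters.

```python
import time
from collections import Counter

TIERS = ["ssd", "fast-hdd", "nearline-hdd"]   # hottest to coldest (illustrative)

def retier(placement, access_counts, hot=100, cold=5):
    """Promote extents that were accessed often, demote extents that were barely touched."""
    for extent, tier in list(placement.items()):
        hits = access_counts[extent]
        idx = TIERS.index(tier)
        if hits >= hot and idx > 0:
            placement[extent] = TIERS[idx - 1]        # promote one tier up
        elif hits <= cold and idx < len(TIERS) - 1:
            placement[extent] = TIERS[idx + 1]        # demote one tier down
    access_counts.clear()                             # start a fresh window

def tiering_loop(placement, access_counts, interval=5.0):
    """Re-evaluate placement every few seconds, akin to the five-second scan cycle."""
    while True:
        time.sleep(interval)
        retier(placement, access_counts)

# Example state for one volume (hypothetical extents)
placement = {"extent-0": "fast-hdd", "extent-1": "fast-hdd"}
access_counts = Counter({"extent-0": 500, "extent-1": 2})
retier(placement, access_counts)
print(placement)   # {'extent-0': 'ssd', 'extent-1': 'nearline-hdd'}
```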

The RealCache read cache function also has been enhanced so that the RAID controller can see either 200 GB or 400 GB of extended cache. RealCache provides performance for peak workloads that exceed the controller’s memory cache capabilities. It also allows SSD cache to become an extension of the controller cache, so IOPS performance for read-centric workloads is increased. Also, one or more LUNs can be pinned to flash or any other storage tier.

“I can have peak workloads and have a much bigger cache and I can keep the entire workloads specific to reads,” Odorizzi said. “We can do tiering in parallel and if there are write workloads, we can read and write as well.”

RealStor’s RealSnap feature also has been updated with redirect-on-write snapshots that write only changes to new blocks, and can create snapshots any time. RealStor previously did copy-on-write snapshots that required the snapshots to be scheduled.

“As a result of redirect, I can get better RPO and RTO metrics,” Odorizzi said. “As I take more and more snapshots, the performance does not degrade. I can take a snapshot of a snapshot to make multiple copies and use it for different workloads and I don’t have to worry if it impacts the primary volume.”
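
The difference between the two snapshot styles is easiest to see in code. Copy-on-write copies the old block aside before overwriting it; redirect-on-write simply writes new data to a fresh block and lets the snapshot keep pointing at the old one, so taking a snapshot is little more than freezing a block map. This toy example illustrates redirect-on-write behavior generically and is not RealSnap's implementation.

```python
class RedirectOnWriteVolume:
    """Toy redirect-on-write volume: snapshots are just frozen block maps."""
    def __init__(self):
        self.blocks = {}       # physical block id -> data
        self.block_map = {}    # logical address -> physical block id
        self.next_block = 0
        self.snapshots = []

    def write(self, addr, data):
        # New data always lands in a new physical block; old blocks are left
        # untouched, so any snapshot that references them is unaffected.
        self.blocks[self.next_block] = data
        self.block_map[addr] = self.next_block
        self.next_block += 1

    def snapshot(self):
        # Capturing a snapshot only copies the (small) logical block map.
        self.snapshots.append(dict(self.block_map))
        return len(self.snapshots) - 1

    def read(self, addr, snap_id=None):
        mapping = self.block_map if snap_id is None else self.snapshots[snap_id]
        return self.blocks[mapping[addr]]

vol = RedirectOnWriteVolume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                       # redirected to a new block, old block kept
print(vol.read(0), vol.read(0, snap))    # v2 v1
```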

The RealQuick capability has been enhanced to reduce the risk of data loss during a RAID rebuild because the software restores the blocks that hold data first and the empty space on the storage media afterward.

“The innovation is we want to minimize how long data is not available,” said Odorizzi.

The RealPool function has been upgraded to also virtualize SSDs instead of only pooling HDD capacity.

Brian Garrett, vice president of Enterprise Strategy Group’s lab, said the 2.0 version of RealStor has been improved to work in real-time.

“In my opinion, this is the next generation hybrid because a lot of hybrids don’t act in real time,” Garrett said. “This is more intelligent than policy-based software. It’s more real-time than scheduled.”

