Storage Soup

November 20, 2014  2:00 PM

BMC automates storage asset management

Sonia Lelii

BMC already could map servers and networks. Now the software vendor has discovered storage.

BMC’s new Atrium Discovery for Storage automates the discovery and mapping of storage resources and their relationship with servers and the network.

Raphael Chauvel, BMC’s senior director of product management, said Atrium Discovery for Storage works with the Atrium Discovery and Dependency Mapping (ADDM) product. It is also integrated with the Atrium Configuration Management Database (CMDB), a centralized service that pulls data from multiple resources for automation and visualization so IT can plan and assign priorities for business services.

Chauvel said BMC customers had mostly used spreadsheets to map storage resources.

“Now we have automated the discovery process,” he said. “Before we didn’t have automation and discovery [of storage] to servers and applications.”

BMC’s Atrium Discovery for Storage shows cloud management software, logical partitions and links to what device is consuming storage. The application works with storage systems and software from EMC, HP, Hitachi Data Systems, IBM and NetApp via SMI-S, Web-based Enterprise Management (WBEM) and SNMP management protocols.
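The payoff of that kind of discovery is joining array-side data with host-side data into a dependency map. The sketch below is purely illustrative — the record fields, WWN format and structure are invented for this example, not BMC's schema — but it shows the basic join that replaces the spreadsheets Chauvel describes:

```python
# Hypothetical sketch: join array-reported LUNs with host-reported mounts on WWN
# to produce a server-to-storage dependency map. Field names are illustrative.

def build_dependency_map(luns, mounts):
    """Map each host to the arrays/volumes backing its mounted devices."""
    by_wwn = {lun["wwn"]: lun for lun in luns}
    deps = {}
    for m in mounts:
        lun = by_wwn.get(m["wwn"])
        if lun is None:
            continue  # host sees a device the array scan did not report
        deps.setdefault(m["host"], []).append(
            {"array": lun["array"], "volume": lun["volume"], "mount": m["path"]}
        )
    return deps

# Sample discovery output (invented identifiers).
luns = [
    {"wwn": "60:05:08", "array": "emc-vnx-01", "volume": "vol7"},
    {"wwn": "60:05:09", "array": "netapp-02", "volume": "vol2"},
]
mounts = [
    {"host": "app-server-1", "wwn": "60:05:08", "path": "/data"},
    {"host": "app-server-1", "wwn": "60:05:09", "path": "/logs"},
]
print(build_dependency_map(luns, mounts))
```

A CMDB does this correlation continuously and at scale, but the core operation is the same join on a shared identifier.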

The software is generally available now and the company plans to add more support in future updates.

Robert Young, IDC’s research manager for enterprise system management software, cloud and virtualization system software, said the value proposition for asset, discovery and dependency mapping has changed now that IT is run like a business.

“It’s not so much about keeping systems running and discovery of assets,” Young said. “It’s about how they impact business services. IT needs to show that they have an understanding of the value and ROI behind new technology adoption. This is where IT is showing its value. The cloud is enabling this kind of thinking and it’s fostering it more than ever.”

Young said BMC’s offering previously supported servers and networks but now, along with the new storage piece, it gathers data faster and can scale across larger data centers.

November 19, 2014  5:09 PM

Fusion-io founders bring ‘Woz’ to Primary Data

Dave Raffo

Newcomer Primary Data appears to be a different kind of storage company from Fusion-io, but its management team looks a lot like that of the server-side flash pioneer SanDisk acquired last June.

Primary Data founders CTO David Flynn and Rick White and CEO Lance Smith all came from Fusion-io. White and Flynn founded Fusion-io, where Flynn served as CEO and Smith as COO. And today, Primary Data introduced Apple co-founder Steve Wozniak as chief scientist, a role he also held at Fusion-io.

So what’s the old gang up to at their new company? We won’t know for sure until it starts shipping GA products – probably around mid-2015 – but the startup is demonstrating its technology at DEMO Fall 2014 this week in San Jose, California. That’s where Wozniak was introduced.

Unlike PCI flash storage vendor Fusion-io, Primary Data is a software company. Flynn says his new company is virtualizing data and separating it from physical storage. “This allows us to tap into new storage infrastructures, such as flash, server-side storage, object storage, the cloud,” he said. “We’re creating a single global space where objects can live without applications knowing the difference.

“When we were at Fusion-io, EMC pointed fingers and said, ‘The great stuff is in your servers, but data is trapped on an island on your server.’ Here we are fundamentally freeing data to reside on any storage system.”

Flynn and Smith said Primary Data’s platform consists of a data hypervisor, data director, policy engine and global dataspace. According to their descriptions, the data hypervisor decouples data’s access channel from the control channel, is protocol-agnostic and allows data to be placed across third-party storage under the global dataspace. The data director is the central management system and a metadata server for data hypervisor clients. Customers use the policy engine to set parameters for automated data movement across storage tiers based on price, performance and protection needs, and the global dataspace gives administrators visibility into all storage resources in the enterprise.
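The policy engine idea can be sketched in a few lines. Everything below — tier names, latency and cost figures, the scoring rule — is an assumption made up for illustration, not Primary Data's implementation; it just shows how price, performance and protection constraints might pick a placement:

```python
# Illustrative tiering policy engine: pick the cheapest tier that satisfies
# a volume's performance and protection floors. All values are invented.

TIERS = [
    {"name": "server-flash", "latency_ms": 0.1,  "cost_per_gb": 2.00, "replicas": 1},
    {"name": "ssd-array",    "latency_ms": 0.5,  "cost_per_gb": 1.00, "replicas": 2},
    {"name": "object-cloud", "latency_ms": 50.0, "cost_per_gb": 0.03, "replicas": 3},
]

def place(policy):
    """Return the cheapest tier meeting the policy, or None if nothing qualifies."""
    candidates = [t for t in TIERS
                  if t["latency_ms"] <= policy["max_latency_ms"]
                  and t["replicas"] >= policy["min_replicas"]]
    if not candidates:
        return None
    return min(candidates, key=lambda t: t["cost_per_gb"])["name"]

print(place({"max_latency_ms": 1.0, "min_replicas": 2}))   # ssd-array
print(place({"max_latency_ms": 60.0, "min_replicas": 1}))  # object-cloud
```

In a real system the engine would also move data when access patterns change, rather than just deciding initial placement.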

Smith said Primary Data will support NetApp arrays, EMC Isilon, Nexenta NexentaStor and some third-party arrays out of the gate, and there is one early customer already running the software on Isilon storage. “This product is real,” he said.

According to Primary Data’s press release on Wozniak, he will work with the startup on technology vision and architecture. He will also “share the Primary Data storage” with technology innovators around the world.

November 19, 2014  1:05 PM

Veeam CEO says the company is chasing a broader market

Andrew Burton
Storage, Veeam, VM backup

Veeam’s core market is VMware administrators, but the company is looking to appeal to a broader audience – specifically storage admins.

In an interview with TechTarget editors this week, Veeam CEO Ratmir Timashev said that integration with array snapshots was the first step in that direction. He called support for NetApp snapshots the most important update in version 8 of Veeam’s Backup & Replication data protection software. “NetApp snapshots are the best,” he said. “And NetApp customers actually use snapshots – unlike some other array vendors’ customers.”

Veeam currently supports all NetApp snapshot technologies (Snapshot, SnapMirror and SnapVault), and Timashev said the company plans to add support for additional storage vendors. He said Veeam supports HP StoreServ and StoreVirtual arrays but declined to name other array vendors on the roadmap.

“Veeam’s core customer will always be VMware administrators. Addressing their needs always comes first, but we are adding functionality that will appeal to the more conservative storage guy,” Timashev said.

Timashev said Veeam will continue to play to its specialty of virtual backup rather than compete with legacy vendors for physical server backup. However, he said that its free Endpoint Backup software may appeal to customers that are nearly 100% virtualized, but have a small number of physical servers that need protection.

Hyper-V adoption

Veeam added support for Microsoft Hyper-V in 2011, and Timashev said that 7% of the company’s revenue and 14% of new customers could be attributed to Hyper-V in 2013. This year that increased to 11% of revenue and 22% of new customers. He also noted that most Hyper-V adoption was among companies with fewer than 250 employees, but some larger customers are using the software to protect Hyper-V servers at remote offices and in test/dev environments.

Replication for DR coming to Cloud Connect

Veeam will make a cloud disaster recovery play in 2015 through its Cloud Connect software.

Cloud Connect is designed to streamline the process of sending encrypted backups offsite to a service provider’s cloud infrastructure without the use of a VPN. Cloud Connect uses WAN acceleration and relies on a cloud gateway at the service provider’s site that places compute resources next to storage in the cloud. Timashev said that Veeam will add the ability to replicate VMs for cloud DR next year.

November 13, 2014  10:58 AM

NetApp expects new products to save the day

Dave Raffo
Cloud storage, NetApp, Storage

NetApp CEO Tom Georgens says the cure for stagnant revenue is an expanded product portfolio.

NetApp’s earnings report Wednesday night showed almost no revenue growth over last year, and its forecast called for more of the same this quarter.

NetApp’s revenue of $1.54 billion for last quarter was roughly the same as a year ago, and its $929 million in product revenue decreased three percent year-over-year. OEM revenue fell 22 percent – mainly because IBM ended its N Series partnership – to $119 million, and products sold under the NetApp brand grew only two percent to $1.42 billion.

For this quarter, NetApp forecasts revenue in the range of $1.56 billion to $1.66 billion. The midpoint of that range would be slightly down year over year. NetApp executives said their revenue this quarter would suffer from the impact of unfavorable foreign exchange, particularly the Euro.

Georgens said he expects recent product rollouts and the vendor’s cloud and flash strategies to kickstart sales.

“We have dramatically expanded the NetApp portfolio at a pace unprecedented in our history,” he said. “We have a lot more to sell today than we had six months ago.”

Georgens pointed to product rollouts over the last three months and claimed, “We have never had a stronger portfolio of innovative solutions.” The new rollouts were Data Ontap 8.3, Cloud Ontap, the FlashRay all-flash system (limited release) and StorageGRID Webscale object storage, along with the acquisition of SteelStore cloud backup from Riverbed.

NetApp Wednesday made SteelStore generally available, and will add Amazon Machine Image (AMI) options for SteelStore in the coming months.

Georgens described NetApp’s cloud strategy as weaving “disparate data elements of the hybrid cloud into a single architecture” to give customers a consistent way of managing and protecting data regardless of where they store it. “All of these innovations support our vision of a fully operationalized hybrid cloud,” he said.

He said Cloud Ontap completes that strategy. Cloud Ontap is a software-only version of Data Ontap that runs in a public cloud. “We’re not viewing it as a point product,” Georgens said of Cloud Ontap. “It’s part of a much broader strategy to ultimately create seamless data management across the entire enterprise.”

Georgens said the addition of MetroCluster software for DR is a key feature of Ontap 8.3. The MetroCluster software allows synchronous replication between data centers for high availability. Georgens said the lack of that feature had held back customer adoption of Clustered Ontap. “Certain segments of our market have used that to compete effectively,” he said.

Georgens did not give an update on when FlashRay would be generally available, but laid out NetApp’s positioning for its three all-flash arrays. “EF [E Series flash array] is all about performance, FlashRay is around performance with efficiency and all-flash FAS is around network storage for business applications using premium features available in Ontap but with the speed of flash.”

When asked about rival EMC recently buying out most of Cisco’s share from their VCE joint venture, Georgens said “the underlying relationship there has been problematic for some time.” He said NetApp’s relationship with Cisco is growing stronger. NetApp’s FlexPod is a reference architecture consisting of NetApp storage and Cisco servers and networking. That’s a slightly different model than the packaged Vblocks consisting of EMC storage and Cisco gear sold by VCE.

Georgens said FlexPod shipments last quarter were up 50 percent year over year.

“We’ve seen deeper and deeper engagement with Cisco around more and more strategic matters around products, co-development and co-marketing,” Georgens said. “We’re very, very, very closely aligned with Cisco’s strategic initiatives going forward.”

November 11, 2014  12:45 PM

Sanbolic stays off hyper-converged bandwagon

Dave Raffo

Not all storage vendors see hyper-convergence as the cure for all storage ills.

Sanbolic CEO Momchil Michailov considers hyper-convergence a cure for some ills, but says it falls short for many use cases. Michailov says convergence is good, but hyper-convergence is not so good for enterprise storage. That’s because the hyper-converged approach is tied to one hypervisor and a totally virtualized infrastructure.

Michailov claims hyper-convergence is fine for VDI and remote offices – two popular early use cases – but will never be able to scale into an enterprise storage system.

“There’s only so much you can stuff in one server, and only so many servers you can manage before it becomes a ludicrous proposition,” he said. “Hyper-convergence is 100 percent dependent on virtual workloads and requires that customers run 100 percent virtual shops. I don’t know anybody who’s 100 percent virtual. If you’re a homogenous hypervisor shop, providing customers with a locked down single hypervisor workload isn’t going to fly.”

Sanbolic sells software that can aggregate and manage storage and data services on a SAN, solid-state drives (SSDs), hard drives or server-side flash. That’s different than hyper-converged systems, which combine storage, networking and hypervisors in one box. Most hyper-converged systems are bought with the software and hardware in one package.

Michailov said Sanbolic has customers running multiple storage hypervisors, and he expects a lot more to go in that direction. “The customers we go after are going to have multiple types of hypervisors, and they are not 100 percent virtualized. They have physical infrastructure as well,” he said. “How do you create orchestration across that? We work with OpenStack and CloudStack. We use a share-all architecture, and that means we can have Hyper-V, Xen, KVM and VMware accessing the exact same data and exact same storage at the same time.”

Sanbolic in May revamped and renamed its host-based storage platform to support Linux, Xen, KVM and OpenStack along with its prior support for Microsoft Windows and VMware hypervisors. It changed the product name from Melio to Sanbolic Scale-Out Platform while making it a better fit for a wider array of enterprises.

Like Melio, Sanbolic Scale-Out Platform runs on physical, virtual or cloud server instances to turn heterogeneous hardware into shared storage. The software provides storage services such as dynamic provisioning, quality of service across RAID levels, snapshots, and cloning. It supports flash and spinning disk storage.

Sanbolic automatically detects storage and servers and builds clusters that can grow to 144 CPU cores, 2.3 TB of RAM and 2,048 nodes.

“Instead of buying an EMC or NetApp array, we give you that capability on internal hard drives,” Michailov said.

Sanbolic is priced per core, beginning at $1,200 and decreasing as customers scale cores.

David Floyer, chief technology officer at Marlborough, Mass.-based research and analysis firm Wikibon, said the additional platform support is critical for Sanbolic. “There was a very big hole in their ability to go to market anywhere other than the Microsoft ecosystem,” he said. “That was very limiting. If they want to compete in this market, it is essential that they expand the platform.”

Wikibon places Sanbolic in the Server SAN category, which it defines as a combination of “compute and pooled storage resources comprising more than one storage device directly attached to multiple servers.”

Floyer said Sanbolic has a mature product and more flexibility than VMware’s Virtual SAN (VSAN) hyper-converged software. “Some might want a broader range of physical server and hypervisor SAN support [than VSAN delivers],” he said.

November 7, 2014  2:26 PM

Dot Hill makes its hybrid storage management real-time

Sonia Lelii
Dot Hill, Storage

Dot Hill Systems Corp. this week upgraded its RealStor storage management software with enhanced caching, capacity pooling and real-time tiering across solid-state drives (SSDs) and hard disk drives.

RealStor works with Dot Hill’s UltraSAN arrays, and is a particularly good fit for the Ultra48 AssuredSAN hybrid arrays that started shipping earlier this year. Dot Hill sells most of its storage systems through OEM partners, which include Hewlett-Packard (HP’s MSA systems come from Dot Hill).

RealStor 2.0 includes enhanced RealTier and RealCache features to manage data across SSDs and hard drives. RealTier, which previously supported only hard disks, now does autonomic data tiering across SSDs and hard disks.

“If I want better performances, then I have to pay for it, but we change it to give you better performance without the expensive costs,” said Jason Odorizzi, Dot Hill’s strategic product director.

Odorizzi said RealTier scans and moves data within a five-second window, so there is negligible impact on the CPU while response times improve. Data is intelligently placed and moved across as many as three tiers, including flash, high-speed disk, and near-line large-capacity devices.
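Access-frequency ("heat") based tiering of this kind can be sketched simply. The thresholds, extent structure and tier names below are invented for illustration — they are not RealTier's actual internals — but the promote/demote decision is the essence of the technique:

```python
# Minimal sketch of heat-based tiering: promote busy extents to flash,
# demote idle ones to near-line disk. All thresholds are invented.

def retier(extents, hot_threshold=100, cold_threshold=10):
    """Return (extent_id, from_tier, to_tier) moves based on recent I/O counts."""
    moves = []
    for ext in extents:
        if ext["io_count"] >= hot_threshold and ext["tier"] != "flash":
            moves.append((ext["id"], ext["tier"], "flash"))
        elif ext["io_count"] <= cold_threshold and ext["tier"] != "nearline":
            moves.append((ext["id"], ext["tier"], "nearline"))
    return moves

extents = [
    {"id": 1, "tier": "disk",  "io_count": 500},  # hot: promote to flash
    {"id": 2, "tier": "flash", "io_count": 3},    # cold: demote to near-line
    {"id": 3, "tier": "disk",  "io_count": 50},   # warm: stays put
]
print(retier(extents))
```

Running a scan like this every few seconds, as Dot Hill describes, is what distinguishes real-time tiering from the nightly, schedule-driven tiering of earlier hybrid systems.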

The RealCache read cache function has also been enhanced so that the RAID controller can see either 200 GB or 400 GB of extended cache. RealCache provides performance for peak workloads that exceed the controller’s memory cache capabilities. It also allows SSD cache to act as an extension of the controller cache, increasing IOPS performance for read-centric workloads. One or more LUNs can also be pinned to flash or any other storage tier.

“I can have peak workloads and have a much bigger cache and I can keep the entire workloads specific to reads,” Odorizzi said. “We can do tiering in parallel and if there are write workloads, we can read and write as well.”

RealStor’s RealSnap feature also has been updated with redirect-on-write snapshots that write only changes to new blocks, and can create snapshots any time. RealStor previously did copy-on-write snapshots that required the snapshots to be scheduled.

“As a result of redirect, I can get better RPO and RTO metrics,” Odorizzi said. “As I take more and more snapshots, the performance does not degrade. I can take a snapshot of a snapshot to make multiple copies and use it for different workloads and I don’t have to worry if it impacts the primary volume.”
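The redirect-on-write advantage Odorizzi describes comes down to block-map bookkeeping. The toy model below is not Dot Hill's implementation — the classes and block map are invented to illustrate the general technique: new writes land in fresh locations while snapshots keep pointing at the old blocks, so no copy I/O happens at write time:

```python
# Toy redirect-on-write model: a snapshot freezes the block map (pointers),
# and subsequent writes redirect to new data without copying old blocks.

class Volume:
    def __init__(self):
        self.blocks = {}      # logical block address -> data
        self.snapshots = []

    def write(self, lba, data):
        # Redirect-on-write: just update the live map; any snapshot taken
        # earlier still references the previous data, untouched.
        self.blocks[lba] = data

    def snapshot(self):
        # Snapshotting is a cheap map copy, which is why it can be taken
        # at any time and why snapshots of snapshots cost little.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")                # live volume moves on; snapshot sees "v1"
print(snap[0], vol.blocks[0])
```

Copy-on-write, by contrast, must copy the old block aside before overwriting it, which is why those snapshots penalize write performance and are typically scheduled.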

The RealQuick capability has been enhanced to reduce the risk of data loss during a RAID rebuild: the software rebuilds blocks that hold data first, before the empty space on the storage media.

“The innovation is we want to minimize how long data is not available,” said Odorizzi.

The RealPool function has been upgraded to also virtualize SSDs instead of only pooling HDD capacity.

Brian Garrett, vice president of Enterprise Strategy Group’s lab, said the 2.0 version of RealStor has been improved to work in real-time.

“In my opinion, this is the next generation hybrid because a lot of hybrids don’t act in real time,” Garrett said. “This is more intelligent than policy-based software. It’s more real-time than scheduled.”

November 6, 2014  11:14 AM

EMC can make cloud-to-cloud backup an enterprise play

Dave Raffo
EMC, Storage

Cloud-to-cloud backup has been a niche market, but that may change with EMC’s recent acquisition of Spanning Cloud Apps.

Spanning was among three small cloud startups EMC acquired Oct. 28, the day it also laid out its hybrid cloud strategy. Spanning backs up customer data in Salesforce and Google Apps to Amazon’s public cloud, so that data can be retrieved if it is lost or deleted. Next up on Spanning’s roadmap is backup for Microsoft Office 365. Spanning for 365 is set for beta late this year and general availability in 2015. EMC refers to data in those software-as-a-service (SaaS) apps as data born in the cloud.

EMC sold Spanning through its EMC Select reseller program for nearly a year before the acquisition, and little will change in Spanning’s products and strategy in the near future, according to Spanning CEO Jeff Erramouspe. All 51 Spanning employees have been offered positions with EMC. Erramouspe will report to Russ Stockdale, vice president of EMC’s Core Technologies Division. The Spanning team remains in Austin, Texas.

“We will continue to do business as Spanning for the foreseeable future,” Erramouspe said. “That’s what we like about this deal. EMC lets companies they bring on continue to be themselves.”

Erramouspe said only a small percentage of Spanning customers came through EMC Select, but EMC began generating a lot more sales as it got closer to the acquisition.

Spanning will also be sold by EMC’s Mozy cloud backup sales team. The first place you can expect to see technical integration is with Spanning management functions becoming visible inside EMC’s Data Protection Advisor software.

Spanning claims just more than 4,000 customers worldwide, mostly SMBs. EMC will look to expand to the enterprise, especially when Spanning’s backup for Office 365 becomes available. That product will appeal to companies that use EMC backup products for Exchange and other Microsoft applications running on-premises but may eventually move those apps to the Microsoft cloud.

In an interview in April, EMC backup boss Guy Churchward identified cloud-to-cloud backup as an area the vendor would move into.

“EMC realized it had a hole around the cloud in the data protection space,” Erramouspe said. “Mozy provides you with the ability to protect on-premise data by moving it into the cloud, but they didn’t have anything to address workloads moving to the clouds. If customers who protect Exchange on-premise with Avamar, NetWorker and Data Domain move those workloads to the cloud, it puts that revenue at risk. Our plan to do 365 backup plugs that hole.”

Other cloud-to-cloud backup startups include Backupify, CloudAlly and SysCloud.

“EMC’s acquisition of Spanning provides validation for what we’ve known all along, that the cloud-to-cloud backup market is on fire,” Backupify CEO Rob May maintained in an emailed statement. “As companies move massive amounts of critical data to the cloud, ensuring this data is safe and secure will remain a top priority. There’s a lot more growth and innovation in the cloud-to-cloud backup market to come.”

October 31, 2014  9:53 AM

Flash vendor Pure Storage makes OpenStack push

Dave Raffo
OpenStack, Pure Storage, Storage

In advance of the OpenStack Summit in Paris next week, flash array vendor Pure Storage is throwing its weight behind the open source cloud operating system.

Pure this week joined the OpenStack Foundation as a corporate sponsor and pledged to heavily participate in development of the OpenStack code base. The vendor has also made available an OpenStack Cinder driver and Python Automation Toolkit to help customers use OpenStack-based private and public clouds.

Cinder is OpenStack’s block storage service. Pure’s Cinder driver integrates with the Purity Operating Environment via a RESTful API. The Cinder driver supports the OpenStack Juno and Icehouse releases and Purity OE 2.4.3 and later. It calls Purity REST APIs for snapshot and volume services and volume migration.
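Wiring a backend like this into Cinder is done through a stanza in `cinder.conf`. The fragment below is a sketch based on the OpenStack-era Pure driver documentation — the placeholder values and the backend name are illustrative, so check the vendor's driver guide for the exact options in your release:

```ini
# cinder.conf sketch (illustrative): register a Pure FlashArray backend.
[DEFAULT]
enabled_backends = pure-1

[pure-1]
# iSCSI driver class shipped with the Pure Cinder driver
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
# Management IP of the FlashArray (placeholder)
san_ip = 10.0.0.50
# API token generated on the array for Cinder's use (placeholder)
pure_api_token = REPLACE_WITH_ARRAY_API_TOKEN
volume_backend_name = pure-1
```

After restarting the `cinder-volume` service, the backend shows up as a volume type target, and Cinder issues the REST calls for volume and snapshot operations described above.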

The Python Toolkit lets developers build workflows for Pure FlashArrays. They can add services such as snapshot and replication scheduling, storage monitoring and reporting to Cinder. Pure customers can download the Cinder driver and Python Toolkit from the Pure website.

Pure chief evangelist Vaughn Stewart said most of the vendor’s investment in open source communities has gone to OpenStack. Stewart said service providers are a key market for Pure because flash arrays are used for the highest tier of service that providers offer. And those providers are increasingly adopting OpenStack.

“We look at OpenStack as critical for service provider customers and enterprise customers looking to advance their private clouds,” he said. “We believe OpenStack will be the open source winner in this space.”

October 31, 2014  8:53 AM

Quantum expects video to fast-forward StorNext adoption

Dave Raffo
Quantum, Storage

Quantum turned a small profit last quarter, and its CEO said he is confident the vendor has turned the corner thanks to its StorNext technology.

Quantum’s $135.1 million in revenue was at the high end of its forecast. It was up only three percent from last year, but scale-out storage (StorNext file management and Lattus object storage software) revenue increased 58 percent year-over-year.

Revenue from DXi backup deduplication appliances grew 11 percent from last year. DXi and StorNext revenue combined for $47 million, helping the vendor ride out declines in tape revenue to record a GAAP profit of $1.2 million. That’s compared to a loss of $7.9 million in the same quarter last year.

Scale-out storage revenue was $25.5 million with disk backup at $21.2 million.

“I don’t think this is a blip for us at all,” CEO Jon Gacek said in an interview after the earnings call. “It’s a nice start for something that will get much bigger.”

Gacek said StorNext is selling well in media and entertainment markets, and the vendor has just begun to move into other high-capacity storage markets such as video surveillance and corporate video. He said Quantum closed a $2 million deal with a sports broadcaster last quarter, and had other deals worth more than $200,000 apiece.

DXi sales also benefited from the latest version of StorNext. StorNext 5 is the underlying file system for the DXi 6900 enterprise appliances, and Gacek said that has boosted performance and allows customers to use less hardware than previous DXi versions. He said revenue from the DXi 4700 midrange appliances also spiked last quarter.

Gacek forecasted revenue of $145 million to $150 million for this quarter. He expects the new video markets, plus the emergence of 4K High Definition video and eventually 8K Ultra High Definition, to provide plenty of growth in storage sales for video.

“We have good momentum,” he said. “And I don’t think it’s an anomaly, given that a lot of these markets are really nascent. I mean, the number of 4K installations is super small relative to what it’s going to be. So we have good momentum for this quarter for sure, and I think beyond that.”

October 30, 2014  10:53 AM

FalconStor prepares FreeStor for flash, clouds

Dave Raffo
FalconStor, Storage

After another quarter of declining revenue, FalconStor CEO Garry Quinn says the troubled company will focus on delivering storage software for flash array vendors and cloud service providers.

During FalconStor’s earnings call Wednesday, Quinn said the vendor will introduce software called FreeStor 10 in 2015. He said FreeStor “will be focused on this new software-defined marketplace that uses an intelligent abstraction layer to be completely hardware agnostic.”

The target markets are flash array vendors without their own software stacks and service providers looking to provide storage services to customers with legacy hardware.

Quinn said FreeStor will address data migration, continuous availability, protection and recovery, and optimized data deduplication. Its partners or their customers can turn on the services they require.

The software will be based on development work FalconStor has done under an OEM agreement with flash array vendor Violin Memory. FalconStor is developing software that provides the above services for Violin arrays.

Quinn said the software can be sold with a FalconStor-branded management console, a private-labeled interface or without any management console. He added that it will work with web browsers, tablets or smartphones. He expects to announce the new products on Feb. 19, 2015 – the 15th anniversary of FalconStor’s inception.

“We’re moving in a different direction,” he said. “The focus of the company is to move into a new market that is more attractive, less confusing and allows FalconStor technology to get its fair opportunity in the marketplace. There are many, many, many people with point solutions in the business continuity and disaster recovery space, and the idea of moving more to the platform approach opens up opportunities for FalconStor to OEM its technology in the flash market.”

FalconStor will continue to sell its existing deduplication, continuous data protection and storage management software through resellers but future product development will focus on FreeStor.

The new direction comes after what Quinn calls “a complete miss” in the Americas region last quarter. While revenues increased from the previous quarter in the rest of the world, they were down 16 percent in the Americas. Overall, FalconStor revenue of $11.2 million was down from $11.3 million in the previous quarter and $14.7 million a year ago. The company’s loss from operations was $2 million in the quarter, actually an improvement over the $2.8 million loss in the previous quarter.

Quinn has sought ways to turn FalconStor around since he became CEO in June 2013 after his predecessor, Jim McNiel, resigned. McNiel’s predecessor, ReiJane Huai, resigned as CEO in 2010 after his role in a customer bribery scandal became known. Huai committed suicide in 2011.

Quinn considered selling FalconStor but could not find a suitable buyer so he is changing its market focus.

FalconStor’s business issues may not be completely behind it. The company received a letter from the U.S. Securities and Exchange Commission (SEC) in September asking if it has done business in Cuba, Sudan or Syria – countries the United States has identified as state sponsors of terrorism – through its partner Hitachi Data Systems (HDS).

On the earnings call, Quinn said FalconStor’s agreements with HDS and all other partners include provisions requiring that they conform to U.S. laws.
