Storage Soup


May 20, 2015  8:00 AM

Barrall spins out Drobo, keeps Connected Data

Dave Raffo
file sharing

Drobo is under new ownership again.

Founder Geoff Barrall is spinning low-end NAS vendor Drobo out of Connected Data, marking the third major management shift in Drobo’s 10-year history. A group headed by former BlueFin Technologies CEO Mihir Shah will buy Drobo, three years after Barrall bought back the company he started in 2005 and left in 2009.

After leaving Drobo following a dispute with investors about the startup’s strategy, Barrall and other Drobo executives started file sharing vendor Connected Data in late 2011. Barrall then acquired Drobo in 2013, merging it with Connected Data.

Barrall remains CEO of Connected Data.

Shah had been CEO of IT service provider and consultant BlueFin since last August. He previously served as managing director of corporate development and strategy at Fibre Channel switch vendor Brocade from 2010 to 2014. His background is in finance and mergers and acquisitions. He also worked in corporate development at IBM, sat on the board of InMage Systems and worked for several venture capital firms.

Barrall said he thought Connected Data and Drobo could fit under one roof when he re-acquired Drobo, but he discovered they are best run separately. He will remain on Drobo’s board.

“It became apparent they really are two different companies,” he said. “They have two different manifest destinies, two different brands and two different markets. So we’re splitting into two companies.”

Barrall said he concentrated on cutting costs at Drobo while adding products such as the B1200i hybrid storage array. He said Drobo has become profitable, while Connected Data still operates at a loss but is off to a good start with its Transporter enterprise replication product.

“Expenses were very high when we acquired Drobo,” he said. “We got the operating costs of the business in line.”

He said Connected Data has about 40 employees and Drobo has 24, with few working across both. New Drobo CEO Shah said he expects to add to Drobo’s head count, especially in marketing.

Shah said he is impressed that Drobo’s storage can be used by non-technical people, and plans to add cloud integration and expand into use cases such as military operations, video surveillance and restaurant franchises.

“Our goal is to grow it into the number one or number two market position,” Shah said. “We think we can double the business in the next two-and-a-half years.”

Shah is the second former Brocade executive to become Drobo CEO. Tom Buiocchi, who replaced Barrall at Drobo in 2009, had been Brocade’s VP of marketing.

May 15, 2015  3:01 PM

Axcient adds appliance-less recovery to its cloud services

Sonia Lelii

Cloud provider Axcient launched an appliance-less, direct-to-cloud recovery service aimed at small and medium-sized businesses and remote-office data protection.

The company has been selling a physical appliance for several years, and last year it introduced a virtual appliance to replicate data and applications to the cloud. This appliance-less recovery service is focused on environments where an “appliance on-premise is overkill,” said Todd Scallan, Axcient’s vice president of products and engineering.

Scallan said customers access the service by downloading an agent from the Axcient website. The service offers image-level replication to the cloud, recovery and server fail over, file-level recovery, full-image restore and bare metal recovery. If data needs to be recovered, the point-in-time snapshot is selected and downloaded. Axcient can ship a thumb drive or 2 TB or 3 TB drives if a new image is required.

“We’ve always had an appliance for on-site, whether it is virtual or physical,” said Scallan. “This is a new offering where neither is required. Data is encrypted and you get incremental change-block recovery capability the same as the appliance-based cloud. There is a cloud-based user interface and you see all the point-in-time snapshot replicas to the other site.”

Scallan said organizations that have business critical servers want appliances for rapid recovery rather than recovery over the Internet. Those organizations that require a recovery service without an on-premise appliance “are only limited by the bandwidth they have,” Scallan added.

 


May 15, 2015  1:58 PM

Symantec sets January date for Veritas spin-out

Dave Raffo

Symantec expects to officially complete its Veritas spin-out Jan. 2, 2016, with a goal of operating as two separate companies by Oct. 3, 2015.

Symantec executives gave an update on the spinoff Thursday during the company’s quarterly earnings call. Symantec said last October that it would spin off its data protection and data management products into a separate company, and said last January that the company would be called Veritas. Security giant Symantec bought the original Veritas for $13.5 billion 10 years ago.

Despite published reports that Symantec is looking to sell Veritas to another large technology company, CEO Michael Brown and CFO Thomas Seifert said the breakup is proceeding as planned.

“We are on track to separate Veritas as a standalone company on January 2, 2016,” Brown said. “Operationally, we’ll be two separate companies on October 3, 2015.”

He said Veritas product revenue grew four percent during the fiscal year, which ended April 30. Revenue increased six percent last quarter year-over-year. Overall, Symantec’s revenue of $1.46 billion last quarter was down three percent from the previous year and its full year revenue dropped seven percent.

NetBackup drove the Veritas growth, increasing revenue 88 percent last quarter and 46 percent for the year, mainly on the strength of NetBackup integrated appliances for enterprise backup.

Backup Exec and storage management didn’t fare so well, but Symantec didn’t give much detail on those products. Brown said Backup Exec remains a “drag on the business” after several years of problems with the product, but he hopes recently released Backup Exec 15 will rally sales of the SMB backup app.

Seifert said he expects Veritas revenue to grow 27 percent to 29 percent over the next year.


May 13, 2015  10:27 AM

Veeam wins ruling that guts Symantec lawsuit

Dave Raffo
Storage

The U.S. Patent & Trademark Office (USPTO) has invalidated the final four patents in Symantec’s infringement lawsuit against Veeam Software, making it difficult for Symantec to win its suit against the smaller vendor.

The USPTO patent trial and appeal board invalidated the first three patents involved in the suit in February, and has now ruled in Veeam’s favor on all seven disputed patents.

“The case is essentially over,” said Doug Hazelman, Veeam’s senior director of product strategy.

Veeam filed a joint status report with the U.S. District Court for the Northern District of California last Friday advising that all asserted claims are invalid. Symantec can appeal the USPTO Patent Trial and Appeal Board’s ruling, but a company spokesman said no decision has been made on whether to proceed.

“Symantec is aware of the Patent Board rulings.  We will continue to consider all of our legal options to protect our intellectual property,” Symantec spokesman Kurt Mumma said via email.

Symantec filed two lawsuits in 2012 in the Northern California court claiming that Veeam’s software infringed on seven Symantec storage, restore and backup patents. While preparing its court defense, Veeam also challenged the validity of the patents. The USPTO agreed with Veeam that the patents were obvious or anticipated by prior art, which means the technology involved was known or used before a patent was obtained.

Hazelman said while the lawsuits did not generate a great deal of attention, “some of our largest accounts were paying attention” and losing the lawsuit could have harmed its business.

Veeam said its bookings grew 22 percent in the first quarter of 2015 over last year, adding 10,800 paid customers in the quarter. For 2014, Veeam said its revenue grew 40 percent year over year to $389 million.


May 12, 2015  11:02 AM

Nutanix: Try our software for free

Dave Raffo
Nutanix, Storage

Perhaps looking to head off software-only hyper-convergence competitors down the road, Nutanix will make a free version of its software available beginning next month.

Nutanix will launch a public beta of its new Community Edition software June 8 during its .Next user conference. Nutanix has always maintained that all the value of its technology lies in the software, but it makes that software available only on appliances. Nutanix uses Supermicro hardware for its NX appliances. Dell also sells Nutanix software on PowerEdge servers through an OEM deal.

The Community Edition can run on x86 servers from Dell, Hewlett-Packard, Cisco, Supermicro and other hardware vendors. It can be deployed on one, three or four nodes. Nutanix will issue minimum hardware requirements to run the software. The Community Edition will run on all-flash hardware, or in a hybrid configuration with SSDs and hard disk drives.

Nutanix requires a minimum of a three-node configuration for its paid product, but will allow the free edition to run on one node for testing.

The Community Edition only runs with the KVM hypervisor, unlike the NX appliances that also support VMware ESX and Microsoft Hyper-V. If users like the free software, they cannot upgrade it to a licensed version of the product. They must buy a new appliance from Nutanix or Dell. There will be no official support for the free product from Nutanix, but users can seek help through the Nutanix NEXT online community.

However, the Community Edition has all the features of a paid Nutanix appliance.

“It’s our entire feature set. Customers are empowered to take full advantage, there are no limitations to applications or workloads they can run,” said Greg Smith, Nutanix senior director of product and technical marketing.

Smith said he expects the free edition to be popular among application developers and prospective channel partners who want to test the technology before committing to the paid version.

While hyper-converged pioneers Nutanix and SimpliVity bundle their software on appliances, others sell software that can be packaged on commodity hardware by customers or channel partners. VMware is the major software-only rival to Nutanix with its Virtual SAN (VSAN) that is sold by major vendors through an EVO:RAIL OEM program. Maxta also sells hyper-converged software.

Still, Smith said the vendor is sticking with the appliance model for its commercial products for now. “Right now, [a software-only paid version] is not in our plan,” he said. “Customers tell us they like the convenience of appliances. Our efforts are devoted to making our appliances and Dell appliances as easy to use as possible.”


May 11, 2015  3:56 PM

Quantum expects to scale StorNext products, sales this year

Dave Raffo
Quantum, Storage, StorNext

Quantum’s forecast for the next year includes more StorNext and scale-out archiving products, sales and vertical markets.

The sales bump has already started. Quantum last week reported that its scale-out storage revenue – which includes StorNext – increased 116 percent to $31.7 million last quarter and grew 74 percent to $102.4 million for the year. That drove overall Quantum revenue growth of 15 percent for the quarter, while total revenue for the full fiscal 2015 year was flat with the previous year.

During the vendor’s earnings call last week, CEO Jon Gacek said more StorNext products are coming soon.

“We will further expand our portfolio of full StorNext solutions with a goal of enhancing the customer experience,” Gacek said. “We will also continue to focus on increasing our StorNext 5 appliance footprint with large install base customers by employing more disk, object, tape and cloud storage.”

The product expansion will come on top of the Artico NAS and larger capacity Lattus nodes that Quantum rolled out last month.

Gacek said he expects a surge in Quantum’s video surveillance customer base over the next year. Most of StorNext’s revenue today comes from the media and entertainment industry, but Gacek sees surveillance as ripe for growth because of requirements for long-term retention and larger capacity video formats.

“Surveillance has generally been a low-cost NAS market for storage,” Gacek said. “Surveillance cameras didn’t have a lot of resolution and there was a short retention period. Now the cameras are going high definition, there’s a lot more data, people are doing analytics on it, and the retention periods are enormous.”

He said close to an exabyte of data is captured in the world every day. Besides StorNext, Gacek sees Quantum’s Lattus object storage and even its core tape business playing big roles in the surveillance market. Quantum is also going after network forensics, government intelligence and high performance computing with its scale out platforms. Data for all of those markets must be ingested fast, retained indefinitely and accessed whenever needed.

Quantum forecast a 50 percent increase in scale-out revenue this quarter over the same quarter last year. For the full year, its guidance also calls for a 50 percent bump in scale-out storage revenue.

Quantum’s DXi deduplication and disk backup platform increased 30 percent year over year to $25.2 million last quarter and increased 120 percent to $88.2 million for the year. StorNext and DXi helped Quantum overcome a decrease in tape sales to hold revenue at the same level as 2014. For the fiscal year, Quantum reported a net income of $16.8 million compared to a loss of $21.5 million the previous year.

“Fiscal 2015 was a key turning point for Quantum,” Gacek said.


May 5, 2015  6:56 PM

Nimble gets SAP certification, offers SmartStack for HANA

Carol Sliwa
Storage

Most of the storage news this week is happening at EMC World in Las Vegas, but there’s also a bit of activity at the SAP Sapphire Now conference in Orlando, Florida.

Nimble Storage announced that SAP certified its Adaptive Flash CS500 and CS700 arrays for the SAP HANA in-memory database management system. The San Jose, California, storage startup sells hybrid arrays that combine solid-state and hard-disk drives and offer features such as performance monitoring, scale-out clustering and triple-parity RAID.

Nimble’s SAP certification opened the door for the company to participate in the HANA tailored data center integration (TDI) program, which lets customers use their existing hardware and infrastructure components for HANA deployments. Nimble developed a SmartStack integrated infrastructure offering built on Cisco’s Unified Computing System (UCS), tested and validated to work with SAP HANA.

“HANA is a pretty complex workload, and figuring out the sizing and the deployment guidelines can be pretty challenging,” said Radhika Krishnan, vice president of product marketing and alliances at Nimble. “What we’ve done working with Cisco is provide a reference architecture that outlines all the best practices as well as the deployment specs, the guidelines, that customers require.”

Cisco’s “build and price” site for SAP’s Business Suite for HANA (S/4HANA) lists Nimble as well as EMC, IBM and NetApp as recommended storage configurations for small, medium and large S/4HANA production environments.

“That’s essentially intended to help customers accelerate the pace at which they can get their HANA environments provisioned,” said Krishnan. “And it’s also to eliminate a part of the risk involved in the process. Oftentimes these are projects that take multiple months, and we’re trying to accelerate that whole cycle by providing off-the-shelf guidelines and reference architectures.”

In order to get the SAP certification, Nimble had to run tests to demonstrate that its systems had the performance and reliability characteristics to host mission-critical tier 1 workloads, according to Krishnan. She said that Nimble started the certification process last October.

Krishnan claimed one advantage of the Nimble-Cisco combination with SAP HANA is the ability to accommodate growth over time in a “safe, gradual manner without any kind of disruption to the application,” as well as lower power and cooling requirements than some of the other options on Cisco’s recommended list. She estimated that Nimble has several dozen customers using SAP with its storage systems.


May 5, 2015  11:19 AM

EMC opens up ViPR, hopes developers take to it

Dave Raffo
Storage

LAS VEGAS — You don’t usually associate EMC with open or free, but at EMC World today the vendor said it is making ViPR an open-source project and is offering a free non-production version of its ScaleIO storage software.

Project CoprHD will make the storage automation and control functionality code for ViPR Controller open for community-driven development. EMC plans to make CoprHD available on GitHub in June, licensed under the Mozilla Public License 2.0 (MPL 2.0).

It will enable customers, partners, developers and other storage vendors to contribute new services and applications for ViPR. ViPR automates storage management and can virtualize EMC and third-party storage into one pool. EMC would like to draw more developers into working on ViPR and hopes the OpenStack community will embrace it.

Free ScaleIO is an unlimited capacity version of EMC’s shared block storage software available for non-production use. EMC also launched the ScaleIO Product Community on the EMC Community Network (ECN). Customers, partners and developers can download EMC ScaleIO software from ECN. EMC’s goal is to tempt customers with the free version, and convince them to upgrade to a paid production license.

Skeptics might say EMC is opening ViPR because it can’t get third parties to develop to it, but EMC president of products Jeremy Burton said it is part of a new strategy.

Burton pointed out that EMC’s acquisition of Cloudscaling last year gave it an OpenStack-powered cloud technology and the EMC Pivotal group’s Cloud Foundry service is community based.

“This is a big philosophical shift inside EMC,” Burton said. “I don’t think we could have done this a few years ago. We’re making our customers part of our development process.”

Burton said he would like to see ViPR embraced by the OpenStack community.

“We would rather contribute to existing OpenStack projects than stand alone,” he said. “If we could move ViPR to be more part of OpenStack, that would be the preferred route.”

C.J. Desai, president of EMC’s emerging technologies division, said he expects more EMC software to be made available for developers.

“This is the beginning, it’s step number one,” he said.


April 29, 2015  3:59 PM

Kidd’s retirement leaves NetApp CTO-less

Dave Raffo
NetApp, Storage

Jay Kidd is retiring as NetApp CTO, although not necessarily ending his career.

In a blog post today, Kidd said he is leaving NetApp after 10 years. Kidd wrote that he is leaving corporate life and “will shift my time to pursue more personal interests and delve more deeply into the areas of advising and investing.” That hints he might become involved with startup companies.

NetApp will not replace Kidd as CTO, and it sounds like the vendor will take a CTO by committee approach. Kidd wrote that the job has likely become too big for one person.

“The role of the CTO in a company the size of NetApp is too broad to be done by a single individual, and a ‘CTO Community’ evolves which includes people both in the CTO office as well as in other parts of the organization,” he wrote. “It includes product architects, technology researchers, technical community leaders, market and industry researchers, technical spokespeople and a host of other disciplines.”

He added that he could not “imagine a better place to work” than NetApp and will always be an advocate for the company. Before joining NetApp in 2005 as SVP of emerging products, Kidd spent six years at Fibre Channel switch vendor Brocade as VP of product management and CTO.

Kidd’s departure comes with NetApp in a long sales slump, with a string of year-over-year revenue declines. The vendor has laid off more than 1,000 workers since 2013. Financial analyst firm Piper downgraded NetApp’s stock last month, claiming flash storage and cloud storage providers are cutting into its sales.


April 28, 2015  10:14 AM

HDS extends VSP virtualized storage, moves into hyper-convergence

Dave Raffo
Storage

LAS VEGAS — Hitachi Data Systems opened its HDS Connect conference today with smaller versions of its flagship Virtual Storage Platform (VSP) storage array and two hyper-converged systems. The hyper-converged systems include one based on HDS technology and the other on VMware’s EVO:RAIL.

HDS is extending the VSP family it launched a year ago downward with smaller, dual-controller arrays. The first VSP – the G1000 – can support 16 controllers and scale to 4.5 PB in a 10U rack. It continued HDS’ tradition of high-end enterprise arrays.

The new arrays include a 3U VSP G200 and 5U VSP G400, G600 and G800 models. They range in capacity from 1 PB on the G200 to 5.7 PB on the G800.

Like the G1000, the new systems can use Hitachi’s proprietary flash module drives (FMDs), which come in 1.6 TB and 3.2 TB capacities. The G200 supports 264 FMDs, the G400 480, the G600 720 and the G800 1,440. The G1000 holds 578 FMDs.

The new VSP arrays use the same Hitachi Storage Virtualization Operating System (SVOS) as the G1000. SVOS allows other vendors’ storage to be virtualized behind the VSP arrays, handles management features such as non-disruptive data migration and includes a native global active device feature that provides active-active clusters across data centers without requiring a separate storage appliance.

The VSP arrays support Fibre Channel and iSCSI storage natively, and connect to Hitachi NAS file arrays for unified storage.

While the G1000 is aimed at large enterprises, HDS will market the smaller systems as ways for midmarket companies to consolidate virtual server block and file workloads.

“The VSP high end was always unreachable for some customers,” said Bob Madaio, HDS senior director of product marketing. “You have to understand the needs of the application and not assume it’s one-size fits all.”

Madaio said while the software is the same across all VSP arrays, the new systems use Intel-based controllers instead of an HDS proprietary architecture.

The G200, G400 and G600 arrays are available today with the G800 expected later this year.

The hyper-converged systems are part of Hitachi’s UCP convergence family. Madaio described the new Hitachi Scale-Out Platform (HSP) as “hyper-convergence for the analytics world.” HDS positions HSP as a scale-out platform for Hadoop environments, allowing customers to analyze data in place so they don’t have to move large data sets to perform analytics functions. It uses Hitachi servers along with KVM hypervisors and HDS storage management software.

HDS also launched the Hitachi UCP 1000 for VMware EVO:RAIL that uses VMware’s Virtual SAN (VSAN) hyper-converged software. HDS said it would be an EVO:RAIL partner last year, but the UCP 1000 is its first product based on the VMware partnership. HDS sees the UCP 1000 and the new UCP 2000 – which combines Hitachi servers with the VSP G200 – as SMB or remote office storage.

We’ll have more on this and other news from HDS Connect over the next several days on SearchVirtualStorage.com.

