September 27, 2013 3:29 PM
Posted by: Dave Raffo
flash storage array, violin memory
Violin Memory finished its first day as a public company today, $146 million richer and with a day of disappointing trading behind it.
Violin completed its initial public offering Thursday night, pricing shares on the New York Stock Exchange at $9 – the midway point of its target range – to raise $162 million. By the close of trading today, its share price was down to $7.11, a 21% drop.
Violin CEO Don Basile said one thing didn’t change for the flash array vendor today.
“Our strategy doesn’t change,” he said. “Our one true competitor is EMC, and we focus every day on beating EMC.”
What does change is the way people look at Violin now. “It’s a great milestone,” he said. “But once you’re a public company, you have a quarterly scorecard and you have to execute every quarter.”
That scorecard was mixed for Violin as a private company. According to its IPO filing, Violin’s revenues steadily rose, from $11.4 million in 2010 to $73.8 million in 2012 and $51.3 million in the first half of 2013. But losses also mounted – Violin finished 2012 $109.1 million in the red and lost $59.2 million for the first six months of this year.
Basile said Violin will now moderate the growth of its sales and marketing teams to try to shrink those losses.
“We will invest in sales and marketing at the same pace or slightly above pace at what we have done,” he said. “We quadrupled the size of the company the last two years. We don’t have to expand as rapidly.”
As for the drop in share price on the first day of trading, Basile said “we’re focused on the long-term strategy, not what happens on a given day.”
While the strategy focuses on EMC, the competition is much broader than that. Violin was the market leader in flash array sales last year, according to Gartner, but EMC and the other leading storage vendors did not have all-flash array platforms then. Now just about all of them do.
“I attribute a lot of Violin’s success to being early to market,” said Mark Welke, NetApp senior director of product marketing. “They started in 2009 developing a flash array. We started our flash strategy around the same time and went for flash cache initially.”
September 26, 2013 9:19 AM
Posted by: Dave Raffo
flash array, whiptail, solid state storage
Don’t expect Brocade to follow its Fibre Channel SAN switch rival Cisco into the world of flash storage arrays.
During Brocade’s Analyst Day Wednesday, Brocade CEO Lloyd Carney said Cisco’s acquisition of all-flash array startup Whiptail was a blunder that should help Brocade increase its lead in FC switching.
“Cisco went off and bought a storage company,” Carney said. “It’s causing people in the storage business to say, ‘Should I partner with them as much as I did in the past?’ We are leveraging that to our benefit. The enemy of my enemy is my friend. They’re making enemies and we’re making friends.”
Most FC switches are sold through OEM deals with storage vendors. Brocade is counting on improved relationships with those storage vendors following Cisco’s move into flash, although Cisco insists the Whiptail technology will be used with its UCS server platform rather than as a storage offering.
Brocade tried a similar strategy after it acquired Ethernet switching vendor Foundry in 2008. When Cisco moved into the server market with UCS, Brocade became a closer partner on the Ethernet side with server vendors IBM, Hewlett-Packard (HP) and Dell. The results weren’t what Brocade hoped for as HP and Dell went out and bought their own Ethernet switch companies.
Now the partner Brocade most wants to grow closer to is EMC, which has historically been a tighter Cisco ally and partners with Cisco in the VCE converged infrastructure alliance. Cisco sells more FC switches through EMC than Brocade does, and Carney is hoping to change that using the Whiptail deal as a catalyst. EMC sells its own XtremIO all-flash array that competes with Whiptail.
Brocade is still trying to cut into Cisco’s market share lead in Ethernet switching, but Carney made it clear that storage is its main focus. While the future of FC has been questioned, Brocade executives said developments such as flash and the cloud will increase the need for storage networking.
“We’re number one in the SAN market, and that’s what we’re known for,” Carney said. “You start worrying about Brocade when people stop buying storage.”
And while Brocade isn’t looking to get into the storage array business, Carney said this is a good time to pick up new technology through acquisitions. “Startups are overfunded with venture capitalist money and every good technology has about two or three companies you can buy,” he said. “It’s a buyer’s market now.”
September 25, 2013 10:33 AM
Posted by: Dave Raffo
flash array
When the Virtual Computing Environment (VCE) coalition began four years ago, the products its three founding companies brought to the table were clearly defined and unique. Cisco contributed servers and Fibre Channel and Ethernet networking, EMC the storage, and VMware the virtualization.
Since then, the lines have blurred some and Cisco finds itself in competition in some areas with EMC and its subsidiary VMware. That started last year when VMware acquired Nicira to get into software-defined networking. All the companies are vying for cloud customers, and Cisco revealed its intention to acquire flash storage array startup Whiptail two weeks ago.
The Whiptail news preceded the latest Vblock launch by a week. And that launch included a Vblock Specialized System for Extreme Applications, which features the EMC XtremIO all-flash array as well as EMC Isilon scale-out NAS. Virtual desktop infrastructure (VDI) storage is the main target for that Vblock.
So, you’re wondering, what happens when Cisco completes the Whiptail acquisition and has its own flash array? Will there be another Specialized System for Extreme Applications configuration that includes Whiptail?
That’s not even an issue, according to VCE VP of product development strategy Todd Pavone. He said VCE doesn’t consider Whiptail technology as a flash array option, because Cisco doesn’t see it that way.
“Cisco is going to bake Whiptail technology into UCS for server-side flash,” Pavone said. “EMC will continue to stay where it is on the storage side of the market. We will take advantage of Whiptail when it is integrated into the UCS fabric as server-side flash.”
VCE plans to ship a Vblock Specialized System for high-performance databases by the end of this year using EMC VNX or VMAX arrays with FAST auto-tiering software and UCS servers with server-side flash cache. It’s unlikely that any Whiptail technology can be included by then, but that sounds like a good place to use it eventually.
September 19, 2013 2:45 PM
Posted by: Dave Raffo
U.K.-based Aorta Cloud is trying to rescue its soon-to-be-defunct cloud storage provider partner Nirvanix, and has apparently enlisted the help of IBM. The provider is also asking if Nirvanix customers want to pitch in.
Aorta – with the help of its sister company Aorta Capital – has set up a Nirvanix Rescue Package page on its website detailing efforts to save the day for Nirvanix customers. Nirvanix notified its customers earlier this week that they have until the end of September to move their data off its cloud because it is going out of business.
The note on the Aorta web site from CEO Steve Ampleford claims his group has a “seven-figure commitment” to provide liquidity and a bank has agreed to match its investment. He also asked if there were any Nirvanix customers who would like to join his funding effort. The plan is to raise enough money to keep the company going for at least two months, and then try to secure more funding to save it long-term or at least give customers more time to get their data off the Nirvanix cloud. Ampleford said he wanted to consolidate Nirvanix’s seven global data centers into two or three to maintain operations while reducing overhead.
“After all, the hurdles for existing customers to move their data are substantial and if there is any way that we can help Nirvanix continue its operations, this has to be a preferable scenario for those customers to any alternative,” Ampleford wrote in his blog. “ … The technology is robust and solid, the business is credible, the market clearly exists and demand is there. We must be able to find a way forward together.”
An update to the site posted today claims “We can now publicly confirm that IBM are an active participant in our attempt to drive forward a solution. More to follow …”
IBM is a Nirvanix OEM partner.
September 19, 2013 2:38 PM
Posted by: Sonia Lelii
Hewlett-Packard today upgraded its Application Information Optimizer (AIO) archiving application, adding support for the cloud and integration with the HP Intelligent Data Operation Layer (IDOL) search engine.
AIO, which HP acquired when it bought Autonomy in 2011, is used to archive or retire data for performance or regulation purposes. It includes e-discovery and data protection features for information governance. The software accesses, classifies and moves outdated and inactive structured data from production databases and legacy applications onto lower-cost storage. From there, it can be used in other applications or deleted.
The integration with the IDOL search engine lets customers mine structured and unstructured data through AIO. With the new release, HP added support for the HP Cloud and Amazon S3.
“We are adding flexibility on where you can retrieve the information and where customers can put the data, such as the cloud,” said Joe Garber, HP’s vice president of information governance.
IDOL integration gives AIO “Google-like” search so administrators can easily locate data residing on old hardware that is difficult to retrieve because of outdated technology.
AIO now can do searches across multiple IBM DB2 databases via the IDOL search engine. Administrators can use SQL or plain English searches.
September 18, 2013 11:42 AM
Posted by: Dave Raffo
hp eva, HP storage
Hewlett-Packard (HP) has officially killed its Enterprise Virtual Array (EVA) SAN platform, telling customers it will stop selling the two remaining models next January. HP is steering customers instead to its Converged Storage portfolio, which basically means its 3PAR StoreServ platform.
An HP spokesman said the vendor last week notified its EVA customers that it will stop selling the HP EVA P6530 and P6550 – the last models still on the market. HP will continue to support disk drives, major operating systems and SAN infrastructure through Jan. 31, 2017.
This is no surprise. HP last December began offering 3PAR Online import software to move customers from EVA to StoreServ arrays through EVA’s Command View management interface. HP also started using EVA’s Smart Start application to deploy 3PAR arrays.
“For HP EVA customers considering either a technology refresh or new application deployments, the replacement product for EVA systems is HP 3PAR StoreServ 7000 Storage,” the HP spokesman wrote in an e-mail about the EVA end-of-life notice. “In fact, moving from EVA to 3PAR is easier than moving from EVA to EVA.”
When HP came out with the migration software to move from EVA to 3PAR, the vendor said it would not end-of-life EVA through 2013. It lived up to that pledge and tacked on an extra month, although it seems unlikely that anyone will buy an EVA now that its fate has been sealed.
HP has made the 3PAR platform its flagship storage platform since acquiring 3PAR for $2.35 billion in 2010. 3PAR sales have been strong since then, while EVA has faltered and dragged down HP’s overall storage sales. During the last quarter, HP reported 3PAR sales increased more than 10% year-over-year while overall storage sales dropped 10% to $833 million.
When HP launched the StoreServ Storage 7000 and migration software last year, HP storage marketing VP Craig Nunes claimed “this StoreServ platform is everything you would want in a next-generation EVA.”
Compaq originally developed the EVA in 2001, and it became HP’s major midrange storage platform after the 2002 merger between those companies. HP claims there were well over 100,000 EVA systems deployed.
September 18, 2013 10:48 AM
Posted by: Randy Kerns
big data storage, object storage
Dealing with information in IT operations includes delivering the resources to store and retrieve information as well as all the management and security requirements. Continued capacity demand creates challenges in adding more storage systems, adapting the data protection process for additional capacity, and ensuring that performance scales in parallel with capacity to meet application and system demands.
This continuum of managing storage gets disrupted when a big storage problem is introduced into IT – big meaning a large amount of unplanned capacity. That data can come from new projects or acquisitions, or from non-traditional data (not usually stored in IT) used for analytics. The capacity demand for these requirements may dwarf IT’s existing storage systems and processes.
Extending current environments usually doesn’t work for these large capacity situations. Public clouds or private clouds can add capacity more quickly, but adding capacity alone doesn’t solve the problem. The ability to move data and manage it is also required.
The type of information growing fastest is unstructured data in the form of files. The massive number of files presents another problem. Storing potentially billions of files in file systems may not be possible, or may result in slow access through hierarchical index structures. Using next-generation object storage with a flat address space may be the answer. Most cloud storage offerings use object storage with RESTful access protocols such as Amazon S3. Object storage systems deployed within IT, either independently or as part of private clouds, use object protocols directly or rely on gateway devices or file system interfaces to map file access to objects. So we see object storage used to solve the big storage, massive-amount-of-data problem.
Object storage systems or software vary greatly in capability and support for access protocols. But solving the big storage problem really isn’t about object storage. It’s about the problem and the approaches. Object storage is a technology to use as part of the solution.
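The flat-address-space idea can be illustrated with a toy sketch – plain Python, not any particular product. Each object lives under a single opaque key with attached metadata, so there is no directory hierarchy to traverse; the class and key names here are purely illustrative:

```python
# Toy illustration of a flat object namespace (not any vendor's implementation).
# Each object is addressed by one opaque key; slashes in a key are just
# characters, not directories, so lookup cost does not grow with path depth.

class FlatObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, key, data, metadata=None):
        """Store an object under a single flat key."""
        self._objects[key] = (data, metadata or {})

    def get(self, key):
        """Return the object's data, as an S3-style GET would."""
        data, _meta = self._objects[key]
        return data

    def head(self, key):
        """Return only the metadata, as an S3-style HEAD would."""
        _data, meta = self._objects[key]
        return meta

store = FlatObjectStore()
store.put("projects/analytics/run-001.csv", b"col1,col2\n1,2\n",
          {"content-type": "text/csv"})
print(store.head("projects/analytics/run-001.csv")["content-type"])
```

Real object stores add distribution, erasure coding, and REST authentication on top of this model, but the one-key-per-object lookup is the core of why they scale past hierarchical file systems.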
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
September 17, 2013 2:56 PM
Posted by: Sonia Lelii
The speculation about troubled cloud storage vendor Nirvanix was true: today the company took steps to begin closing its doors.
Nirvanix did not comment on its future, but advised customers to stop replicating data to the Nirvanix cloud and to move their data off the company’s systems in the next couple of weeks.
Nirvanix sent a notification to customers late this afternoon with suggestions to follow for data migration, because it plans to disable uploads to the cloud on Sept. 23. The e-mail to customers read: “We are notifying you as soon as possible after making this decision so that you can make alternative plans for storage service. Nirvanix will have resources available to continue to provide service between now and Oct. 15 for you to download your data free of charge.”
“It’s been crazy,” said Chris Pyle, chief executive of Champion Solutions Group, a Nirvanix partner who received a call from Nirvanix at 9 a.m. today. “It’s a complete surprise. Yes, it’s disheartening.”
September 13, 2013 4:05 PM
Posted by: Dave Raffo
Oracle launched two new ZFS-based storage appliances this week, gave them a new name and released benchmarks that the vendor claims show it is the fastest file storage system on the market.
The new Oracle ZFS Storage Appliances have two models — the ZS3-2 and ZS3-4. The systems are the next generation of what Oracle previously called the Sun Storage ZFS 7000 Series. Like the 7000 series, the ZS3-2 and ZS3-4 are optimized for Oracle software applications with the goal of running database queries and analytics faster.
Both new systems are available in single or dual-cluster configurations with up to 12.8 TB of read flash cache. The ZS3-2 has four eight-core Intel Xeon processors, 512 GB of DRAM cache, and scales to 768 TB with eight disk shelves. The ZS3-4 has eight 10-core Xeon processors, 2 TB of read flash cache and scales to 3.5 PB with 36 shelves.
Jason Schaffer, Oracle’s senior director of storage product management, claims the big enhancements are in the software designed to simplify deployment and improve performance of Oracle databases. These enhancements include Oracle Intelligent Storage Protocol (OISP) that automates Oracle Database tuning and administration. OISP allows Oracle Database 12c to communicate metadata to the storage system to optimize performance. The ZS3 appliances also have new heat map and Automatic Data Optimization (ADO) capabilities to apply different compression levels depending on where the data is in its lifecycle. It also uses Hybrid Columnar Compression (HCC) to significantly compress Oracle Database data.
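The heat-map-plus-lifecycle idea is straightforward to sketch. The following is a conceptual illustration of lifecycle-driven compression tiering, not Oracle’s actual ADO implementation; the age thresholds and zlib levels are assumptions chosen for the example:

```python
import time
import zlib

# Conceptual sketch of heat-map-driven compression tiering. NOT Oracle's code;
# the thresholds and compression levels below are illustrative assumptions.

HOT_AGE = 7 * 86400    # accessed within a week: leave uncompressed
WARM_AGE = 90 * 86400  # under ~3 months since last access: light compression

def choose_compression(last_access_ts, now=None):
    """Map a block's last-access time (its 'heat') to a zlib level."""
    now = now if now is not None else time.time()
    age = now - last_access_ts
    if age < HOT_AGE:
        return 0   # hot data: no compression, favor access speed
    if age < WARM_AGE:
        return 1   # warm data: fast, light compression
    return 9       # cold data: maximum compression, favor capacity

def store_block(data, last_access_ts, now=None):
    """Compress a data block according to its heat before writing it out."""
    level = choose_compression(last_access_ts, now)
    return data if level == 0 else zlib.compress(data, level)
```

The point of the heat-map approach is exactly this mapping: recently touched data stays fast to read, while cold data trades CPU for capacity automatically, without an administrator reclassifying it by hand.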
Oracle claims the ZS3-4 set “new world records” with its 450,702 SPECsfs operations per second throughput score for a dual-node NAS system, its $22.53 SPC-2 price-performance and its aggregate throughput of 17,244 MB per second.
“Oracle applications will see a tremendous advantage of moving to ZS3 over competitive systems,” Schaffer said. “It’s designed to ensure that any Oracle software running behind ZS3 will be the fastest and most efficient.”
In a report on the ZS3 series, SSG-Now founding analyst Deni Connor wrote that the automated database-to-storage tuning, heat map-driven tiering and HCC “separate ZFS storage from the rest of the competition in Oracle environments … Running an Oracle Database without these features enabled is like flying a plane without all the engines turned on – you’re still flying but you’re not getting the maximum output and velocity that’s available to you.”
You say you’re not running Oracle database apps? Never mind, then.