NAS acceleration vendor Avere Systems this week introduced a smaller version of its FXT NAS Edge filer, the FXT 3100, which accelerates mission-critical applications in remote and branch offices by optimizing traffic across the WAN to core data centers. The launch comes days after the company secured $20 million in Series C funding, bringing its total investment to $52 million.
The Avere FXT 3100 Edge filer contains 48 GB of DRAM and 1 GB of NVRAM to accelerate read, write and metadata performance for active data. One 2U device holds 1.2 TB of data on 10,000 rpm SAS drives. The FXT 3100 has two 10 Gigabit Ethernet ports and six Gigabit Ethernet ports, and can be clustered to 25 devices. Data and changed blocks that reside on the 3100 are pushed across the WAN to Avere core FXT 4000 or 3000 devices located in data centers.
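The edge-filer pattern described above, where active data is served locally and only changed blocks traverse the WAN, is essentially a write-back cache. The following toy sketch illustrates the concept only; it is not Avere's implementation, and the class and method names are invented for illustration.

```python
# Illustrative sketch of the write-back pattern an edge filer uses:
# reads and writes are served from local fast media, and only changed
# ("dirty") blocks are later pushed across the WAN to the core filer.
# This is a hypothetical toy model, not Avere's actual design.

class EdgeCache:
    def __init__(self, core_store):
        self.core = core_store      # dict standing in for the core filer
        self.local = {}             # blocks cached at the edge
        self.dirty = set()          # blocks changed since last flush

    def read(self, block_id):
        if block_id not in self.local:      # cache miss: one WAN round trip
            self.local[block_id] = self.core.get(block_id)
        return self.local[block_id]         # subsequent hits stay local

    def write(self, block_id, data):
        self.local[block_id] = data         # acknowledged at LAN speed
        self.dirty.add(block_id)            # deferred push to the core

    def flush(self):
        for block_id in self.dirty:         # push only the changed blocks
            self.core[block_id] = self.local[block_id]
        self.dirty.clear()
```

The point of the pattern is that WAN latency is paid only on cold reads and on the background flush, not on every I/O from the branch office.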
“This is not for local performance,” said Rebecca Thompson, Avere’s vice president of marketing, “so there is no flash in this unit. We are dealing with WAN latency. That is what we are mitigating. This is for data outside the data center, such as engineering applications, scientific applications for genomics and remote rendering for media entertainment.”
The FXT 3100 Edge filer is built on the Avere Operating System (AOS), which provides automatic tiering and advanced monitoring of the NAS environment, and uses a global namespace to manage all storage as a single pool. The FXT 3100 will be available in 30 days at a list price of $42,500.
Avere, which has about 80 employees, will be using its latest funding to build out its sales and marketing teams, CEO Ron Bianchini said. He expects to add channel partners. Lightspeed Venture Partners led the C round with previous investors Menlo Ventures, Norwest Venture Partners and Tenaya Capital contributing.
“It’s not really about the product anymore,” Bianchini said, “It’s about getting the message out.”
In 2009, the company secured $15 million in Series A funding from Menlo Ventures and Norwest Venture Partners. In 2010, it raised $17 million in Series B funding led by Tenaya Capital, with Menlo Ventures and Norwest Venture Partners participating.
NexGen Storage had just one model at launch. This week it expanded its n5 storage system into a three-product series to cover more of the midrange market.
“Today [before the additions] we have a single product, and we know that’s not adequate to cover the market,” said Chris McCall, NexGen VP of marketing.
NexGen’s new platform consists of the n5-50, n5-100 and n5-150. The n5-100 is almost identical to the original n5 system launched last November. It has 1.24 TB of PCIe-based flash (two Fusion-io cards) and 32 TB of raw capacity. The difference between the n5-100 and the original n5 is that the new system supports Gigabit Ethernet and 10 Gigabit Ethernet instead of one or the other. The n5-50 has 770 GB of solid state and 16 TB of raw capacity, and the n5-150 has 2.4 TB of flash and 48 TB of raw capacity. All three models include eight GbE and four 10 GbE data ports and four GbE management ports.
NexGen claims the n5-50 can attain 50,000 IOPS, the n5-100 100,000 IOPS and the n5-150 150,000 IOPS. NexGen’s ioControl QoS software lets customers provision minimum IOPS levels to each volume. The systems range in price from $55,000 to $108,000. The n5-50 and n5-100 are expected to be available next month with the n5-150 to follow in September.
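Provisioning minimum IOPS per volume, as ioControl does, implies some form of admission control: the floors guaranteed to all volumes cannot exceed what the system can deliver. The sketch below illustrates that idea under stated assumptions; the function, the dictionary of ratings, and the check itself are invented for illustration and are not NexGen's actual software.

```python
# Hedged sketch of minimum-IOPS provisioning: before accepting a new
# volume's QoS floor, verify that the sum of all guaranteed floors
# stays within the system's rated IOPS. Illustrative only; not
# NexGen's ioControl implementation.

N5_RATINGS = {"n5-50": 50_000, "n5-100": 100_000, "n5-150": 150_000}

def can_provision(model, volumes, new_min_iops):
    """volumes maps volume name -> guaranteed minimum IOPS."""
    committed = sum(volumes.values())
    return committed + new_min_iops <= N5_RATINGS[model]
```

For example, on a 100,000-IOPS n5-100 already committing 90,000 IOPS of floors, a request for another 5,000-IOPS floor would be accepted but a 20,000-IOPS floor would be refused.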
McCall said NexGen’s architecture is more efficient than using all flash as other startups have done. He claims NexGen storage systems run as fast as all-solid state disk (SSD) storage – especially for writes.
“Most vendors integrating solid state did it via disk drives,” he said. “Typically they add SSD as read caches, but they’re not doing write workloads to solid state. The world is going from saying ‘I need more performance’ to ‘I need to manage that performance.’”
Perhaps the next step for NexGen will be the ability to cluster its systems. McCall said “we’re strongly looking into it” but there is an engineering challenge. “A clustered system uses a network between nodes,” he said. “Because SSD is so low latency, the network can undermine performance between nodes.”
News that Pat Gelsinger will move into the VMware CEO job and his predecessor Paul Maritz will join EMC as chief strategist has EMC-watchers wondering what the changes will mean to the storage giant’s succession plans.
Gelsinger and EMC CFO David Goulden were considered the top candidates to replace Joe Tucci when he steps down as CEO. The moves that EMC made today included a promotion for Goulden, who adds COO and president to his CFO titles. EMC also kept Gelsinger in the family, because EMC is the majority owner of VMware.
Tucci already changed the timing of his retirement last January when he said he would stay on past the end of this year into 2013. During a conference call today to discuss the executive changes, Tucci said he would stick around for at least 17 more months.
“I am incredibly energized,” Tucci said. “I truly love EMC and VMware. I do believe I add value, and I am fully committed to staying on as chairman of EMC and VMware and CEO of EMC at least through the end of 2013. I am convinced my successor will come from within.”
Goulden becomes the clear No. 2 to Tucci in his new role. Goulden will remain CFO while also heading EMC’s business units, sales, customer operations, services and marketing.
But that doesn’t mean Goulden is a lock to become the next CEO. EMC can still bring back Gelsinger, who will have more than a year as CEO under his belt by then. Maritz, who remains on the VMware board and will help drive EMC’s “big data” strategy, also has to be considered a candidate. EMC has other well respected executives. High-ranking executives Howard Elias and Jeremy Burton have been given more responsibilities to help replace Gelsinger, and Bill Scannell was promoted to president of global sales and customer operations last week.
When asked if he would stay on beyond 2013, Tucci said: “I will not overextend my welcome by any means. As long as I’m providing value and the board wants me … I’m not putting any firm end date in the sand. Now key executives are getting increased experience. Dave Goulden hasn’t been a COO of a company of this size, and Pat hasn’t been a CEO.”
The executive changes take effect Sept. 1, days after VMworld 2012 ends.
“It will be an honor to take the baton from Paul at VMworld for the next lap of the journey,” Gelsinger said.
Gelsinger joined EMC in 2009 after spending 30 years at Intel. He led EMC’s product strategy as president and chief operating officer of EMC information infrastructure products.
Maritz had served as VMware CEO since 2008, when he replaced VMware co-founder Diane Greene more than four years after EMC acquired VMware. Maritz described his new role as a full-time position, but said he would remain on the West Coast rather than move nearer to Hopkinton, Mass.
Tucci said Maritz suggested turning over the VMware reins to Gelsinger as both vendors prepare for a major IT transition to cloud computing. Tucci said the timing was good because both EMC and VMware have been doing well. He said he believed customers will be happy with the changes.
“The time to make these kinds of changes is off the position of strength when you are performing well and have customer permission to play in new markets,” he said. “VMware is moving to the next phase of cloud computing.”
The executive swapping can also be taken as a sign that EMC is considering folding VMware back in rather than running it as a separate company. The way Tucci and the EMC board move executives back and forth suggests they already see VMware more as a division of EMC than as an independent company.
Like many people in the high tech world, I don’t usually pay attention to the latest entertainment gossip. But while watching the news recently at a hotel, I found myself barraged with information about Katie Holmes deciding to leave Tom Cruise. There was so much earnest reporting of vague speculation that the sheer magnitude made me wonder what I had missed and what were the circumstances.
So why did Katie decide to leave Tom? Well, there are plenty of talking heads jumping to uninformed conclusions. That mirrors the storage industry at times. In this case, perceptions about what could have happened were presented with such conviction that they must be true. The consensus conclusion was that religious differences were at the heart of it. There is no arguing with religion – just try entering a discussion about Windows and Mac.
No matter how much “news” I hear, I know I can’t really believe what is being said, no matter how fervently. I do know that only a few people understand for sure what is happening, and the public will be told a variation of the truth. And, I do know I don’t really care. It is their personal problem and I don’t see that as a spectator sport. It’s not quite the same as watching “the big one” 27-car pile-up at Talladega in a NASCAR race.
Still, I can’t help thinking of the implications in the area of the digital information they have created and protected. Who gets the data? What information would they want individually that is in electronic form?
• Tax records?
• Business records?
• Wedding pictures? (I realize there needs to be multiple independent file system structures for these. This may be where we get into multi-tenancy isolation issues.)
• Pictures of children?
• I’m sure there are other types of important digital files as well that were amassed during the marriage.
If they go to court — and it looks now like they won’t — some information could be part of a court order for discovery. Other information is personal, and while some of it may be priceless to one person, it may not be quite so valuable to the other. But it is information that exists in digital form somewhere and has to be split up in some way. Where is the information, and who parcels it out accordingly?
Were they practicing safe data protection? It is doubtful they were backing up to a cloud location because of security and privacy concerns for celebrities. Do they simply make a copy of all files and hand them over? Or do they hide some data – delete files, digitally overwrite disks, send backup copies to the shredder? Emotion and lack of clear judgment (beyond the normal operational failures that most businesses experience) may cause some data to be deleted or “lost.”
All these concerns have similarities to business issues. Many of them can be mapped to business circumstances. Getting access to the information could be made frustratingly difficult. In the words of the talking heads, “this could get ugly.”
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
When buying storage, IT teams often wonder if it is worthwhile to buy a complete package of storage system, management software, data protection and other elements from a single vendor.
Buying everything from one vendor has advantages and disadvantages. The advantages include:
• The solutions are integrated and will work together better than individually acquired products.
• Dealing with a single support organization results in faster resolution of issues and greater concern from the vendor because it has greater responsibility (and greater revenue).
• Cost will be reduced with what is presumed to be a volume purchase.
• There will be fewer vendors to deal with, which takes less time in purchasing and implementing.
The downside includes:
• Costs might be higher because customers could end up purchasing a more expensive component in the package than they would normally select.
• Customers might not get the best individual product available when a bundle is purchased, and buying a bundle might eliminate potentially better solutions from consideration.
Building a complete storage and storage software portfolio has become a major focus in vendor acquisitions. Dell’s recent purchase of Quest Software is a good example. Dell now has storage, storage management, data protection software, and other important software that can be packaged as a complete solution. This makes Dell more competitive with other major vendors who have integrated offerings.
For vendors, selling integrated storage solutions also makes sense. These packages bring larger deal sizes, and the amount of money per time invested in the sales opportunity can be much greater. A larger sale also results in a greater footprint (more products sold to a particular customer) for the vendor. With more products from the integrated solution installed in a customer environment, it becomes much harder for the customer to turn to another vendor. While not a lock-in, there is greater resistance to changing vendors.
Judging from my interactions with IT customers, I believe buying bundles from a single vendor will become the prevalent purchasing pattern. The main reason is that the purchase of an integrated solution is simpler for IT than buying multiple products from different vendors. It also requires less training. That simplicity translates to less administration and lower operational expense.
You can expect to see the trend of more integrated solutions continue. For the IT customer, it is important to understand what they may be giving up with a single solution commitment.
Simplifying the operational environment is a constant struggle, and anything that makes buying and using storage less complex is a big help. This Evaluator Group article on storage efficiency has more information on how to optimize storage.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Dave West said his phone has been ringing constantly since Monday morning when Dell revealed it reached a deal to acquire Quest Software.
West is CommVault’s SVP of marketing and business development, and people want to know how the Dell-Quest deal will affect his company.
“I’ve been fielding a lot of questions over the last 24 hours,” West said this morning. “‘What the hell does this mean to CommVault?’ Because Dell is a significant distribution partner.
“Well, it’s not as big a deal as some people think.”
The first reaction from a lot of people who follow the backup space to Dell-Quest was: what does this mean for Dell’s current backup partners Symantec and CommVault? Dell sells both of those vendors’ backup software bundled on its hardware. Now Dell has its NetVault physical and vRanger virtual backup applications from Quest to go with the AppAssure replication it acquired in February. And Dell executives make it clear they want to rely less on partners and more on their own products over the long term.
CommVault has more at stake than Symantec from Dell’s push into backup. Symantec has much larger overall market share than CommVault, and has already moved into coopetition with Dell by marketing its own bundled appliances. CommVault still relies on hardware partners to sell its Simpana software, and Dell is its largest partner, with more than 20% of CommVault sales going through Dell in most quarters.
West said despite the speculation, there is plenty of life left in the Dell-CommVault relationship. He said Quest’s software – including NetVault backup – is mostly aimed at Windows and at a lower end of the market than CommVault. CommVault is counting on Simpana’s heterogeneous support for storage and backup systems to make it more valuable to Dell in the enterprise than NetVault.
CommVault also has years of integration work with Dell storage systems and servers.
“We don’t compete in the same space as NetVault or AppAssure,” West said. “Our business is predominantly enterprise. The Quest set of tools just isn’t in that space. The enterprise is heterogeneous everywhere – you have to be able to sit on top of all the different devices. That’s not something they’re going to get from Quest.”
West said CommVault’s partnership with Dell is “consultative,” meaning CommVault sales people are involved at every stage of sales and implementation with Dell customers.
Meanwhile, CommVault is working with as many hardware partners as possible. It has reseller or OEM deals with Hitachi Data Systems, NetApp and Fujitsu, and is even tightly integrated with some EMC storage systems.
“Dell is just one arrow in our quiver,” West said.
That’s a smart strategy for CommVault. While Dell didn’t buy Quest mainly for its backup portfolio, it has that IP now. Considering Dell’s long-term strategy of building its own products, it’s probably a matter of time before it won’t need backup partners. The big question is, how long will it take?
Dell said this morning that Quest Software has accepted its $2.4 billion acquisition, winning its weeks-long battle with Insight Venture Partners to grab Quest.
Dell first entered the bidding at $2.15 billion before Insight countered at $2.27 billion. Dell made another offer last week of $2.32 billion and increased that price before closing the deal. The bidding was reminiscent of Dell’s battle with Hewlett-Packard (HP) for storage systems vendor 3PAR two years ago. HP won that battle, and paid $2.35 billion for 3PAR.
The shareholders of Dell and Quest must approve the deal, and Dell said it expects the transaction to close by the end of October.
Quest sells backup software for virtual and physical servers and replication in its data protection portfolio. It also has systems management, security and workspace management software. While Dell mentioned Quest’s data protection in its release, none of Quest’s data protection products were highlighted. Dell specifically cited Quest One Identity and Access Management, Foglight, Windows Server Management, and Database Management in the press release.
Quest’s major backup products include NetVault — acquired from BakBone Software last year — for physical servers and vRanger Pro for virtual machines. It also sells LiteSpeed data reduction software for Oracle and SQL database backups.
Dell will discuss the deal with press and analysts this morning. For more details, see our story on SearchDataBackup.com.
FalconStor will pay $5.8 million to settle criminal and civil charges that it bribed JP Morgan Chase to buy its software, and CEO Jim McNiel said the vendor can now focus completely on re-architecting its backup software and services.
“I’m relieved,” McNiel said after the payment was disclosed Wednesday by FalconStor and the U.S. Securities and Exchange Commission (SEC). “We spent over 18 months of wrangling with authorities to prove the company wasn’t a systemic criminal organization. It was a very small number of people with a single customer. We put additional controls in place so we can catch things like this. This gives us the freedom to walk off and do our business.”
More details about the crime were divulged with the settlement. The SEC charged that former FalconStor CEO ReiJane Huai and two sales employees paid more than $300,000 in bribes in 2008-09 to JP Morgan employees in exchange for JP Morgan’s purchasing $12.2 million of FalconStor software and services. The JP Morgan sales made up 7% of FalconStor’s revenue during that period.
The bribes included FalconStor shares, stock options, gambling vouchers, gift cards, golf memberships and golf-related benefits. FalconStor recorded the expenses as “compensation to an advisor” and employment bonuses.
The FalconStor sales people have been fired. McNiel, who replaced Huai in late 2010 and was not involved in the bribery, said JP Morgan remains a FalconStor customer.
The low point of the scandal came last September when Huai committed suicide at his Glen Head, N.Y. home.
“It’s hard to estimate how much this hurt us, but I would say it’s had an impact,” McNiel said. “I have had conversations with customers who say our competitors have told them we’re probably not going to stay in business. They’ve said, ‘What if the company gets fined $50 million?’ The fact that it’s a full economic impact of $5.8 million puts a fence around this. It clears up a big dark question mark.
“But all of that pales in comparison to ReiJane taking his life. I’ve known him 23 years. People here had the utmost respect for him. He lost his way, and that’s the tragedy. It’s easy for people to forget the human tragedy.”
As CEO, McNiel has worked on revamping FalconStor’s product line as well as accelerating its strategy of going from an OEM-driven company to a direct-sales vendor. A big part of FalconStor’s plan revolves around its Bluestone initiative to deliver services-oriented data protection and management. McNiel said Bluestone is expected to launch in the first half of 2013.
“We are anxious to get all this behind us and rebuild the company,” he said. “This is not the only thing FalconStor needs to be concerned with. Our sales have gone flat. Our focus now is on creating new disruptive products.”
On the shift from its OEM model, McNiel said: “We have been like a component in a car. We’ve been an alternator or a fuel pump. Now we have to build the whole car.”
FalconStor took steps in its rebuilding with the release of new versions of its data deduplication, Network Storage Server (NSS) and Continuous Data Protection (CDP) products last August. Those upgrades are seen as precursors to Bluestone.
“The challenge is to deliver all of that in a simplified user experience,” McNiel said.
Like other pure-play storage vendors, Hitachi Data Systems is growing revenue at double-digit rates despite the slow economy. But HDS is bucking industry trends with its growth.
HDS grew disk storage revenue by 11% year-over-year during the first quarter of 2012, according to IDC. That’s a bit slower than EMC (14.4%) and NetApp (11.1%), but much faster than IBM, Hewlett-Packard and Dell. HDS made great progress with NAS and enterprise SAN sales – two categories with slow or no growth. IDC said industry-wide NAS sales declined 1.9% during the first quarter, but HDS claims it increased NAS sales by more than 50%. And HDS high-end SAN sales grew more than 30% despite flat growth industry-wide for storage systems costing $250,000 or more.
HDS remains fifth in overall storage disk sales, but is gaining on No. 3 IBM and No. 4 HP.
Asim Zaheer, SVP of marketing for HDS, attributes the NAS spike to Hitachi’s acquisition of its long-time OEM partner BlueArc last September. HDS had sold BlueArc NAS systems since 2006, but Zaheer said sales jumped after the $600 million acquisition. He said customers were looking for that commitment from HDS, especially after BlueArc indicated it might become a public company.
“Our belief is there was pent-up demand out there with potential new accounts relative to our long-term commitment to the technology,” he said. “They were waiting for a signal from us. BlueArc was discussing an IPO, but we took that concern off the table.”
The Virtual Storage Platform (VSP) enterprise SAN is Hitachi’s flagship product, and HDS picked up market share from EMC and IBM in that category. HDS likely benefitted from EMC’s transition to a new Symmetrix VMAX, but Zaheer said the VSP’s storage virtualization features also helped. “There’s quite an increase in customers virtualizing third-party arrays because of concern about budgets,” he said.
Zaheer said the hard drive shortage didn’t hurt HDS much. While it raised prices just as all its major competitors did, Zaheer said HDS shipped all of its orders in the first quarter. “We felt it, but we did not have to stop or delay shipment on anything,” he said. “I don’t know if we’re out of the woods yet, but our supply appears to be back to almost normal levels.”
HDS is less bullish on flash than its competitors, particularly EMC. So far, HDS’ flash offerings consist of the option to add solid-state drives (SSDs) to arrays. “The market is there, but it’s not exploding to the levels that EMC and others have predicted,” Zaheer said. “You have to have the right use cases and the economics have to make sense. If customers feel they need SSDs in their arrays, we can do that. It’s growing, but not like the hockey stick that everyone thought.”
Still, Zaheer said HDS is planning other flash products, such as all-SSD arrays and server-side flash, in anticipation that demand will grow. “You’ll see some announcements soon,” he said.
DataCore has upgraded its SANsymphony-V storage virtualization and management software to make it better suited to large enterprises and clouds. The vendor launched SANsymphony-V 9 today with new or expanded automated disk pooling, auto-tiering, asynchronous remote replication, synchronized mirroring, disk migration and load balancing.
Previous versions of SANsymphony-V targeted the midmarket. With version 9, DataCore is going after large data centers, companies looking to build private clouds, and cloud service providers with private, public or hybrid cloud offerings.
“We are trying to take it up a higher level,” DataCore CEO George Teixeira said. “We have automated tasks to make it simple, so you don’t have to focus on the details. Most of the commands and features have been made more adaptive.”
SANsymphony-V, which DataCore bills as a storage hypervisor, allows data on disk, solid state drives (SSDs), and Google cloud storage to be managed as a single pool. Auto-tiering can be applied so that administrators can put higher performing applications in memory, while archiving data into the cloud, Teixeira said.
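Auto-tiering across a single pool generally means ranking data by access frequency and placing the hottest extents on the fastest tier until it fills, spilling the rest downward. The sketch below is a toy illustration of that policy under invented names; it is not DataCore's code.

```python
# Illustrative auto-tiering policy: rank extents by recent access
# count and assign the hottest to the fastest tier until its slots
# are used, then spill to the next tier. A hypothetical toy model,
# not DataCore's implementation.

def assign_tiers(access_counts, tier_capacities):
    """access_counts: extent -> hit count.
    tier_capacities: ordered list of (tier_name, extent_slots),
    fastest tier first."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    placement, i = {}, 0
    for tier, slots in tier_capacities:
        for extent in ranked[i:i + slots]:  # hottest extents go fastest
            placement[extent] = tier
        i += slots
    return placement
```

Run periodically, a policy like this demotes data that has gone cold (for example, toward a cloud tier) and promotes data that has become active, without administrator intervention.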
DataCore also automated its N+1 scaling feature, allowing administrators to scale capacity and processors by adding one node at a time. The extra node can take over if any node in the cluster is lost.
Snapshots of multiple drives now can be done with a single click, and one-to-many bidirectional replication has been automated. Load balancing among multiple drives has also been automated.
DataCore is also adding reporting for chargeback and a DataCore Cloud Service Provider Program that offers new licensing options allowing CSPs to license the storage virtualization software at a fixed monthly, per-terabyte rate.