The interest in deploying solid state storage is still building, but there are already a handful of ways to introduce solid state technology into existing IT infrastructures:
• As a PCIe solid state memory card installed in a server with software to manage caching and sharing of data.
• As a caching appliance to accelerate certain applications.
• As an extended cache added to a traditional disk storage system.
• As a tier in a traditional storage system using solid state drives (SSDs) along with spinning disk drives. A traditional storage system may also be configured with only SSDs.
• As a storage system specifically designed for all solid state, typically with solid state modules and a custom controller to manage the memory.
Solid state technology will continue to evolve as the value from performance acceleration and other benefits, such as reduced power, space and cooling requirements and increased reliability, justifies further development. IT customers who purchase solid state storage systems need to recognize that these systems are an investment that provides immediate benefits and a long-term positive impact. The investment may be optimized with operational changes and infrastructure improvements, so the selection of product and vendor for this long-term decision must be made carefully.
Some of the considerations include:
• Will the vendor’s system design remain operationally the same if the underlying solid state technology is updated with the latest developments? Today’s systems primarily use NAND flash memory, which will continue for years with improvements in durability and cost but will inevitably be replaced by another technology with greater advantages. IT should confirm that the investment will carry forward, meaning the vendor can transparently introduce new solid state technology. Vendors that focus only on flash may not have considered the long-term investment.
• Does the solid state system fit seamlessly into the overall management environment? Simply put, does the management of the system work with the vendor’s other management tools, including top-level orchestration? This could require an exception now, but may change with further product development or with the next generation.
• Is the storage network attachment capable of delivering the latency and bandwidth the solid state storage system can provide? Exploiting the high-performance characteristics requires low latency, which can be achieved with a direct connection or through storage networks. Consider both network performance and expandability when deploying solid state storage.
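The network question in the last bullet can be sanity-checked empirically before committing to an attachment strategy. The sketch below is a minimal Python latency probe that times small random reads against a file; the file name is a placeholder, and because it reads through the page cache it measures the software path only. Probing the device itself would require O_DIRECT and aligned buffers, omitted here for brevity.

```python
# Minimal read-latency probe (illustrative sketch, not a benchmark tool).
# Times small random reads to estimate per-I/O latency of the storage path.
import os
import random
import time

def probe_read_latency(path, io_size=4096, samples=100):
    """Return per-read latencies in seconds for random single-block reads."""
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        blocks = max(size // io_size, 1)
        latencies = []
        for _ in range(samples):
            offset = random.randrange(blocks) * io_size
            start = time.perf_counter()
            os.pread(fd, io_size, offset)  # positioned read, no seek needed
            latencies.append(time.perf_counter() - start)
        return latencies
    finally:
        os.close(fd)

# Example: probe a 1 MiB scratch file and report the median latency.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(4096 * 256))

lats = sorted(probe_read_latency("scratch.bin", samples=50))
print("median read latency: %.1f us" % (lats[len(lats) // 2] * 1e6))
```

Comparing the same probe run locally and over the storage network shows how much latency the network attachment itself adds.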
Deployment of solid state storage systems will become more pervasive, and the benefits will build on the investments already made. It is important to take the long-term view when making a strategic decision about a product and vendor.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Whenever EMC rolls out PCIe flash products, it paints a bull’s eye on Fusion-io.
Just as they did last year when they brought out VFCache, EMC spokesmen compared benchmarks against Fusion-io today during a webcast hyping their XtremSF flash products. EMC marketing materials used in the webcast show its cards beating “Brand F” in a series of IOPS and latency results.
And as he did in response last year, Fusion-io CEO David Flynn said all the attention around XtremSF is good for his company. He pointed out that EMC is reselling PCIe cards that Fusion-io already competes with, and competes well enough to stand as the server-based flash market leader.
“We are quite flattered by EMC and its introduction of more products across the market we have created,” Flynn said. “EMC is making a renewed push to try and be relevant in server-side flash. They’ve incorporated three vendors – Micron, Virident and LSI – none of which have been competent at competing with Fusion-io. Now they’re trying to highlight those vendors’ competitive stance relative to us.”
Flynn said EMC is “cherry picking” its IOPS and latency numbers, mixing results from different partners that make them look good against Fusion-io instead of making apples-to-apples comparisons. He also said EMC’s benchmarks are more fitting for storage than for application server performance.
EMC isn’t the only vendor encroaching on Fusion-io’s turf. Most of the solid-state drive (SSD) vendors have added server-side flash, and flash array vendor Violin Memory launched its first PCIe flash cards this week. Flynn said Fusion-io’s early entrance into the market gives it an advantage not only in technology but in distribution partnerships.
“It’s one thing to have a component, it’s something else to have access to a market,” he said. “We have a sales team, but we also have partnered with the server vendors. All the server vendors and [storage vendor] NetApp have aligned themselves with Fusion-io. The only systems companies not aligned with Fusion-io are EMC and Oracle. Only EMC is an enemy, and we have them to thank for others aligning themselves with us. It’s a case of ‘My enemy’s enemy is my friend.’”
Fusion-io is making some impressive IOPS claims of its own. Flynn said the vendor will demonstrate one of its 365 GB ioDrive2 cards hitting 9.6 million IOPS on March 26 during a Technology Open House at its Salt Lake City, Utah, headquarters. He said that performance is enabled by Fusion-io APIs that integrate flash into host systems, along with the vendor’s Auto-Commit Memory software. The APIs allow flash to bypass operating system bottlenecks, while Auto-Commit Memory is designed to maintain flash persistence in nanoseconds running on Fusion-io’s directFS, eliminating duplicate work between the host file system and the flash memory software.
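The article does not detail how those APIs work, but the general idea of bypassing per-I/O system calls can be illustrated with ordinary memory-mapped I/O. The sketch below is plain Python mmap on a scratch file, not Fusion-io’s API; it only shows the store-through-a-mapping pattern that flash-as-memory interfaces build on.

```python
# Illustration of the memory-mapped I/O idea behind flash-as-memory APIs:
# the application stores data through a mapping instead of issuing a
# read()/write() syscall per operation. This is ordinary mmap on a file,
# without the persistence guarantees a product like Auto-Commit Memory adds.
import mmap

path = "flashsim.bin"
with open(path, "wb") as f:
    f.truncate(4096)  # one page of backing storage

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as mm:
        mm[0:5] = b"hello"   # store directly through the mapping
        mm.flush()           # force the dirty page to stable storage

# The data is durable in the backing file after the mapping is closed.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

The point of the pattern is that the store itself is a plain memory write; only the flush touches the kernel.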
There is no shortage of companies trying to get a piece of the cloud.
Coraid Inc. announced it has contributed drivers for ATA-over-Ethernet (AoE) and its Coraid EtherCloud to OpenStack block storage, while Silver Peak Systems Inc.’s Virtual Acceleration Open Architecture (VXOA) software can be used for WAN optimization in Amazon cloud deployments for off-site replication and lower disaster recovery costs.
Coraid contributed the drivers so that OpenStack open-source clouds can integrate with the company’s EtherDrive scale-out arrays and EtherCloud platform, which automates workflows for storage provisioning and management via a REST API. The company designs systems based on a lightweight AoE protocol to handle block storage as an alternative to iSCSI. The AoE drivers allow storage access over massively parallel 10 Gigabit Ethernet connections.
“You can now provision OpenStack storage over the Ethernet. We have done this for enterprises and service providers,” said Doug Dooley, Coraid’s vice president of products. “This is for large scale public or private cloud developments. It’s not for small or medium-sized enterprises.”
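The EtherCloud REST API itself is not documented in the article, so the endpoint URL and payload fields below are hypothetical. The sketch simply shows the shape of provisioning storage through a REST call: building a JSON POST request programmatically rather than clicking through a management GUI.

```python
# Sketch of REST-style storage provisioning. The base URL, path, and JSON
# fields are placeholders for illustration, not Coraid's actual API.
import json
import urllib.request

def build_provision_request(base_url, size_gb, pool):
    """Build (but do not send) a POST request to create a volume."""
    payload = json.dumps({"size_gb": size_gb, "pool": pool}).encode()
    return urllib.request.Request(
        base_url + "/volumes",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_provision_request("http://ethercloud.example:8080/api/v1",
                              size_gb=100, pool="ssd-pool")
print(req.get_method(), req.full_url)
```

In a real deployment the request would be sent with `urllib.request.urlopen` (or an orchestration layer such as OpenStack Cinder would issue it on the admin’s behalf).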
While Coraid is trying to advance AoE’s presence in the cloud, Silver Peak is targeting the challenge of moving data to and from the cloud. Damon Ennis, Silver Peak’s vice president of product management and field engineering, said the Silver Peak software improves Amazon Web Service cloud traffic.
Organizations can accelerate data movement from their data centers to the cloud by spinning up a Silver Peak Amazon Machine Image in the Amazon Virtual Private Cloud (VPC), which allows customers to set up a private cloud within Amazon Web Services.
“If your enterprise wants to take advantage of VPC and the Amazon Cloud (data center) is not close to you, this is where Silver Peak comes in,” said Ennis. “The challenge is if customers have to move their data to and from VPC. The customer has Silver Peak in their data center and they can spin up an instance in the Amazon cloud so they can optimize data transfers to the cloud.”
Silver Peak’s offering is available now. Coraid’s new drivers will be available in the next OpenStack release, codenamed Grizzly, which is scheduled for April.
There are many opinions regarding how to handle information storage for big data analytics. By big data analytics, I’m referring to information associated with a data analytics operation that does the analysis in near real-time to present immediately actionable results. The most common approach to this type of analysis is to provide data that is the source for the real-time analytics process to the compute nodes with minimal latency and at a high data rate.
This requirement has led many data scientists designing analytics systems to require data to come from storage directly attached to the compute nodes. If solid-state drives (SSDs) are used for storage, then all the better. This is contrary to most IT organizations’ strategy of delivering efficient storage utilization through networked storage. The approaches for the source data will continue to evolve with new storage systems and methods, but currently the decisions are driven by the designers of the analytics systems.
A more pressing question is where the data goes after the initial analysis has been done. Some say the data has already been used and can be discarded. However, future analysis on a larger data set with different criteria may prove valuable. The problem is where to store the potentially massive amount of data that might be used again.
The most discussed approach is to archive the data for subsequent usage. The target for the data could be:
• A local storage system as a content repository. Usually this would be a NAS system for the unstructured file content used in data analytics, but it could also be a new generation object storage system capable of handling potentially billions of objects.
• Cloud storage may be the target for the analyzed data either as files or objects. With cloud storage, the storage costs could be reduced compared to adding infrastructure and archiving storage systems in IT for what may be a highly varying amount of capacity required. The costs are dependent on the amount of time the data is retained.
Ultimately this could be a massive amount of data. Archiving storage systems are typically self-protecting with remote replication to another archiving system or to cloud storage. The requirement for data protection may be another variable depending on the value of the data.
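Since the cloud archive costs above depend on how long the data is retained, a back-of-the-envelope model makes the trade-off concrete. The per-GB-per-month rate below is an illustrative placeholder, not any provider’s actual price.

```python
# Back-of-the-envelope cloud archive cost vs. retention time.
# The default rate is a placeholder; substitute current provider pricing.
def cloud_archive_cost(capacity_tb, months, rate_per_gb_month=0.01):
    """Total storage cost for keeping capacity_tb archived for `months`."""
    return capacity_tb * 1024 * rate_per_gb_month * months

# 500 TB of post-analysis data retained for 36 months at $0.01/GB-month:
total = cloud_archive_cost(500, 36)
print(f"estimated archive cost: ${total:,.0f}")
```

Even at a low per-GB rate, long retention of a massive data set adds up quickly, which is why the retention decision deserves to be made strategically rather than by default.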
The “big” in big data analytics can mean big money if decisions about where to store the information and how long to retain it are not made strategically. The main focus of big data analytics so far has been the speed of the initial analysis. Where to put the retained data must be considered as well, and this can be a major concern for IT.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Nexenta Systems, which sells storage systems based on ZFS technology, revamped its leadership team and pulled in $24 million in funding today with an eye on going public.
Mark Lockareff takes over as CEO from Evan Powell, who is shifting to chief strategy officer. Nexenta also hired Bridget Warwick – formerly at BlueArc and NetApp – as chief marketing officer.
The Santa Clara, Calif.-based company has raised a total of $55 million in funding, including a $21 million round last year. The latest funding is Nexenta’s D round.
Lockareff comes to Nexenta from Bridge Advisory Partners, where he served as managing director. He said he will focus on driving Nexenta’s next stage of growth as a software-defined storage vendor. The company’s core product is NexentaStor, which is based on open-source ZFS technology. The software runs on commodity servers, turning them into multiprotocol storage systems.
“There are a lot of different directions our product can get pulled into, so we have to be disciplined in the direction,” Lockareff said. “We have the two hardest parts underneath us now [building a product and generating revenues]. Now it’s time to build a management team and the infrastructure for growth. We are moving to become a public company someday.”
Lockareff said the $24 million will be used to build out field engagement with partners and joint marketing efforts. It also will be used to build out core features in the product and produce a road map for resellers. Nexenta is working on getting its software to run on SSDs.
“There is an array of SSD providers and each might have different approaches in configurations,” Lockareff said. “Also, a lot of plug-in players want to work with us.”
Nexenta’s latest financing is led by new investor Four Rivers Group, with participation by previous Nexenta investors Menlo Ventures, TransLink Capital, Javelin Ventures, Sierra Ventures, Razor’s Edge Ventures, and West Summit Capital. In addition to Four Rivers, Presidio Ventures and UMC Capital participated in the funding.
Skyera, preparing to make its low-cost SkyHawk all-flash storage arrays generally available in a few months, has $51.6 million in funding to market that platform and fuel development of its next system.
Skyera this week said it closed a mega-B Round led by Dell Ventures with other strategic partners participating. The round was actually $45.6 million, with Skyera’s $6 million seed money included in the $51.6 million figure.
Skyera came out of stealth last August, claiming it can sell all-flash storage at less than $3 per gigabyte. That would make the systems about the same price as spinning disk arrays. SkyHawk arrays have been in limited production through its beta program.
Tony Barbagallo, Skyera’s VP of marketing, said the startup is working on the next generation that will include more enterprise features such as active-active controllers, Fibre Channel support, high availability and the ability to scale up and out.
He said the first-generation systems are all solid state and include storage management software such as LUN management, thin provisioning, read-only and writeable snapshots, and encryption.
“This is disruptive technology, which is why Dell was excited,” Barbagallo said.
He said Skyera clears the biggest obstacle to flash adoption.
“There is one reason – and one reason only – why people aren’t dumping hard drive storage and moving to flash. That’s the cost of flash,” he said. “A number of vendors have picked fringe areas relative to the storage market to sell flash into. VDI, Hadoop data clusters and anything high performance computing are fringe areas of the mainstream market. They need the performance only flash can provide, and they’re willing to pay thirty dollars a gig to get it.
“[Skyera CEO] Rado [Danilak] said we need a way to break that price barrier, and that’s been our strategy from the start.”
The all-flash market is dominated by well-funded startups, but that will change over the next year or so. EMC is expected to release its “Project X” array, based on technology acquired from XtremIO, this year, and NetApp this week said it will have a freshly designed FlashRay system in 2014 to go with its EF540 all-flash array for high performance computing.
Dell is the one major vendor without a fully fleshed-out flash strategy. There is no agreement for Dell to use or sell Skyera technology, but its investment gives it a say in development.
Skyera’s funding release included a quote from Marius Haas, president of Dell’s enterprise solutions group, praising the startup for its “innovative technology that is breaking new ground in enterprise solid-state storage systems, including controllers, memory and software.”
Dell is the only named Skyera investor. Barbagallo said all of the investors are strategic and there is no traditional venture capitalist money behind the startup.
You’ve probably heard of software-defined networking by now. The next step in storage, according to startup Jeda Networks, is software-defined storage networking.
Jeda came out of stealth this week with its Fabric Network Controller (FNC) software, which it describes as intelligent software that installs on a hypervisor and gives standard Ethernet switches the ability to run the most powerful SANs. The software runs with adapters from Intel and Broadcom, and virtualizes the connection between servers and storage. FNC supports the Fibre Channel and Fibre Channel over Ethernet (FCoE) protocols.
The idea is to remove the need for dedicated storage switches, which greatly reduces cost and fits in with the current converged infrastructure strategy pursued by many companies.
“Storage networks are too expensive and too complex, and they don’t scale,” said Jeda CEO Stuart Berman, a veteran of storage networking companies Emulex and Vixel.
Berman said his software installs as a virtual machine on VMware ESX, and will eventually run on hypervisors from Microsoft, Red Hat, Citrix and others.
“We’ve virtualized the way servers talk to storage,” he said. “People will see the virtual machine that they install on their VMware server. We talk to switches and adapters in storage and servers. We can be an application on top of [VMware-owned SDN play] Nicira or Big Switch.”
Can Jeda’s software match Fibre Channel’s low latency? That is a requirement to make it in storage networking. We should find out soon enough. Berman said Jeda has two unannounced OEM wins, and he hopes to see his software show up in shipping products around the middle of the year.
He intends to set up a VAR program as well, but this type of software seems best suited to OEM distribution.
Barracuda Networks has jumped into the crowded online file sharing pond, and today enlisted Drobo as a partner to help get started.
Barracuda’s Copy cloud file sharing service launched as a private beta last year, and will enter public beta this year. The security/data protection vendor will offer customers of Drobo’s new 5N SMB/prosumer NAS box 5 GB of free cloud file storage on Copy. Drobo customers can license additional capacity from Barracuda.
Besides seeding its cloud with potential customers, Barracuda general manager Guy Suter said the partnership can make for a smoother interaction between on-premise and cloud storage.
“To us, the Drobo looks like another device that we synch files to,” he said. “Having local storage and cloud storage interact with each other seamlessly helps your workflow a lot.”
For Drobo, the deal gives its customers a quick way to use the cloud as a complement to the storage inside the box. Erik Pounds, Drobo VP of product management, said he expects customers to embrace the cloud even after they buy on-premise storage. The cloud can serve as backup of critical files.
“A lot of data stored in remote or home offices is inhibited by the four walls of that home or office,” he said. “We’re not afraid of the cloud because the amount of data that needs to be stored and shared is massive. The average data on Drobo storage is 3 TB, so there’s a lot of desire to use both.”
Copy is also available as a standalone service, but Barracuda can use the help from Drobo in making its way among dozens of competitors already in the market, including Dropbox, Box and EMC-owned Syncplicity.
Suter points out the cloud file sharing market is young, and current contenders are still grappling with the best way to serve both users and companies. He said the goals for Copy are to facilitate “easier sharing, and to make it more secure than what’s out there, and company friendly.”
On the company-friendly front, he said Copy gives administrators the ability to create separate areas for proprietary company data. “Users can have an area for personal data, but there’s another area for company data,” he said. “Companies can revoke access to company data.”
Barracuda is known mostly for its firewall products, but it does offer a hybrid backup service based on technology gained when it bought backup software vendor Yosemite Technologies in 2009. Some of that data protection technology is used for Copy.
And you can expect Barracuda to go deeper into data protection. BJ Jenkins joined Barracuda as CEO last October after running EMC’s backup and recovery division.
It’s a sign of the times that news of NetApp’s FlashRay all-flash storage system this week overshadowed its FAS6200 high-end disk array upgrade.
The FAS6200 is the highest performing and largest capacity platform of NetApp’s mainstream storage family. FlashRay won’t be available for another year, and probably won’t approach FAS6200 sales for years.
But flash storage is so much more interesting these days and, besides, it’s not every day that NetApp reveals it is developing a non-Data OnTap storage system.
The FAS6200 hardware isn’t that much different from the previous versions. The exceptions are that the new systems have substantially more memory and support 4 TB drives. The memory boost results in better performance and the larger drives bring the maximum cluster capacity to 65 PB. The FAS6220, 6250, and 6290 replace the FAS6210, 6240 and 6280 arrays and V-Series gateways.
The dual-controller 6220 holds 1,200 drives and 4.7 PB in a 6U chassis with 96 GB of memory. The 6250 and 6290 have two 6U chassis, and each system holds 1,440 drives and 5.6 PB. The 6250 has 144 GB of memory and the 6290 has 192 GB of memory.
Flash can play a big part in these systems, too. The 6290 holds up to 16 TB of combined Flash Cache and Flash Pool capacity, the 6250 holds 12 TB of flash, and the 6220 holds 4 TB. Flash Cache is controller-based and optimizes performance of data throughout the array, while Flash Pools accelerate performance of data inside a volume.
The FAS6200 series competes mostly with EMC’s VMAX 10K entry level enterprise system and the higher end of the midrange VNX family, IBM’s XIV and V7000, the larger of Hewlett-Packard’s StoreServ arrays, and Hitachi Data Systems’ Virtual Storage Platform (VSP) and Unified Storage-VM systems.
Three months after closing its StorSimple acquisition, Microsoft is still keeping its roadmap plans under wraps. The only sign of StorSimple integration so far is what Microsoft calls ASAP – the Azure storage acceleration program.
ASAP is a quick and easy way to purchase cloud storage using StorSimple’s controllers and the Microsoft Windows Azure cloud service. Customers can buy a StorSimple iSCSI storage controller with 50 TB or 100 TB of capacity provisioned to move data to the Azure cloud for a hybrid setup using on-premise and cloud storage. That means the purchase and provisioning are handled in one step instead of a customer having to engage StorSimple and a cloud provider separately.
Mark Weiner, a StorSimple executive and now a director of product marketing for Microsoft storage, said purchasing through ASAP lowers the cost of storage capacity by at least 60% versus traditional storage infrastructure.
Weiner said the biggest change since the acquisition is that StorSimple’s product has gone global under Microsoft. Before the sale, it was U.S.-focused. When asked if StorSimple still works with other cloud providers, he said, “Technically, there’s no reason why we can’t. But obviously we are focused on a joint solution with Azure, either purchased on ASAP or purchased separately.”
Weiner assures us that StorSimple is expanding and improving its technology under Microsoft, and Microsoft sees cloud storage as a big growth area.
“You will see a lot of ongoing innovation from StorSimple as part of Microsoft,” he said. “I still see my engineering colleagues late in the office, there’s no slowing down.”