NetApp and Cisco today said they are expanding their FlexPod reference architecture, which consists of NetApp storage arrays, Cisco servers and networking, and software from other partners. The companies are adding an entry-level FlexPod that uses NetApp FAS2240 storage with Cisco UCS C-Series rack servers, Nexus 5000 switches, the Nexus 2232 fabric extender and UCS 6200 fabric interconnects.
The entry-level version is the first FlexPod to use NetApp’s FAS2000 entry-level storage systems and Cisco rack-mounted UCS devices. The other FlexPod architectures use NetApp FAS3200 and FAS6200 arrays and bladed UCS versions. The entry-level system is designed to support 500 to 1,000 users. It is also the first FlexPod with iSCSI boot support.
“When we started [with FlexPod] we were talking about scaling from the midrange of our product families to the high end. Today we’re talking about an entry-class system,” said Jim Sangster, senior director of solutions marketing for NetApp. “It has the same common structure for support and management.” Sangster said more than 850 customers use the FlexPod reference architecture.
NetApp launched its FlexPod architecture in 2010 mainly as an answer to rival EMC’s Vblock integrated stack, but the storage vendors take different paths in bringing the products to market. Vblocks are sold through VCE, an alliance consisting of EMC, Cisco and VMware. They also have specific model numbers and configurations. NetApp sells FlexPod as a reference architecture that the vendor and partners can configure according to customer workloads. Although Cisco is a VCE partner, it maintains a close relationship with NetApp on the FlexPod architecture.
Adding an entry-level version and more Cisco gear isn’t a huge announcement, but it underscores NetApp’s commitment to its converged architecture strategy for virtual infrastructure and private clouds. The same goes for EMC, which is preparing for a Thursday event to launch a new bundle that it claims will “dramatically simplify the deployment of private cloud.”
NetApp’s Sangster also pitches FlexPod as a faster and less expensive way for customers to build a private cloud. NetApp and Cisco said they have pre-tested automation and orchestration software from CA Technologies, Cloupia and Gale Technologies, with pre-validated software coming for monitoring and analytics. The automation and orchestration software is CA Automation Suite for Data Centers, Cloupia Unified Infrastructure Controller and GaleForce Turnkey Cloud for FlexPod.
“These vendors, with more coming, have met specific levels of API support,” said Satinder Sethi, VP of Cisco’s Server Access Virtual Technology Group. “This validates they have achieved a certain level of integration and makes sure we have management of the storage, network and server layers.”
Customers can also manage FlexPod via open APIs from Cisco Intelligent Automation for Cloud, VMware vCloud Director or VMware vCenter Server.
Sangster said customers can scale the entry-level FlexPods for capacity by adding FAS2240 nodes or by moving up to a FAS3200 or FAS6200, and can scale compute by adding UCS server nodes. No data migration is required to move from one NetApp system to another, and all UCS models are managed by Cisco UCS Manager.
Unlike VCE’s Vblocks, FlexPods do not have specific model numbers. Sangster said some partners sell small, medium and large reference architectures but they are not limited to specific NetApp and Cisco products. “There’s not a hard-coded bill of materials,” he said.
The new configuration options will be available next month.
Red Hat today rolled out the beta version of Red Hat Storage Software 2.0, used to build scale-out network-attached storage (NAS) for unstructured data. The upgraded version includes new features such as the ability to access both file and object-based data from a single storage pool and support for Hadoop in “big data” environments.
Version 2 is the first major upgrade for the product since Red Hat acquired startup Gluster last year. Current versions of Red Hat Storage on the market are re-branded versions of the GlusterFS product with tweaks to better support the Red Hat Enterprise Linux (RHEL) operating system.
Red Hat Storage Software 2.0 makes it easier to manage unstructured data across CIFS, NFS and GlusterFS mount points. The unified file and object feature lets users save data as a file and retrieve it as an object, or save data as an object and retrieve it as a file.
“A typical use case would be a customer can choose to save something as an object or file. So you can upload a photo as a file but in the portal software it is converted into an object,” said Sarangan Rangachari, general manager for storage at Red Hat.
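The unified file and object approach works because an object written to a container can be stored as an ordinary file in that container's directory, leaving the same bytes reachable over NFS or CIFS. A minimal sketch of the idea, with illustrative class and method names that are not Red Hat's API:

```python
import os
import tempfile

class UnifiedStore:
    """Toy unified file/object pool: an object PUT to container/name
    lands on disk at <root>/<container>/<name>, so file-protocol
    clients can read the same data as a plain file."""

    def __init__(self, root):
        self.root = root

    def put_object(self, container, name, data):
        path = os.path.join(self.root, container, name)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
        return path  # the object is now also an ordinary file

store = UnifiedStore(tempfile.mkdtemp())
path = store.put_object("photos", "cat.jpg", b"\xff\xd8jpeg-bytes")
with open(path, "rb") as f:  # "file" access to the "object"
    assert f.read() == b"\xff\xd8jpeg-bytes"
```

In the real product the object interface sits behind an HTTP API, but the mapping of objects onto a directory tree is what makes the dual access possible.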
The 2.0 version supports Hadoop MapReduce, a programming model and software framework for writing applications that rapidly process large amounts of data in parallel on large clusters of compute nodes. “What we provide in this release is the underlying file system for MapReduce-based applications that use the Hadoop Distributed File System (HDFS),” Rangachari said.
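The MapReduce model splits work into a map step that emits key-value pairs, a shuffle that groups pairs by key, and a reduce step that aggregates each group. A framework-free word-count sketch of those phases:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input split
    return [(word, 1) for word in document.split()]

def shuffle(mapped_pairs):
    # Shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values independently
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big clusters", "big data"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
print(counts)  # {'big': 3, 'data': 2, 'clusters': 1}
```

In Hadoop the map and reduce tasks run in parallel across cluster nodes, and a distributed file system such as HDFS (or, per Rangachari, Red Hat Storage) holds the input splits and output.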
The Red Hat Storage Software provides a global namespace capability that aggregates disk and memory resources into a unified storage volume. The software runs on commodity servers and uses a combination of open source Gluster software, which Red Hat acquired in October 2011, and Red Hat Enterprise Linux 6. In February, Red Hat also introduced the Red Hat Virtual Storage Appliance for scale-out NAS delivered as a virtual appliance. This allows customers to deploy virtual storage servers the same way virtual machines are deployed in the cloud.
The Red Hat appliance can aggregate both Elastic Block Store (EBS) and Elastic Compute Cloud (EC2) instances in Amazon Web Services environments.
Conversations with IT people about long-term archiving usually begin by focusing on a specific storage device, but it quickly becomes apparent that much more is involved. Long-term archiving is a complex problem that requires education to understand, and there is no single silver-bullet product.
The technology discussions include the devices and media for storing data and the storage systems and features utilized. Storage systems that automatically and non-disruptively migrate data from one generation of a system to another are key to long-term archiving. I use the analogy of passing the baton in a relay race.
The information maintained in an archive is another key consideration. Information is data with context, where the context is really an understanding of what the data is, what it means, and what its value is. Maintaining information over time requires applications that understand the information, devices that can read the information, and a method for determining when the information no longer has value as part of a data retention policy. Kicking the can of information down the road for years when it has no value makes no sense.
The ability to read and understand the information years into the future is another major concern for long-term archiving. Without applications that do this, the issue of addressing long-term archiving becomes moot. I try to divide the problem into two parts. The first is defining information that is “system of record” where the data must be processed by the application to produce results. The simplest example of this is business records that produce reports, statistics, or other numbers. In this case, there must be a linkage between the information and the application.
If the application changes or is replaced, then the information also must be carried along with translation so the new app understands it. If not, the information no longer has value.
The second part of the application issue concerns information that needs to be viewable in the future without a specific application. This case is handled by putting the information in a viewable format that will persist for a long time; today that would be a PDF document. At some point that may change, and the PDF documents would have to be translated or transformed into the new viewable format, once again requiring a linkage between the information and an application.
You must address all of these points for a long-term archive to achieve its goal of making information available and readable when it’s needed.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
The Storage Networking World (SNW) conference was disrupted Tuesday for a couple of hours when a tornado hit the Dallas-Fort Worth area in Texas. The day started out with cloudy skies, but in the afternoon a siren went off throughout Dallas, which was the first sign that something was amiss.
I was sitting on the outside balcony at the Omni Hotel and Resort in downtown Dallas interviewing an executive from Mezeo Software when news started circulating that a tornado was in the area. It didn’t take long before SNW attendees started heading for the windows to watch the strange swirl of clouds in the distance. Hotel personnel quickly ordered everyone on the balcony to move back into the hotel and away from the windows. But not everyone was willing to miss the chance of seeing a tornado hit the city. Many people kept going back to the windows, pulling out their phones and taking pictures.
One SNW attendee, a meteorologist who has been chasing storms for 10 years, started arguing with hotel employees because he wanted to watch the cloud movement from a window while also tracking its progress from his iPad.
The exhibit hall was closed and sessions were canceled or delayed for at least an hour while everyone waited for the tornado to pass. A heavy rain storm followed after the dark, swirling clouds lifted, giving attendees plenty to talk about besides storage when the conference resumed.
This wasn’t the first time SNW was held at the site of nasty weather. The 2005 fall SNW was disrupted by a hurricane in Orlando, Fla., that prevented many would-be attendees from making it to the show. Fall SNW returned to Orlando last fall for the first time since the hurricane (Orlando had only been the site for the spring show in recent years). The fall SNW in October 2012 will be in Santa Clara.
Hitachi Global Storage Technologies (HGST), now part of Western Digital, today launched the first 4 TB enterprise hard drive.
The Ultrastar 7K4000 is a 3.5-inch, 7,200 rpm SATA drive with a 2 million-hour mean time between failures (MTBF) rating and a five-year warranty. Current SATA enterprise drives top out at 3 TB, and HGST’s main enterprise drive rival Seagate has not yet released a 4 TB drive.
HGST VP of product marketing Brendan Collins said he sees the larger drives as a boon for big internet companies and cloud providers because they allow organizations to pack in 33% more capacity than they can now while reducing power by 24%.
“If you’re a massive data center running out of space and you have to react to petabyte growth, one way of doing that is replacing 3 TB drives with 4 TB,” he said.
OEM partners are qualifying the drives, and Collins said he expects them to ship in volume around the middle of the year. But some vendors may hold off shipping due to the transition to the new Advanced Format 4K hard drive sectors. In moving from 512-byte sectors to 4,096-byte sectors, Advanced Format handles large files more efficiently and improves data integrity. However, server and storage vendors must rewrite their software to support the new format.
The Ultrastar 7K4000 is known as a 512e (emulation) drive because it uses 4,096-byte physical sectors with firmware that emulates 512-byte sectors, allowing software written for the older format to work with the new drive. However, the translation process degrades performance, and Collins said some storage vendors might wait until native 512-byte versions are available later this year before shipping the drives.
“Storage system vendors design their own file systems,” Collins said. “Some are ready [for 4K] and can drop it in immediately with no impact. If they’re not ready, they can wait for the native [512-byte] version.”
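The emulation penalty only bites when a write does not line up with the drive's 4,096-byte physical sectors; aligned writes map cleanly, while unaligned ones force the drive to read, modify and rewrite whole physical sectors. A small sketch of the alignment arithmetic (the constants and function are illustrative, not drive firmware):

```python
LOGICAL = 512      # sector size the host software still sees
PHYSICAL = 4096    # sector size actually used on the platters

def is_aligned(lba, count):
    """True if a write of `count` 512-byte logical sectors starting at
    logical block address `lba` covers whole 4K physical sectors,
    avoiding the emulation layer's read-modify-write penalty."""
    start_byte = lba * LOGICAL
    length = count * LOGICAL
    return start_byte % PHYSICAL == 0 and length % PHYSICAL == 0

# A 4K write at LBA 0 maps onto exactly one physical sector...
print(is_aligned(0, 8))   # True
# ...but the same write starting at LBA 1 straddles two physical
# sectors, so the drive must read, modify and rewrite both.
print(is_aligned(1, 8))   # False
```

This is why file systems and storage software need updating for Advanced Format: partitions and I/O sizes must be kept on 4K boundaries.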
Collins expects the largest storage vendors to use the 512e drives. He also said HGST will likely have a SAS version of the 4 TB drive later this year.
Violin Memory today picked up another $50 million in funding and a new strategic partner in SAP. If the market cooperates, it will be the last funding round before Violin follows its solid-state storage rival Fusion-io to an initial public offering (IPO).
Violin has pulled in $150 million in funding since former Fusion-io CEO Don Basile became Violin’s CEO in late 2009. Basile said Violin has grown from 100 employees to 320 since last June, and sales increased 500% over the last year. He puts Violin’s valuation at $800 million, which is probably more than 10 times its annual revenue.
Violin likes to bring in funding money from strategic investors as well as venture capitalists. Violin’s NAND supplier Toshiba has been an investor since the first funding round, and Toshiba and SAP were the largest investors in this latest round. Previous investor Juniper Networks and newcomer Highland Capital Partners also participated in today’s round.
“We’re getting in the habit of this,” Basile said after closing his fourth funding round at Violin. “At the end of last year we considered the public market, but our bankers weren’t sure if the public market was open in the first quarter of 2012, so we took a mezzanine round. This gives us money to grow and operate regardless of market decisions.”
Violin sells all-flash storage arrays and caching appliances. Basile points to EMC’s VFCache PCIe caching product and its plans for a Project Thunder flash-based shared storage appliance that will compete with Violin as proof that the enterprise flash market is poised to take off.
By Basile’s count, there are at least 30 companies selling all-flash arrays now, although he said Violin mostly competes with traditional storage vendors offering solid-state drives mixed in their hard drive arrays. Solid-state storage companies raised more than $300 million in funding in 2011, and have also been prime acquisition targets.
Violin acquired the assets of Gear6 in 2010, and turned the technology into its vCache NFS Caching product. Basile said some of Violin’s latest funding may be used for small acquisitions to enhance its product line. “We’re an active reviewer of companies,” he said. “Expect us to acquire things that make sense to buy rather than engineer from the ground up.”
Disaster recovery in the cloud is improving by the day.
At least three vendors upgraded services in the past week, concentrating on faster recovery for small enterprises and SMBs.
EVault added a four-hour option for its EVault Cloud Disaster Recovery Service (EVault CDR) to go with its previous 24- and 48-hour SLA options. EVault is promising to have applications on the four-hour SLA up and running within that window.
EVault president Terry Cunningham said four hours is the magic number to gain critical mass for his company’s cloud DR service because it opens the door for heavily regulated businesses that cannot stand long outages for critical systems.
“This opens up the whole market for us,” Cunningham said. “One customer said, ‘When you deliver four hours, you get all our business.’”
He said technology advances such as more granular snapshots and shorter backup windows make the four-hour SLA possible. The EVault service includes a minimum of one DR test per year, and customers can choose different SLAs for different applications, using the four-hour recovery for critical apps and the longer recovery options for others. He declined to give exact pricing because it is set by EVault’s distribution partners, but said the four-hour SLA costs twice as much as the 24-hour option.
EVault, owned by Seagate, changed its name back from i365 to EVault last December.
Not everyone is so impressed with four-hour recovery. QuorumLabs promises instant recovery with its new Hybrid Cloud Disaster Recovery service that lets customers install one of the vendor’s onQ appliances on site and replicate to another appliance at a QuorumLabs’ off-site data center.
QuorumLabs’ hybrid service keeps up-to-date virtual clones of critical systems that run on the appliance or in the cloud. The service builds new recovery nodes continuously, and the vendor says the cloud appliance can take over for failed servers with one mouse click.
“Compared to our offering – ready in minutes, tested daily – [four-hour recovery] is like a pizza delivery guaranteed to arrive sometime in the next several days,” QuorumLabs CEO Larry Lang said.
QuorumLabs already has customers who set up DR by installing appliances at two locations, but not all of its customers have a second site. “If something were to happen, we bring up an exact copy of that server in your cloud,” Lang said. “Users just redirect their client to the cloud. Literally in an hour they can have something up and running.”
QuorumLabs’ service is priced by the number of servers and the amount of data protected. Lang said a customer with 10 servers and 3 TB would pay about $20,000 per year.
Zetta also upgraded its cloud backup and DR service. Zetta’s DataProtect 3.0 uses the ZettaMirror software agent on the customer site and synchronizes data to one of the vendor’s cloud data centers. The latest version adds support for Apple desktops and laptops as well as Microsoft SQL Server and Windows system state, improves performance with compression and a metadata cache, and allows snapshots of synced data.
EVault’s Cunningham said the cloud’s role in data protection has made the business more competitive. He said customers are re-evaluating their backup and DR processes and find it easier to switch vendors.
“It used to be that when you made a backup deal, it was for life,” he said. “We used to sell you some software and say ‘Good luck with that, hope it works out.’ Today it’s a service. We have to earn the business every month.
“The customer has more options for switching now. There are some technical challenges, but you can do it. If vendors screw up, they lose the customers.”
Atlantis Computing today launched Atlantis ILIO for Citrix XenApp, which helps reduce I/O and latency problems often associated with application virtualization. The product runs on a VMware vSphere hypervisor and is aimed at customers planning to virtualize XenApp 6.5 with Windows Server 2008 R2.
The new product is built on the same codebase as Atlantis ILIO for VDI, but this version is targeted at customers deploying application virtualization. Atlantis ILIO helps eliminate I/O bottlenecks by processing I/O locally within the hypervisor’s memory and performing inline deduplication to reduce the amount of data hitting the NAS or SAN.
Atlantis ILIO for XenApp is a virtual machine that is deployed on each XenApp server and creates an NFS datastore that acts as the storage for the XenApp VMs running on Windows Server 2008 R2.
“We correct the problem the way we do with VDI,” said Seth Knox, Atlantis’ director of marketing. “All duplicate storage traffic is generally eliminated before it’s sent to the storage.”
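Inline deduplication of the kind Knox describes fingerprints each block before it is written and forwards only previously unseen content to backing storage. A toy sketch of the mechanism, not Atlantis' implementation:

```python
import hashlib

class InlineDedup:
    """Toy inline deduplication: hash each fixed-size block as it is
    written; only blocks with unseen content reach the backing store."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.store = {}           # fingerprint -> block contents
        self.blocks_skipped = 0   # duplicates never sent downstream

    def write(self, data):
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp in self.store:
                self.blocks_skipped += 1  # duplicate: skip the SAN/NAS write
            else:
                self.store[fp] = block

dedup = InlineDedup(block_size=4)
dedup.write(b"AAAABBBBAAAACCCC")  # four blocks, one repeated
print(len(dedup.store), dedup.blocks_skipped)  # 3 1
```

Because virtualized XenApp servers boot from near-identical images, a large fraction of their I/O is duplicate content, which is why this technique pays off for application virtualization.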
Torsten Volk, senior analyst for Enterprise Management Associates, said Atlantis ILIO for XenApp helps optimize performance because it sequentializes and dedupes the I/O traffic. He also said support for XenApp will broaden Atlantis’ market substantially.
“There is a much larger customer base for Citrix XenApp compared to the VDI market and only minimal changes to the Atlantis ILIO codebase were required to accommodate XenApp,” Volk said. “Not many are using VDI because the ROI is still unclear, but XenApp is a well-liked and vastly adopted platform that has provided tremendous customer value for over a decade.”
Knox said there are customers who ask for both products, but agreed there will be more demand for ILIO for XenApp.
“There is a much larger install base of people using XenApp,” Knox said. “Many of our customers use both VDI and XenApp, so they asked us to do a version for XenApp.”
Basho Technologies has launched Riak CS, its new cloud storage software. According to the vendor, Riak CS lets customers store and retrieve content up to 5 GB per object, is compatible with the Amazon S3 API, has multi-tenancy features, and reports per-tenant usage data and statistics on network I/O. Pricing for Riak CS starts at $10,000 per hardware node, which comes to about 40 cents per GB for a 24 TB node.
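The quoted per-gigabyte figure follows from the node price, assuming decimal terabytes (1 TB = 1,000 GB); with binary terabytes the result comes out slightly lower:

```python
node_price = 10_000          # dollars per hardware node
node_capacity_tb = 24        # capacity per node

# Decimal convention: 1 TB = 1,000 GB
price_per_gb = node_price / (node_capacity_tb * 1_000)
print(f"${price_per_gb:.2f} per GB")
```

That works out to roughly 42 cents per GB (about 41 cents with 1 TB = 1,024 GB), consistent with the vendor's "about 40 cents" figure.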
Riak CS is Basho’s second software application. Its Riak NoSQL database is based on principles outlined in the 2007 Amazon Dynamo white paper. While Riak is an open source application, Riak CS is not. Basho added multi-tenancy, S3 API compatibility, large object support, and per-tenant usage, billing and metering to Riak CS to make it a cloud application.
“We look at ourselves as an arms dealer of Amazon principles [outlined in the 2007 Amazon Dynamo distributed white paper],” Basho CMO Bobby Patrick said. “Riak CS is for large service providers looking for scalability and tenancy, and also large companies that want S3 without AWS [Amazon Web Services]. This is S3-compatible, but for a private cloud.”
He said several large multinational companies are evaluating Riak CS as a method of keeping important data in-house behind a firewall.
Riak CS is built to run on commodity hardware. Patrick said it will compete mainly with OpenStack Swift object storage, but it will also compete with EMC’s Atmos and software from smaller vendors such as Scality’s Ring and Gemini Mobile’s Cloudian.
“Any hosting company, any telecom company, any infrastructure-as-a-service company, is going to have to evolve from expensive shared storage to cloud storage for economic scale benefits,” Patrick said. “A new architecture is needed for that. They need to do it on cheap commodity hardware and in a way they can manage it.”
DataDirect Networks (DDN) launched two storage systems for people who want to start small in their approach to “big data.”
DDN is known for storage systems that deliver extreme performance and capacity but also carry large price tags. To try to broaden its market, the vendor this week introduced lower-priced arrays, including one that starts at $100,000 during introduction pricing that runs until the end of June.
“We found there are a lot of customers and prospective customers looking to start with DataDirect at a lower price and form factor while benefitting from scalability,” DDN marketing VP Jeff Denworth said.
The new systems are the DDN SFA10K-M and SFA10K-ME. The 10K-M scales to 720 TB with InfiniBand or Fibre Channel networking and SAS, SATA or solid-state drives (SSDs). Customers can upgrade the 20U system to the larger SFA10K-X.
The SFA10K-ME is the same hardware as the 10K-M, but can be bundled with DDN’s GridScaler or ExaScaler parallel file systems. The promotional $100,000 price is for a SFA10K-M with eight InfiniBand ports, a 60-slot disk enclosure, and 16 GB of mirrored cache.
DDN says its new systems cost 40% less with a 57% smaller form factor than its larger SFA storage arrays.
“The news of dramatically smaller footprints and reduced-cost SFA entry points is not what we’re used to hearing from a company that is accustomed to extending the scalability and performance envelopes of big data applications,” Taneja Group analyst Jeff Byrne wrote of DDN’s new systems in a blog on the Taneja web site.
Denworth said the new systems fill a gap in DDN’s product line between the S2A6620 midrange storage for media/entertainment and high performance computing and the SFA10K-X high-bandwidth petabyte capacity platforms.
“Customers can grow the system as requirements and budget dictate,” Denworth said.
SFA10K-M customers can upgrade to DDN 10K or SFA12K systems, but they would have to take the systems offline. There are no non-disruptive upgrades.