Storage Soup


April 12, 2012  12:20 PM

Consumerization of IT leads to accidental storage admins

Profile: Randy Kerns

One of the ongoing changes in IT is the transition to IT generalists configuring and managing storage in all but the largest enterprises. This was always common in small enterprises, but now is increasingly the case in the mid-tier enterprise, too. Beyond storage, the IT generalist handles server operating systems, networking, and the virtualization hypervisors.

Another dynamic occurring along these lines is called the consumerization of IT. People who use technology such as smartphones or iPads in their daily lives are becoming administrators at the IT generalist level. The general consumer technology user must:

· Know how to set up accounts and security.

· Understand how to protect data in the cloud.

· Know how to migrate data to a new device.

· Understand file sharing options, such as access to photos on Snapfish.

What has happened here?  IT operations have become part of many people’s lives.  Most are doing these administrative tasks out of necessity with no training other than some interactive guidance.  Some do it incorrectly, some struggle through the administration, and others provide services – in my case, I’m the admin for the PCs, etc. for my daughters.

This shift even changes the way midrange enterprise storage is managed. Element managers (the storage vendor’s storage system management software) must be designed with expectations that an IT generalist will manage the storage environment.

Storage vendors should assume the IT generalist using the element manager has a limited base of storage knowledge. They should expect that no manual will be read, either on paper or in electronic form. And when there is a complex set of choices, they should assume the wrong one will be tried first and that corrective action or second chances will be necessary.

This leads to the demand for a new GUI that is highly interactive with icons to demonstrate actions and status. The GUI must seem simple, belying the underlying complexity.

Without a plan and real education, we’ve created a massive, unpaid workforce of IT generalists.  So, when do we get a new generation of storage administrators without planning for it?

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

April 10, 2012  11:49 AM

Toshiba takes bow for inventing NAND flash, wasn’t always so

Profile: ITKE

Written by: John Hilliard

Toshiba America Electronic Components Inc. is giving itself a high-five for the invention of NAND flash 25 years ago – a technology that, among other uses, allows customers to store critical data in bananas or cake-worthy celebrations of industrial design.

But Toshiba isn’t giving any kudos to the man credited with the invention, perhaps because he represents a part of the story the vendor would rather forget.

In the company’s press release, Scott Nelson, a Toshiba vice president, called NAND flash a “game changer” and boasted “The cost/performance of NAND flash continues to stand the test of time. NAND flash is leading the way to thin and light hardware, has made the mobility of content possible, and is enabling ‘green’ storage in the data centers.”

According to market research firm IC Insights, NAND flash sales are expected to hit $32.8 billion this year, an 11% hike from 2011, and it may top DRAM sales for the first time.

But many anniversaries include awkward memories that everyone wants to forget, and this one has such a memory for Toshiba: its relationship with one of the experts credited with inventing the technology, Fujio Masuoka. And no, Masuoka is not mentioned in Toshiba’s announcement.

According to Forbes.com, Toshiba for a while avoided taking credit for inventing flash storage, gave the credit to Intel and downplayed Masuoka’s work in developing NAND flash.

“For his work, Masuoka says, he was awarded a few hundred dollars from Toshiba and only after a Japanese newspaper gave his new type of memory an award of invention of the year in 1988,” Forbes reported in 2002, also noting that Toshiba “disputes” Masuoka’s account (the company said he was promoted).

The dispute landed Toshiba and Masuoka in court. Masuoka sued Toshiba in 2004 for about $9 million, according to Business Week, and the case was settled a few years later for about $750,000.


April 10, 2012  8:16 AM

NetApp, Cisco expand FlexPod for smaller private clouds

Profile: Dave Raffo

NetApp and Cisco today said they are expanding their FlexPod reference architecture, which consists of NetApp storage arrays, Cisco servers and networking, and software from other partners. NetApp and Cisco are adding an entry-level FlexPod that uses NetApp FAS2240 storage with Cisco UCS C-Series rack servers, Nexus 5000 switches, a Nexus 2232 Fabric Extender and UCS 6200 Fabric Interconnects.

The entry-level version is the first FlexPod to use NetApp’s FAS2000 entry-level storage systems and Cisco rack-mounted UCS devices. The other FlexPod architectures use NetApp FAS3200 and FAS6200 arrays and bladed UCS versions. The entry-level system is designed to support 500 to 1,000 users. It is also the first FlexPod with iSCSI boot support.

“When we started [with FlexPod] we were talking about scaling from the midrange of our product families to the high end. Today we’re talking about an entry-class system,” said Jim Sangster, senior director of solutions marketing for NetApp. “It has the same common structure for support and management.” Sangster said more than 850 customers use the FlexPod reference architecture.

NetApp launched its FlexPod architecture in 2010 mainly as an answer to rival EMC’s Vblock integrated stack, but the storage vendors take different paths in bringing the products to market. Vblocks are sold through VCE, an alliance consisting of EMC, Cisco and VMware. They also have specific model numbers and configurations. NetApp sells FlexPod as a reference architecture that the vendor and partners can configure according to customer workloads. Although Cisco is a VCE partner, it maintains a close relationship with NetApp on the FlexPod architecture.

Adding an entry level version and more Cisco gear isn’t a huge announcement, but it underscores NetApp’s commitment to its converged architecture strategy for virtual infrastructure and private clouds. The same goes for EMC, which is preparing for a Thursday event to launch a new bundle that it claims will “dramatically simplify the deployment of private cloud.”

NetApp’s Sangster also pitches FlexPod as a faster and less expensive way for customers to build a private cloud. NetApp and Cisco said they have pre-tested automation and orchestration software from CA Technologies, Cloupia and Gale Technologies, with pre-validated software coming for monitoring and analytics. The automation and orchestration software is CA Automation Suite for Data Centers, Cloupia Unified Infrastructure Controller and GaleForce Turnkey Cloud for FlexPod.

“These vendors, with more coming, have met specific levels of API support,” said Satinder Sethi, VP of Cisco’s Server Access Virtual Technology Group. “This validates they have achieved a certain level of integration and makes sure we have management of the storage, network and server layers.”

Customers can also manage FlexPod via open APIs from Cisco Intelligent Automation for Cloud, VMware vCloud Director or VMware vCenter Server.

Sangster said customers can scale the entry-level FlexPods for capacity by adding FAS2240 nodes or by scaling up to a FAS3200 or FAS6200. They can scale compute by adding UCS server nodes. There is no data migration required to move from one NetApp system to another, and all UCS models are managed by Cisco UCS Manager.

Unlike VCE’s Vblocks, FlexPods do not have specific model numbers. Sangster said some partners sell small, medium and large reference architectures but they are not limited to specific NetApp and Cisco products. “There’s not a hard-coded bill of materials,” he said.

The new configuration options will be available next month.


April 9, 2012  9:28 PM

Red Hat Storage 2.0 supports unified file and object-based data

Profile: Sonia Lelii

Red Hat today rolled out the beta version of Red Hat Storage Software 2.0, used to build scale-out network-attached storage (NAS) for unstructured data. The upgraded version includes new features such as the ability to access both file and object-based data from a single storage pool and support for Hadoop in “big data” environments.

Version 2 is the first major upgrade for Red Hat since it acquired startup Gluster last year. Current versions of Red Hat Storage on the market are re-branded versions of the GlusterFS product with tweaks to better support the Red Hat Enterprise Linux (RHEL) operating system.

Red Hat Storage Software 2.0 makes it easier to manage unstructured data across CIFS, NFS and GlusterFS mount points. The unified file and object feature allows users to save data as a file and retrieve it as an object, or save data as an object and retrieve it as a file.

“A typical use case would be a customer can choose to save something as an object or file. So you can upload a photo as a file but in the portal software it is converted into an object,” said Sarangan Rangachari, general manager for storage at Red Hat.
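
To make that dual access concrete, here is a minimal sketch of the pattern, assuming a Swift-style object endpoint in front of the Gluster volume and the same volume mounted as a file system. The URL, token and paths below are hypothetical placeholders, not details from Red Hat’s product.

```python
# Illustrative sketch only: the endpoint, auth token and mount path are
# hypothetical, not taken from Red Hat's documentation.
import requests

OBJECT_URL = "http://gluster-gw.example.com:8080/v1/AUTH_demo/photos/vacation.jpg"
TOKEN = "AUTH_tk_example"                        # placeholder auth token
MOUNT_PATH = "/mnt/gluster/photos/vacation.jpg"  # same data exposed as a file

# 1. Save the photo through the object (REST) interface.
with open("vacation.jpg", "rb") as f:
    resp = requests.put(OBJECT_URL, data=f, headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()

# 2. Retrieve the same data as a plain file from the mounted volume.
with open(MOUNT_PATH, "rb") as f:
    photo_bytes = f.read()

print(f"Read {len(photo_bytes)} bytes back through the file interface")
```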

The 2.0 version supports Hadoop MapReduce, a programming model and software framework for writing applications that rapidly process large amounts of data in parallel on large clusters of compute nodes. “What we provide in this release is the underlying file system in MapReduce-based applications that use the Hadoop Distributed File System (HDFS),” Rangachari said.
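
For readers who haven’t worked with MapReduce, here is a toy, single-process sketch of the programming model itself (not of Hadoop or of Red Hat’s integration): map emits key/value pairs, the framework groups them by key, and reduce aggregates each group.

```python
# Toy, single-process illustration of the MapReduce model (not Hadoop itself).
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1          # emit (key, value) pairs

def reduce_phase(key, values):
    return key, sum(values)            # aggregate all values for one key

documents = ["the quick brown fox", "the lazy dog", "the fox"]

# "Shuffle": group intermediate pairs by key, as Hadoop would across a cluster.
grouped = defaultdict(list)
for doc in documents:
    for key, value in map_phase(doc):
        grouped[key].append(value)

word_counts = dict(reduce_phase(k, v) for k, v in grouped.items())
print(word_counts)   # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```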

The Red Hat Storage Software provides a global namespace capability that aggregates disk and memory resources into a unified storage volume. The software runs on commodity servers and uses a combination of open source Gluster software, which Red Hat acquired in October 2011, and Red Hat Linux 6. In February, Red Hat also introduced the Red Hat Virtual Storage Appliance for scale-out NAS delivered as a virtual appliance. This allows customers to deploy virtual storage servers the same way virtual machines are deployed in the cloud.

The Red Hat appliance lets customers aggregate both Elastic Block Storage (EBS) and Elastic Compute Cloud (EC2) instances in Amazon Web Services environments.


April 5, 2012  7:55 AM

Long-term archives require detailed planning

Profile: Randy Kerns

Conversations with IT people about long-term archiving usually begin by focusing on a specific storage device, and then it quickly becomes apparent that much more is involved. Addressing a long-term archive is a complex issue that requires education to understand. There is no single silver-bullet product.

The technology discussions include devices/media for storing data and the storage systems and features utilized. Storage systems that automatically and non-disruptively migrate data from one generation of a system to another are key to long-term archiving. I use the analogy of pushing something along in a relay race.

The information maintained in an archive is another key consideration. Information is data with context, where the context is really an understanding of what the data is, what it means, and what its value is. Maintaining information over time requires applications that understand the information, devices that can read the information, and a method for determining when the information no longer has value as part of a data retention policy. Kicking the can of information down the road for years when it has no value makes no sense.

The ability to read and understand the information years into the future is another major concern for long-term archiving. Without applications that do this, the issue of addressing long-term archiving becomes moot. I try to divide the problem into two parts. The first is defining information that is “system of record” where the data must be processed by the application to produce results. The simplest example of this is business records that produce reports, statistics, or other numbers. In this case, there must be a linkage between the information and the application.

If the application changes or is replaced, then the information also must be carried along with translation so the new app understands it. If not, the information no longer has value.

The second part of the application issue concerns information that needs to be viewable in the future where no application is needed. This case is created by putting the information in a viewable format that will persist for a long time. Today that would be a PDF document. At some point that may change and the PDF documents would have to be translated or transformed for the new viewable format, once again requiring a linkage between the information and application.

You must address all of these points for a long-term archive to achieve its goal of making information available and readable when it’s needed.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 4, 2012  8:48 AM

Storage world gets close-up look at a disaster

Profile: Sonia Lelii

The Storage Networking World (SNW) conference was disrupted Tuesday for a couple of hours when a tornado hit the Dallas-Fort Worth area in Texas. The day started out with cloudy skies, but in the afternoon a siren went off throughout Dallas, which was the first sign that something was amiss.

I was sitting on the outside balcony at the Omni Hotel and Resort in downtown Dallas interviewing an executive from Mezeo Software when news started circulating that a tornado was in the area. It didn’t take long before SNW attendees started heading for the windows to watch the strange swirl of clouds in the distance. Hotel personnel quickly ordered everyone on the balcony to move back into the hotel and away from the windows. But not everyone was willing to miss the chance to see a tornado hit the city. Many people kept going back to the windows, pulling out their phones and taking pictures.

One SNW attendee, a meteorologist who has been chasing storms for 10 years, started arguing with hotel employees because he wanted to watch the cloud movement from a window while also tracking its progress from his iPad.

The exhibit hall was closed and sessions were canceled or delayed for at least an hour while everyone waited for the tornado to pass. A heavy rain storm followed after the dark, swirling clouds lifted, giving attendees plenty to talk about besides storage when the conference resumed.

This wasn’t the first time SNW was held at the site of nasty weather. The 2005 fall SNW was disrupted by a hurricane in Orlando, Fla., that prevented many would-be attendees from making it to the show. Fall SNW returned to Orlando last fall for the first time since the hurricane (Orlando had only been the site for the spring show in recent years). The fall SNW in October 2012 will be in Santa Clara.


April 3, 2012  1:46 PM

Get ready for 4 TB SATA drives

Profile: Dave Raffo

Hitachi Global Storage Technologies, now part of Western Digital, today launched the first 4 TB enterprise hard drive.

The Ultrastar 7K4000 is a 3.5-inch, 7,200 rpm SATA drive with a 2 million-hour mean time between failures (MTBF) rating and a five-year warranty. Current SATA enterprise drives top out at 3 TB, and HGST’s main enterprise drive rival, Seagate, has not yet released a 4 TB drive.

HGST VP of product marketing Brendan Collins said he sees the larger drives as a boon for big internet companies and cloud providers because they allow organizations to pack in 33% more capacity than they can now while reducing power by 24%.
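
The capacity claim is straightforward arithmetic; here is a quick back-of-envelope sketch (the 1 PB deployment size is our hypothetical, and the 24% power figure is HGST’s own spec, not derived here):

```python
# Back-of-envelope math behind the capacity claim (illustrative only).
OLD_DRIVE_TB = 3
NEW_DRIVE_TB = 4
TARGET_TB = 1000                   # hypothetical 1 PB deployment

capacity_gain = (NEW_DRIVE_TB - OLD_DRIVE_TB) / OLD_DRIVE_TB
drives_old = -(-TARGET_TB // OLD_DRIVE_TB)   # ceiling division
drives_new = -(-TARGET_TB // NEW_DRIVE_TB)

print(f"Capacity gain per drive: {capacity_gain:.0%}")                      # 33%
print(f"Drives for {TARGET_TB} TB: {drives_old} at 3 TB vs {drives_new} at 4 TB")
```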

“If you’re a massive data center running out of space and you have to react to petabyte growth, one way of doing that is replacing 3 TB drives with 4 TB,” he said.

OEM partners are qualifying the drives, and Collins said he expects them to ship in volume around the middle of the year. But some vendors may hold off shipping due to the transition to the new Advanced Format 4K hard drive sectors. In moving from 512-byte sectors to 4,096-byte sectors, Advanced Format handles large files more efficiently and improves data integrity. However, server and storage vendors must rewrite their software to support the new format.

The Ultrastar 7K4000 is known as a 512e (emulation) drive because it is formatted with 4,096-byte physical sectors and uses firmware that emulates 512-byte sectors, allowing software written for the older format to work with the new drive format. However, there is performance degradation during the translation process, and Collins said some storage vendors might wait until native 512-byte versions are available later this year before shipping the drives.
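
A rough sketch of why that translation can hurt: the drive exposes 512-byte logical sectors but stores data in 4,096-byte physical sectors, so any write that doesn’t cover whole physical sectors forces a read-modify-write. The helper below is our illustration of the alignment rule, not HGST firmware logic.

```python
# Illustrative sketch of the 512e penalty: a write expressed in 512-byte
# logical sectors avoids read-modify-write only if it covers whole
# 4,096-byte physical sectors.
LOGICAL = 512
PHYSICAL = 4096

def needs_read_modify_write(start_lba, sector_count):
    """True if a write of `sector_count` 512-byte sectors starting at
    logical block `start_lba` touches a partial physical sector."""
    start_byte = start_lba * LOGICAL
    end_byte = start_byte + sector_count * LOGICAL
    return start_byte % PHYSICAL != 0 or end_byte % PHYSICAL != 0

# Aligned 4 KB write (8 sectors at LBA 0): handled as one physical-sector write.
print(needs_read_modify_write(0, 8))    # False
# Misaligned write (8 sectors at LBA 1): the drive must read, merge and rewrite
# two physical sectors, which is the translation penalty Collins mentions.
print(needs_read_modify_write(1, 8))    # True
```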

“Storage system vendors design their own file systems,” Collins said. “Some are ready [for 4K] and can drop it in immediately with no impact. If they’re not ready, they can wait for the native [512-byte] version.”

Collins expects the largest storage vendors to use the 512e drives. He also said HGST will likely have a SAS version of the 4 TB drive later this year.


April 2, 2012  10:28 AM

Flash array vendor Violin Memory plays familiar funding tune

Profile: Dave Raffo

Violin Memory today picked up another $50 million in funding and a new strategic partner in SAP. If the market cooperates, it will be the last funding round before Violin follows its solid-state storage rival Fusion-io to an initial public offering (IPO).

Violin has pulled in $150 million in funding since former Fusion-io CEO Don Basile became Violin’s CEO in late 2009. Basile said Violin has grown from 100 employees to 320 since last June, and sales increased 500% over the last year. He puts Violin’s valuation at $800 million, which is probably more than 10 times its annual revenue.

Violin likes to bring in funding money from strategic investors as well as venture capitalists. Violin’s NAND supplier Toshiba has been an investor since the first funding round, and joined SAP as the largest investors in this latest round. Previous investor Juniper Networks and newcomer Highland Capital Partners were other investors in today’s round.

“We’re getting in the habit of this,” Basile said after closing his fourth funding round at Violin. “At the end of last year we considered the public market, but our bankers weren’t sure if the public market was open in the first quarter of 2012, so we took a mezzanine round. This gives us money to grow and operate regardless of market decisions.”

Violin sells all-flash storage arrays and caching appliances. Basile points to EMC’s VFCache PCIe caching product and its plans for a Project Thunder flash-based shared storage appliance that will compete with Violin as proof that the enterprise flash market is poised to take off.

By Basile’s count, there are at least 30 companies selling all-flash arrays now, although he said Violin mostly competes with traditional storage vendors offering solid-state drives mixed in their hard drive arrays. Solid-state storage companies raised more than $300 million in funding in 2011, and have also been prime acquisition targets.

Violin acquired the assets of Gear6 in 2010, and turned the technology into its vCache NFS Caching product. Basile said some of Violin’s latest funding may be used for small acquisitions to enhance its product line. “We’re an active reviewer of companies,” he said. “Expect us to acquire things that make sense to buy rather than engineer from the ground up.”


March 29, 2012  8:01 AM

Cloud can be game-changer for DR

Profile: Dave Raffo

Disaster recovery in the cloud is improving by the day.

At least three vendors upgraded services in the past week, concentrating on faster recovery for small enterprises and SMBs.

EVault added a four-hour option for its EVault Cloud Disaster Recovery Service (EVault CDR) to go with its previous 24- and 48-hour SLA options. EVault is promising to have applications on the four-hour SLA up and running within that window.

EVault president Terry Cunningham said four hours is the magic number to gain critical mass for his company’s cloud DR service because it opens the door for heavily regulated businesses that cannot stand long outages for critical systems.

“This opens up the whole market for us,” Cunningham said. “One customer said, ‘When you deliver four hours, you get all our business.’”

He said the technology is available for more granular snapshots and shorter backup windows, making the four-hour SLA possible. The EVault service includes a minimum of one DR test per year, and customers can choose different SLAs for different applications. They can use the four-hour recovery for critical apps and the longer recovery options for others. He declined to give exact pricing because it is set by EVault’s distribution partners, but the four-hour SLA costs twice as much as the 24-hour option.

EVault, owned by Seagate, changed its name back from i365 to EVault last December.

Not everyone is so impressed with four-hour recovery. QuorumLabs promises instant recovery with its new Hybrid Cloud Disaster Recovery service, which lets customers install one of the vendor’s onQ appliances on site and replicate to another appliance at a QuorumLabs off-site data center.

QuorumLabs’ hybrid service keeps up-to-date virtual clones of critical systems that run on the appliance or in the cloud. The service builds new recovery nodes continuously, and the vendor says the cloud appliance can take over for failed servers with one mouse click.

“Compared to our offering – ready in minutes, tested daily – [four-hour recovery] is like a pizza delivery guaranteed to arrive sometime in the next several days,” QuorumLabs CEO Larry Lang said.

QuorumLabs already has customers who set up DR by installing appliances at two locations, but not all of its customers have a second site. “If something were to happen, we bring up an exact copy of that server in your cloud,” Lang said. “Users just redirect their client to the cloud. Literally in an hour they can have something up and running.”

QuorumLabs’ service is priced by the number of servers and the amount of data protected. Lang said a customer with 10 servers and 3 TB would pay about $20,000 per year.

Zetta also upgraded its cloud backup and DR service. Zetta’s DataProtect 3.0 uses the ZettaMirror software agent on the customer site and synchronizes data to one of the vendor’s cloud data centers. The latest version adds support for Apple desktops and laptops as well as Microsoft SQL Server and Windows system state, improves performance with compression and a metadata cache, and allows snapshots of synched data.

EVault’s Cunningham said the cloud’s role in data protection has made the business more competitive. He said customers are re-evaluating their backup and DR processes and find it easier to switch vendors.

“It used to be that when you made a backup deal, it was for life,” he said. “We used to sell you some software and say ‘Good luck with that, hope it works out.’ Today it’s a service. We have to earn the business every month.

“The customer has more options for switching now. There are some technical challenges, but you can do it. If vendors screw up, they lose the customers.”


March 27, 2012  5:54 PM

Atlantis unveils ILIO for Citrix XenApp

Profile: Sonia Lelii

Atlantis Computing today launched Atlantis ILIO for Citrix XenApp, which helps reduce I/O and latency problems often associated with application virtualization. The product runs on a VMware vSphere hypervisor and is aimed at customers planning to virtualize XenApp 6.5 with Windows Server 2008 R2.

The new product is built on the same codebase as Atlantis ILIO for VDI, but this version is targeted at customers deploying application virtualization. Atlantis ILIO helps eliminate I/O bottlenecks because it processes I/O locally within the hypervisor’s memory. It does inline deduplication to reduce the amount of data hitting the NAS or SAN.

Atlantis ILIO for XenApp is a virtual machine that is deployed on each XenApp server and creates an NFS datastore that acts as the storage for the XenApp VMs running on Windows Server 2008 R2.
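
As a rough illustration of the general idea behind inline deduplication (our sketch, not Atlantis’ actual implementation), fingerprinting each fixed-size block before it is written means duplicate blocks from many XenApp VMs never reach the backing NFS datastore:

```python
# Minimal sketch of inline block deduplication: hash each fixed-size block
# before it is written and only store blocks that haven't been seen.
import hashlib

BLOCK_SIZE = 4096
seen_blocks = {}                 # fingerprint -> block (stands in for the datastore)

def write_block(data: bytes) -> str:
    """Return the block's fingerprint; store the block only if it is new."""
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in seen_blocks:
        seen_blocks[fingerprint] = data      # unique block goes to backing storage
    return fingerprint                       # duplicates cost only a reference

# Ten identical OS-image blocks from different XenApp VMs dedupe to one write.
blocks = [b"\x00" * BLOCK_SIZE for _ in range(10)]
refs = [write_block(b) for b in blocks]
print(f"{len(refs)} writes received, {len(seen_blocks)} unique block(s) stored")
```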

“We correct the problem the way we do with VDI,” said Seth Knox, Atlantis’ director of marketing. “All duplicate storage traffic is generally eliminated before it’s sent to the storage.”

Torsten Volk, senior analyst for Enterprise Management Associates, said Atlantis ILIO for XenApp helps optimize performance because it sequentializes and dedupes the I/O traffic. He also said support for XenApp will broaden Atlantis’ market substantially.

“There is a much larger customer base for Citrix XenApp compared to the VDI market and only minimal changes to the Atlantis ILIO codebase were required to accommodate XenApp,” Volk said. “Not many are using VDI because the ROI is still unclear, but XenApp is a well-liked and vastly adopted platform that has provided tremendous customer value for over a decade.”

Knox said there are customers who ask for both products, but agreed there will be more demand for ILIO for XenApp.

“There is a much larger install base of people using XenApp,” Knox said. “Many of our customers use both VDI and XenApp, so they asked us to do a version for XenApp.”

