Storage Soup

A SearchStorage.com blog.


April 19, 2012  7:08 AM

NAB shows storage plays a big role in media/entertainment



Posted by: Randy Kerns
media and entertainment storage, NAB

The National Association of Broadcasters (NAB) conference in Las Vegas this week drew a large number of storage vendors vying for the growing media and entertainment storage market. I’ve attended this conference the last five years, and seen more storage vendors every year. The storage vendors who go to NAB include those well-known in the IT space plus others that specifically focus on media and entertainment.

The target audience is different in the media and entertainment space than in general IT. The backgrounds of the people looking to store media content are different from those in traditional IT and their needs are also different. Their titles do not translate directly to mainstream IT, and they use unique terminology that requires knowledge of their business to really understand.

This poses a challenge for storage vendors. To meet their needs, the vendors must understand these differences and speak their customers’ languages.

They need to understand that the applications that store and retrieve information are also different. The workflow in media and entertainment dictates the type of applications used at various points during production and delivery. Another critical consideration is the need for data interchange, a role still handled by removable media in many cases.

The media and entertainment market generates large amounts of data, and that data is growing exponentially as improved camera resolutions produce ever-larger files. Special-purpose systems are used to modify (edit) the data, and workflows involve multiple operations and people. Data requirements change during the workflow process. Storage systems must support high performance for post-production, large numbers of streams for broadcast, and high integrity with large capacity for archiving.

Characteristics such as point-in-time copies that are crucial in traditional IT have only nominal value in media and entertainment. Vendors need to promote the right set of features to reach these companies. Without the correct focus, opportunities are missed and the vendor demonstrates a lack of understanding of the customer needs.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

April 16, 2012  8:11 AM

Amplidata bulks up hardware with 3 TB drives



Posted by: Sonia Lelii
amplidata, amplistor XT, erasure codes, object storage

Two weeks after optimizing its object-based storage software, Amplidata is making its appliance denser to hold more data and use less power.

The AmpliStor XT storage system now supports 3 TB SATA drives in its new AS30 module, allowing it to hold 30 TB in a 1U box and scale to 1.2 PB in a rack with 40 modules. The AS30 will eventually replace Amplidata’s AS20, which holds 2 TB drives and 20 TB in one appliance.

Amplidata claims the AS30 uses about 30 percent less power than the AS20, requiring 2.2 watts per terabyte when idle and 3.3 watts per terabyte when in use. That’s about the same power as a 60-watt light bulb for the entire 30 TB module.

“The really big thing is the power consumption is just over 65 watts when powered on and idle with no disk activity,” said Paul Speciale, Amplidata’s VP of products. “When there is activity, it consumes 3.3 watts per terabyte. But even with the low power, these systems can go tens of gigabytes per system, so you are not giving up on performance.”
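The per-terabyte figures can be sanity-checked with simple arithmetic. This quick calculation uses only the numbers in the article:

```python
# Back-of-the-envelope check of the AS30 power figures cited above.
IDLE_W_PER_TB = 2.2      # watts per terabyte, idle (from the article)
ACTIVE_W_PER_TB = 3.3    # watts per terabyte, active (from the article)
MODULE_TB = 30           # capacity of one 1U AS30 module
RACK_MODULES = 40        # modules per rack (1.2 PB total)

idle_w = IDLE_W_PER_TB * MODULE_TB        # ~66 W: "just over 65 watts"
active_w = ACTIVE_W_PER_TB * MODULE_TB    # ~99 W under load
rack_idle_kw = idle_w * RACK_MODULES / 1000.0

print(idle_w, active_w, rack_idle_kw)
```

The 66-watt idle result lines up with both the vendor's "just over 65 watts" quote and the 60-watt light bulb comparison.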

Amplidata’s storage platform is designed for cloud archiving of media and entertainment files, and “big data” file storage. Amplidata sees the media and entertainment industry as a key target for the larger drives.

The vendor improved its BitSpread erasure coding software and data management with its latest AmpliStor XT software released earlier this month.

Randy Kerns, senior strategist at Evaluator Group, said erasure code-based technology becomes more important with higher capacity drives because there is a greater probability of drive failures in the larger drives.

“As you get to higher capacity drives, you have a greater exposure to a second drive failure and rebuild times are longer,” Kerns said. “With that exposure, the probability goes up. Two terabyte drives typically take eight hours to rebuild in a normal system, so it becomes more important when you go to three or four terabyte drives in a multi-petabyte system because you have a higher probability of a problem happening. Media and entertainment is very sensitive to these issues and Amplidata is targeting that market.”
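Kerns' point about rebuild exposure can be sketched numerically. This is a rough illustration, not Evaluator Group's or Amplidata's model: the 2% annual failure rate and the 11 surviving drives are assumptions, while the rebuild rate (8 hours for 2 TB) comes from the quote above:

```python
# Rough sketch: why larger drives raise the odds of a second failure
# during a rebuild. AFR and drive count are hypothetical assumptions;
# the rebuild rate is taken from the 8-hours-per-2-TB figure quoted above.
HOURS_PER_YEAR = 24 * 365
AFR = 0.02                    # assumed 2% annual failure rate per drive
REBUILD_HOURS_PER_TB = 4.0    # 8 hours / 2 TB, per the quote

def second_failure_odds(drive_tb, surviving_drives):
    """Rough probability that another drive fails while the rebuild
    window is open (small-probability linear approximation)."""
    window_hours = drive_tb * REBUILD_HOURS_PER_TB
    per_drive = AFR * window_hours / HOURS_PER_YEAR
    return per_drive * surviving_drives

for tb in (2, 3, 4):
    print(f"{tb} TB drive: {second_failure_odds(tb, 11):.5%}")
```

Under this approximation the exposure grows linearly with drive capacity, which is why schemes such as erasure coding, which spread redundancy across many drives, become more attractive as drives get bigger.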

Amplidata’s AS30 has a starting price of under $0.60 per gigabyte.


April 12, 2012  10:58 PM

Oracle provides tape analytics for StorageTek libraries



Posted by: Sonia Lelii
health metrics, StorageTek, tape

Oracle has introduced StorageTek Tape Analytics, software that monitors and manages the health of StorageTek libraries located around the world, and proactively captures their performance, from a single pane of glass.

The software resides outside the library, on a dedicated server and database, and captures library, drive and media health metrics through an out-of-band process so that tape drives are never taken offline to collect the data. The metrics are sent to a central collection point, where the data is analyzed for potential problems that could cause errors in the media. The software looks for capacity limitations and intrusion entry points, and provides administrators with recommendations to help prevent data loss. It is built on the Oracle Fusion Middleware code base.

Oracle executives said the StorageTek Tape Analytics does granular drill downs into the health specifics of drives and the media. The software connects to each library through a single Gigabit Ethernet connection. StorageTek tape libraries use the SNMP protocol to pass drive and media health information directly to the analytics software through a dedicated IP port. The software can pull more than 100 attributes from the drives.
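The collection model Oracle describes, with attributes polled out of band and analyzed at a central point, can be sketched generically. The attribute names and the 1% threshold below are hypothetical illustrations, not Oracle's actual SNMP attributes:

```python
def analyze(samples, error_threshold=0.01):
    """Flag (library, drive) pairs whose read-error rate exceeds a
    threshold. Each sample is one out-of-band poll of hypothetical
    drive-health attributes gathered at a central collection point."""
    alerts = []
    for s in samples:
        rate = s["read_errors"] / max(s["reads"], 1)
        if rate > error_threshold:
            alerts.append((s["library"], s["drive"], rate))
    return alerts

# Two polled samples; only the first trips the 1% error-rate threshold.
samples = [
    {"library": "SL8500-1", "drive": "d01", "reads": 1000, "read_errors": 50},
    {"library": "SL8500-1", "drive": "d02", "reads": 1000, "read_errors": 0},
]
print(analyze(samples))
```

The real product pulls more than 100 attributes per drive over SNMP; the point of the sketch is only that analysis happens centrally, over the control path, without taking drives offline.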

“We get all the information directly from the library and it’s done from the control path not the data path,” said Scott Allen, an Oracle senior product manager. “This offers a more secure approach.”


April 12, 2012  12:20 PM

Consumerization of IT leads to accidental storage admins



Posted by: Randy Kerns
IT generalist, storage admin, storage management

One of the ongoing changes in IT is the transition to IT generalists configuring and managing storage in all but the largest enterprises. This was always common in small enterprises, but now is increasingly the case in the mid-tier enterprise, too. Beyond storage, the IT generalist handles server operating systems, networking, and the virtualization hypervisors.

Another dynamic occurring along these lines is the consumerization of IT. People who use technology such as smartphones or iPads in their daily lives are becoming administrators at the IT generalist level. The general consumer technology user must:

· Know how to set up accounts and security.

· Understand how to protect data in the cloud.

· Know how to migrate data to a new device.

· Understand file-sharing options, such as access to photos on Snapfish.

What has happened here?  IT operations have become part of many people’s lives.  Most are doing these administrative tasks out of necessity with no training other than some interactive guidance.  Some do it incorrectly, some struggle through the administration, and others provide services – in my case, I’m the admin for the PCs, etc. for my daughters.

This shift even changes the way midrange enterprise storage is managed. Element managers (the storage vendor’s storage system management software) must be designed with expectations that an IT generalist will manage the storage environment.

Storage vendors should assume the IT generalist using the element manager has a limited base of storage knowledge. They should expect that no manual will be read, on paper or in electronic form. And when there is a complex set of choices, they should assume the wrong one will be tried first and that corrective action or second chances will be necessary.

This leads to the demand for a new GUI that is highly interactive with icons to demonstrate actions and status. The GUI must seem simple, belying the underlying complexity.

Without a plan and real education, we’ve created a mass unpaid workforce of IT generalists.  So, when do we get a new generation of storage administrators without planning for it?

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 10, 2012  11:49 AM

Toshiba takes bow for inventing NAND flash, wasn’t always so



Posted by: ITKE
nand flash, Toshiba

Written by: John Hilliard

Toshiba America Electronic Components Inc. is giving itself a high-five for the invention of NAND flash 25 years ago – a technology that, among other uses, allows customers to store critical data in bananas or cake-worthy celebrations of industrial design.

But Toshiba isn’t giving any kudos to the man credited with the invention, perhaps because he represents a part of the story the vendor would rather forget.

In the company’s press release, Scott Nelson, a Toshiba vice president, called NAND flash a “game changer” and boasted “The cost/performance of NAND flash continues to stand the test of time. NAND flash is leading the way to thin and light hardware, has made the mobility of content possible, and is enabling ‘green’ storage in the data centers.”

According to market research firm IC Insights, NAND flash sales are expected to hit $32.8 billion this year, an 11% hike from 2011, and it may top DRAM sales for the first time.

But many anniversaries include awkward memories that everyone wants to forget, and Toshiba has one here: its relationship with Fujio Masuoka, one of the experts credited with inventing the technology. And no, Masuoka is not mentioned in Toshiba’s announcement.

According to Forbes.com, Toshiba for a while declined to take credit for inventing flash storage, gave the credit to Intel and downplayed Masuoka’s work in developing NAND flash.

“For his work, Masuoka says, he was awarded a few hundred dollars from Toshiba and only after a Japanese newspaper gave his new type of memory an award of invention of the year in 1988,” Forbes reported in 2002, also noting that Toshiba “disputes” Masuoka’s account (the company said he was promoted).

The dispute landed Toshiba and Masuoka in court. Masuoka sued Toshiba in 2004 for about $9 million, according to Business Week, and the case was settled a few years later for about $750,000.


April 10, 2012  8:16 AM

NetApp, Cisco expand FlexPod for smaller private clouds



Posted by: Dave Raffo
converged infrastructure, flexpod, private cloud, vblock

NetApp and Cisco today said they are expanding their FlexPod reference architecture, which consists of NetApp storage arrays, Cisco servers and networking, and software from other partners. NetApp and Cisco are adding an entry-level FlexPod that uses NetApp FAS2240 storage with Cisco UCS C-Series servers, Nexus 5000 switches, Nexus 2232 fabric extenders and UCS 6200 fabric interconnects.

The entry-level version is the first FlexPod to use NetApp’s FAS2000 entry-level storage systems and Cisco rack-mounted UCS devices. The other FlexPod architectures use NetApp FAS3200 and FAS6200 arrays and bladed UCS versions. The entry-level system is designed to support 500 to 1,000 users. It is also the first FlexPod with iSCSI boot support.

“When we started [with FlexPod] we were talking about scaling from the midrange of our product families to the high end. Today we’re talking about an entry-class system,” said Jim Sangster, senior director of solutions marketing for NetApp. “It has the same common structure for support and management.” Sangster said more than 850 customers use the FlexPod reference architecture.

NetApp launched its FlexPod architecture in 2010 mainly as an answer to rival EMC’s Vblock integrated stack, but the storage vendors take different paths in bringing the products to market. Vblocks are sold through VCE, an alliance consisting of EMC, Cisco and VMware. They also have specific model numbers and configurations. NetApp sells FlexPod as a reference architecture that the vendor and partners can configure according to customer workloads. Although Cisco is a VCE partner, it maintains a close relationship with NetApp on the FlexPod architecture.

Adding an entry level version and more Cisco gear isn’t a huge announcement, but it underscores NetApp’s commitment to its converged architecture strategy for virtual infrastructure and private clouds. The same goes for EMC, which is preparing for a Thursday event to launch a new bundle that it claims will “dramatically simplify the deployment of private cloud.”

NetApp’s Sangster also pitches FlexPod as a faster and less expensive way for customers to build a private cloud. NetApp and Cisco said they have pre-tested automation and orchestration software from CA Technologies, Cloupia and Gale Technologies, with pre-validated software coming for monitoring and analytics. The automation and orchestration software is CA Automation Suite for Data Centers, Cloupia Unified Infrastructure Controller and GaleForce Turnkey Cloud for FlexPod.

“These vendors, with more coming, have met specific levels of API support,” said Satinder Sethi, VP of Cisco’s Server Access Virtual Technology Group. “This validates they have achieved a certain level of integration and makes sure we have management of the storage, network and server layers.”

Customers can also manage FlexPod via open APIs from Cisco Intelligent Automation for Cloud, VMware vCloud Director or VMware vCenter Server.

Sangster said customers can scale the entry-level FlexPods for capacity by adding FAS2240 nodes or scaling up to a FAS3200 or FAS6200. They can scale compute by adding UCS server nodes. There is no data migration required to move from one NetApp system to another, and all UCS models are managed by Cisco UCS Manager.

Unlike VCE’s Vblocks, FlexPods do not have specific model numbers. Sangster said some partners sell small, medium and large reference architectures but they are not limited to specific NetApp and Cisco products. “There’s not a hard-coded bill of materials,” he said.

The new configuration options will be available next month.


April 9, 2012  9:28 PM

Red Hat 2.0 supports unified file and object-based data



Posted by: Sonia Lelii
files, hadoop, object, storage software

Red Hat today rolled out the beta version of Red Hat Storage Software 2.0, used to build scale-out network-attached storage (NAS) for unstructured data. The upgraded version includes new features such as the ability to access both file and object-based data from a single storage pool and support for Hadoop in “big data” environments.

Version 2 is the first major upgrade for Red Hat since it acquired startup Gluster last year. Current versions of Red Hat Storage on the market are re-branded versions of the GlusterFS product with tweaks to better support the Red Hat Enterprise Linux (RHEL) operating system.

Red Hat Storage Software 2.0 makes it easier to manage unstructured CIFS, NFS and GlusterFS mount points. The unified file and object feature lets users save data as an object and retrieve it as a file, or save data as a file and retrieve it as an object.

“A typical use case would be a customer can choose to save something as an object or file. So you can upload a photo as a file but in the portal software it is converted into an object,” said Sarangan Rangachari, general manager for storage at Red Hat.
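Rangachari's photo example rests on one namespace serving both access styles. Here is a toy illustration of that idea, not the GlusterFS/Swift implementation; the temporary directory stands in for a scale-out NAS mount, and the function names are invented:

```python
# Toy model of unified file/object access: an object key and a file
# path address the same bytes in one namespace.
import os
import tempfile

MOUNT = tempfile.mkdtemp()    # stands in for a scale-out NAS mount point

def put_object(key, data):
    """Object-style write: the key doubles as a path in the namespace."""
    path = os.path.join(MOUNT, key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)

def read_file(rel_path):
    """File-style read of the same bytes via the mount point."""
    with open(os.path.join(MOUNT, rel_path), "rb") as f:
        return f.read()

put_object("photos/cat.jpg", b"jpeg bytes")   # uploaded "as an object"
print(read_file("photos/cat.jpg"))            # retrieved "as a file"
```

In the real product the object side is an HTTP API and the file side is a CIFS/NFS/GlusterFS mount, but the underlying single-pool idea is the same.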

The 2.0 version supports Hadoop MapReduce, a programming model and software framework for writing applications that rapidly process large amounts of data in parallel on large clusters of compute nodes. “What we provide in this release is the underlying file system in MapReduce-based applications that use the Hadoop Distributed File System (HDFS),” Rangachari said.

The Red Hat Storage Software provides a global namespace capability that aggregates disk and memory resources into a unified storage volume. The software runs on commodity servers and uses a combination of the open source Gluster software, which Red Hat acquired in October 2011, and Red Hat Enterprise Linux 6. In February, Red Hat also introduced the Red Hat Virtual Storage Appliance for scale-out NAS delivered as a virtual appliance. This allows customers to deploy virtual storage servers the same way virtual machines are deployed in the cloud.

The Red Hat appliance can aggregate both Elastic Block Store (EBS) and Elastic Compute Cloud (EC2) instances in Amazon Web Services environments.


April 5, 2012  7:55 AM

Long-term archives require detailed planning



Posted by: Randy Kerns
long-term archiving

Conversations with IT people about long-term archiving usually begin by focusing on a specific storage device, and then it quickly becomes apparent that much more is involved. Addressing a long-term archive is a complex issue that requires education to understand. There is no single silver-bullet product.

The technology discussions include devices/media for storing data and the storage systems and features utilized. Storage systems that automatically and non-disruptively migrate data from one generation of a system to another are key to long-term archiving. I use the analogy of passing the baton in a relay race.

The information maintained in an archive is another key consideration. Information is data with context, where the context is really an understanding of what the data is, what it means, and what its value is. Maintaining information over time requires applications that understand the information, devices that can read the information, and a method for determining when the information no longer has value as part of a data retention policy. Kicking the can of information down the road for years when it has no value makes no sense.
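The retention-policy point above, that information should leave the archive once it no longer has value, reduces in practice to a date check against a policy table. A minimal sketch, in which the information classes and retention periods are invented for illustration:

```python
# Minimal retention-policy check. The classes and periods below are
# hypothetical examples, not any specific organization's policy.
from datetime import date, timedelta

RETENTION = {
    "business_record": timedelta(days=7 * 365),   # keep ~7 years
    "project_media": timedelta(days=3 * 365),     # keep ~3 years
}

def expired(item_class, archived_on, today):
    """True once an archived item has outlived its retention period
    and no longer needs to be carried forward."""
    return today - archived_on > RETENTION[item_class]

print(expired("project_media", date(2005, 1, 1), date(2012, 4, 5)))
```

A real archive would attach this metadata when items are ingested, so that every generational migration can also cull data that has aged out instead of kicking the can down the road.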

The ability to read and understand the information years into the future is another major concern for long-term archiving. Without applications that do this, the issue of addressing long-term archiving becomes moot. I try to divide the problem into two parts. The first is defining information that is “system of record” where the data must be processed by the application to produce results. The simplest example of this is business records that produce reports, statistics, or other numbers. In this case, there must be a linkage between the information and the application.

If the application changes or is replaced, then the information also must be carried along with translation so the new app understands it. If not, the information no longer has value.

The second part of the application issue concerns information that needs to be viewable in the future where no application is needed. This case is handled by putting the information in a viewable format that will persist for a long time. Today that would be a PDF document. At some point that may change and the PDF documents would have to be translated or transformed into the new viewable format, once again requiring a linkage between the information and application.

You must address all of these points for a long-term archive to achieve its goal of making information available and readable when it’s needed.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 4, 2012  8:48 AM

Storage world gets close-up look at a disaster



Posted by: Sonia Lelii
dallas tornado, PCIe flash, snw

The Storage Networking World (SNW) conference was disrupted Tuesday for a couple of hours when a tornado hit the Dallas-Fort Worth area in Texas. The day started out with cloudy skies, but in the afternoon a siren went off throughout Dallas, which was the first sign that something was amiss.

I was sitting in the outside balcony at the Omni Hotel and Resort in downtown Dallas interviewing an executive from Mezeo Software when news started circulating that a tornado was in the area. It didn’t take long before SNW attendees started heading for the windows to watch the strange swirl of clouds in the distance. Hotel personnel quickly started to order everyone on the balcony to move back into the hotel and away from the windows. But not everyone was willing to miss seeing the potential of a tornado hitting the city. Many people kept going back to the windows, pulling out their phones and taking pictures.

One SNW attendee, a meteorologist who has been chasing storms for 10 years, started arguing with hotel employees because he wanted to watch the cloud movement from a window while also tracking its progress from his iPad.

The exhibit hall was closed and sessions were canceled or delayed for at least an hour while everyone waited for the tornado to pass. A heavy rainstorm followed once the dark, swirling clouds lifted, giving attendees plenty to talk about besides storage when the conference resumed.

This wasn’t the first time SNW was held at the site of nasty weather. The 2005 fall SNW was disrupted by a hurricane in Orlando, Fla., that prevented many would-be attendees from making it to the show. Fall SNW returned to Orlando last fall for the first time since the hurricane (Orlando had only been the site for the spring show in recent years). The fall SNW in October 2012 will be in Santa Clara.


April 3, 2012  1:46 PM

Get ready for 4 TB SATA drives



Posted by: Dave Raffo
4 TB hard drives, Add new tag, Advanced Format, SATA drives

Hitachi Global Storage Technologies, now part of Western Digital, today launched the first 4 TB enterprise hard drive.

The Ultrastar 7K4000 is a 3.5-inch, 7,200 rpm SATA drive with a 2 million-hour mean time between failures (MTBF) rating and a five-year warranty. Current SATA enterprise drives top out at 3 TB, and HGST’s main enterprise drive rival Seagate has not yet released a 4 TB drive.

HGST VP of product marketing Brendan Collins said he sees the larger drives as a boon for big internet companies and cloud providers because they allow organizations to pack in 33% more capacity than they can now while reducing power by 24%.

“If you’re a massive data center running out of space and you have to react to petabyte growth, one way of doing that is replacing 3 TB drives with 4 TB,” he said.

OEM partners are qualifying the drives, and Collins said he expects them to ship in volume around the middle of the year. But some vendors may hold off shipping due to the transition to the new Advanced Format 4K hard drive sectors. In moving from 512-byte sectors to 4,096-byte sectors, Advanced Format handles large files more efficiently and improves data integrity. However, server and storage vendors must rewrite their software to support the new format.

The Ultrastar 7K4000 is known as a 512e (emulation) drive because it is configured with 4,096-byte sectors and 512-byte firmware that allows software written for the older format to work with the new drive format. However, there will be performance degradation during the translation process and Collins said some storage vendors might wait until native 512-byte versions are available later this year before shipping the drives.
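The 512e translation penalty comes down to alignment: a legacy 512-byte write that does not land on a 4,096-byte physical sector boundary forces the drive to read the whole physical sector, patch it, and write it back. A quick sketch of when that happens (the function name is ours, for illustration):

```python
# When does a write on a 512e Advanced Format drive trigger the
# read-modify-write translation penalty?
PHYSICAL_SECTOR = 4096   # Advanced Format physical sector size
LOGICAL_SECTOR = 512     # sector size legacy software still assumes

def needs_read_modify_write(offset_bytes, length_bytes):
    """True if a write does not cover whole 4,096-byte physical
    sectors, forcing the drive to read, patch and rewrite a sector."""
    return (offset_bytes % PHYSICAL_SECTOR != 0
            or length_bytes % PHYSICAL_SECTOR != 0)

print(needs_read_modify_write(512, 512))     # misaligned legacy write
print(needs_read_modify_write(4096, 8192))   # 4K-aligned, no penalty
```

This is why storage vendors must rewrite their software for 4K sectors, and why some may wait for native 512-byte versions rather than take the emulation hit.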

“Storage system vendors design their own file systems,” Collins said. “Some are ready [for 4K] and can drop it in immediately with no impact. If they’re not ready, they can wait for the native [512-byte] version.”

Collins expects the largest storage vendors to use the 512e drives. He also said HGST will likely have a SAS version of the 4 TB drive later this year.

