Storage Soup


May 13, 2013  1:30 PM

New Cleversafe CEO aims at holes in storage status quo

Dave Raffo

Object storage startup Cleversafe switched CEOs today. Founder Chris Gladwin gives up the CEO post and becomes vice chairman while continuing to set Cleversafe’s technical vision. John Morris moves into the CEO job after spending four years at Juniper Networks as EVP of field operations and strategic alliances, three years with Pay By Touch as COO and then CEO, and 23 years with IBM.

Chicago-based Cleversafe is among a group of object-based storage startups – a group that also includes Scality, Amplidata, Caringo and Exablox – looking to crack the market while the major vendors rev up their own object platforms to handle petabyte and perhaps exabyte data stores that push the limits of RAID.

Morris spoke with Storage Soup today to outline his plans for Cleversafe.

What brought you to Cleversafe?

Morris: As I left Juniper, I wanted a chance to lead a company. One of the spaces I was looking at in technology was storage, where I think there is a lot of status quo ready to be challenged. I had heard of Cleversafe and followed it a little bit in the local Chicago media. I’m a Chicago guy anyway, and as I got more into learning about the company from Chris Gladwin and [chairman] Chris Galvin and the other board members, it seems to me it’s a great combination of technology that’s ready and accepted by customers, and momentum that is building dramatically.

It’s a great time for me to come in and join the company and bring a depth of experience in scaling businesses to match up with the company’s great technology.

Chris Gladwin is staying with the company, so it seems like Cleversafe won’t be going in a completely new direction with the change. Is this a signal that Cleversafe is moving into a new business phase, and why was the change made now?

Morris: I have to be careful not to disturb the phase we’re in now. We brought in a million dollar order last week. We have a couple of other strategic big orders that we expect to bring in this week. The company is on a tear with customer momentum building, but we also want to make sure that we’re growing in a way that is sustainable and allows us to scale to the heights we want. And that’s where I’m going to be helping to bring in approaches we take to not just have a great few quarters but to have many great years.

How many customers does Cleversafe have?

Morris: We have dozens of customers, that’s as specific as we’ll get. But we’re growing every week.

What’s your biggest challenge in this job?

Morris: We’re attacking an entrenched status quo of big entrenched competitors. In reality, there’s a lot of square peg for a round hole because the kind of data that we’re great at storing is not very well attacked by the status quo technology, in particular the RAID technology out there. So the biggest challenge is making sure the customers understand there’s an alternative out there that gives them better reliability and scalability at a lower cost. As a small company, it’s hard to make yourself known out there so that’s probably the biggest challenge I have now.

What’s the headcount at Cleversafe?

Morris: A little over 100. We’re definitely growing.

Who do you consider the major competitors in the object storage market?

Morris: The dominant players out there – EMC, HP, IBM, Hitachi Data Systems – those are the guys dominating the marketplace and the competitors we think about. They’re throwing the wrong tool at the problem. We think we have a much better tool. Those are the guys I wake up thinking about.

We think we have a much better alternative for the fastest growing part of their market, which is the unstructured data around video and photos and audio and image files, and that sort of thing.

What about some of your fellow startups who sell object storage?

Morris: We’ve shipped more object oriented storage than anybody, by a long shot. Certainly more than the smaller players out there.

When was your last funding round?

Morris: In 2010 we raised $31 million in our C round. We’re in a fortunate position; we have investors who like what we’re doing and are anxious to help us do more. So fundraising as a big challenge is not something I come in having to face.

Does that mean you’re close to profitability?

Morris: One of the fun things for me about moving into a private company is [laughs], we have an easy answer to that question, which is we’re not releasing any type of financial data around revenue.

Object storage seems to be a big piece of the ViPR software EMC announced last week. Do you expect to see more focus on object storage from the big vendors?

Morris: One of the hardest things that we had to try to do when I was at IBM for a couple of decades was eat our own children. And I’m counting on our large competitors having the same sort of trepidation. While they have offerings in this space, I think they continue to lead with what drives a lot of profit in their current business and that’s an opportunity for us.

May 10, 2013  4:04 PM

EMC World wrap-up: Isilon, VNX, Syncplicity future directions

Dave Raffo

LAS VEGAS – EMC World was short on product upgrades this year with the exception of the new ViPR platform, but the vendor did enhance a few products while previewing features expected soon in others:

Isilon

Isilon’s OneFS operating system added post-process block-level deduplication, native Hadoop Distributed File System (HDFS) 2.0 support, a REST Object Access to Namespace interface for data access and support for OpenStack Swift and Cinder integration. The dedupe will be included in the next version of OneFS due later this year, and the other features are available now.

During an Isilon session at the show, Isilon director of product management Nick Kirsch laid out strategic initiatives for the clustered NAS platform.

Isilon is working on using flash to increase performance in several ways, including putting file system metadata on flash and using flash as a read cache first and eventually as a write cache. Kirsch also said Isilon will add support for commodity consumer drives as a low-cost tier.

“If you’re going to deploy an exabyte of data, there has to be a step change in price,” he said.
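The read-cache idea Kirsch described can be pictured with a minimal sketch like the one below. This is a generic illustration of a flash tier sitting in front of disk and serving repeat reads, not OneFS code; the class and variable names are made up.

    from collections import OrderedDict

    class FlashReadCache:
        """Toy LRU read cache: serve repeat reads from 'flash', fall back to 'disk'."""
        def __init__(self, capacity_blocks, disk):
            self.capacity = capacity_blocks
            self.disk = disk                    # dict of block_id -> data, standing in for spinning disk
            self.cache = OrderedDict()          # stands in for the flash tier

        def read(self, block_id):
            if block_id in self.cache:          # cache hit: served from flash
                self.cache.move_to_end(block_id)
                return self.cache[block_id]
            data = self.disk[block_id]          # cache miss: read from disk...
            self.cache[block_id] = data         # ...and populate the flash tier
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used block
            return data

    disk = {i: "block-%d" % i for i in range(1000)}
    cache = FlashReadCache(capacity_blocks=100, disk=disk)
    cache.read(42)   # miss: comes from disk and is cached
    cache.read(42)   # hit: comes from flash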

Kirsch said Isilon is working on a software-only version of OneFS, and will support moving data to the cloud and using the cloud as a “peering mechanism” to connect to multiple clouds.

No timetable was given for availability of these future features.

VNX

Rich Napolitano, president of EMC’s unified storage division, previewed future features for VNX arrays. These included a flash-optimized controller, a VNX app store that would allow customers to run applications such as a virtual RecoverPoint appliance directly on a VNX array, and a virtual VNX array that can run on commodity hardware or in the cloud.

Syncplicity

A year after buying file sharing vendor Syncplicity, EMC added a policy-based hybrid cloud capability that lets customers use private and public clouds simultaneously.

Customers can set policies by folders or by users to determine where content will reside. For example, legal documents can stay on on-premises storage while less sensitive data goes out to a public cloud. Files that require heavy collaboration, such as engineering documents, can be spread across multiple sites with geo-replication so users can always access them locally.
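A minimal sketch of how such folder- and user-based placement rules might be expressed follows. The rule fields and target names are hypothetical, not Syncplicity’s actual policy engine.

    # Hypothetical placement rules: first match wins, otherwise fall through to the public cloud.
    POLICIES = [
        {"folder_prefix": "/legal",      "target": "on_premises"},
        {"user_group":    "engineering", "target": "geo_replicated"},
    ]
    DEFAULT_TARGET = "public_cloud"

    def placement_target(folder, user_group):
        """Return where a file should live under the rules above."""
        for rule in POLICIES:
            if "folder_prefix" in rule and folder.startswith(rule["folder_prefix"]):
                return rule["target"]
            if "user_group" in rule and user_group == rule["user_group"]:
                return rule["target"]
        return DEFAULT_TARGET

    print(placement_target("/legal/contracts/q2.docx", "sales"))          # on_premises
    print(placement_target("/projects/widget/spec.docx", "engineering"))  # geo_replicated
    print(placement_target("/marketing/deck.pptx", "sales"))              # public_cloud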

EMC also added Syncplicity support for its VNX unified storage arrays, following on the support it gave EMC’s Isilon and Atmos storage platforms in January. Syncplicity will also support EMC’s ViPR software-defined storage platform when that becomes available later this year.

“Our strategy is to provide ultimate choice for storage backends,” said Jeetu Patel, VP of the EMC Syncplicity business unit. “So you can expect to be able to run Syncplicity on non-EMC platforms over time.”

Data Protection Suite

EMC’s backup and recovery group launched the Data Protection Suite, which contains no new products but is a new way to package existing ones such as Data Domain, Avamar, NetWorker, Data Protection Advisor and SourceOne. Customers can purchase backup and archiving products together with licenses based on consumption and deployment models.


May 9, 2013  4:53 PM

Fusion-io’s new CEO addresses ‘misconceptions’

Dave Raffo

One day after a surprising CEO shakeup, the new boss of PCIe flash leader Fusion-io denied the change came because of a failure of the outgoing chief, problems at the company or because it is looking for a buyer.

Shane Robison, a former long-time Hewlett-Packard executive who replaced Fusion-io CEO David Flynn Wednesday, made a statement and took questions during a technology webcast hosted by the vendor.

Fusion-io said that founders Flynn and chief marketing officer Rick White resigned their positions to pursue investing opportunities. The company said Flynn and White will remain on the board and serve in advisory roles for the next year.

The news was poorly received by investors, as Fusion-io’s stock price fell 19% to $14.60 by the end of Wednesday. It dropped slightly to $14.23 today.

There has been speculation that Flynn was pushed out because Fusion-io’s revenue declined last quarter following a pause in spending by its two largest customers, Apple and Facebook. It didn’t help that before Wednesday Fusion-io’s stock had already dropped to $17.47 from a high of $39.60 in late 2011. While at HP, Robison worked on its acquisitions of Compaq, Mercury Interactive, Opsware, EDS, 3Com and Autonomy, leading to speculation that he was brought in to sell Fusion-io.

Robison began his webcast today by saying he wanted to “hopefully clear up some misconceptions.”

He said the move was discussed for “a long time” even if there was no public indication that Flynn would leave. He said the previous leadership team did a great job taking Fusion-io from startup to successful public company but its focus has shifted. The goals now are to move into international markets and make sure it releases quality products on time.

“It’s not unusual as startups evolve to medium-size companies that you need different skill sets as you go through these cycles,” he said.

Robison said Fusion-io did not reveal the CEO change when it reported earnings two weeks ago because the decision was not completed yet.

“Unfortunately, it was a surprise,” Robison said. “And nobody – especially the street – likes surprises. This caused a lot of speculation. A lot of times when these changes happen it’s because there is a problem with the company. I can tell you there is not a problem with the company. The company’s doing very well.

“The company has built a lead, and we need to maintain that lead and invest in R&D and in some cases, M&A.”

Robison said another misconception was that the board “brought me in to dress the company up and sell it. We’re not working on that. This was a decision that was about how we get experienced management in place to take the company to the next level.”

Robison has never been a CEO in his more than 30 years in the IT business. Besides serving as HP’s executive vice president and chief strategy officer from 2002 to 2011, he also worked for AT&T and Apple. He was blamed by other members of the HP board for not realizing that Autonomy did not have as much revenue as it claimed (a charge that Autonomy leaders have denied) before HP agreed to pay $11.3 billion to acquire it in 2011.

Robison did not discuss the Autonomy deal today. He defended his qualifications by saying that some of the business units that he has run inside of large companies were as big as Fusion-io.

He said his strength is his operational experience and Fusion-io needs to balance good operations with innovative technology.

The move comes as Fusion-io faces greater competition after having the PCIe flash market mostly to itself over its first few years. Intel, Micron, Virident, LSI, STEC, OCZ and Violin Memory have PCIe cards.

Storage giant EMC sells cards from Virident and Micron under OEM deals as its XtremSF brand, and its marketing concentrates on claims that those cards are superior to Fusion-io’s. EMC executives at EMC World this week also revealed plans to bring out MCx flash-optimized controllers for hybrid storage arrays, and EMC’s XtremIO flash array competes with the NexGen storage systems that Fusion-io acquired last month.

Robison said he spent time Wednesday with key large customers, and their reaction to the news was positive.

Others are wondering if the move will lead to more changes.

“Surprise management changes usually portend more news in the following days and weeks,” Storage Strategies Now analyst James Bagley wrote in a note today. “As we have reported over the last year, we felt that Fusion-io had a tough future ahead with increasing competitors in its core market. Its recent acquisition of NexGen, a storage array manufacturer and Fusion-io customer, is a good move into a broader market where Fusion’s deep software expertise and larger resources should help revenue expansion.”

Objective Analysis analyst Jim Handy also published a note on the change, maintaining “Fusion-io is in an enviable position” because the company was the first to introduce a PCIe SSD, and was early with caching software and the ability to make SSDs appear as memory in virtualized systems.

“This resulted in the company’s competitors always remaining one or two steps behind in their efforts to compete,” Handy added. “It would appear that the two key architects of this strategy have now moved on, so outsiders should carefully watch to see if the underlying strategy, the one that has served the company so well in the past, will continue to be followed, or if a new path will be tried.”


May 9, 2013  12:09 PM

Keeping all data is a dangerous policy

Randy Kerns

There is a prevalent problem in Information Technology today – too much data.

Most of the data is in the form of files and is called unstructured data. Unstructured data continues to increase at rates that average around 60% per year, according to most of our IT clients.
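To put that rate in perspective, here is a quick back-of-the-envelope calculation of how 60% annual growth compounds. It is purely illustrative and assumes a 100 TB starting point.

    capacity_tb = 100.0     # assume 100 TB of unstructured data today (illustrative)
    growth_rate = 0.60      # 60% per year, the client average cited above

    for year in range(1, 6):
        capacity_tb *= 1 + growth_rate
        print("Year %d: %.0f TB" % (year, capacity_tb))

    # Prints 160, 256, 410, 655 and 1049 TB: roughly a tenfold increase in five years.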

Structured data is generally thought of as information in databases, and it is growing much more slowly than unstructured data. Unstructured data is produced both inside IT and from external sources, including sensor data, video, and social media. This growth is alarming because there are so many sources and because the information feeds data analytics efforts that typically originate outside of IT.

The big issue is what to do with all the data being created. Data is stored while it is needed: during processing for applications or analytics, and while it may be required for reference, further processing, or the inevitable “re-run.” But what should be done with the data later, when the probability of access drops to the point that it is unlikely to be accessed again? There are also cases when the processing (or the project) is complete and the data is to be “put on the shelf,” much as we would in closing the books on some operation. Does the data still have value as new applications or potential usages develop? Will there be a legal case that requires the data to be produced?

The default decision for most operations is to save everything forever. This decision is usually made because there is no policy around the data. IT operations do not set the policies for data deletion. Because the different types of data have different value and the value changes over time, the business owners or data owners must set the policy. IT professionals generally understand the value but usually are not empowered to make those policy decisions. Sometimes the legal staff sets the policy, which absolves IT of the responsibility, but that may not be the best option. In a few companies, a blanket policy is used to delete data after a specific amount of time. This may not withstand a legal challenge in some liability cases.
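Once the data owners do write a policy down, even a simple per-category retention table changes the default from “keep forever” to a defensible decision. The sketch below is a minimal illustration of that idea; the categories and retention periods are made up.

    from datetime import date, timedelta

    # Hypothetical retention periods, set by the business or data owners rather than IT.
    RETENTION_YEARS = {
        "financial_records": 7,
        "project_output":    3,
        "sensor_raw":        1,
    }

    def disposition(category, created, as_of):
        """Return 'retain' or 'eligible_for_deletion' for a piece of data as of a given date."""
        years = RETENTION_YEARS.get(category)
        if years is None:
            return "retain"   # no policy on file: keep it, but flag the gap for review
        expires = created + timedelta(days=365 * years)
        return "eligible_for_deletion" if as_of > expires else "retain"

    today = date(2013, 5, 9)  # evaluate as of this post's date
    print(disposition("sensor_raw", date(2011, 3, 1), today))          # eligible_for_deletion
    print(disposition("financial_records", date(2011, 3, 1), today))   # retain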

Saving all the data has compounding cost issues. It requires buying more storage, adding products to migrate data to less expensive storage, and increasing operational expenses for managing the information, power, cooling, and space. Moving the data to a cloud storage location has some economic benefit, but that may be short-sighted. The charges for data that does not go away continue to compound. Storing data outside the immediate concern of IT staff takes away from the imperative to make a decision about what to do with it.

Besides the costs of storing and managing the data, the danger is that there may be legal liability for keeping data for a long time. The potential for an adverse settlement based on old data is real and has proven extremely costly. Even more disruptive to IT operations are the discovery and legal hold requirements. Discovery requires searching through all the data, including backups, for requested information, and legal hold means no deletions of almost anything – not even recycling of backups. This causes even more operational expense.

Not establishing a deletion policy that can withstand a legal challenge is a failing of the company, and it results in additional expense and liability. IT may be the first responder on the retain-forever policy, but it is a company issue.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 8, 2013  8:54 AM

EMC rivals’ venomous reactions to ViPR

Dave Raffo

LAS VEGAS — Hitachi Data Systems and NetApp wasted little time sending reviews of EMC’s new ViPR software. Both sent e-mails panning EMC’s attempt at software-defined storage.

You obviously wouldn’t expect EMC’s competitors to have good things to say about ViPR, especially competitors who also offer storage virtualization. But after hearing EMC bang the drums about it this week at EMC World, let’s listen to other opinions:

“ViPR is essentially a YASRM — Yet Another Storage Resource Manager,” wrote Sean Moser, HDS VP of software platforms product management. “Another bite at the apple for EMC after the failure of Invista and its ancestors. In ViPR terms they call this function a control plane – an attempt to provide a single management framework across all EMC storage platforms, and eventually across third party storage as well.”

He called the attempt to provide a management platform across third-party storage “a pipe dream as there’s no motivation for third-parties to write to your SRM API to allow their products to be nicely managed by a tool not of their own making. So part one of ViPR is to create an SRM tool that allows clients to use enterprise storage much as they would Amazon — a set of software APIs that abstract the detail of the underlying storage, presenting Storage as a Service. While conceptually a good idea, it will be impossible to really do outside of EMC storage.

“The other key function with ViPR is storage virtualization; the long sought storage hypervisor. However, even for EMC’s own storage platforms (at least in version 1.0), ViPR only allows control plane (i.e. management functions) for file and block. The only data plane support is for object-based storage. So for now, it’s just a new Atmos front-end that adds an SRM management layer for block and file.”

Moser maintains that the Hitachi Content Platform (HCP) already has the support for file, block and object that EMC claims ViPR will have. “Further, there’s no gymnastics required to make this happen – you get it straight out of the box,” he added.

Brendon Howe, NetApp vice president of product and solutions marketing, wrote that the software-defined storage concept is a good one. But, he added, NetApp does it better in its Clustered Data OnTap operating system.

“NetApp provides this capability with our open and flexible Storage Virtual Machine (SVM) technology in Clustered Data OnTap,” Howe wrote. “[NetApp provides] hardware independence spanning NetApp optimized to commodity hardware to the cloud with Amazon Web Services. Combining the best set of software-enabled data services with programmable APIs and the broadest set of integrations is precisely how Data ONTAP became the most deployed storage operating system.”

Well, you didn’t expect EMC’s claim of being the first to provide software-defined storage to go unchallenged, did you?


May 3, 2013  10:09 AM

EMC’s RecoverPoint, SRDF shake hands

Dave Raffo

EMC will make a bunch of product launches next week at its annual EMC World conference in Las Vegas. But there were upgrades the vendor couldn’t wait to announce, so it revealed a handful of data protection changes this week.

The changes centered on EMC’s RecoverPoint replication software, which uses continuous data protection (CDP) to allow any point in time data recovery.

RecoverPoint 4.0 and the Symmetrix Remote Data Facility (SRDF) replication application for EMC’s VMAX enterprise arrays have been integrated to the degree that customers can use them on the same volume. Previously, SRDF and RecoverPoint could run on the same VMAX system but not on the same volume.

RecoverPoint’s CDP can now run across two arrays, so every change made on the volume can be recorded and replicated remotely to another array. That makes data continuously available across two arrays, allowing customers doing technology refreshes to move data without downtime.

“This represents a key step forward in our integrated data protection strategy,” said Colin Bailey, EMC director of product marketing. “CDP brings an almost infinite number of points of recovery for total data protection for critical applications.”
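Conceptually, CDP works by journaling every write with a timestamp so a volume can be rolled back to any moment. The sketch below is a generic illustration of that idea, not RecoverPoint’s implementation.

    import bisect

    class CDPJournal:
        """Toy continuous-data-protection journal: rebuild a volume at any point in time."""
        def __init__(self):
            self.entries = []   # (timestamp, block_id, data), appended in time order

        def record_write(self, timestamp, block_id, data):
            self.entries.append((timestamp, block_id, data))

        def volume_at(self, point_in_time):
            """Replay all journaled writes up to the requested timestamp."""
            timestamps = [t for t, _, _ in self.entries]
            cutoff = bisect.bisect_right(timestamps, point_in_time)
            image = {}
            for _, block_id, data in self.entries[:cutoff]:
                image[block_id] = data
            return image

    journal = CDPJournal()
    journal.record_write(100, "blk7", "v1")
    journal.record_write(200, "blk7", "v2")   # a later overwrite of the same block
    print(journal.volume_at(150))  # {'blk7': 'v1'}: the state before the overwrite
    print(journal.volume_at(250))  # {'blk7': 'v2'}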

EMC also now offers a software-only version of RecoverPoint called vRPA for VNX midrange arrays for easier and cheaper deployment on existing systems.


May 2, 2013  7:44 AM

Brocade says its FC SAN sales disappointed last quarter

Dave Raffo

Maybe Brocade has been a little over-optimistic about Fibre Channel SANs.

After Brocade executives gushed about how lucrative the FC market remains on the switch maker’s last earnings call, the vendor said Wednesday that the quarter that just ended didn’t go as planned. Brocade downgraded its forecast for the quarter, mainly because of a sharp drop in its SAN revenue.

Brocade said its overall revenue in the quarter that ended Tuesday would be between $536 million and $541 million, down from its previous forecast of $555 million to $575 million. FC SAN revenue is now expected to come in between $373 million and $376 million, down six percent to seven percent from last year and 10 percent to 11 percent from last quarter. Brocade said revenue for the quarter that ends in April typically drops five percent to eight percent from the previous quarter, which includes the end-of-year budget flush from many storage shops.
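For a rough sense of what those percentages imply, the prior-period FC SAN figures can be backed out of the guidance with simple arithmetic on the stated ranges; this is an estimate, not Brocade’s own disclosure.

    # Back out the prior-period revenue implied by the new FC SAN guidance.
    current_low, current_high = 373, 376     # $M, new forecast range
    yoy_low, yoy_high = 0.06, 0.07           # down 6-7% from the year-ago quarter
    qoq_low, qoq_high = 0.10, 0.11           # down 10-11% from the prior quarter

    def implied_prior(current, decline):
        return current / (1 - decline)

    print("Year-ago quarter: ~$%.0fM to ~$%.0fM"
          % (implied_prior(current_low, yoy_low), implied_prior(current_high, yoy_high)))
    print("Prior quarter:    ~$%.0fM to ~$%.0fM"
          % (implied_prior(current_low, qoq_low), implied_prior(current_high, qoq_high)))
    # Roughly $397M-$404M a year ago and $414M-$422M last quarter.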

According to Brocade’s press release, “the lower-than-expected SAN revenue was due to storage demand softness in the overall market which impacted the company’s revenue from some of its OEM partners.”

Two of its largest OEM partners, EMC and IBM, reported disappointing results for last quarter. EMC missed Wall Street’s estimates for revenue and IBM continued its trend of declining storage hardware sales. According to EMC CEO Joe Tucci, “customers are still being very cautious with their IT spending.”

At least Brocade’s Ethernet business is going as expected. The forecast is for $163 million to $165 million in revenue, up 14% to 15% from last year and down four percent to five percent from the previous quarter.

After Brocade’s last earnings report in February, its new CEO Lloyd Carney said his optimism about FC SANs was one of the reasons he took the job. “Fibre’s not dead anymore,” he declared.

Maybe it’s just napping. In Brocade’s release Wednesday, Carney hinted that the FC SAN revenue drop will not be permanent.  “We believe that by leading the Fibre Channel industry with innovative technology and solutions that are relevant to the problems that customers face today, Brocade continues to be well-positioned for long-term success in the data center,” Carney said.

It may not help Brocade that its switch rival Cisco is rolling out its first major FC product overhaul in years, and is upgrading to 16 Gbps FC nearly a full year after Brocade.

Brocade will give its full earnings report May 16.


April 26, 2013  7:40 AM

Archiving explained

Randy Kerns

The term archiving can be used in different contexts. Its use across vertical markets and in practice leads to confusion and communication problems. Working on strategy projects with IT clients has led me to always clarify what archive means in their environments. To help this out, here are a few basics about what we mean when we say “archive.”

Archive is a verb and a noun. We’ll deal with the noun first and discuss what an archive means depending on the perspective of the particular industry.

In the traditional IT space – commercial business processing and the like – an archive is where information is moved when it is not normally required in day-to-day processing activities. The archive is a storage location for that information and is typically seen as either an online archive or a deep archive.

An online archive is one where data moved off primary storage can still be seamlessly and directly accessed by applications or users without involving IT or running additional software processes. This means the information is seen in the context in which the user or application would expect. The online archive is usually protected with replication to another archive system, separate from the backup process. The size of an online archive can be capped by moving information, based on criteria, to a deep archive.

A deep archive is for storing information that is not expected to be needed again but cannot be deleted. While it is expected to be much less expensive to store information there, accessing the information may require more time than the user would normally tolerate. Moving data to the deep archive is one of the key areas of differentiation. Some online archives can have criteria set to automatically and transparently move data to the deep archive while others may require separate software to make the decisions and perform the actions.
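A minimal sketch of the kind of criteria-driven move from the online archive to the deep archive described above might look like this; the capacity cap, age threshold and field names are all illustrative.

    import time

    ONLINE_CAP_BYTES = 500 * 10**12      # cap the online archive at 500 TB (illustrative)
    AGE_THRESHOLD_DAYS = 365             # candidates: not accessed in a year

    def select_for_deep_archive(online_files, online_bytes_used):
        """Pick files to demote, oldest access first, until the online archive fits its cap.
        online_files: list of dicts with 'path', 'size' and 'last_access' (epoch seconds)."""
        now = time.time()
        candidates = [f for f in online_files
                      if (now - f["last_access"]) / 86400 > AGE_THRESHOLD_DAYS]
        candidates.sort(key=lambda f: f["last_access"])
        to_move, freed = [], 0
        for f in candidates:
            if online_bytes_used - freed <= ONLINE_CAP_BYTES:
                break
            to_move.append(f["path"])
            freed += f["size"]
        return to_move

    files = [{"path": "/archive/a.mov", "size": 2 * 10**12,
              "last_access": time.time() - 800 * 86400}]
    print(select_for_deep_archive(files, online_bytes_used=501 * 10**12))  # ['/archive/a.mov']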

In healthcare, information such as radiological images is initially stored in an archive (which translates to primary storage for those in the traditional IT space). Usually, as images are stored in the archive, a copy is made in a deep archive as the initial protected copy. The deep archive will be replicated as a protected copy. Based on policies, the copy in the archive may be discarded after a period of time (in many cases, one year) with the copies in the deep archive still remaining. Access to the copy in the deep archive is handled by promoting a copy back to the archive, either ahead of a scheduled patient visit or on demand for an unplanned visit or consultative search.

For media and entertainment, the archive is the repository of content representing an asset such as movie clips. The archive in this case may have different requirements than a traditional IT archive because of the performance demands on access and the information value requirements for integrity validation and for the longevity of retention, which could be forever. Discussing the needs of an archive in this context is really about an online repository with specific demands on access and protection.

As a verb, archive is about moving information to the physical archive system. This may be done by the actual application that stores the information in the archive. An example would be a Picture Archiving and Communications System (PACS) or Radiology Information System (RIS) in healthcare. In other businesses, third-party software may move the information to the archive. In the traditional IT space, this could be a solution such as Symantec Enterprise Vault that moves files or emails to an archive target based on administrator-set criteria.

As archiving attracts more interest because of the economic savings it provides, there will be additional confusion added with solution variations. It will always require a bit more explanation to draw an accurate picture.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 19, 2013  10:44 AM

Nimble Storage adds reference architecture, new level of analytics

Dave Raffo

Startup Nimble Storage is taking a page out of NetApp’s playbook with its private cloud reference architecture put together with Cisco and Microsoft. And it is going beyond other storage vendors’ monitoring and analytics capabilities with its InfoSight services.

This week Nimble launched its SmartStack for Microsoft Windows Server and System Center reference architecture. It includes a three-rack-unit (3U) Nimble CS200 hybrid storage array, Cisco UCS C-Series rackmount servers and Windows Server 2012 with Hyper-V and Microsoft Systems Center 2012. The reference architecture is designed to speed deployment of private clouds with up to 72 Hyper-V virtual machines.

Last October, Nimble rolled out a reference architecture for virtual desktop infrastructure (VDI) with Cisco and VMware.

The reference architecture model is similar to that of NetApp’s FlexPod, which also uses Cisco servers and networking. NetApp has FlexPod architectures for Microsoft and VMware’s hypervisors. EMC added Vspex reference architectures last year, two years after NetApp launched FlexPods.

Nimble’s InfoSight appears to be ahead of other storage vendors’ analytics services. It goes beyond “phone-home” features to collect performance, capacity, data protection and system health information for proactive maintenance. Customers can access the information on their systems through an InfoSight cloud portal.

What makes InfoSight stand out is the depth of the information amassed. Nimble claims it collects more than 30 million sensor values per array per day, grabbing data every five minutes. It can find problems such as bad NICs and cables, make cache and CPU sizing recommendations and give customers an idea of what type of performance they can expect from specific application workloads.

“Nimble collects a much larger amount of data than is traditionally done in the industry,” said Arun Taneja, consulting analyst for the Taneja Group. “Traditionally, an array would grab something from a log file at the end of the day. These guys are grabbing 30 million data points. Then they return that information proactively to users in the form of best practices and provide proactive alerts about product issues. I think everybody will end up there, but it might take five years.”
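As a quick sanity check on those figures (simple arithmetic, not a description of Nimble’s telemetry pipeline), 30 million values a day at five-minute intervals works out to a very wide sweep of sensors per sample:

    values_per_day = 30 * 10**6
    sweeps_per_day = 24 * 60 // 5        # one collection sweep every five minutes
    values_per_sweep = values_per_day / sweeps_per_day
    print("%d sweeps/day -> about %.0f sensor values per sweep"
          % (sweeps_per_day, values_per_sweep))
    # 288 sweeps/day -> about 104167 sensor values per sweep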


April 19, 2013  8:24 AM

Storage playing key role in entertainment industry

Randy Kerns

The National Association of Broadcasters (NAB) conference has become a big focus for storage vendors. The growth in media content and the increased resolution of recordings make for fast-growing demand for storage. And the data is not thrown away (deleted). Media and entertainment (M&E) industry data is primarily file-based, with a defined workflow using files of media in a variety of formats.

The large amount of content favors storage archiving solutions to work with media asset management for repositories of content. But, these archives are different than those used in traditional IT. The information in M&E archives is expected to be retrieved frequently and the performance of the retrieval is important. For rendering operations, high performance storage is necessary and the sharing capabilities for the post-production processes determine product usability.

Evaluator Group met with a number of storage vendors at this month’s NAB conference. Below are some of the highlights from a few of those meetings.

• For tape vendor Spectra Logic, Hossein Ziashakeri, VP of business development, talked about changes in the media and entertainment market and at Spectra Logic. He said media and entertainment is becoming more of an IT environment, driven by software, particularly automation tools, and that the new generation of people in media and entertainment is more IT savvy than in the past. M&E challenges include the amount of content being generated; the need to keep everything is driving overwhelming storage demand, and the cost and speed of file retrieval are major concerns. Spectra Logic is a player because the M&E market has a long history with tape, which has become more of an archiving play than a backup play.

• Mike Davis, Dell’s director of marketing and strategy for file systems, said Dell’s M&E play is primarily file-based around its Compellent FS8600 scale-out NAS. Davis said M&E customers also use Dell’s Ocarina data reduction, which allowed one customer to reduce 3 PB of data. The FS8600 now supports eight nodes and 2 PB in a single system.

• Quantum has had a long-term presence in the media and entertainment market, with StorNext widely deployed for file management and scaling. StorNext product marketing manager Janet Lafleur said Quantum will announce its Lattus-M object storage system, integrated with StorNext, in May. Quantum’s current Lattus-X system supports CIFS and NFS along with objects. Quantum also has a StorNext AEL appliance that includes tape for file archiving.

• Hitachi Data Systems (HDS) had a major presence at NAB with several products on display, including Hitachi Unified Storage (HUS) arrays, HNAS and Hitachi Content Platform (HCP) archiving systems. Ravi Chalaka, VP of solutions marketing; Jeff Greenwald, senior solutions marketing manager; and Jason Hardy, senior solutions consultant, spoke about HDS media and entertainment initiatives. HDS is looking at solid state drives (SSDs) to improve streaming and post-production work. HNAS-to-Amazon S3 cloud connectivity has been available for two months, and HDS has a relationship with Crossroads to send data from HCP to Crossroads’ StrongBox LTFS appliances.

• StorageDNA CEO Tridib Chakravrty and director of marketing Rebecca Greenwell spoke about the capabilities of their company’s data movement engine. StorageDNA’s DNA Evolution includes a parallel file system built from LTFS that extracts information into XML for searching. StorageDNA technology works with most media asset management software now, and the vendor plans to add S3 cloud connectivity.

• Dot Hill sells several storage arrays into the M&E market through partnerships, including its OEM deal to build Hewlett-Packard’s MSA P2000 system. Jim Jonez, Dot Hill’s senior director of marketing, said the vendor has several partners in the post-production market.

• CloudSigma is a cloud services provider that uses solid state storage to provide services for customers such as content production software vendor Gorilla Technology. CloudSigma CEO Robert Jenkins said the provider hosts clouds in Zurich and Las Vegas built on 1U servers with four SSDs in each. The SSDs solve the problem of handling heavily random I/O. He said CloudSigma plans to add object storage through a partnership with Scality, which will provide geo-replication.

• Signiant sells file sharing and file movement software into the M&E market. Doug Cahill, Signiant’s VP of business development, said his vendor supports the new Framework for Interoperable Media Services (FIMS) standard and recently added a Dropbox-like interface for end users. Signiant’s software works as a browser plug-in to separate the control path from the data path.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

