Storage Soup


November 7, 2011  6:51 PM

CommVault considers backup on the edge the next frontier

Dave Raffo

Among the sales last quarter that helped CommVault surpass its revenue expectations was a large organization that became the first production customer for CommVault’s mobile backup technology.

CommVault launched its Simpana Edge Protection in April, but CommVault CEO Bob Hammer said the release was limited to one large customer that implemented it on 20,000 devices. He predicts this is the beginning of big things to come in the remote backup market.

Hammer said the technology used for remote backup is trickier than backing up only servers. The software needs to be network-aware, adjusting depending on whether it is connecting over WiFi, the WAN or the LAN. It also needs to work with firewalls and the other types of security organizations have in place for edge devices.
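
A rough sketch of what that network awareness can look like in practice (the connection types, thresholds and policy fields here are illustrative assumptions, not CommVault's implementation):

    # Illustrative sketch: pick a transfer policy based on the detected connection.
    # Connection detection itself is out of scope; the thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class BackupPolicy:
        max_bandwidth_kbps: int   # throttle ceiling for the transfer (0 = unthrottled)
        chunk_size_kb: int        # smaller chunks tolerate flaky links better
        defer_large_files: bool   # postpone big files until a faster link is available

    POLICIES = {
        "lan":  BackupPolicy(max_bandwidth_kbps=0,    chunk_size_kb=4096, defer_large_files=False),
        "wifi": BackupPolicy(max_bandwidth_kbps=5000, chunk_size_kb=1024, defer_large_files=False),
        "wan":  BackupPolicy(max_bandwidth_kbps=512,  chunk_size_kb=256,  defer_large_files=True),
    }

    def choose_policy(connection_type: str) -> BackupPolicy:
        """Return a transfer policy appropriate to the detected network."""
        return POLICIES.get(connection_type.lower(), POLICIES["wan"])  # assume the slowest link if unknown

    print(choose_policy("wifi"))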

“There’s a lot of complicated technology here, and we wanted to work with this one large customer and get all the issues buttoned up,” Hammer said. “We’ve enabled a user to restore information directly without burdening IT. It’s not only about backing up the device, but getting information into an archive for compliance and search.

“It’s at the beginning of its lifecycle, but it will enable us to move that technology into other devices in the edge like tablets and smartphones.”

Hammer said backing up virtual machines, archiving and cloud backup were the major drivers that helped CommVault report $97.5 million in revenue last quarter, a 30% year-over-year increase.

“We see the cloud going mainstream now for backup,” he said.

He said customers want backup and archiving to be one process, and that will be one of the focuses of the next version of Simpana.

“That’s a key fundamental technology going forward,” he said. “You have to do backup and archive as one single process. You move data one time into backup, and create one copy of the data for backup and archive. That’s not a trivial move, but we believe that’s the way it’s going to go. We think it’s the only way to manage data – you have to move it off the front end quickly and store it in a low-cost index, or what we call a strategic archive.”

November 6, 2011  6:46 PM

Think business when measuring storage efficiency

By Francesca Sales and Rachel Kossman, Assistant Site Editors

The best way to approach storage efficiency is to measure storage from a business perspective, Jon Toigo, CEO and managing principal of Toigo Partners International consultancy, told attendees last week at a Storage Decisions seminar in Newton, Mass., on building an efficient storage operation.

Toigo defined storage efficiency from engineering, business and operational perspectives, but stressed the business perspective as the most effective gauge when measuring storage efficiency. “How are our investments in storage going to increase our productivity and our competitiveness?” he asked. “That’s the bottom line. What is it doing for us? Are we just hosting a bunch of data … that doesn’t really deliver any value and recovery?”

Toigo told attendees that Gartner recently predicted that the popularity of server virtualization technology would add to the problem — increasing storage capacity needs by 600%. The way for storage pros to counteract intimidating numbers like that, he said, was to use a broad definition when considering storage efficiency — but pay close attention to specific metrics.

From the engineering standpoint, he said, efficiency is the ratio of the output to the input of any system; from the business perspective, it’s a comparison of what is produced with what can be achieved with the same consumption of resources; and from the operational perspective, efficiency is defined as the skillfulness in avoiding wasted time and effort.

It’s also important to collect baseline data about storage, he said, but that’s a challenge for storage managers because of the wide range of storage systems running in data centers. “There are lots of different configurations for storage, a lot of different storage products, a lot of different standards for storage,” Toigo said.

Toigo advises storage admins to measure efficiency using five metrics: capacity allocation, capacity utilization, storage performance (I/O throughput), data protection (downtime avoidance) and storage energy.
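
As a back-of-the-envelope illustration of the first two of those metrics, here is a small Python sketch; the capacity figures are invented for the example:

    # Toy example of two of the five metrics: how much raw capacity has been
    # allocated to hosts, and how much of the allocated space actually holds data.
    raw_capacity_tb = 500.0   # total usable capacity across the arrays (assumed)
    allocated_tb = 410.0      # capacity provisioned to hosts and applications (assumed)
    written_tb = 175.0        # capacity actually consumed by data (assumed)

    allocation_pct = 100 * allocated_tb / raw_capacity_tb   # capacity allocation
    utilization_pct = 100 * written_tb / allocated_tb       # capacity utilization

    print(f"Capacity allocation: {allocation_pct:.0f}%")    # 82%
    print(f"Capacity utilization: {utilization_pct:.0f}%")  # 43%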

Collecting these metrics is becoming increasingly vital as the “non-trivial” challenges to storage efficiency continue to pile up. Toigo listed factors such as the neglect of data management, the narrow interpretation of storage management as capacity management, yielding to vendor “marketecture” over architecture, and storage administrators’ tendency to address problems by buying more hardware instead of addressing the source of the problem.

Instead of succumbing to these “tactics from the trenches,” Toigo advises developing a strategic storage plan, which involves a three-part, measurable process. First comes an analysis of the current state of company requirements, as well as current and future market and technology trends. Then, Toigo said, it’s important to assess the options to meet these requirements, in terms of time, budget and other business parameters. Finally, implement the plan in a manner that allows for ongoing testing. This strategy-building process, Toigo contended, will ultimately enhance storage efficiency.

Several IT administrators at the seminar said they are evaluating ways to improve their storage efficiency. Keith Price, system administrator at Johnson & Wales University in Providence, R.I., said his IT team is looking to buy a new SAN to replace a system coming off support.

“We’re just trying to figure out how to figure out what we want,” he explained. “We’re doing that by doing what Jon said, benchmarking item by item.” Price’s department manages an extensive collection of databases on its SAN – Exchange and customer relationship management (CRM), for instance – as well as file systems.

System programmers Edith Allison and Michael Orcutt make up the enterprise storage team for the University of Connecticut, and are seeking ways to improve their storage from a price/performance standpoint as the university centralizes its IT operation.

“UConn is at a crossroads,” Allison said. “We have central IT, and the university has lots of little pockets of IT, and we’ve all just come together under one IT leader for the first time.”

The university has a Fibre Channel SAN, and the team manages 300 TB of data across all the academic units of the university. “We’re looking at how we are going to become a more efficient organization, how we’re going to save money. We’re a state agency, we have no money,” Allison said, laughing. “We’re a state and a public university, so it’s a double whammy.”


November 4, 2011  6:56 PM

Retrospect backup looks ahead to new era

Dave Raffo

After going through two ownership changes since mid-2010, the Retrospect SMB backup software team re-launched this week as an independent company.

Most of the team for the newly private Retrospect, Inc., goes back to when Dantz Development Corp. owned the software before EMC acquired Dantz in 2004. EMC made Retrospect part of its Insignia SMB brand, but eventually lost interest in that market and sold Retrospect to Sonic Solutions in May of 2010. Sonic carried Retrospect software as part of its Roxio brand until digital entertainment company Rovi acquired Sonic for $720 million last December. Rovi wasn’t interested in the backup software market, so the Retrospect team spun itself off.

Retrospect made its coming-out announcement Thursday, when it also launched Retrospect 9 for the Mac with cloud support.

Eric Ullman, one of the Retrospect founders, said there are fewer than 50 people at the new company, about two-thirds of them developers. He said the parting with Rovi was amicable because both sides realized Retrospect was not a good fit with its new parent.

“Backup software is not even close to what Rovi’s business is, and their management quickly realized that Retrospect was not going to be a long-term product at the company,” he said.

Ullman said that throughout the ownership changes, Retrospect has kept most of the same channel partners, and he hopes to hit the ground running as an independent company.

“We’ve maintained our channel amazingly well over the years,” said Ullman, who leads Retrospect’s product development. “The feedback we’ve received has been almost 100 percent positive from our partners.

“Our sales peaked during the first year at EMC. After EMC dropped focus, sales dropped. They have not changed significantly since we left EMC, but now that we’re focused on just one thing, we expect to grow sales back up.”

Retrospect 9 for Mac supports WebDAV (Web-based Distributed Authoring and Versioning), which makes it easy to integrate into cloud backup services. Retrospect also added a 64-bit network backup client for Intel-based Macs that uses optional AES-256 encryption and lets users initiate backups and restores from their desktops.
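
WebDAV rides on top of plain HTTP, which is what makes it straightforward to point backup software at a cloud endpoint. A minimal Python sketch of the general idea follows; the endpoint, credentials and file name are placeholders, and this is not Retrospect's actual mechanism:

    # Minimal illustration of why WebDAV support matters for cloud backup:
    # an archive can be pushed to any WebDAV-capable service with a plain HTTP PUT.
    # The URL, credentials and file name below are placeholders.
    import requests

    def upload_backup(archive_path: str, webdav_url: str, user: str, password: str) -> None:
        with open(archive_path, "rb") as f:
            resp = requests.put(webdav_url, data=f, auth=(user, password))
        resp.raise_for_status()  # raise if the server rejected the upload

    upload_backup("backupset-2011-11-04.dmg",
                  "https://dav.example.com/backups/backupset-2011-11-04.dmg",
                  "backup-user", "secret")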

Although Retrospect is known largely for its Mac backup, Ullman said Retrospect for Windows accounts for most of its sales. It also has more competition for Windows SMB customers, going head-to-head with Symantec Corp. Backup Exec, CA ARCserve and Acronis software.

“We’re about 60 percent to two-thirds Windows now, but we expect that to change with the Mac upgrade,” Ullman said. “There’s really not a lot else in the Mac market.”


November 3, 2011  4:26 PM

Fusion-io CEO: EMC’s Project Lightning will cost too much

Dave Raffo

Fusion-io showed there is a hunger for server-based PCIe solid-state drive (SSD) accelerator cards by beating Wall Street estimates for sales last quarter, and CEO David Flynn said he’s not worried about EMC cutting into his success when the storage vendor comes out with its server-based flash product.

EMC has been touting its Project Lightning product since May. The product is in beta and expected to become generally available next month. But Fusion-io’s Flynn maintains putting flash in the server alongside high performance storage arrays is too expensive for widespread adoption.

“EMC is trying to make it additive to its existing business,” Flynn said during the vendor’s earnings conference call Wednesday night. “It’s relegated only to customers willing to pay an additional premium for performance on top of the premiums they already pay [for storage arrays].”

Flynn said using Fusion-io cards in servers allows customers to boost performance without using high-end storage arrays, keeping costs down. EMC obviously wants to continue selling storage arrays alongside servers with PCIe flash. Flynn questioned how many EMC storage system customers will want to pay for another flash device.

“We believe this isn’t about higher performance storage at yet a higher cost,” Flynn said. “This is about bringing cost way down. We believe customers will not pay twice, especially if the performance is solved out front. Instead, they will gravitate to lower cost solutions.”

He said Fusion-io’s IO Turbine software will let organizations get many of the storage management benefits of Project Lightning without EMC arrays.

Fusion-io, which became a public company with an IPO in March, reported revenue of $74.4 million and net income of $7.2 million for last quarter. That compares to $27 million in revenue and a $5.8 million loss a year ago when it was a private company. Fusion-io’s forecast for last quarter was in the $60 million to $65 million range.


November 2, 2011  3:53 PM

Cloud vision still unclear

Sonia Lelii

During the EMC Forum 2011 hosted by the storage giant a few weeks ago, EMC president Pat Gelsinger described the still-young cloud computing era as “the most disruptive we have seen in the last 40 years.” He was talking about disruption for customers, but watching storage vendors deal with the cloud makes it clear that the cloud is also disruptive to their plans.

We’ve seen traditional storage vendors try to dress up their legacy products as private, public and hybrid cloud technologies as they seek ways to continue selling those technologies under the cloud banner. Keeping storage clouds loosely defined is in their vested interest, at least until customers figure out exactly how – or if – they want to use the cloud.

EMC is a perfect example of a vendor looking to define cloud storage in its own image. EMC originally hailed its Atmos object-based platform as its cloud product. But at its recent Forum, EMC showcased Isilon scale-out NAS and its VNX midrange unified storage platform as private and hybrid cloud products. Atmos was hardly mentioned.

EMC is also among the vendors who talk of server virtualization as a fundamental cloud technology. That’s no surprise, because EMC is majority owner of server virtualization market leader VMware, and many of its customers have already gone down the virtualization path or are planning to do so. When asked to define the cloud, Gelsinger mentioned virtualization, a shared pool of computing, networking and storage, and an automated, managed environment. “What we call IT as a service,” he said.

There seem to be as many definitions of storage clouds as there are people in the storage industry.

Let us know what you think.


October 31, 2011  7:30 PM

Mastering DR is a critical skill for storage pros

Randy Kerns

When working with storage professionals, I always try to understand where storage fits in their organization’s strategic initiatives. The business environment they work in and how they interact with the business owners of critical applications will explain a great deal about the opportunities and limitations for improving their storage strategy.

Storage professionals interact with business owners in a variety of ways. These include:

  • The storage team partners with the business owners in planning storage and data protection.
  • The storage group is perceived as a resource to be called upon by the business owners. The group provides storage at a particular rate (i.e., “gold level”) which dictates performance, data protection and cost.
  • The business owners are less than cooperative with the storage team, making demands while providing little planning or guidelines regarding their needs. And the business owners complain that storage provisioning is always holding them back.

There are variations of these, and some extreme cases that make for interesting discussions, but storage professionals always raise one common point. That is, when it comes to business continuance/disaster recovery (BC/DR), the storage group plays a key role in putting together an effective solution. Planning, implementing, and periodically testing BC and DR for a business or organization are complicated, costly and necessary processes for most organizations. This is where the storage team is a critical resource, and its influence reaches into the deployment of storage for critical applications.

Planning BC and DR requires an expertise gained from experience. Storage people generally understand this, and can leverage these processes for making more effective and long-term storage decisions.

Understanding all the options and technologies involved in BC and DR is an important skill for storage professionals. They need to be continually learning about technologies and products to be effective. This information will help them make decisions at critical moments about deploying applications that can add to the success of a company.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 27, 2011  8:52 PM

LSI accelerates its move into flash

Dave Raffo

LSI, which left the storage systems business this year, is going full bore into the enterprise flash business.

LSI acquired flash controller chip vendor SandForce for $322 million Wednesday, seven months after it sold off its Engenio storage systems division to NetApp for $480 million. LSI already had an equity stake in SandForce and is one of its customers. SandForce also sells chips to OCZ, Smart Modular, Viking Technology and others.

LSI uses SandForce’s solid-state drive (SSD) chip in its server-based WarpDrive PCIe cards. When the SandForce deal closes – probably in January – LSI will have more control over that technology at a time when server-based PCIe flash is gaining a lot of attention.

Fusion-io turned its early dominance of PCIe flash for enterprises into a successful IPO, and competitors are lining up to challenge Fusion-io. EMC is among them with its Project Lightning product that is in beta and expected to ship by the end of 2011. Industry sources say EMC will use PCIe cards from Micron and LSI as part of Project Lightning. LSI executives won’t name their OEM customers, but LSI CEO Abhi Talwalkar said Wednesday that he expects a major storage vendor to start selling WarpDrive adapters at the end of this year.

Gary Smerdon, vice president of LSI’s accelerated solutions division, said owning the flash controller chip technology will result in tighter integration of LSI’s flash and management products. The acquisition also guarantees that LSI can keep the flash IP that is already in its products.

“We believe the market for PCIe flash adapters is a rapidly growing market,” Smerdon said. “Now we have a division to specifically focus on the PCIe opportunity. We’re using SandForce’s FSP (flash storage processor), but we didn’t want to talk about a lot of the benefits before because that begs the questions, ‘Where are you getting this from?,’ and ‘What happens if something happens to [SandForce]?’”

LSI executives say they intend to keep SandForce’s customers, too. At least one seems happy to stay onboard for now. After the deal was announced, OCZ CEO Ryan Petersen released a statement saying “SandForce has been a great partner, and we expect the added resources of LSI will only benefit SandForce’s customers. Moreover, because OCZ and SandForce previously contemplated this scenario, we expect that this combination will have no material impact to our existing product lines or business.”

OCZ is SandForce’s largest customer and is responsible for most of SandForce’s revenue, which is expected to be around $60 million this year.

SandForce is the second SSD device startup acquired this year. SanDisk acquired Pliant Technology for $327 million in May.


October 27, 2011  12:55 PM

Index Engines revs its discovery appliance

By Todd Erickson, News and Features Writer

Index Engines is giving its e-discovery platform a facelift with a new look and new features.

Index Engines renamed its products, bringing them all under the new Octane brand. The latest version, Octane 4, has a redesigned GUI and a compliance archive to make it easier to search and collect data.

Index Engines added a departmental archive to its policy-based information management platform to let storage administrators and legal teams capture, retain and secure litigation- and compliance-related files and email messages. The scalable archive can be created within the Octane 4 appliance or on another disk for long-term retention and legal collection.

Jim McGann, Index Engines’ vice president of marketing, called the new archive a “sandbox” for legal and compliance teams because once the archive is populated, lawyers can narrow and refine searches for relevant litigation- and compliance-related data.

The 2U Linux-based appliance hooks into the network and can auto-discover information sources based on IP addresses, or you can point it at your file, email and backup resources. The collection engine can collect data from many sources, including file shares, Exchange servers, and disk and tape backup.

Customers can scan and copy information into the archive based on user-created policies — such as date ranges, custodians, document types, and keywords — and schedule it to automatically collect changed or new files. This means storage administrators working with legal teams don’t have to keep going back to do new searches to update the archive.
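
A rough sketch of what such a collection policy could look like in code; the field names and matching logic are hypothetical, not Index Engines’ actual configuration format:

    # Hypothetical sketch of a policy-driven collection pass. The policy fields
    # mirror the criteria described above (date range, custodians, document types,
    # keywords) but are not Index Engines' actual configuration format.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class CollectionPolicy:
        start: date
        end: date
        custodians: set = field(default_factory=set)
        doc_types: set = field(default_factory=set)   # e.g. {".doc", ".pdf", ".msg"}
        keywords: set = field(default_factory=set)

    def matches(policy: CollectionPolicy, doc: dict) -> bool:
        """Return True if a document record should be copied into the archive."""
        return (policy.start <= doc["modified"] <= policy.end
                and doc["owner"] in policy.custodians
                and doc["ext"] in policy.doc_types
                and any(k in doc["text"].lower() for k in policy.keywords))

    policy = CollectionPolicy(date(2010, 1, 1), date(2011, 10, 31),
                              {"jsmith", "arodriguez"}, {".msg", ".pdf"},
                              {"contract", "merger"})
    doc = {"modified": date(2011, 6, 3), "owner": "jsmith", "ext": ".msg",
           "text": "Draft merger agreement attached."}
    print(matches(policy, doc))  # True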

Pricing for an Octane 4 appliance starts at $50,000 for 100-user accounts.


October 26, 2011  8:45 PM

Violin tunes up for Big Data analytics

Dave Raffo

Violin Memory CTO of software Jonathan Goldick sees solid state playing a key role in storage for Big Data, and he’s not talking about scale-out NAS for large data stores.

Goldick says solid-state drives (SSDs) can help run analytics for Hadoop and NoSQL databases better in storage racks than in shared-nothing server configurations.

“We’re focused on the analytics end of Big Data – getting Hadoop and NoSQL into reliable infrastructures while getting them to scale out horizontally,” he said. “Scale-out NAS is a different part of the market.”

Today, Violin said its 3000 Series flash Memory Arrays have been certified to work with IBM’s SAN Volume Controller (SVC) storage virtualization arrays. Goldick pointed to this combination as one way that Violin technology can help optimize Big Data analytics. The vendors say SVC’s FlashCopy, Easy Tier, live migration and replication data management capabilities work with Violin arrays.

Goldick said running Violin’s SSDs with storage systems speeds the Hadoop “shuffle phase” and provides more IOPS without having to add spindles. SVC brings the management features that Violin’s array lacks.

“Hadoop is well-optimized for SATA drives, but there’s always a phase when it’s doing random I/O called the ‘shuffle phase,’ and you’re stalled waiting for disks to catch up,” said Goldick, who came to Violin from LSI to set the startup’s data management strategy. “We’re looking at a hybrid storage model for Big Data. You’ve heard of top-of-the-rack switches, we look at Violin as the middle-of-the-rack array. It gives you fault tolerance and the high performance you need to make Big Data applications run at real-time speeds.”
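
A rough way to see the effect Goldick is describing is to compare sequential reads with the random reads a shuffle phase generates against the same file. On rotating disks the random case is dramatically slower, while flash narrows the gap. A toy Python benchmark (page-cache effects will blur the numbers on a warm cache; it is only meant to show the two access patterns):

    # Toy illustration of the shuffle-phase problem: the same amount of data read
    # sequentially vs. in random 64 KB chunks.
    import os, random, time

    PATH, SIZE, CHUNK = "shuffle_test.bin", 64 * 1024 * 1024, 64 * 1024

    with open(PATH, "wb") as f:            # create a 64 MB test file
        f.write(os.urandom(SIZE))

    def read_all(offsets):
        start = time.time()
        with open(PATH, "rb") as f:
            for off in offsets:
                f.seek(off)
                f.read(CHUNK)
        return time.time() - start

    sequential = list(range(0, SIZE, CHUNK))
    shuffled = sequential[:]
    random.shuffle(shuffled)               # mimic the shuffle phase's random access

    print(f"sequential: {read_all(sequential):.2f}s")
    print(f"random:     {read_all(shuffled):.2f}s")
    os.remove(PATH)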

He said Hadoop holds data in transient data stores and persistent data stores. It’s the persistent data – which is becoming more prevalent in Hadoop architectures – where flash can help. “So you think of Hadoop not just as analytics but as a storage platform,” he said. “That’s where IBM SVC bridges a gap for us. When data is transient you don’t need data management services as much. When you start keeping the data there, it becomes a persistent data store of petabytes of information. You need data management features that enterprise users have come to expect – things like snapshotting, metro-clustering, fault tolerance over distance.”

Violin’s 3000 series is also certified on EMC’s Vplex federated storage system. EMC is talking about Big Data more than any other storage vendor, with its Isilon clustered NAS as well as its Greenplum analytics systems. EMC president Pat Gelsinger last week said Big Data technologies will be the focus of EMC’s acquisitions over the coming months.

If Goldick is correct, we’ll be hearing a lot more about Big Data analytics in storage.

“Last year Big Data was about getting it to work,” he said. “This year it’s about optimizing performance for a rack. People don’t want to run thousands of servers if they can get the efficiency from a rack.”

There are other ways of using SSDs to speed analytics – inside arrays, or as PCIe cards in storage systems or servers. Violin’s Big Data success will be determined by its performance against a crowded field of competitors.



October 26, 2011  3:13 PM

Who makes the call on archiving?

Randy Kerns

Data archiving makes sense when primary storage gets filled up with data that is no longer active. Data growth on primary storage – the highest performing storage with the most frequent data protection policies – results in increasing capital and operational costs.

Organizations can save money by moving the inactive data or data with a low probability of access to secondary storage or archive storage. The question is, who owns the decision of what to move?

IT directors and managers I’ve talked to have a mixed response to that question. Some say it is the business unit’s decision, but IT cannot get a response from them about what data can be archived or moved to secondary storage. Others say that IT has the responsibility but does not have the systems or software in place to do the archiving effectively, usually because they lack a budget for this. And a few say it is IT’s responsibility, and they are in the process of archiving data.

Those who archive with the initiative coming from IT say it is important to make the archiving and retrieval seamless from the user standpoint. Seamless means the user can access archived data without needing to know that the data has been archived or moved. It’s acceptable if the retrieval takes a few extra seconds, as long as there are no extra steps (operations) added to the user’s access.

Implementing archives with seamless access and rules-based archiving by IT requires specific system capabilities. These systems must work at the file system (or NAS) level to be able to move data to secondary or archive systems, and then to retrieve that data.
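
One common way to get that seamlessness at the file-system level is stub-and-recall: the archived file is replaced by a small stub, and an access brings the data back from the archive tier. A bare-bones Python sketch of the idea (the paths are illustrative, and real HSM or archive products hook the file system far more deeply than this):

    # Bare-bones illustration of stub-and-recall archiving at the file level.
    # Real archive products intercept access inside the file system or NAS head;
    # this only shows the idea of moving data out and bringing it back transparently.
    import shutil
    from pathlib import Path

    ARCHIVE_TIER = Path("/mnt/archive")     # illustrative secondary-storage mount
    STUB_SUFFIX = ".stub"

    def archive(path: Path) -> None:
        """Move a cold file to the archive tier and leave a stub behind."""
        target = ARCHIVE_TIER / path.name
        shutil.move(str(path), str(target))
        path.with_suffix(path.suffix + STUB_SUFFIX).write_text(str(target))

    def open_seamless(path: Path):
        """Open a file, recalling it from the archive tier first if only a stub exists."""
        stub = path.with_suffix(path.suffix + STUB_SUFFIX)
        if not path.exists() and stub.exists():
            shutil.move(stub.read_text(), str(path))   # recall: a few extra seconds, no extra steps
            stub.unlink()
        return open(path, "rb")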

External tiering, or archiving, is highlighted in the Evaluator Group report that can be downloaded here. This is a major tool in the IT repertoire to help control costs and meet expanding capacity demands. The decision process about archiving needs to be made by IT, but it requires the system capabilities to make it a seamless activity for users.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

