Storage Soup


October 16, 2014  10:34 AM

HP and Scality officially tie the knot

Dave Raffo
Cleversafe, HP, Storage

Object storage vendor Scality has scored a reseller deal with Hewlett-Packard, which the private company’s CEO said will greatly expand its global reach.

Scality and HP have worked together closely in the field, and a lot of Scality’s Ring software runs on HP ProLiant servers.

“We’ve been working with all the server vendors since the beginning,” Scality CEO Jerome Lecat said. “HP has been the most proactive in coming up with a server that fits our industry.”

HP sells Scality software on the ProLiant SL4540 and DL360p Gen8 servers.

Lecat said Scality has more than 40 PB of customer data deployed on HP servers. Scality-HP customers include Dailymotion, Time Warner Cable and European television station RTL2, he said.

Lecat said the deal is crucial for Scality because “we’re still a relatively small company, and we do not have thousands of sales people around the globe like HP does.”

The deal is not exclusive. HP sells its own StoreAll product with object storage, and it also works closely with Cleversafe. There is no formal reseller deal with Cleversafe, but it is featured alongside Scality on HP’s object storage software for ProLiant web page.

Lecat said Cleversafe’s dsNet object storage is more suited for long-term archives, while Scality Ring is for active applications such as email and video archiving.

“We don’t see ourselves as an object storage company,” Lecat said. “Object storage companies only focus on archiving. Our ambitions are larger than that. We have a lot of media companies running video on demand, consumer web mail and other applications. We’re not just deep and cheap archiving.”

October 10, 2014  4:26 PM

Druva moves from endpoint to server backup

Sonia Lelii
Cloud Backup, Storage

Druva is taking its enterprise endpoint backup software and extending it into backup for small businesses and remote and branch offices.

The company this week launched Druva Phoenix, a centrally managed backup and archive product targeting companies with tight budgets and limited or no local IT staff. The software is based on Druva’s inSync enterprise endpoint backup and nCube architecture. Phoenix is agent-based software with global deduplication performed at the source.

Druva Phoenix is offered as an alternative to traditional server backup that requires secondary storage, tape and archiving.

“This is a pure play software as a service cloud product,” said Jaspreet Singh, Druva’s CEO and founder. “The core to solving backup to the cloud is building a scalable deduplication in the cloud. In the last five and a half years, we built endpoint backup for the cloud. In the last 18 months, we were looking for what we can solve next. The remote office looked interesting.

“We thought we could remove a few processes by introducing Phoenix,” he said. “We are extending from endpoint to remote offices. It’s a very natural extension for us.”

Phoenix has a software-based cache accelerator for backup and restores, which resides on the server in the remote or branch office. The rest of the data is moved into the Amazon cloud.

“Because there is not much metadata, it can scale fairly well,” Singh said.
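To make the architecture concrete, here is a minimal sketch of the pattern Singh describes: a small cache on the branch-office server absorbs backups and serves hot restores, while source-side deduplication decides which blocks get shipped to cloud object storage. This is an illustration of the general technique, not Druva’s code, and the dictionaries standing in for the cache and the cloud are assumptions.

    import hashlib

    local_cache = {}   # hot blocks kept on the branch-office server
    cloud = {}         # stands in for an object store such as Amazon S3

    def backup_block(block):
        fp = hashlib.sha256(block).hexdigest()   # source-side dedupe fingerprint
        if fp not in cloud:                      # only ship blocks the cloud lacks
            cloud[fp] = block
        local_cache[fp] = block                  # keep a copy nearby for fast restores
        return fp

    def restore_block(fp):
        return local_cache.get(fp) or cloud[fp]  # prefer the local cache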

Singh said that without deduplication, the amount of data stored in the cloud becomes exorbitant. For instance, 1 TB of data can multiply to 719 TB after seven years of retention if dailies, incrementals and full backups are all kept.
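Singh did not spell out the schedule behind the 719 TB figure, but a back-of-the-envelope calculation shows how quickly retention compounds without deduplication. The weekly-full schedule and 5 percent daily change rate below are assumptions for illustration:

    SOURCE_TB = 1.0            # protected source data
    WEEKS = 52 * 7             # seven years of retention
    DAILY_CHANGE = 0.05        # assumed: 5 percent of the source changes per day

    fulls = WEEKS * SOURCE_TB                            # one retained full per week
    incrementals = WEEKS * 6 * SOURCE_TB * DAILY_CHANGE  # six incrementals per week

    print(f"{fulls + incrementals:.0f} TB retained from {SOURCE_TB:.0f} TB of source")

Even this modest schedule retains several hundred times the source data, which is the multiplication deduplication is meant to attack.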

“One data reduction price-point is based on the source data,” Singh said.

Jason Buffington, senior analyst at Enterprise Strategy Group, said ROBO servers are the next “battleground” for cloud-based backup where it makes sense. For the remote office, he said, the decision to back up to the cloud depends on whether IT wants to control ROBO backups or just manage the data repositories.

Druva’s endpoint software lends itself to small business and ROBO backup and archiving because it was designed with administrative oversight capabilities, Buffington said. The software also comes with three-year, seven-year and infinite retention policies.

“No one would keep endpoint data for an infinite amount of time,” Buffington said. “But it should be a requirement for server-based protection.”


October 10, 2014  2:27 PM

A look at access methods for open systems and mainframes

Randy Kerns
Storage

The term access method is frequently used to identify types of I/O in open systems. Many who use it probably don’t understand the historical context for what has been known as an access method for over 50 years. In open systems, the types of I/O are block data, file data, and object data. Access methods represent how these types of data are stored on devices.

The term access method comes from the mainframe world and denotes a number of well-known (at least to those who have worked with mainframes) means to store or access information. Access methods are really software routines invoked by application programs through inline calls to system functions. You could call these Application Program Interfaces (APIs). The closest equivalent in open systems would be a device driver.

There are many types of access methods, and most deal with how data is organized, usually in the form of records, which are typically fixed-length blocks of data in a dataset.

Some of the familiar access methods for storage in the mainframe world include:

  • BSAM – Basic Sequential Access Method
  • QSAM – Queued Sequential Access Method
  • BDAM – Basic Direct Access Method
  • BPAM – Basic Partitioned Access Method
  • ISAM – Indexed Sequential Access Method
  • VSAM – Virtual Storage Access Method
  • OAM – Object Access Method

An example of doing I/O in an application with QSAM would be to set up buffers in memory for queued I/O (multiple records in a block) and then do a GET or PUT. Interestingly, the basic I/O for S3 object access is also GET and PUT.
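For readers who know S3 but not QSAM, this is what those object-side verbs look like in practice. A minimal sketch using the boto3 library; the bucket and key names are made up:

    import boto3

    s3 = boto3.client("s3")

    # PUT: store an object (loosely analogous to a QSAM PUT of a record)
    s3.put_object(Bucket="example-bucket", Key="records/record-0001",
                  Body=b"record payload")

    # GET: retrieve it back (loosely analogous to a QSAM GET)
    obj = s3.get_object(Bucket="example-bucket", Key="records/record-0001")
    data = obj["Body"].read()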

Open systems access methods are termed (each is sketched in code after this list):

  • Block – individual blocks of data are read or written from/to storage
  • File – a stream of bytes representing a file, with associated metadata, is written or read within a hierarchical tree structure.
  • Object – data segments and user- or system-defined metadata are stored in a flat namespace, with access through object ID resolution.
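Here is that sketch, a rough Python contrast of the three; the device path, file path and object URL are all hypothetical:

    import os
    import urllib.request

    # Block: read a raw 4 KB block at a given offset on a block device (needs root)
    fd = os.open("/dev/sdb", os.O_RDONLY)
    block = os.pread(fd, 4096, 4096 * 100)
    os.close(fd)

    # File: read a byte stream through a hierarchical namespace
    with open("/data/reports/2014/q3.txt", "rb") as f:
        contents = f.read()

    # Object: fetch by object ID from a flat namespace over HTTP
    with urllib.request.urlopen("http://storage.example.com/objects/abc123") as resp:
        obj = resp.read()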

The open systems access methods don’t map directly to those in the mainframe world, but you can understand them if you know the mainframe methods. The term access method in open systems isn’t wrong; it just means something slightly different. Translating between the two helps clarify the meaning.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 9, 2014  4:57 PM

Symantec follows HP down breakup path

Dave Raffo
Storage, Symantec, Veritas

The divorce rate for IT vendors spiked this week. Hewlett-Packard and Symantec said they are separating into pieces, and it might take a marriage counselor to keep EMC together.

Symantec today confirmed it is splitting off its information management business from the security business. The security company will keep the Symantec name, while the Information Management company has no name yet. The split is scheduled to complete by the end of 2015.

The Information Management arm will be a storage vendor, with products spanning backup and recovery, archiving, eDiscovery, storage management, and information availability. John Gannon, who retired as Quantum COO in 2005 and also led HP’s personal computing division, becomes general manager of the new storage company.

Michael Brown, named Symantec’s permanent CEO last month, will continue to run Symantec.

“We’re confident this is the right thing from a strategy standpoint,” Brown said.

Brown said Symantec’s leadership team decided it was too difficult to remain a market leader in both security and data management, and that led to the breakup decision. The security and storage businesses came together in 2005 when Symantec acquired Veritas for $13.5 billion, but for years there have been intermittent rumors that the backup business would be spun off or sold.

The security part of the business has been the bigger piece of Symantec, with $4.2 billion of revenue in fiscal year 2014 compared to $2.5 billion for information management.

So, does anyone think they should call the new information management company Veritas?


October 9, 2014  3:26 PM

EMC claims products work better together despite a call for breakup

Dave Raffo
Pivotal, RSA, Storage, VMware

EMC has issued two responses to the letter that investor Elliott Management made public Wednesday calling for the vendor to spin off VMware and/or explore a merger with other large companies.

EMC first released a direct response to the Elliott letter, saying little except to repeat claims that EMC is exploring options but believes its strategy is sound.

An indirect response did a better job of making EMC’s case for keeping its federation of EMC, VMware, RSA, and Pivotal together. That response came today in the form of a release touting its Federation Software-Defined Data Center Solution.

The solution is little more than a combination of products from EMC’s companies, with extras such as a self-service portal and scripts to tie them together. But the concept shows how the parts of the EMC Federation work together: the products are tested at the federation’s engineering lab on the VMware campus and combined to solve distinct data center problems.

Is it a coincidence that the data center solution release came one day after Elliott’s letter to CEO Joe Tucci and the EMC board questioning the value of EMC keeping everything under one umbrella? Bharat Badrinath, EMC’s senior director of global solutions marketing, isn’t saying.

“That’s something Joe and the board will determine,” he said of the spinout and merger issue.

Badrinath’s job is pushing products, not mergers. EMC’s solution announcement also provided this list of EMC Federation products brought together as part of the software-defined data center solution:

  • Management and Orchestration: VMware vCloud Automation Center, VMware vCenter Operations Management, VMware IT Business Management, EMC Storage Resource Manager
  • Hypervisor: VMware vSphere, the industry’s most widely deployed virtualization platform
  • Networking: VMware NSX, the network virtualization and security platform for the software-defined data center. VMware NSX brings virtualization to existing networks and transforms network operations and economics
  • Storage: Designed for EMC ViPR & EMC Storage, EMC Storage Resource Manager, VMware Virtual SAN.
  • Hybrid Cloud Deployment Models: Connectivity to VMware vCloud Air
  • Choice of Hardware: Built on converged infrastructure and can be deployed on a variety of hardware including VCE Vblock and VSPEX.
  • PaaS: Delivering Platform-as-a-Service with Pivotal CF
  • Documented Reference Architectures

The point EMC wants to make is that these products from different parts of the federation are intertwined and cannot be broken apart without harm.

“We have four strategically aligned companies which are working together at times, but there are also times when they are independent and operate on their own,” Badrinath said. “Customers can pick products developed independently or together. It’s all about us being better together or bringing the best of the best within the four businesses.”

Other solutions that will follow include Platform-as-a-Service, End-User Computing, Virtualized Data Lake and Security Analytics. Badrinath said they all should be available by early 2015.

Badrinath said the testing for the software-defined data center portion of the program took 40,000 person-hours of engineering across federation companies. He also emphasized that EMC and VMware continue to work with outside partners, even if those partners, such as Microsoft or other storage vendors, compete with federation companies at times.

While the federation’s software-defined data center initiative has been going on for months, the release sounds as if it were put together to counter specific complaints from Elliott. The letter, signed by Elliott portfolio manager Jesse Cohn, said the EMC storage company and VMware “hinder one another” because they compete in some areas, and the relationship prevents them from developing other critical relationships. Cohn said EMC’s stock is underperforming, the company is undervalued, and EMC and VMware would both be better off apart.

“As time passes, this untenable situation is going to get worse,” he wrote to EMC.


October 3, 2014  9:56 PM

Red Hat faces long-term decisions on Gluster, Ceph in storage portfolio

Carol Sliwa
Storage

While launching the latest version of Red Hat Storage Server yesterday, the vendor provided little insight into the long-term positioning of its storage software portfolio and the chances that it might combine its Gluster-based Storage Server and Inktank Ceph Enterprise product lines.

Ranga Rangachari, vice president and general manager of storage and big data at Red Hat, said the company hopes to “get back to our customers and partners in the very near future with a consolidated vision of where this journey is going.” He addressed the topic in response to a question during the company’s webcast, “Advancing software-defined storage,” a term he said customers view as the ability to take advantage of industry-standard x86 servers with the intelligence resting in the software.

Rangachari noted simply that Red Hat’s acquisition of Inktank Storage Inc. this year brought object- and block-based storage to the table and complemented the file system capabilities the company gained through its 2011 acquisition of Gluster Inc.

Gluster had sold a supported version of the open source GlusterFS distributed file system in much the same way that Inktank sold a supported version of open source Ceph. Any innovative software development work rests with their respective open source project communities.

“The Gluster and the Ceph communities continue to thrive independently and thrive really well,” said Rangachari, claiming that Gluster and Ceph combined for almost two million downloads during the last nine months. “The innovation that’s going on on both those projects will continue to happen unabated.”

Red Hat put out new versions of each of the commercially supported products this year. Storage Server 3, launched yesterday, is based on open source GlusterFS 3.6 and adds support for snapshots, multi-petabyte scale-out capacity, flash drives and Hadoop-based data analytics. Inktank Ceph Enterprise 1.2, released in July, was based on open source Ceph’s Firefly release and added erasure coding, cache tiering and updated tools to manage and monitor the distributed object storage cluster.

The Ceph open source project claims to be a unified system providing object, block and file system storage. Ceph’s file system runs on top of the same object storage system that provides object storage and block device interfaces, according to the project’s Web site.
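That layering means all three interfaces ultimately issue object operations against the underlying RADOS store. As a rough illustration, the python-rados bindings let you talk to that layer directly; the pool and object names below are made up, and a running cluster with a standard ceph.conf is assumed:

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("example-pool")      # a RADOS pool
    ioctx.write_full("example-object", b"payload")  # the object write beneath file, block and object access
    data = ioctx.read("example-object")
    ioctx.close()
    cluster.shutdown()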

“It’s fair to say that file is probably the least well evolved of those three,” said Simon Robinson, a research vice president in storage at New York-based 451 Research LLC. “The file capability is very immature. It’s not enterprise-grade.”

But, as the Ceph technology improves, Red Hat will need to confront the question of whether to continue to focus on Gluster and Ceph, said Robinson.

“I think Red Hat’s bet buying Gluster was, ‘Hey, look at all this unstructured data. Look how quickly it’s growing. We need a play here.’ Three years ago, that play was NAS. Today it looks slightly different,” said Robinson. “When we think about the growth of unstructured data, it’s actually object that is seen as the future architecture rather than NAS.”

He cited Amazon and Microsoft Azure as proof points of the object model working at scale. “It’s just a case of how does that percolate down into the enterprise. It will take time,” he said.

Robinson said he doesn’t think it makes sense for Red Hat to physically merge Gluster and Ceph. He predicted that if Red Hat Storage does catch on, its success will be through Ceph – “the darling of the storage startup world” – tied to the broader success of the open source OpenStack cloud technology platform. Ceph has already started to gain momentum among cloud service providers, he said.

“Everybody’s playing with OpenStack, and if you’re playing with OpenStack, you’ve probably heard of Ceph. And Ceph has the interest of the broader storage community,” said Robinson. “Other big players are really interested in making Ceph a success. That works for Red Hat’s advantage.”

Henry Baltazar, a senior analyst at Cambridge, Massachusetts-based Forrester Research Inc., said he sees no problem with Red Hat having Gluster-based file and Ceph-based block and object storage options at this point, since the company doesn’t have much market share.

“They’re going to have two platforms in the foreseeable future. Those aren’t going to merge,” predicted Baltazar. “Gluster is definitely the file storage type. There are ways they could use it that can complement Ceph. It still remains to be seen where it will wind up 10 years from now.”


October 3, 2014  4:21 PM

EMC links arms with CloudLink for DRaaS

Dave Raffo
Disaster Recovery, DRaaS, EMC, Storage

EMC is aiming its new RecoverPoint for Virtual Machines at cloud DR, in partnership with cloud security vendor CloudLink Technologies.

RecoverPoint for Virtual Machines is a hypervisor-based version of EMC’s RecoverPoint replication software. It will be generally available Nov. 17. EMC will also integrate the product with CloudLink SecureVSA, which provides encryption for data at rest and data in motion.

The combined products can allow service providers to build DR as a Service (DRaaS), and enterprises can use them to replicate data to private or public clouds for DR.

RecoverPoint for Virtual Machines is a software-only product. Unlike previous versions of RecoverPoint, it is storage-agnostic, so it doesn’t require EMC arrays to run; it works with any VMware-certified storage. It is not hypervisor-agnostic yet, though. It supports VMware vSphere today, with support for Microsoft Hyper-V and KVM hypervisors on the roadmap.

It is EMC’s first replication software that works at the individual VM level. Instead of replicating storage LUNs as other RecoverPoint versions do, RecoverPoint for Virtual Machines splits and replicates writes for VMware vSphere VMs. It requires splitter code on each ESXi node running protected VMs, and at least one virtual appliance at each site. Customers can replicate VMs regardless of the hardware running at either end.
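Conceptually (this is an illustration, not EMC’s implementation), write splitting amounts to applying each guest write locally and forwarding a copy toward the replication appliance, which is what makes per-VM, hardware-agnostic replication possible. The disk path and queue below are stand-ins:

    from queue import Queue

    replica_queue = Queue()   # stands in for the link to the virtual appliance

    def split_write(disk_path, offset, data):
        with open(disk_path, "r+b") as disk:   # primary write path to the local VM disk
            disk.seek(offset)
            disk.write(data)
        replica_queue.put((offset, data))      # copy forwarded for replication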

CloudLink SecureVSA adds security. It allows customers to store and manage encryption keys on-premises.

“One of the big inhibitors of going to a public cloud is security,” said Jean Banko, director of product marketing for EMC’s data protection division. “That’s why we partnered with CloudLink.”


October 3, 2014  2:45 PM

Nimble, Actifio show their Oracle chops

Dave Raffo
Actifio, Oracle OpenWorld, Storage

Oracle wasn’t the only vendor to toot its own storage horn at Oracle OpenWorld this week. A couple of smaller vendors played up their budding relationship with the database giant to show how they are growing their number of enterprise customers.

Array vendor Nimble Storage added a pre-validated SmartStack reference architecture for Oracle’s JD Edwards EnterpriseOne application and copy data management pioneer Actifio increased its integration with Oracle apps to attract more database administrator customers.

Nimble first introduced its SmartStack reference architectures in late 2012 through a partnership with Cisco, and claims more than 200 SmartStack customers. The JD Edwards version consists of Nimble’s CS300 storage array, Cisco UCS Mini Server, Oracle VM, Oracle Linux, Oracle Database, and JD Edwards EnterpriseOne 9.1.

Nimble’s previous SmartStack flavors include VDI for Citrix and VMware, Microsoft Critical Applications, Oracle Critical Applications, Desktop/Server Virtualization, Server Virtualization/Private Cloud and Data Protection with CommVault.

Radhika Krishnan, Nimble VP of product marketing and alliances, said the JD Edwards SmartStack came about because more enterprises are starting to deploy Nimble storage, and JD Edwards can be a tricky app to size correctly.

“Sizing tends to be challenging, depending on the number of end users you have,” she said.

Actifio used the conference to show off its expanded support for Oracle applications, allowing it to become an Oracle Platinum Level partner. The expanded support for Actifio CDS includes Oracle Database 12c, RMAN and Exadata products.

The value of Actifio’s software is that it allows organizations to use one copy of data for production, backup, test/development or any other application.

Actifio reps said they modified their platform’s workflows to enable DBAs to automate the data flow, with the help of RESTful APIs.

Oracle DBAs can use the automated workflow to provide copies of their databases to developers in minutes, according to Actifio senior director of global marketing Andre Gilman.

“I call it RMAN on steroids,” he said. “The old school way can take days to weeks, and even with newer technologies it takes hours.

“You can create a live clone of a database and update a virtual image to all your developers at the same time. You don’t have to repeat the process as part of daily maintenance. You put it in as one workflow and it’s all automated.”

Actifio director of product marketing Chris Carrier said the integration came from months of co-development work with Oracle. He said Actifio uses RMAN to do its changed-block tracking and built its LogSmart log management system specifically around Oracle, although it works with other databases. “If you show Oracle DBAs a different way to manage data, they get nervous. But if you’re leveraging RMAN, they like that,” Carrier said.


September 30, 2014  4:04 PM

EMC’s XtremIO 3.0 adds compression with disruptive upgrade for existing users

Carol Sliwa
Storage

EMC Corp. made available the 3.0 release of its XtremIO all-flash array today with new inline compression capabilities and performance improvements – but existing customers who want the software upgrade need to prepare for significant disruption to their production environments.

Josh Goldstein, vice president of marketing and product management for EMC’s XtremIO business unit, confirmed that users will need to move all of their data off their XtremIO arrays to do the upgrade and then move it back onto the system once the work is complete.

EMC’s XtremIO division has taken some heat on the disruptive – and some say “destructive” – nature of the upgrade, especially in view of the company’s prior claims that the product supported non-disruptive upgrades (NDU) of software.

Goldstein said the company decided to make an exception for the 3.0 upgrade based on customer input about the inline compression capabilities, which he claimed could double the usable capacity of an XtremIO array in many cases.

“This was a choice that was made, and it was not an easy choice,” said Goldstein. “We could have delayed the feature. Originally we were planning to put this in later in the roadmap. If we had chosen to, we could have tied this to another hardware release, and it would have been something that existing customers could never take advantage of. Our customer base told us emphatically that that was not what they wanted.”

Goldstein said that EMC will provide, at no cost to customers, the option of professional services and extra XtremIO “swing” capacity, if necessary, to ensure that they have an equivalent system on which to put their data while the upgrade is taking place.

The disruptive nature of the 3.0 upgrade came to light recently through an “XtremIO Gotcha” blog post from Andrew Dauncey, a leader of the Melbourne, Australia, VMware user group (VMUG). Dauncey wrote: “As a customer with limited funds, this is the only array for a VDI project, where the business runs 24/7, so to have to wipe the array has massive impacts.” He said a systems integrator had offered a loan device to help with the upgrade.

Dauncey worked as a systems engineer at a public hospital in Australia at the time of his initial post on Sept. 14. He has since gone to work for IBM as a virtualization specialist.

In a blog post last Sunday, Dauncey noted EMC’s “marketing collateral” that advertised “non-disruptive software and firmware upgrades to ensure 7×24 continuous operations,” and he accused EMC of “false advertising” prior to the release of the updated 3.0 firmware.

Goldstein said, “The releases that we’ve had from the time the product went GA up until now were all NDU. The releases that we have going forward after this point will all be NDU as well.”

Chris Evans, an IT consultant who writes the blog “Architecting IT,” said via e-mail that HP upgraded its platform to cater to flash, and SolidFire released a new on-disk structure in the operating system for its all-flash array, without disruptive upgrades. He said what’s surprising in the XtremIO case is “that EMC didn’t foresee the volume of memory to store metadata only 12 months after their first release of code.”

Chad Sakac, a senior vice president of global systems engineering at EMC, shed some light on the technical underpinnings of the upgrade through his “Virtual Geek” personal blog, which he said has no affiliation to the company. He said the 2.4 to 3.0 upgrade touches both the layout structure and metadata indirection layer, and as a result, is disruptive to the host. He pointed to what he said were similar examples from “the vendor ecosystem.”

Goldstein confirmed that the block size is changing from 4 KB to 8 KB, but he said the block-size change is not the main reason for the disruptive upgrade. He said it’s “all these things taken together” that the company is doing to both add compression and improve performance.

“We already had inline deduplication in the array, and that means that you have to have metadata structures that can describe how to take unique blocks and reconstitute them into the information that the customers originally stored,” Goldstein said. “When you add inline compression, you have to have similar metadata information about how to reconstitute compressed blocks into what the customer originally stored. Those kinds of changes are things that change the data structures in the array, and that’s what we had to update.”
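A toy sketch shows why that metadata change is structural. In the content-addressed scheme below (an illustration, not EMC’s actual data structures), adding compression means each mapping entry must describe how to reconstitute a compressed block rather than simply point at a stored one:

    import hashlib
    import zlib

    block_store = {}   # fingerprint -> compressed unique block
    metadata = {}      # logical address -> (fingerprint, original length)

    def write_block(addr, raw):
        fp = hashlib.sha1(raw).hexdigest()        # content fingerprint for dedupe
        if fp not in block_store:
            block_store[fp] = zlib.compress(raw)  # store each unique block once, compressed
        metadata[addr] = (fp, len(raw))           # entry now covers decompression, too

    def read_block(addr):
        fp, raw_len = metadata[addr]
        data = zlib.decompress(block_store[fp])   # reconstitute what the host wrote
        assert len(data) == raw_len
        return data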

Goldstein said customers should not have to endure an outage in the “vast majority of cases.” He claimed that XtremIO is “overwhelmingly used” in virtual environments, where moving virtualized workloads is not difficult. He mentioned VMware’s Storage vMotion and EMC’s PowerPath Migration Enabler as two of the main options to help, but said there are others.

Customers also may choose to remain on the 2.4 code that EMC released in May. Goldstein said that EMC will continue to provide bug fixes on the prior 2.4 release for “quite a long time.”

“There’s nothing forcing them to upgrade,” he said.

Craig Englund, principal architect at Boston Scientific Corp., said EMC contacted Boston Scientific’s management about the disruptive upgrade in the spring. At the time, the IT team already had a loaner array for test purposes, and they asked to keep it longer after learning the 3.0 upgrade was destructive.

“It reformats the array. It’s destructive. It’s not just disruptive,” Englund said. “You have to move all of your data off the array for them to perform the upgrade.”

But, Englund said the team can “move things around storage-wise non-disruptively” because the environment is highly virtualized through VMware. He said he’s willing to go through the inconvenience to gain the ability to run more virtual desktops and SQL Server databases on the company’s existing XtremIO hardware. Early tests have shown a 1.9 to 1 capacity improvement for the database workloads and 1.3 to 1 for VDI, he said.

“They could have said, ‘If you want these new features, it’s coming out in the next hardware platform, and you’ll have to buy another frame to get it.’ But, they didn’t, and I think that’s great,” Englund said. “To try to get this out to all of the existing customers before they get too many workloads on them, I think, was considerate.”


September 30, 2014  2:28 PM

All those video cameras can be a boon for storage companies

Dave Raffo
Storage

Whenever Big Brother is watching, there is a storage vendor eager to store whatever Big Brother is seeing. And today, Big Brother is watching more places than ever before.

The video surveillance market consisting of cameras, software, DVRs, storage and other hardware is expected to reach $26 billion by 2018 and is growing twice as fast as the overall IT market, according to market research firm IHS. The main accelerators of that growth projection are the common use of surveillance in markets such as government, city surveillance and transportation, and the transformation of video from analog to more capacity-hungry digital.

So it’s no surprise that storage stalwarts EMC and Seagate are pushing hard into video surveillance.

EMC today launched a video surveillance practice, which includes a VNX array and partnerships tailored to the market. The VNX-VSS100 is configured for video surveillance cameras on the edge, with 4 TB nearline SAS drives and a mix of memory and connectivity to handle video files. The VNX-VSS100 comes in 24 TB and 120 TB configurations, and has been validated with video surveillance software and cameras, according to Michael Gallant, senior director of EMC’s video surveillance practice.

EMC has also tested its Isilon scale-out NAS arrays for core storage of video surveillance data in the data center. EMC has tested its storage with surveillance technology vendors such as Axis, Genetec, Milestone and Verint. Its video surveillance distributor and integrator partners include Avnet, Ingram Micro, and ScanSource.

Gallant said EMC has been in the video surveillance market for eight years but the VNX-VSS100 is its first main storage platform built specifically for the market.

He said storage is the fastest growing part of the video surveillance market, and is expected to be around $3 billion in 2016.

“This is one of the most storage intensive application workloads,” Gallant said. “Governments are requiring longer retention of video, and the data collected is considered more valuable now. Organizations are putting a lot of edge storage devices to cover subway systems, railway stations and bus stations. There is a need for highly available high performance storage at the edge, and that data is being brought back to the core.”

Seagate today launched the Seagate Surveillance HDD, a hard drive available in 1 TB to 4 TB capacities, with 5 TB and 6 TB versions expected by the end of the year. The drive includes Seagate Rescue services, which the vendor said can typically restore data within two weeks with a more than 90 percent data recovery success rate. The drive is designed for the large streaming workloads used in video surveillance, and has a one-million-hour mean time between failures (MTBF) rating to keep it in the field longer.
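For context, an MTBF spec translates into an annualized failure rate. A quick back-of-the-envelope conversion (a standard approximation, not a Seagate-published figure):

    # A 1,000,000-hour MTBF implies roughly a 0.9 percent annualized
    # failure rate for a drive running 24/7.
    HOURS_PER_YEAR = 24 * 365.25
    MTBF_HOURS = 1_000_000

    afr = HOURS_PER_YEAR / MTBF_HOURS
    print(f"Implied annualized failure rate: {afr:.2%}")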

