Storage Soup

August 12, 2016  3:22 PM

Solid-state drives bulk up for capacity

Dave Raffo
Samsung, Seagate

SANTA CLARA, California — Solid-state drives have been much faster than hard disk drives from the start, and now they’re dwarfing HDDs in capacity too.

At Flash Memory Summit this week, Seagate demonstrated a 60TB 3.5-inch SAS drive, and Samsung said it would have a 32TB 2.5-inch SAS drive out in 2017 and 100-plus TB SSDs by 2020.

The largest capacity enterprise drive out now is Samsung’s 16TB drive, which recently began showing up in NetApp all-flash arrays and Hewlett Packard Enterprise 3PAR arrays.

Samsung’s large drives are based on its 512-Gb V-NAND chip. The vendor stacks the 512-Gb chips in 16 layers to create a 1 TB package, and combines 32 of those packages into the 32TB SSD. Samsung points out its 32TB drive will enable greater density than Seagate’s 60TB SSD because 24 2.5-inch drives fit into the same space as 12 3.5-inch SSDs.
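The arithmetic behind those figures is easy to check; a quick sketch using only the numbers Samsung stated:

```python
# Samsung's stated building blocks: 512-Gb V-NAND dies, 16 per package.
die_gb = 512                    # one V-NAND die, in gigabits
dies_per_package = 16           # stacked in 16 layers

package_tb = die_gb * dies_per_package / 8 / 1024   # Gb -> GB -> TB
drive_tb = package_tb * 32                          # 32 packages per SSD
print(package_tb, drive_tb)     # -> 1.0 32.0

# Samsung's density claim: 24 2.5-inch bays vs. 12 3.5-inch bays
print(24 * 32, 12 * 60)         # -> 768 TB vs. 720 TB in the same space
```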

Seagate will own the capacity crown for a while if it gets its 60TB SSD to market before Samsung pushes past 32TB.

Seagate senior director of product management Kent Smith said he expects the 60TB drive to be available within a year. He said the drive will enable active archives. “Take a social media site with a lot of photos that people need to access quickly,” he said. “People hate waiting. This is for when you need lots of capacity but you need it to respond quickly.”

SSDs are already making 15,000 RPM HDDs scarce and relegating 10,000 RPM drives to servers. With the larger drives, SSDs can also move into traditional capacity workloads.

“Flash for bulk data becomes attractive in places where data center space is limited,” said DeepStorage consultant Howard Marks.

HDD giant Seagate is trying to show it is serious about SSDs. Its main spinning disk rival Western Digital has invested heavily in flash, including its $17 billion acquisition of SanDisk completed earlier this year. Seagate has been more active on server-side flash (it also launched new Nytro NVMe cards at FMS) but has been slow to embrace enterprise SSDs.

“It’s a surprise to me that Seagate hasn’t taken its dominance in hard drives and moved that to SSDs,” Objective Analysis analyst Jim Handy said during a flash market update at FMS.

Samsung also had more products to talk about than big SSDs. The vendor said it expects to release an ultra-low-latency Z-SSD and launch a 1TB ball grid array (BGA) SSD in 2017. Ultra-thin BGAs are for notebooks and tablets, but the Z-SSD will be used for enterprise systems running applications such as real-time analysis. Samsung senior SSD product manager Ryan Smith said the first Z-SSD product will be 1TB, with larger capacities planned.

One area Samsung is in no rush to be first in is quad-level cell (QLC) SSDs that store 4 bits per NAND cell. While other vendors said they would have QLC in 2017 or 2018, Samsung’s Smith said he sees no reason to hurry past triple-level cell (TLC) flash.

“We feel strongly that TLC is the right strategy,” he said. “What do you gain from QLC? We decided what we’re currently offering is the best choice.”

August 12, 2016  6:32 AM

Cloudian and AWS team up for on-premises cloud storage

Sonia Lelii

Cloudian and Amazon Web Services are now offering a service that allows customers to use the Cloudian HyperStore hybrid storage offering, which stores data locally but leverages Amazon S3 object storage.

AWS manages the usage tracking and billing for customers.

The service targets applications and data that customers want to keep on-premises while operating in a hybrid cloud mode, said Paul Turner, Cloudian’s chief marketing officer. That kind of data is stored behind the organization’s firewall using the S3-compatible HyperStore software.

“What is different here is you can procure it from the Amazon marketplace. What we have done is implemented a service where you can go (to the AWS) marketplace and sign up for the S3 service and do it locally,” Turner said.

“It’s in the customer data center and as the storage is consumed, you pay as you go and all the billing is done through Amazon S3,” he said. “It’s an OPEX spend which is unusual because up until now customer data center solutions are a CAPEX spend.”

The service is a hybrid cloud storage offering, so customers can also use HyperStore to tier data into the public cloud, either to S3 or Amazon Glacier. HyperStore is an S3-compatible object storage product. The service is currently available in AWS regions across the United States and EMEA.

“As we go forward we will roll it out in other regions,” Turner said.

The cost is three cents per gigabyte, based on average usage.
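Because HyperStore speaks the S3 API, applications written against Amazon S3 can point at the local cluster simply by overriding the endpoint. A minimal sketch using boto3; the endpoint URL, credentials and bucket name are hypothetical, not values Cloudian publishes:

```python
import boto3

# Hypothetical on-premises HyperStore endpoint and credentials; the real
# values would come from the service signed up for in the AWS Marketplace.
s3 = boto3.client(
    's3',
    endpoint_url='https://hyperstore.example.internal',
    aws_access_key_id='LOCAL_ACCESS_KEY',
    aws_secret_access_key='LOCAL_SECRET_KEY',
)

s3.create_bucket(Bucket='onprem-archive')
s3.put_object(Bucket='onprem-archive', Key='reports/q3.csv', Body=b'...')

# Pay-as-you-go math at the quoted rate: 10 TB of average usage
# = 10,240 GB * $0.03/GB = $307.20, billed through Amazon.
```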

“Customers have been asking for this and one thing Amazon does really well is respond to customers,” Turner said. “They will build what is needed.”

August 11, 2016  7:48 PM

Hubstor Microsoft public cloud archive goes cool, deep

Garry Kranz

Hubstor is fine-tuning the Microsoft Azure-based cloud archive platform it launched in July. The Ontario, Canada, startup introduced CoolSearch, which it bills as searchable Microsoft public cloud-integrated deep storage for enterprises that must retain inactive data indefinitely.

Hubstor’s standard self-service active archive lets users access and share archived data stored in Microsoft public cloud storage. Hubstor’s role-based access controls are integrated with Microsoft Active Directory for user authentication.

Rather than targeting knowledge workers generally, CoolSearch is aimed at privileged user groups that control access permissions. The idea is to enable corporate legal or security teams to quickly spin up high-volume, low-cost searches of unstructured data related to compliance, defensible data deletion or e-discovery.

The CoolSearch data-aware archive is an isolated tenant that resides in Hubstor’s Azure cloud or in a customer’s Microsoft public cloud account.  CEO Geoff Bourgeois touts CoolSearch as an alternative to legacy approaches to searching discoverable storage.

“We’re responding to demand from organizations that don’t care about end user access. They just need searchable, fully managed cool storage for investigations, compliance, and litigation activity,” Bourgeois said.

After a query is run, CoolSearch deploys the results in Microsoft Azure cool Blob storage, Microsoft’s public cloud tier for infrequently accessed data. Hubstor scales down a CoolSearch search cluster once indexing is finished. As with its dedicated cloud archive service, Hubstor CoolSearch is available as a monthly subscription, with pricing based on consumption of Microsoft public cloud resources.

Hubstor provided a pricing chart based on a 100 TB CoolSearch cluster with triple redundancy, 25 TB of content indexing and 3% egress. Depending on the search cluster and its activity level, the vendor claims search indexing costs range from 5 cents to 9 cents per GB. The Microsoft public cloud CoolSearch tenant can be switched to an inactive state to reduce costs when it’s not in use.
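Taken at face value, the quoted range implies roughly the following for the indexed portion of that example cluster; this assumes the per-GB figure applies to the 25 TB of indexed content, which the pricing chart does not spell out here:

```python
# Assumption: the 5-9 cents/GB range applies to the 25 TB of content indexing.
indexed_gb = 25 * 1024
low, high = 0.05, 0.09   # dollars per GB

print(f"${indexed_gb * low:,.0f} to ${indexed_gb * high:,.0f}")
# -> $1,280 to $2,304 under that assumption
```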

The CoolSearch managed service includes automatic data mapping to orphan users. PST splitting and optional deep processing aid discovery of stored Microsoft Outlook PST files. Policy-based index scoping controls which data gets ingested in a full-context indexed search.

CoolSearch discovery searches accept keywords, wildcards, proximity, Boolean, boosting, grouping, fuzziness and regular expressions. Search restrictions include location, tags, active or orphan users, groups or data owners. Options include full-content search or configured metadata fields. Full-text searches use hit highlighting, paging, sorting and relevancy to rank results. CoolSearch also allows customized metadata searches.
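HubStor doesn’t publish its query grammar in this post, but that feature list reads like a Lucene-style syntax. A hypothetical query combining several of those operators might look like this; the field names are invented for illustration:

```python
# Hypothetical Lucene-style CoolSearch query; field names are invented.
query = (
    '("wire transfer"~5 AND (invoice OR invoic~1)) '  # proximity, Boolean, fuzziness
    'AND owner:jsmith* '                              # wildcard on a metadata field
    'NOT tag:privileged'                              # restriction by tag
)
print(query)
```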

August 11, 2016  6:46 AM

SCSI trade group claims new SAS has pluses over NVMe/PCIe

Carol Sliwa

NVMe and PCIe solid-state drives (SSDs) may be a hot topic at this week’s Flash Memory Summit, but the SCSI Trade Association is trying to remind everyone that new serial-attached SCSI (SAS) technology is on the way.

Rick Kutcipal, president of the SCSI Trade Association and product planner at Broadcom, said he expects the upcoming “24 Gigabits per second” (Gbps) SAS device-connect technology – which actually has a maximum bandwidth of 19.2 Gbps – to see its first use with SSDs.

“The biggest advantages will be in solid-state memory,” Kutcipal said.

He said the SCSI Trade Association hopes to hold its first plugfest for so-called “24 Gbps” SAS in mid-2017. He expects host bus adapters (HBAs), RAID cards, and expanders to support the new SAS technology in 2018, with server OEM products to follow in 2019.

Kutcipal claimed the 19.2 Gbps bandwidth would have a 21.5% per-lane performance advantage over non-volatile memory express (NVMe) running on top of PCI Express (PCIe) 4.0. The maximum bandwidth for single-lane PCIe 4.0 is 15.8 Gbps, he said.

SAS typically uses one lane to the drive, and enterprise NVMe SSDs typically use four-lane PCIe, Kutcipal acknowledged. Four-lane PCIe would obviously be faster than single-lane SAS.

But Kutcipal said, “The lanes are not free. [They’re] actually very expensive, so the comparison has to be per lane. SAS can go x2 or x4 [lanes] to the drive.”
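Kutcipal’s 21.5% figure falls straight out of the two per-lane numbers he cited:

```python
sas_gbps = 19.2     # effective per-lane bandwidth of "24 Gbps" SAS
pcie4_gbps = 15.8   # maximum single-lane PCIe 4.0 bandwidth

print(f"{(sas_gbps / pcie4_gbps - 1) * 100:.1f}%")   # -> 21.5%
```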

SAS uses the small computer system interface (SCSI) command set to transfer data between a host and a target storage device. SCSI was developed 30 years ago when hard disk drives (HDDs) and tape were the primary enterprise storage media. Manufacturers have continued to use serial-attached SCSI as a drive-connect with faster SSDs.

The SCSI Trade Association’s efforts to promote a new SCSI Express (SCSIe) interface to run SCSI commands over PCIe have largely fallen flat in comparison to the momentum behind NVMe with PCIe-based SSDs.

The NVM Express industry consortium developed NVMe as a lower-latency alternative to SCSI. NVMe streamlines the register interface and command set for use with faster PCIe-based SSDs and post-flash technologies, such as Intel-Micron’s 3D XPoint.

“SAS is inherently scalable, and NVMe is not,” Kutcipal said. “NVMe will scale to tens of devices, and it’s pretty arduous scaling, while SAS can go to thousands of devices. And there are arrays out there today that are thousands of devices.”

Kutcipal said NVMe cannot solve PCIe’s scaling challenges.

“The limitation in the scalability of NVMe as a device connect is really inherent in PCIe, not in NVMe,” he said. “That’s a big fundamental limitation of NVMe. It relies on PCI Express as its transport in the device connect world.”

SAS can serve as a device/drive connect as well as a storage networking technology. But Kutcipal said the dominant role for SAS is connecting a host bus adapter (HBA) or RAID card to an SSD or hard disk drive (HDD). SAS has distance limitations for storage networking, limiting its use to SANs inside the data center, he said.

The upcoming SAS specification has two parts: the SAS-4 physical layer and the SAS Protocol Layer (SPL)-4. The SPL-4 specification is expected to be complete and ready for use later this year, according to Kutcipal. He said SAS-4 would lag SPL-4 by a quarter.

In addition to the speed bump, new features on the way with next-generation SAS include Forward Error Correction, to ensure data integrity, and continuous adaptation, to enable the SAS transmitter to operate optimally, even if the temperature or operating voltage changes, Kutcipal said.

August 9, 2016  6:54 AM

HCI vendor Pivot3 reports more customers using more apps

Dave Raffo
Hyper-convergence, Pivot3

Pivot3 more than doubled its revenue in the first half of 2016 over 2015, which its CEO attributes to customers buying its hyper-converged appliances as a platform rather than for single applications.

Pivot3 CEO Ron Nash said Pivot3’s revenue increased by 103% over the past six months as it added more than 400 customers. That includes customers Pivot3 added through technology it acquired when it merged with flash storage vendor NexGen Storage in January. But Nash said revenue from NexGen made up less than 10% of Pivot3’s revenue in the quarter.

The bulk of the growth came from customers expanding their hyper-converged workloads. Nash said until the last six months or so, almost every Pivot3 system was used for a single application. But customers are now adding other apps to their hyper-converged appliances, and new customers are buying hyper-converged for more than one app from the start.

“Once customers start using it, they say ‘This platform stays up, it’s easy to operate and has a small footprint,’ and then they start loading more applications on it,” Nash said.  “That’s the big change we’re seeing. Enough people have tried hyper-converged for a single app, and are now starting to buy it as a platform.”

He said 28% of Pivot3’s new sales in the first half of 2016 were for multiple applications from the start. The average spend of customers with multiple use cases is more than 500% higher than that of customers with a single data center application use case. He pointed to a customer in the public transit industry with 6PB of data on 250 nodes.

The most common applications Pivot3 customers run are virtual desktops, backup, video surveillance and databases. Nash said the integration of NexGen’s quality of service with Pivot3’s hyper-converged appliances should prove particularly useful for multiple applications.

Despite the spike in sales, Nash said Pivot3 still rarely competes head-to-head with other hyper-converged products. He said three-quarters of Pivot3’s deals are against traditional server and storage products. The two best known hyper-converged products – Nutanix’s NX appliances and VMware Virtual SAN (VSAN) software — don’t show up in many competitive deals but do have an impact on Pivot3 by creating market awareness.

“Nutanix is out there spending tons of money educating the market on hyper-converged infrastructure, which is fantastic for us,” Nash said. “I hope they keep advertising.”

As for VMware, Nash said he suspects it has a lot more VSAN customers than actual sales. “VMware doesn’t quote revenue, they quote customer numbers,” he said. “That’s what you say when you’re giving it away.”

Pivot3 also added Bill Stover as chief financial officer. Stover spent 18 years at Micron Technology, serving as vice president of finance and CFO of the public company. Nash said Stover’s background with a public company will help Pivot3, still a private firm, grow into a more mature company.

August 5, 2016  9:14 AM

Pokémon Go’s lessons for storage pros

Dave Raffo
Pokemon GO

The Pokémon Go craze – mainly its augmented reality capability and server crashes – contains lessons for storage administrators.

Pokémon Go demonstrates how next-generation applications can drive cloud adoption as well as the pitfalls of handling that rapid adoption, according to Varun Chhabra, director of product marketing for EMC’s Advanced Software Division.

“A lot of the applications we use today already use geo-location,” Chhabra said. “What is interesting about Pokémon Go is the scale of usage when combined with geo-location tracking and data. That makes it especially challenging. Tens of millions of people are playing it, and the numbers are still going up.”

Chhabra said while Pokémon Go developer Niantic has not disclosed its back end or storage infrastructure for the game that is attracting millions of users, it has clearly mastered the use of location-based applications. At the same time, it has been plagued by server crashes – delaying the launch of the game in Japan – and security issues that suggest it is growing too fast for its own infrastructure to keep up.

“When we talk about cloud-native apps, the assumption is, everything will work out OK if you have the infrastructure,” he said. “But you still need to manage data, manage the scale of users and figure out where the bottlenecks are.”

There is speculation that Niantic is using a NoSQL database or PostgreSQL as its back end and Google App Engine for its Platform-as-a-Service (PaaS) layer. But it has suffered server crashes that cannot be traced to any public cloud problems.

“It seems like they’re using the public cloud today, but even then they’ve had a fair share of outages even when there have been no outages in the public cloud,” Chhabra said. “So you can still have challenges with the public cloud. It’s how you write the application, and how you’re handling access for an avalanche of data.”

Chhabra said commercial enterprise application developers can copy Pokémon Go’s success. For instance, retail stores can create apps to show shoppers in a store where a specific item is located. Or real estate agencies can develop an app with pop-ups showing which houses are for sale, where they are located, and their specs. These applications would tap into data that already exists.

“It should be easy to do, now that people are more comfortable holding up their screens without being embarrassed,” Chhabra said. “It’s more about creating an immersive user experience.”

He pointed to existing storage technologies, such as object storage and data lakes that use analytics, as tools for creating these immersive applications. But the development process is different from what IT organizations are used to.

“You can’t throw the same approach at building an application for a geo-location mobile app as you do for traditional apps,” Chhabra said. “A lot of customers we talk to are talking about building apps from the ground up and learning how to use microservices.

“What is your storage platform doing for you natively to relieve the burden on developers? We’ve seen way too many examples of applications that don’t scale, and they crash the servers. Most businesses don’t expect to scale apps this fast, but they still have to test. Pokémon gets a pass, but most businesses don’t.”

August 4, 2016  6:02 PM

EMC container plugin supports any block storage

Garry Kranz

EMC has contributed an open source Apache Mesos container volume driver that supports any network-attached block storage system equipped with a Docker plugin, including storage from EMC competitors.

The EMC container plugin integration for Docker is a joint project of the Apache Software Foundation and EMC code, part of EMC’s Emerging Technologies Division. It builds on previous EMC container initiatives. The Docker Volume Driver Isolator module exposes native Docker functionality through a command line interface. It is part of the Apache Mesos distribution released in July.

“We’re making it possible for the community to do multi-tiered persistent storage within Docker, which up to now has been a struggle,” said Josh Bernstein, a vice president at EMC code.

Mesos orchestrates deployment of containers on premises or in cloud storage. The Apache Mesos cluster manager presents abstracted data center compute, memory and storage in an aggregated resource pool. Mesos resides in the kernel to isolate resources as applications are shared across a distributed framework.

Mesos lets users create a persistent volume to run a specific task from reserved disk. The volume persists on a node independently of the task’s sandbox and is returned to the orchestration framework when the task is complete.  If necessary, new or related tasks launch a container that consumes resources from the previous task. Docker recommends Apache Mesos as an  orchestration layer to implement large clusters of storage containers.

EMC’s container module communicates directly with Docker volume plugins, allowing developers to request a persistent volume from any block storage running under Mesos. Mesos then passes the request to the EMC module, which searches available storage to identify the volume and deliver it to the destination container host.

“Before this feature, while users could use persistent volumes for running stateful services, there were some limitations. First, the users were not able to easily use non-local storage volumes. Second, data migrations for local persistent volumes had to be manually handled by operators. The newly added Docker volume isolator addresses these limitations,” according to an Apache Software blog posted July 27.
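For concreteness, a rough sketch of what such a volume request can look like under Mesos 1.0, which enables the feature via the agent’s `--isolation=docker/volume` flag. The JSON is shown here as a Python dict; the driver and volume names are illustrative, not EMC specifics, though REX-Ray is EMC’s own open source Docker volume driver:

```python
# Sketch of a Mesos ContainerInfo volume using the docker/volume isolator
# (agent started with --isolation=docker/volume). Names are illustrative.
volume = {
    "container_path": "/data",          # mount point inside the container
    "mode": "RW",
    "source": {
        "type": "DOCKER_VOLUME",
        "docker_volume": {
            "driver": "rexray",          # any Docker volume plugin
            "name": "postgres-vol-01",   # persists beyond the task's sandbox
            "driver_options": {
                "parameter": [{"key": "size", "value": "100"}]
            },
        },
    },
}
```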

Enterprise adoption of Docker is picking up, although several hurdles remain before containers are as ubiquitous as virtual machines. The Apache Mesos integration foreshadows EMC’s open source libStorage project. LibStorage is an extensible storage abstraction and provisioning framework presented as a common package for heterogeneous storage platforms and container runtimes.

August 4, 2016  9:25 AM

Ctera builds new data migration, ILM and security capabilities into its platform

Sonia Lelii

Ctera Networks recently unveiled enhancements to its Enterprise File Services Platform, including the ability to migrate data from an on-premises cloud to the public cloud without disrupting service, and support for information lifecycle management tools from Amazon S3 and NetApp StorageGRID object storage.

The platform also has been upgraded to support Security Assertion Markup Language (SAML) 2.0 to centralize identity management for single sign-on (SSO) capabilities to access files and backups.

The new data migration tool targets customers who have not yet deployed a cloud strategy, want to start on-premises and need the flexibility to eventually move to the public cloud.

The Ctera Enterprise File Services Platform integrates enterprise file sync and share, endpoint and data protection, along with branch and remote office storage. The new capability allows users to migrate workloads across storage nodes from any on-premises location to the public cloud.

“You can start with Ctera in (your) data center and then you can move to a public cloud. It moves very quickly. It provides flexibility,” said Jeff Denworth, Ctera’s senior vice president of marketing. “We built this tool because no one wants to be locked in. Now, you have a marketplace of options.”

The new ILM capability gives users a way to use the Ctera platform to tier high-performance workloads onto NetApp StorageGRID and infrequently accessed data to Amazon S3. It leverages ILM tools from Amazon S3 and NetApp StorageGRID to intelligently place files in cloud storage tiers according to their application profile.

Long-term archive and backup data can be directed to low-cost storage tiers, such as Amazon Web Services’ (AWS) S3 Standard-Infrequent Access tier (Standard-IA), while interactive data, such as enterprise file sync-and-share workloads, can be stored on storage tiers that offer more cost-efficient ingress and egress capabilities.
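Ctera hasn’t published how its tagging maps to the underlying tiers, but on the Amazon side this kind of placement is expressed with S3 lifecycle rules. A minimal boto3 sketch that transitions objects under an archive prefix to Standard-IA after 30 days; the bucket and prefix are hypothetical:

```python
import boto3

s3 = boto3.client('s3')

# Hypothetical bucket and prefix; moves archival objects to Standard-IA.
s3.put_bucket_lifecycle_configuration(
    Bucket='ctera-tier-demo',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'archive-to-ia',
            'Filter': {'Prefix': 'archive/'},
            'Status': 'Enabled',
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
            ],
        }]
    },
)
```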

“We tag data as it goes through our system as either interactive or archival, and then it’s diverted to the general-purpose tier like S3,” Denworth said.

On the security front, Ctera now supports identity federation over SAML 2.0 so users can rely on centralized corporate identity management and get SSO capabilities for file and backup access. In conjunction with support for this standard, Ctera is now compatible with leading SSO offerings, including Microsoft Active Directory Federation Services 2.0, Okta, OneLogin and Ping Identity.

“(The platform) has been integrated with modern identity tools so users sign in with SSO,” Denworth said.

The Ctera Enterprise File Services Platform enables enterprise IT to protect data and manage files across endpoints, offices, and the cloud – all within the organization’s on-premises or virtual on-premises cloud storage.

The platform is powered by Ctera’s cloud service delivery middleware that users leverage to create, deliver, and manage cloud storage-based services such as enterprise file sync and share, in-cloud data protection, endpoint and remote server backup, and office storage modernization.

August 2, 2016  8:13 PM

Seagate, WD see exabyte growth with high-capacity enterprise HDDs

Carol Sliwa
Seagate, Western Digital

Unit shipments of hard disk drives (HDDs) may be on the decline, but the exabytes that Seagate Technology and Western Digital are shipping with their high-capacity enterprise HDDs are spiking.

Seagate noted during its earnings call today that HDD storage capacity hit a record 61.7 exabytes (EB) during the fiscal fourth quarter, on the heels of 60.6 EB in Q2 and 55.6 EB in Q3. Average per-drive capacity soared to a record 1.7 TB in Seagate’s fiscal Q4, which ended on July 1.

Steve Luczo, Seagate’s chairman and CEO, said demand was stronger than expected from cloud service providers (CSPs) in the fourth quarter. He noted that, on a year-over-year basis, average per-drive capacity grew 29%. In fiscal 2016, Seagate shipped 233 exabytes, including 70 exabytes for its “business-critical” product line – a 28% increase over the prior year.

Western Digital last week claimed to achieve overall exabyte growth of 12% on a year-over-year basis, largely driven by shipments of capacity enterprise HDDs to enterprise customers, according to Michael Cordano, president and chief operating officer. He said the growth of WD’s capacity-focused enterprise product line was 47% thanks to the ongoing success of high-capacity helium-based HDDs.

WD last week reported revenue of $13.0 billion for its fiscal year, down 11% from the prior year’s $14.6 billion, and net income of $257 million for fiscal 2016. WD’s fourth-quarter revenue was $3.5 billion, and the company reported a $351 million loss.

Seagate Technology met or exceeded analysts’ expectations with $2.7 billion in revenue for its fiscal fourth quarter, largely driven by sales to cloud service providers. Seagate’s total revenue for fiscal 2016 was $11.2 billion, down 18.8% from the prior year’s $13.8 billion. Net income for the year was $248 million.

Both Seagate and Western Digital have been trying to diversify beyond their HDD businesses. WD last year acquired flash vendor SanDisk for $19 billion and object storage vendor Amplidata. Other past acquisitions include HDD competitor HGST, SSD maker sTec, all-flash array startup Skyera, PCI-flash vendor Virident Systems and flash-cache specialist VeloBit.

Seagate’s string of acquisitions includes Dot Hill Systems for $600 million last year, Avago’s LSI flash business in 2014 for $450 million and high-performance computing storage specialist Xyratex in 2013 for $374 million. Seagate sold off its EVault data protection business late last year to Carbonite for a mere $14 million in cash.

Luczo said Seagate completed the integration of Dot Hill and plans to launch converged storage products, including hybrid and all-flash arrays, later this year. He also noted that 12 TB helium near-line enterprise test units would be available this quarter for customer evaluation. Luczo said Seagate would refresh most of its high-volume capacity points over the next several quarters.

But Luczo cautioned that the growth rate for storage in the near term would likely fluctuate from quarter to quarter. He said the influence of the cloud service providers could be tricky to predict.

Near-line enterprise hard disk drives (HDDs) were hotter last quarter than Seagate anticipated they would be. Luczo said Seagate’s 8 TB enterprise HDD was the leading revenue SKU, as overall enterprise HDD revenue increased to 45% of total HDD sales. PC client shipments accounted for 25% of total HDD revenue.

Seagate said that although unit shipments of its HDDs have dropped 15% over the past five fiscal years, exabyte shipments have increased 112% and average capacity per drive has soared 133%. Luczo attributed the trends to the shift from client-server to mobile cloud architectures. He said most of the exabyte-scale growth relates to high-definition streaming content “where massive data ingest and sequential write operations” are critical.

Western Digital CEO Steve Milligan last week cited the transition to 3D NAND flash as a key near-term priority. He also noted that the company completed the alignment of the product and technology roadmaps for legacy WD, HGST and SanDisk products and opened a new wafer manufacturing facility in Japan with Toshiba.

WD expects 3D NAND wafer capacity to approach 40% of total NAND capacity by the end of 2017, according to Cordano.

Milligan said WD has been scaling down its HDD operations, in facilities and head count, to react to the decline in the HDD market. He said WD had taken out 20% of its facilities and 25% of its head count during the last two years, and plans further reductions of up to one-third.

July 29, 2016  2:05 PM

OwnBackup CEO: Stay safe in the cloud

Paul Crocetti
Cloud Backup

Your cloud data may not be as secure as you think.

No matter where your data lives, you should put the same level of thought and care into its protection, according to Sam Gutmann, CEO of cloud-to-cloud backup and restore vendor OwnBackup. He pointed to the recent Salesforce outage that resulted in lost data.

“It really raised awareness,” Gutmann said. “There’s a myth that if it’s in the cloud, it’s safe.”

OwnBackup offers products to back up Salesforce data, ServiceNow data and social media accounts. Gutmann said OwnBackup allows users to compare two snapshots to see what has changed or been deleted, and then restore the database back to the way they want it. The company’s vision is to become a single pane of glass for backup and protection of software-as-a-service and platform-as-a-service data stored in the cloud.
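OwnBackup’s interface isn’t shown here, but the compare-two-snapshots idea is easy to picture: diff two point-in-time exports keyed by record ID. A generic sketch, with data shapes invented for illustration rather than taken from OwnBackup’s product:

```python
# Two point-in-time snapshots, keyed by record ID (shapes are illustrative).
monday  = {"001": {"Name": "Acme", "Stage": "Prospect"},
           "002": {"Name": "Globex", "Stage": "Closed Won"}}
tuesday = {"001": {"Name": "Acme", "Stage": "Negotiation"}}

deleted = set(monday) - set(tuesday)
changed = {rid for rid in set(monday) & set(tuesday)
           if monday[rid] != tuesday[rid]}

print(deleted)   # -> {'002'}: candidates to restore
print(changed)   # -> {'001'}: fields to inspect or roll back
```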

OwnBackup plans to add support for another application this year, and will likely add a couple more next year, but has not determined which ones yet. Microsoft Office 365 and Google Apps are common applications supported by vendors that protect data in the cloud.

Gutmann said there is no hurry to expand the product because “Salesforce is huge.” Customers are responsible for their data in Salesforce, Gutmann said. Salesforce recommends customers use one of its “partner backup” products — which include OwnBackup — to ensure the safety of their data.

OwnBackup had customers affected by the outage, and they were able to restore their data.

Many companies are moving business-critical data to the cloud. But on-premises platform requirements and vulnerabilities are also present in the cloud. Take the example of an employee on his way out the door who deletes important files.

“That threat is there no matter where the data is,” Gutmann said.

To that end, Gutmann offered a few more general tips for cloud-to-cloud backup and restore:

  • “Backup’s nice, but it all comes down to recovery,” so use a product that is strong in both disciplines
  • “Make sure the vendor understands the intricacies of your data,” and recognizes how complicated your setup is
  • Test your backup — verify you have a product that works

OwnBackup, which has sales and marketing in the United States and research and development in Israel, claims about 300 customers, ranging from small businesses to large manufacturing companies and universities. Gutmann, who helped found Intronis (now part of Barracuda) in 2003 and has been in the backup field for 16 years, said OwnBackup has about 20 employees, but that number should be closer to 30 by the end of the year.
