Storage Soup


November 14, 2017  10:01 AM

Commvault GO: ‘Sully,’ Swan emphasize good data’s value

Dave Raffo
Data Management

We hear a lot of talk these days about machine learning and artificial intelligence. Those are hot and valuable technologies, but two speakers at Commvault GO last week highlighted the importance of human learning and genuine intelligence in using data.

Chesley “Sully” Sullenberger and polar explorer Robert Swan gave Commvault GO keynotes explaining how the proper use of good data can prove invaluable – and even save lives – without requiring analytics or any computers at all.

Sullenberger was well known to the Commvault GO audience for landing a US Airways airplane safely on the Hudson River in New York with 150 passengers aboard. He was widely hailed as a hero after that Jan. 15, 2009 event, and played by Tom Hanks in the movie Sully. Swan, a U.K. native, isn’t a U.S. household name, but he has traveled to both the North and South Poles, and remains an active adventurer at age 60.

Their talks fit the show’s theme of data management, even if the speakers relied on gut instinct rather than fancy data analytics to interpret information.

“I used data not in any specific way, but used data to frame my decision,” Sullenberger said of his famous flight. “So what couldn’t be a computational decision was more of an intuitive one. But it really wasn’t, it was totally a cerebral exercise.”

Yet it was a decision based on accurate information. It came from what Sullenberger knew about his airplane and from what more than three decades in the Air Force and as a commercial pilot taught him about flight and navigation.

Sullenberger considered his options within seconds when the plane lost its engines after striking a flock of geese. He took control of the plane from copilot Jeff Skiles, who had less experience with that type of aircraft.

Sullenberger then determined the plane with damaged engines could not make it to the nearest airports, LaGuardia in New York City or Teterboro in northern New Jersey. After deciding to land in the Hudson River, he knew the best spot would be between two water ferries. That way, their crews could rescue the passengers quickly enough in freezing water. Sullenberger calculated the best angle and speed to land the plane to keep the impact from destroying it.

He said his decisions were “based on having flown jets for years, and having managed the height and speed and total energy of jets very precisely for thousands of flights.”

Sullenberger said the airline offered no flight simulator training for water landings.

“The only training we ever got at water landing was a theoretical classroom discussion,” he told the Commvault GO attendees. “But because I knew not just what and the how, but the why, even in that situation I could set clear priorities. I learned that bad outcomes are almost never the result of a single fault, a single failure or a single error. Instead, they are the end result of a causal chain of events. I made sure when I saw these causal links in a chain begin to line up, I would intervene to break them.”

Pole walking, without electronic gadgets

Explorer Swan’s claim to fame is that he led teams that ventured on foot to both the South Pole and the North Pole.

In other words, “I’m the first person in history stupid enough to walk to both poles,” he said during Commvault GO.

Actually, stupid didn’t figure in either mission.

He relied on data culled from those who went before him, and from scientific agencies such as NASA before his dangerous treks. His team walked 900 miles – including the final 70 days without radio communication – before reaching the South Pole on Jan. 11, 1986. He nearly drowned because of unseasonable melting of Arctic ice before arriving at the North Pole on May 14, 1989.

But, like “Sully,” Swan had to work and survive without the benefit of real-time computer data analytics. Swan had no electronics to call on, and not even compasses work at the South Pole.

“Whatever limited data we had, we used to stay alive,” he said. Swan said his critical data came from using “the sun, a sextant and a watch. We knew if we make mistakes, we’re going to die.”

Swan and his son Barney are due to set off Wednesday on a 600-mile expedition to the South Pole using only renewable energy. The trip is expected to take eight weeks. They will carry solar panels to power NASA-designed ice melters that will give them water to drink and cook with.

“One day, NASA will use these ice melters on Mars,” Swan said.

Swan’s mission now is to help clean up Earth, which scientific data tells us is in danger.

“My target is to clean up 326 million tons of carbon before the end of 2025,” he said. “Our survival on Earth and the data people like NASA provide to protect us, we should take that seriously. Climate change is happening. How much we’re causing it, we don’t know yet. But as a survivor, I’m going to try and do something about it. Just in case.”

Sullenberger agreed that today more than ever, we need to heed science and facts.

“We have an obligation to be scientifically literate,” Sullenberger said. “In other words, you can’t use data if you don’t understand it. We have an obligation to be good citizens, which means that if we must make important decisions, we need to be capable of independent critical thought. And when we make important decisions, we must make them based on facts, not based on fears or falsehoods. And certainly not on big lies, even if they are told loudly and often.”

November 13, 2017  7:52 AM

Nutanix backup choices expand

Dave Raffo
Nutanix, Veritas, Veritas NetBackup

Options for Nutanix backup are growing for customers using the hyper-converged vendor’s AHV hypervisor.

At Nutanix’s European .NEXT user conference last week, backup software market leader Veritas pledged AHV support in NetBackup 8.1 and Comtrade Software updated its HYCU software built specifically for Nutanix backup.

Nutanix lists 10 data protection vendors that support AHV. Veritas, Veeam Software, Commvault and Comtrade have the greatest integration with AHV, according to Nutanix. Cohesity, Rubrik, Arcserve, Unitrends, Cloudian and Sureline Systems also support AHV.

Most Nutanix customers still use VMware hypervisors, but close to one-quarter of the customers have adopted the vendor’s KVM-based AHV hypervisor.

Veritas claims NetBackup support will enable faster backup and recovery of Nutanix virtual workloads through its protection technologies. Veritas supports AHV with its NetBackup Cloud Catalyst appliances that enable deduplication of data in the public cloud. NetBackup will also enhance the Nutanix backup process with its parallel streaming technology that backs up data across multiple hyper-converged nodes simultaneously and its CloudPoint cloud-based snapshots.

Comtrade Software said HYCU 2.0 for Nutanix backup will add support for VMware’s ESX hypervisor, Microsoft Exchange and DR across remote sites. Comtrade also said it plans support for Nutanix Acropolis File Services (AFS) in the first quarter of 2018. It will also become one of the first partners in the Nutanix Calm marketplace, allowing customers to download and install HYCU by clicking on a blueprint from the marketplace.

Comtrade launched HYCU in June to protect AHV virtual machines, and claims more than 20 paying customers.

HYCU 2.0 is planned for later in 2017. Support for ESX, which about 65% of Nutanix customers use, could broaden the HYCU customer base.

“ESX support is often asked for by customers,” said Subbiah Sundaram, Comtrade’s vice president of products. “Customers who want to migrate from ESX to AHV want staging areas. We’re adding that and making it easier for customers to experiment.”

Sundaram said HYCU will use Nutanix snapshots instead of VMware’s VADP to protect ESX.  He said that avoids VM stun, which is when I/O latency makes the VM unresponsive.

HYCU 2.0 also adds support for Exchange to its previous support of Microsoft SQL Server and Active Directory. It will enable mailbox-level recovery and allow customers to clone entire instances for test and development. Comtrade is also adding the ability to restore databases from one SQL Server to another.

For DR, HYCU will enable Nutanix customers to set up standard Nutanix Protection Domains for VMs at remote sites, set up DR replicas at a DR site and restore directly to the remote site from the replicas.

Comtrade will begin trials for AFS backup with the expectation of adding it to HYCU 3.0 in early 2018. Sundaram said HYCU will also add parallel stream backups, spreading the backup load across up to eight Nutanix nodes.
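For readers curious how parallel stream backups work mechanically, here is a minimal Python sketch of the general fan-out pattern: the VM list is split across nodes and each group is backed up over its own stream, capped at eight concurrent streams. The names and the per-node backup routine are hypothetical illustrations, not Comtrade’s actual implementation.

# Minimal sketch of parallel stream backup: VMs are split across nodes
# and each group is backed up through its own stream, up to eight at once.
from concurrent.futures import ThreadPoolExecutor

MAX_STREAMS = 8

def backup_via_node(node, vms):
    # Placeholder for a per-node backup stream; a real implementation
    # would snapshot each VM and copy its data to the backup target.
    print(f"{node}: backing up {len(vms)} VMs")
    return node, len(vms)

def parallel_backup(nodes, vms):
    # Round-robin the VMs across the available nodes.
    groups = {n: [] for n in nodes}
    for i, vm in enumerate(vms):
        groups[nodes[i % len(nodes)]].append(vm)
    with ThreadPoolExecutor(max_workers=MAX_STREAMS) as pool:
        return list(pool.map(lambda kv: backup_via_node(*kv), groups.items()))

parallel_backup(["node-1", "node-2", "node-3"], [f"vm-{i}" for i in range(12)])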


November 10, 2017  10:27 AM

Quantum CEO Gacek’s a goner after poor quarter

Dave Raffo
Quantum

Quantum CEO Jon Gacek is out following poor sales results for the data protection and scale-out storage vendor last quarter.

The Quantum board named director Adalio Sanchez as interim CEO. Chairman Raghu Rau said he will head a search for a permanent CEO, with the help of an executive headhunter firm.

Quantum Thursday reported revenue of $107.1 million for last quarter, down from $135 million last year and more than $15 million below Wall Street expectations. Quantum lost $7.9 million in the quarter compared to a $4.1 million profit last year.

The results prompted the Quantum CEO change. Gacek joined Quantum through the 2006 acquisition of rival tape vendor ADIC. He had been ADIC’s CFO and assumed that role at Quantum. He was promoted to COO in 2010 and became the Quantum CEO the following year.

Chairman Rau described the quarter as a disappointment “that fell short of all our expectations” and a “very eventful” quarter for Quantum.

The Quantum CEO change was hardly shocking considering moves the company made over the past eight months. After years of up-and-down quarters, Quantum agreed with demands from investor VIEX Capital Advisors to change the board last March, and Rau joined then. IBM veteran Sanchez and Marc Rothman joined the board in May, pushing Gacek off the board. Rau became chairman in August. Quantum added VIEX’s Eric Singer to the board Thursday.

After Rau became chairman, he, Sanchez, Rothman and Alex Pinchev formed a committee to conduct a strategic review of Quantum.

Sanchez said his work on the review gave him a head start as interim Quantum CEO.

“I am hitting the ground running,” said Sanchez, who spent 32 years at IBM and a year at Lenovo.

Rau said Quantum is “intensely focused on taking aggressive actions” to reduce cost and he predicted increased revenue and profitability over the next six months.

Sanchez said Quantum will cut around $35 million in costs over the next year. Quantum also secured $20 million in funding from TCW Direct Lending and PNC Bank to go with $170 million in funding from those lenders a year ago.

Sanchez said the board reviewed Quantum’s strategy, go-to-market model and cost structure. He described StorNext scale-out storage as Quantum’s growth engine and data protection as its profit engine. But while Quantum is looking for LTO-8 to give the tape products a boost, CFO Fuad Ahmad said the vendor must “reorient” its strategy for its DXi disk backup platform. He said Quantum will maintain its partnership with Veeam Software to integrate backup software on DXi and tape products, but will scale back development on the DXi data deduplication appliances.

“We are a small player in that market with less than three percent market share,” Ahmad said. “While it’s a fairly profitable business for us, it is not core to what we want to do long term.”

Sanchez said Quantum will build a software-defined storage business around StorNext and its Rook.io open source project to build cloud-native file, block and object storage.

“We will reposition our company over time as a modern software-defined provider as new products roll out,” Sanchez said.

Sanchez described his priorities over the next 90 days as “re-ignite the sales engine,” reduce costs and “execution, execution, execution.”

Product revenue last quarter slipped to $63.6 million from $88.6 million a year ago. Scale-out storage revenue of $33.8 million was down from $46.6 million. Disk backup fell from $18.7 million to $11.7 million, and tape automation slipped from $59.7 million a year ago to $52.2 million.

Quantum executives blamed the poor results partly on industry conditions and the failure to close large deals before the end of the quarter. They expect a bit of improvement this quarter, although revenue will still fall below last year’s results. For this quarter, Quantum forecast revenue of $120 million to $125 million compared to $133 million last year. Its six-month guidance is for revenue of $250 million to $260 million.


November 9, 2017  5:26 PM

SMB ransomware report: Attacks frequent, backups key piece

Paul Crocetti
Ransomware

Ransomware attacks on SMBs have increased, according to a recent survey, but backup and disaster recovery platforms can calm data protection fears.

An estimated 5% of SMBs worldwide fell victim to a ransomware attack from the second quarter of 2016 to the second quarter of 2017, according to the “State of the Channel Ransomware Report” released by backup and recovery vendor Datto. About 1,700 managed service providers (MSPs) serving more than 100,000 SMBs provided data for the ransomware report.

Ninety-seven percent of the MSPs report ransomware is becoming more frequent and 99% predict the frequency of attacks will continue to increase over the next two years.

Anxiety is rising. Among MSPs, 90% say they are “highly concerned” about ransomware, up from 88% in 2016, while 38% of SMBs say the same, up from 34% in 2016.

“There’s more of an awareness of ransomware and it being an epidemic,” Datto CTO Robert Gibbons said. However, he said, SMBs still lag well behind MSPs in their awareness of the threat.

While CryptoLocker remains one of the top ransomware strains, the Bad Rabbit virus caused problems globally in the last month.

SMBs need to understand that downtime is often the worst element of a ransomware attack on a business. Seventy-five percent of MSPs report their clients experienced “business-threatening downtime” after an attack.

On a positive note, reporting is increasing. SMB victims reported about one in three ransomware attacks to authorities, up from one in four incidents reported in 2016.

And fewer SMBs are paying the ransom, according to the report. In 2017, 35% of MSPs report SMBs paid the ransom, down from 41% in 2016. Of those that paid the ransom, 15% never recovered their data, according to the ransomware report.

“The word is getting out that if you pay the ransom, sometimes you get your data, sometimes you don’t,” Gibbons said.

A ‘multilayered portfolio’ of protection includes backup

Ransomware is getting smarter. About 30% of MSPs report a virus remained on an SMB’s system after the initial attack and hit again later. And one in three MSPs report ransomware encrypted an SMB’s backup.

So what are SMBs to do?

First of all, backup systems vary in complexity and strength. Copying files to a USB drive is one method, but not a great one. A comprehensive backup and recovery platform that follows the "3-2-1" rule (three copies of data, on two different media, with one copy off-site) is much more secure.
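The 3-2-1 rule is simple enough to verify programmatically. Below is a minimal Python sketch that checks a backup inventory against it; the inventory format is a hypothetical illustration.

# Minimal sketch: check a backup inventory against the 3-2-1 rule --
# at least 3 copies, on at least 2 media types, at least 1 off-site.
copies = [
    {"location": "primary-nas",  "media": "disk",  "offsite": False},
    {"location": "usb-drive",    "media": "usb",   "offsite": False},
    {"location": "cloud-bucket", "media": "cloud", "offsite": True},
]

def satisfies_3_2_1(copies):
    return (len(copies) >= 3
            and len({c["media"] for c in copies}) >= 2
            and any(c["offsite"] for c in copies))

print(satisfies_3_2_1(copies))  # True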

Backup and disaster recovery is the most effective protection, according to MSPs in the ransomware report, followed by employee cybersecurity training, anti-virus software, email/spam filters, patching applications and ad/pop-up blockers.

If backup and recovery is in place, 96% of MSPs report SMBs fully recover from ransomware, according to the report. And 95% of MSPs said they feel more prepared to respond to an SMB infection.

But ransomware protection goes beyond having just one safety element in place. For example, 94% of MSPs report ransomware successfully bypassed anti-virus software.

“As no single solution is guaranteed to prevent ransomware attacks, a multilayered portfolio is highly recommended,” the report said.

MSPs blamed a lack of cybersecurity training as the leading reason for a successful ransomware attack, followed by phishing emails and malicious websites/ads.

“Employees today are largely unprepared to defend themselves against these attacks,” the ransomware report said.

Gibbons said one type of education involves a company sending out a fake phishing email; anyone who clicks a link in it is diverted to ransomware training. Just one employee who clicks on a bad link, in a company of hundreds, can cause a business possibly irreparable harm from a ransomware attack.

“There are more tools available to up your minimum game,” Gibbons said.

SMBs need to stay on top of the issue, because attacks are constantly evolving. For example, in 2017, 26% of MSPs reported ransomware infections in cloud applications. Gibbons said he thinks cracking Salesforce is at the top of the attackers’ radar in their continuing quest to wreak havoc among SMBs.


November 8, 2017  6:26 AM

Nutanix Acropolis services expand for cloud, developer needs

Dave Raffo
Hyper-convergence, Storage

Like many storage and data center vendors, hyper-converged vendor Nutanix is taking the next steps to give its platform multi-cloud capabilities.

Nutanix today laid out its plans to add services for developers to its Enterprise Cloud OS software. These include a Nutanix Acropolis Object Storage Service and Acropolis Cloud Compute. The hyper-converged pioneer will also add a Nutanix App Marketplace to its Calm cloud application and orchestration service.

“The Nutanix roadmap is evolving, looking at public cloud services as a deployment model for applications,” said Greg Smith, Nutanix vice president of product and technical marketing. “We want our customers’ data center to operate like a public cloud. This is a continuation of the Nutanix journey to build an enterprise cloud that provides much of the same capabilities that customers expect from public cloud services, but in their own data centers.”

The new Nutanix Acropolis features will not be available until 2018. Smith said the marketplace will start in 2017 with 20 validated pre-defined app blueprints, and add “a significant number” soon after.

Nutanix will provide an Amazon Web Services S3-compatible API to help application development teams use Nutanix storage for on-demand object storage as they would use the public cloud. Smith said the Nutanix Acropolis Object Storage Service can store billions of objects in a single namespace.

“People want to write to S3 through a standard API,” Smith said. “We’ve embraced that interface. Now the Nutanix Cloud Storage OS can store and manage all those large unstructured data files with a single namespace.”
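In practice, an S3-compatible API means standard AWS tooling works against the on-premises store once the client is pointed at a different endpoint. Here is a minimal boto3 sketch of the idea; the endpoint URL and credentials are placeholders, not actual Nutanix values.

# Minimal sketch: standard AWS tooling against an S3-compatible
# on-premises object store, by overriding the endpoint URL.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    region_name="us-east-1",  # required by the client, ignored by most stores
)

s3.put_object(Bucket="dev-bucket", Key="builds/app.tgz", Body=b"payload")
obj = s3.get_object(Bucket="dev-bucket", Key="builds/app.tgz")
print(obj["Body"].read())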

Nutanix Acropolis Cloud Compute (AC2) consists of compute-only nodes that can run in a Nutanix cluster. AC2 nodes are for CPU-intensive applications such as in-memory analytics, large-scale web services, and Citrix XenApp. Most hyper-converged nodes include storage and compute. Nutanix already offers capacity-only storage nodes, but until now has not had compute-only nodes.

Smith said Nutanix will have several AC2 configuration options, and customers will still require a minimum of three storage nodes in a cluster. AC2 is built on Nutanix’s AHV hypervisor and will initially be available only on Nutanix-branded appliances. Smith said Nutanix hopes its OEM hardware partners Dell EMC and Lenovo will eventually make compute-only nodes available.

“This is to provide additional compute resources to support apps and services that require a lot of CPU but not storage with it,” Smith said. “The new compute resources will benefit application developers as well as infrastructure managers.”

The Nutanix App Marketplace will include applications defined via standards-based blueprints that developers can quickly consume in self-service fashion. These published validated blueprints will include developer tools such as Kubernetes, Hadoop, MySQL, Jenkins and Puppet. Nutanix customers can also publish apps on the marketplace to share them internally.


November 7, 2017  9:10 AM

Pivot3 Acuity jukes HCI sales, aims at cloud

Dave Raffo

Hyper-converged vendor Pivot3 said its Acuity appliance is significantly expanding its enterprise footprint, with one-third of its revenue coming from deals of $500,000 or more last quarter.

The private company today said its average sales price increased 25% and overall revenue increased 50% from the previous quarter. Pivot3 reported a record in million-dollar orders in the quarter. Now it seeks to expand deeper into the enterprise by tailoring its HCI software for cloud implementations and broadening its distribution strategy with partners Lenovo and Arrow Electronics.

The Pivot3 Acuity platform launched in April, supporting NVMe solid-state drives for performance and incorporating quality-of-service technology from the vendor’s 2016 acquisition of NexGen Storage.

Along with boosting performance on Pivot3 Acuity with NVMe, the vendor is concentrating on solving the problems of moving data in and out of the cloud. Pivot3 Acuity’s quality of service is designed to run multiple applications, which will help cloud customers. Pivot3 said deals supporting multiple use cases on its appliances more than doubled last quarter.

But data movement is another issue.

“It’s a long process, but there’s a massive economic gain if we get it right,” Pivot3 CEO Ron Nash said. “The cloud’s not monolithic. There are many clouds with many different characteristics.”

He said mastering the cloud requires a policy management engine, an orchestration engine and an analytics engine. Pivot3 has the policy management and has started with orchestration to move data in and out of the cloud. The analytics will determine if the policy decisions are working.

“It’s easy to say, ‘There is the goal line, that’s where we want to get to. Then let’s lay out the steps,’” Nash said. “If clouds let you spin up and spin down quickly and take peak of peaks type demand, that’s a valuable service and something you are willing to pay a lot for.”

Nash said it will take a few years to get all the pieces in place, but he considers Pivot3 ahead of the other HCI vendors. In the meantime, Pivot3 is expanding its distribution efforts.

Pivot3’s branded appliances run on Dell servers, but it also has an OEM deal with Lenovo and a channel partnership with Cisco. Pivot3 chief marketing officer Bruce Milne said Pivot3 considers Lenovo its key partner, and last month signed a distribution deal with Arrow Electronics to sell Pivot3 Acuity software on Lenovo. Milne said around 15% of Pivot3 revenue comes from software-only deals.

Pivot3 has a go-to-market strategy similar to Nutanix’s. Nutanix sells its own branded appliances complemented by OEM deals with Dell EMC and Lenovo and meet-in-the-channel deals with Cisco and Hewlett Packard Enterprise resellers. These types of partnerships make hyper-convergence full of coopetition. Dell EMC, Cisco and HPE all sell hyper-converged appliances with their own software, too. Dell EMC uses VMware vSAN software on PowerEdge servers, Cisco acquired Springpath for its UCS-based HyperFlex appliances, and HPE bought hyper-converged startup SimpliVity for its software.

“We see Cisco making a lot of noise, but no accounts except for the Cisco base,” Milne said. “HPE is starting to make noise, trying to differentiate its hardware by embedding SimpliVity software in ProLiant servers. Dell EMC is coming on strong, which I’m sure concerns Nutanix. I can’t count on a competitor as a supplier on my platforms.”


November 2, 2017  2:06 PM

SwiftStack objects to object storage label

Dave Raffo
Object storage, SwiftStack

SwiftStack’s leadership team appreciated being named a Visionary in Gartner’s recent Magic Quadrant for distributed file systems and object storage. The vendor even put out a press release celebrating Gartner’s inclusion of SwiftStack object storage in the report.

But SwiftStack’s chief marketer said the object storage neighborhood is not where the vendor wants to live anymore.

“We’re a software company, not an object company,” SwiftStack VP of marketing Mario Blandini said. “We just happen to have our own object storage system on premises.”

So Blandini admits SwiftStack’s cloud storage software, based on the OpenStack Swift object project, is indeed object storage. He just thinks that’s not chic. Or at least SwiftStack customers and potential customers don’t think SwiftStack object storage has the right ring to it.

“People say to us, ‘How can an object storage company be doing something cool? Don’t object storage companies all suck?’” Blandini said. “No one loves object storage because it doesn’t do enough to be transformative. There’s nothing wrong with NAS, why replace it?”

Blandini also wants to distance SwiftStack object storage from OpenStack, even if SwiftStack is the main contributor to the OpenStack Swift project. “SwiftStack is too often known as an OpenStack company,” he said. “In reality, we are a data management platform across multiple clouds.”

The multiple cloud part, more commonly known as multi-cloud storage, is what’s cool now. Ask just about any storage vendor out there, because we’re hearing that phrase a lot more these days.

“We like the concept of multi-cloud,” Blandini said. “Do you want to be locked into one cloud provider, or be able to put your data closer to the user?”

SwiftStack object storage includes a file system gateway on top of its native object storage software. Earlier this year SwiftStack beefed up its Cloud Sync feature for moving data in and out of Amazon Web Services and the Google Cloud Platform as part of the multi-cloud plan.

This week SwiftStack added policy-based auto-tiering, the ability to use erasure coding across regions, client capability to access objects in private clouds as if they were on premises, and more granular policies for determining on which nodes and in which regions data should reside.
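In OpenStack Swift, on which SwiftStack builds, placement rules such as an erasure coding policy are chosen per container at creation time. Here is a minimal python-swiftclient sketch of that mechanism; the storage URL, token and policy name are placeholders, and the policy itself would have to be defined by the cluster operator.

# Minimal sketch: in OpenStack Swift, a container picks up a placement
# policy (e.g. an erasure-coded one) at creation via X-Storage-Policy.
from swiftclient import client

storage_url = "https://swift.example.com/v1/AUTH_demo"  # placeholder
token = "AUTH_tk_placeholder"

client.put_container(
    storage_url, token, "archive",
    headers={"X-Storage-Policy": "ec-multiregion"},  # operator-defined name
)

# Objects written to the container inherit its placement policy.
client.put_object(storage_url, token, "archive", "report.csv",
                  contents=b"a,b,c\n")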

Blandini said these features are merely setting up a significant product release coming around AWS re:Invent this month. He’s keeping quiet on details for now, but you can expect it to center around managing data on multiple clouds (just a wild guess).

“We architected our product to be cloud from the beginning,” he said. “I’m not saying this will kill NAS and there will be no more Fibre Channel. There always will be those things. But there’s always room for new types of storage.”

Or new labels for storage, anyway. So say goodbye to SwiftStack object storage and hello to SwiftStack multi-cloud storage.


November 1, 2017  8:28 PM

Elastifile scores OEM deal with Dell EMC

Dave Raffo
Storage

Cloud NAS vendor Elastifile has struck an OEM deal with Dell EMC, one of the startup’s first investors.

Dell EMC will integrate Elastifile Cloud File System and CloudConnect cloud transfer and object storage tiering software on PowerEdge servers. Dell EMC will sell the appliances as part of the Dell EMC OEM Solutions program, with Elastifile providing software support. The appliances include the Elastifile license and three years of support. The vendors will formally disclose the deal Thursday.

Elastifile came out of stealth in April with its scale-out file system designed for flash hardware that spans on-premises and cloud storage. Until now, it has sold its software standalone. Andy Fenselau, Elastifile vice president of marketing, said the OEM deal makes sense because many of the startup’s early customers are using PowerEdge servers.

“You can put it on any standard server,” Fenselau said of Elastifile’s software. “But we were finding as we were ramping our business that many customers and many of our partners are joint customers and joint partners with Dell. They really wanted the Easy Button. They wanted a pre-integrated, pre-bundled solution for their on-prem deployments that they could buy from their standard OEM, in this case Dell.”

The Dell EMC Elastifile appliances scale from four to 100 nodes, with performance and capacity models available. The startup claims the performance model supports from 800,000 to 26 million IOPS, with bandwidth ranging from 3.6 GB per second to 120 GBps. Capacity models range from 100 TB to 3.5 PB.

Fenselau said the performance-optimized node street price starts at 13 cents per IOPS, and the capacity-optimized node costs $2 per raw GB.
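Assuming those rates scale linearly from the minimum configurations quoted above, a quick back-of-the-envelope calculation puts entry pricing in the six figures:

# Back-of-the-envelope entry pricing from the quoted per-unit rates,
# assuming linear scaling from the minimum configurations.
perf_entry_iops = 800_000            # smallest performance config
perf_price = perf_entry_iops * 0.13  # $0.13 per IOPS -> $104,000
cap_entry_gb = 100 * 1000            # 100 TB in raw (decimal) GB
cap_price = cap_entry_gb * 2.00      # $2 per raw GB -> $200,000

print(f"performance entry: ${perf_price:,.0f}")
print(f"capacity entry: ${cap_price:,.0f}")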

The PowerEdge appliances are flash-only.

Elastifile’s Cloud File System handles active data and performance-oriented workloads. CloudConnect provides access to Amazon S3-compliant cloud services, moving inactive data for archiving or analytics into an object tier.

EMC invested in Elastifile’s first funding round in 2014, before the Dell-EMC merger. The companies’ ties go back even farther. Elastifile founder and CTO Shahar Frank also founded all-flash array startup XtremIO and was chief architect at scale-out NAS vendor Exanet. EMC acquired XtremIO in 2012 and Dell bought Exanet in 2010.

Storage hardware vendors Cisco, Lenovo and Western Digital also have strategic investments in Elastifile. Those relationships could lead to more appliance partnerships.

“As customers standardize on other servers, we will work to give them what they want,” Fenselau said when asked about the possibility of working with other strategic partners.



November 1, 2017  7:47 AM

Arcserve acquisition plans heating up

Sonia Lelii
Storage

Arcserve is on the hunt to buy more companies.

The data protection and recovery company already has made two acquisitions since becoming independent from CA Technologies in 2014. It bought cloud provider Zetta last summer, giving it a larger cloud footprint and a disaster recovery as a service (DRaaS) offering. That followed the Arcserve acquisition of email archiving specialist FastArchiver in April for long-term data retention on premises or in the public cloud.

Both Zetta and FastArchiver target the midmarket. New CEO Tom Signorello is planning the next Arcserve acquisition, with possible targets ranging from analytics to information management to data management or even security.

“We will be looking at targets again in the coming quarters,” Signorello said. “We are actively working with our owners on what the next logical bolt-on is. The opportunities are broad, so I don’t want to be specific.”

Signorello took over as CEO in early October, after Arcserve’s first CEO Mike Crest left to head IT services firm Optanix.

Arcserve has returned as an independent vendor at a time when the data protection market is evolving into the broader data management space. Companies such as Veritas Technologies and Commvault Systems are building over-arching data management platforms that handle data indexing, search, analytics, copy data management, governance, security and data mobility.

Arcserve acquisition integrations expand cloud footprint

Signorello said the integration with Zetta is “well underway.” The latest Arcserve acquisition gave it cloud data centers on the West Coast and in New Jersey, and Arcserve plans to open one in the United Kingdom. The Zetta deal also broadens Arcserve’s relationship with cloud providers Amazon Web Services (AWS) and Microsoft Azure.

Arcserve already had a relationship with AWS before the Zetta buy. The Arcserve UDP appliance allows customers to use the AWS cloud as a remote disaster recovery site. They can replicate recovery points from a local Windows-based Recovery Point Server (RPS) to one in the AWS cloud, launch an Amazon Elastic Compute Cloud (EC2) instance and copy full recovery points to Amazon Simple Storage Service (S3).

Zetta also gave Arcserve a direct-to-cloud offering, but the company is investing more in the hybrid cloud approach as well.

“There are going to be enhancements in the hybrid area in the next couple of quarters,” Signorello said. “Our clients need it. They need the flexibility. All the managed services providers (MSPs) and VARs are moving in that direction.”

Before joining Arcserve, Signorello was CEO of OnX Enterprise Solutions and held vice president positions at Diebold and Unisys.


October 31, 2017  8:55 AM

IBM Cloud Object Storage tweaked for on-prem, small footprints

Sonia Lelii

With the latest upgrades to its object storage platform, IBM recognizes that not everyone who uses object storage does so in the public cloud. They don’t all start with hundreds of terabytes of capacity either.

IBM Cloud Object Storage System now offers “compliance-enabled vaults” so customers can create on-premises vaults that protect unstructured data subject to regulations such as SEC Rule 17a-4 and Financial Industry Regulatory Authority (FINRA) rules from deletion or modification.

IBM also is offering a new capacity level for customers that want to get started with cloud deployments. The IBM Cloud Object Storage System now starts at 72 TB, with a concentrated dispersal mode capability that allows smaller-footprint systems to scale into larger ones. Despite the name, IBM Cloud Object Storage System – based on technology acquired from Cleversafe – is available for on-premises use or in a dedicated environment in the IBM Cloud.

IBM hopes the compliance vaults will open up opportunities for the applications market in on-premises object storage, while the lower-capacity option is about getting more customers started with an on-site cloud with the expectation they will move to the public cloud.

“This is a new space that they can compete in,” said Scott Sinclair, a storage analyst with Enterprise Strategy Group. “We have run studies that show on-premises object storage is less expensive than cloud-based object storage. The bottom line is, public cloud is not always cheaper.

“Some workloads are more expensive in the cloud,” Sinclair said.

There is evidence customers are learning not all workloads are meant for the public cloud, and that unexpected costs and security are becoming two red flags.

The ESG 2017 Storage Trends: Challenges and Spending report found 39% of storage decision makers using off-premises cloud resources had moved at least one workload back to on premises.

“There is evidence that organizations need to apply a pragmatic approach with regards to the location of applications and data, whether on- or off-premises,” Sinclair said. “Moving a workload to the public cloud is a big decision and should be treated as such.

“Additionally, what workloads are coming back? Why? Is it cost? Is it security? Is it availability? And how are these movements changing the cloud strategy within the organizations?”

He said ESG is researching those issues now.

Robert McCammon, IBM’s leader for IBM Cloud Object Storage, said the vendor is tackling security concerns with its compliance vault feature.

“This is a new software feature of our existing object storage product that is available with our software release in December,” McCammon said. “This is a new type of vault that prevents operations that are unacceptable on one of these compliance environments.”
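What McCammon is describing is essentially write-once-read-many (WORM) semantics with a retention clock. Here is a toy Python sketch of that behavior; it illustrates the concept only and is not IBM’s actual interface.

# Toy sketch of compliance-vault (WORM) semantics: once written, an
# object cannot be overwritten or deleted until its retention expires.
import time

class ComplianceVault:
    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._objects = {}  # name -> (data, written_at)

    def put(self, name, data):
        if name in self._objects:
            raise PermissionError(f"{name}: overwrite not permitted")
        self._objects[name] = (data, time.time())

    def delete(self, name):
        _, written_at = self._objects[name]
        if time.time() - written_at < self.retention:
            raise PermissionError(f"{name}: retention period still active")
        del self._objects[name]

vault = ComplianceVault(retention_seconds=7 * 365 * 24 * 3600)  # e.g. 7 years
vault.put("trades-2017-11-01.log", b"...")
try:
    vault.delete("trades-2017-11-01.log")
except PermissionError as err:
    print(err)  # retention period still active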

McCammon said when it comes to compliance rules, “customers don’t have a lot of choices for storage and they have even fewer choices for object storage.”

