November 18, 2009 9:34 PM
Posted by: Beth Pariseau
Mozy says a high volume of traffic is to blame for a backup monitoring glitch flagged by a MozyHome user on his blog, but the online backup service says it has not lost any customer data.
Dan Frith, a technical consultant in Australia, wrote on his blog Penguin Punk earlier this week that he was seeing some quirkiness in Mozy’s monitoring interface. He’s been using MozyHome to back up an iMac with approximately 32 GB of data, he wrote, but last weekend the Mozy interface was only showing him 10.3 GB backed up at its data center.
Frith’s posts (which include screenshots) also detail his interactions with Mozy support to get the problem sorted out, including the full text of an email Mozy sent him saying that if he initiates a full backup again, Mozy will “re-associate” the full data set with his account.
Frith indicated he’s unimpressed with the workaround Mozy has suggested. “The point is that if I needed to recover data from Mozy today I would only be able to get back 10.x GB,” he wrote. “That seems uncool. Very uncool.”
Mozy responded to my request for comment with a statement through a spokesperson:
Recently, we experienced a high volume of data center traffic that prevented the Mozy client from adequately identifying files that were previously backed up. As a result, Mozy is sending third or fourth copies of the same files to our data centers.
Our development team is working right now to address the issue and expects to have this fixed soon. We want our customers to know, however, that we have not lost any of their information.
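Mozy hasn’t published its client internals, but the failure mode in the statement above is easy to picture: a backup client typically asks the server’s index whether it already holds a file’s content before uploading. A rough sketch in Python of that kind of “already backed up?” check (all names here are hypothetical, not Mozy’s actual API):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Content hash used to identify a file independent of its name or location."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_upload(path: Path, server_index: set[str]) -> bool:
    """Upload only if the server's index has no copy of this content.

    If the index lookup silently fails under heavy data center load and the
    client treats "no answer" as "not present," it re-sends files the server
    already holds -- the duplicate-copy symptom Mozy describes.
    """
    return file_digest(path) not in server_index
```

The fragility is in how the client interprets a failed or slow index lookup, which matches Mozy’s explanation that traffic volume prevented the client from “adequately identifying” previously backed-up files.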
This is not the first complaint to surface about Mozy recently. In September, backup expert W. Curtis Preston also blogged about how his Mozy agent didn’t notify him it wasn’t backing up data. Last year, commercial users of MozyPro also said they were frustrated by long restore times with the service.
November 17, 2009 7:18 PM
Posted by: Beth Pariseau
Another industry executive has defected to Xiotech Corp., this time from IBM, a little over a month after the company hired a former EMC executive as CEO.
Xiotech issued a press release today announcing the appointment of Brian Reagan as senior vice president of marketing and business development. Reagan spent eight years at EMC before his time at Arsenal Digital and then IBM:
Reagan comes to Xiotech from IBM, where he served as global strategy and portfolio executive for the company’s $1 billion Business Continuity and Resiliency Services division. In this role, he was a primary voice in IBM’s cloud strategy and was responsible for accelerating sales of the global portfolio, aligning investments and resources to strategic business goals and collaborating with other corporate leaders worldwide. Reagan joined IBM as part of its acquisition of Arsenal Digital Solutions, where he served for three years as executive vice president and chief marketing officer. Reagan led all marketing functions at Arsenal, and is credited with transforming the company’s position in the industry in that time.
Xiotech CEO Alan Atkinson has now hired two top executives with EMC backgrounds. Atkinson brought aboard Jim McDonald as chief strategy officer in October. McDonald joined EMC with Atkinson from WysDM Software, where he was chief technical officer. He also served in a CTO capacity with EMC. Atkinson’s and McDonald’s resumes also include jobs at StorageNetworks and Goldman Sachs before WysDM.
Reagan is quoted in the press release saying the company is setting up a marketing blitz around its ISE self-healing disk array. “We’re going to crank it up a few notches and make some noise out there.” You can read more on this strategy in our Q&A with Atkinson when he was hired as CEO in September.
November 16, 2009 5:18 PM
Posted by: Dave Raffo
storage networking; high performance computing
SuperComputing 2009 is underway this week in Portland, and that means lots of InfiniBand news around products and OEM deals.
QLogic said today it has scored a string of OEM deals for its 40 Gbps InfiniBand devices, bringing its 7300 Series host channel adapters (HCAs) and 1200 Director switches to the IBM System Cluster 1350, the HP Unified Cluster Portfolio, Dell PowerEdge servers and Precision workstations and SGI CloudRack Servers.
Voltaire disclosed its Vantage 8500 10-Gigabit Ethernet switch, Grid Director 4700 40 Gbps InfiniBand switch, and Unified Fabric Manager software are also available from HP as part of the HP Unified Cluster Portfolio.
Mellanox Technologies said its ConnectX-2 40 Gbps InfiniBand HCAs are available now for HP ProLiant BL, DL and SL series servers, as well as HP BladeSystem c-Class enclosures. Mellanox also said it would have a 120 Gbps InfiniBand switch platform in 2010.
There’s a reason so much InfiniBand news is concentrated around SuperComputing – InfiniBand remains a niche interconnect, prized in high performance computing for its low latency and high bandwidth. That doesn’t mean the InfiniBand market won’t grow, but it’s unlikely to encroach on the turf of Fibre Channel, Ethernet or any converged networks in the data center.
QLogic director of corporate communications Steve Zivanic said his vendor has customers using InfiniBand, Fibre Channel and Ethernet, but InfiniBand stands on its own while the other protocols are beginning to converge around Fibre Channel over Ethernet (FCoE).
For instance, he says, financial institutions are looking to save power in the data center by consolidating with VMware, high density blade servers and FCoE. Some of those same firms use InfiniBand to run their Monte Carlo simulation programs on a separate network.
“People using InfiniBand don’t want to share anything,” Zivanic said. “They want dedicated bandwidth, 40 gigs per second. For high performance applications, people are increasing their presence of InfiniBand. On the other side of the house, for general business applications, that’s where we see FCoE brought in to reduce costs.”
November 12, 2009 7:08 PM
Posted by: Beth Pariseau
Claiming its approach to enterprise data security key management will assure users of reliability, CA this week launched a new Encryption Key Manager (EKM) software offering that runs on z/OS mainframes and can manage keys for CA Tape Encryption as well as IBM tape formats.
Stefan Kochishan, director of storage product marketing for CA, said a lack of key management standards for encryption at the various points it’s deployed in the enterprise has hindered encryption adoption. But, he argued, many customers are also concerned with the reliability of open-systems based encryption key managers, since without keys to access it, encrypted data can be lost.
The new z/OS-based product will manage IBM and CA tape encryption instances and automatically mirror keys among mainframes at up to three sites, with replication over SSL and digital certification for data integrity. This method allows keys to be re-created from an alternate location should the primary key manager fail, a key be accidentally deleted, or the primary site be lost in a disaster. Users can also back up the key store to mitigate the threat of rolling corruption in the replication system.
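CA hasn’t detailed its replication design beyond what’s above, but the basic idea – write every key to all sites, read from a surviving site if the primary is gone – can be sketched in a few lines of Python (an illustrative toy, not CA’s implementation):

```python
class MirroredKeyStore:
    """Toy model of multi-site key mirroring: every write is copied to each
    replica site, so a key survives the loss of the primary key manager or
    its entire site."""

    def __init__(self, sites: list[str], max_sites: int = 3):
        assert 1 <= len(sites) <= max_sites, "CA's product mirrors up to three sites"
        self.replicas = {site: {} for site in sites}

    def store(self, key_id: str, key_material: bytes) -> None:
        # In the real product this copy runs over SSL with digital
        # certification for integrity; here it is a plain in-memory write.
        for replica in self.replicas.values():
            replica[key_id] = key_material

    def fetch(self, key_id: str, preferred: str) -> bytes:
        """Try the preferred (primary) site first, then any surviving replica."""
        sites = [preferred] + [s for s in self.replicas if s != preferred]
        for site in sites:
            if key_id in self.replicas.get(site, {}):
                return self.replicas[site][key_id]
        raise KeyError(key_id)

    def lose_site(self, site: str) -> None:
        """Simulate a site failure or disaster."""
        self.replicas.pop(site, None)
```

The point of the design is that a fetch still succeeds after losing the primary – which is exactly the reliability concern Kochishan says open-systems key managers raise.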
“This is the first step in a strategy where we want to be the key manager for other encryption solutions,” Kochishan said. CA is considering managing Sun/StorageTek tape encryption next, though it doesn’t have plans for LTO.
But doesn’t the mainframe and IBM focus create another silo for enterprise key management? What about non-mainframe shops? Kochishan argues the enterprises most likely to be concerned with advanced key management are financial services companies and banks, which tend to still be running mainframes. Mainframe is also in CA’s DNA.
“It has to be mainframe based,” Kochishan said. “Some companies take distributed systems data and upload it to the mainframe, and have it backed up and tracked through mainframe applications…the mainframe has great reliability and availability which will address customer concerns for high availability and eliminating a single point of failure.”
What about business partners of mainframe customers who want to receive encrypted data? Kochishan said customers have a choice of methods to send public keys to business partners. They can send keys on a tape encrypted by CA Tape Encryption, on a natively-encrypted IBM TS1130 tape, or over SSL via replication from the mainframe.
Why not use IBM’s Enterprise Key Manager if you’re already running a z/OS mainframe and an IBM tape library? “IBM EKM has key management in the name but it’s not truly that,” Kochishan said. He says IBM “doesn’t perform auditing, tracking, backup, recovery and expiration” of keys. IBM also has Tivoli Lifecycle Key Manager, but it’s “an extra cost item.” Speaking of cost items, CA’s starting price is $16,377 and an unlimited usage license starts at $54,590.
Kochishan acknowledged key management standards will still be, er, key to encryption adoption, even if CA’s approach has succeeded in allaying users’ reliability concerns. One of CA’s technical architects is on the board of the OASIS standards body working on a standard as we speak. “That is a complaint among customers,” Kochishan said.
November 12, 2009 3:08 PM
Posted by: Dave Raffo
Although Hewlett-Packard spent $2.7 billion to buy 3Com Wednesday largely to make it more competitive with Cisco on the Ethernet switching front, the deal will also have implications for Cisco’s main storage competitor Brocade.
First, the deal means HP won’t be buying Brocade – at least not any time soon. HP was considered the most likely company to buy Brocade after word leaked that Brocade was looking for a buyer last month. It appears that HP did consider it – HP executive VP Dave Donatelli said on a webcast explaining the 3Com acquisition that HP looked at all networking options – but decided 3Com’s Ethernet switches and routers were a better fit than the products that Brocade picked up from Foundry.
The 3Com acquisition also means HP won’t follow the lead of IBM and Dell and sign an OEM deal with Brocade for its Ethernet switches. With 3Com’s products and its own ProCurve platform, HP should have enough to fill out its Ethernet lineup.
The deal won’t impact Brocade’s core business – selling Fibre Channel switches. With Cisco and Brocade as its only options, HP will likely continue to lean heavily on Brocade for storage connectivity.
Still, the HP-3Com deal is seen as bad news for Brocade. Several Wall Street analysts downgraded its stock today, and its shares dropped more than a dollar in early trading from Wednesday’s closing price of $9.25.
While talk of Brocade getting acquired has diminished, Wedbush Securities analyst Kaushik Roy raised the possibility that networking vendor Juniper might want Brocade to create “an even more formidable competitor to Cisco.”
“We think that neither Dell nor IBM would be interested in buying Brocade due to Brocade’s OEM model,” Roy wrote today in a note to clients. “Any purchase by one of the server vendors would lead to loss of revenue streams from the other server OEM vendors. We, however, think that Brocade might be a good acquisition target for Juniper.”
November 10, 2009 9:58 PM
Posted by: Dave Raffo
storage virtualization; thin provisioning
3PAR actually beefed up its storage virtualization and provisioning features today without using the word “thin.” The vendor that pioneered thin provisioning rolled out three applications for automating storage management that work in connection with its thin provisioning but are not directly involved with making or keeping arrays “thin.”
Those applications are 3PAR Autonomic Groups, 3PAR Scheduler, and 3PAR Host Explorer. The first two are part of 3PAR’s InForm Operating System and Host Explorer runs on Windows, Linux and Solaris 10 hosts – all without charge.
Autonomic Groups lets administrators automatically provision clusters. When an administrator adds a volume to an Autonomic Group, the volume gets exported automatically to all hosts in the group. 3PAR vice president of marketing Craig Nunes says Autonomic Groups let administrators provision clusters in three clicks: one click to add a cluster of hosts, a second click to create and group volumes, and the third to provision a volume group to a host group. The application automatically provisions all LUNs on the volumes in the group. Using Autonomic Groups, admins can also create a Virtual Copy snapshot of all volumes with one command.
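The three-click model Nunes describes boils down to deriving exports from group membership rather than mapping each LUN by hand. A minimal sketch in Python of that idea (a toy model, not 3PAR’s actual InForm API – class and method names are invented):

```python
class AutonomicGroups:
    """Toy model of group-based provisioning: exports are derived from group
    membership, so adding a volume to a volume group -- or a host to a host
    group -- automatically updates every paired export."""

    def __init__(self):
        self.host_groups: dict[str, set[str]] = {}
        self.volume_groups: dict[str, set[str]] = {}
        self.pairings: set[tuple[str, str]] = set()  # (volume_group, host_group)

    def add_host(self, hg: str, host: str) -> None:
        """Click one: build up a cluster of hosts."""
        self.host_groups.setdefault(hg, set()).add(host)

    def add_volume(self, vg: str, volume: str) -> None:
        """Click two: create and group volumes."""
        self.volume_groups.setdefault(vg, set()).add(volume)

    def provision(self, vg: str, hg: str) -> None:
        """Click three: pair a volume group with a host group."""
        self.pairings.add((vg, hg))

    def exports(self) -> set[tuple[str, str]]:
        """Every (volume, host) export implied by the current pairings."""
        return {(v, h)
                for vg, hg in self.pairings
                for v in self.volume_groups.get(vg, ())
                for h in self.host_groups.get(hg, ())}
```

Once the pairing exists, a later `add_volume` shows up on every host in the paired cluster with no further clicks – the behavior the paragraph above describes.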
3PAR Scheduler automates creation and deletion of Virtual Copy snaps, and 3PAR Host Explorer is an agent that sits on a server and reports information such as Fibre Channel World Wide Names (WWNs) and host multipath data to 3PAR InServ arrays.
“Clusters are knocking on the door of every data center now with virtual deployments,” Nunes says. “If we’re going to get to the place where the data center is a shared virtualization resource as opposed to a bunch of boxes, you have to start hiding some of the underlying technology.”
Enterprise Strategy Group analyst Tony Palmer says 3PAR has gone farther than just adding wizards for provisioning storage. “It’s really all about virtualizing the storage,” he said. “Thin provisioning is a piece of that, but if you virtualize storage out from under the servers and hosts that use it, you can do anything you want with it. You can thin provision it, create snapshots, and re-provision on the fly while applications are still running.”
Meanwhile, IBM made a “thin” addition today in pursuit of 3PAR. IBM added instant space reclamation to its XIV arrays as part of a package of upgrades that includes general availability of asynchronous mirroring that Big Blue announced in July.
XIV already supported space reclamation – the ability to detect, zero out and release unused storage to the storage pool of volumes that were thin provisioned. What’s new is this can be done instantly without physically scrubbing. But this requires Symantec’s Veritas Storage Foundation, which has an API for vendors to write to.
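The difference between the old and new XIV behavior is the difference between scanning for reclaimable blocks and being told about them. A small Python sketch of the two approaches (illustrative only – not IBM’s or Symantec’s code; block layouts and function names are invented):

```python
BLOCK_SIZE = 4096
ZERO_BLOCK = bytes(BLOCK_SIZE)

def scrub_reclaim(blocks: dict[int, bytes]) -> tuple[dict[int, bytes], int]:
    """Scrub-style reclamation: scan every allocated block of a thin volume
    and release the all-zero ones back to the free pool. Cost grows with
    volume size, which is why it isn't instant."""
    kept = {lba: data for lba, data in blocks.items() if data != ZERO_BLOCK}
    return kept, len(blocks) - len(kept)

def api_reclaim(blocks: dict[int, bytes], freed_lbas: set[int]) -> tuple[dict[int, bytes], int]:
    """API-style reclamation: the file system (here standing in for Veritas
    Storage Foundation's Thin Reclamation API) tells the array exactly which
    blocks it freed, so no scan is needed and the space returns instantly."""
    kept = {lba: data for lba, data in blocks.items() if lba not in freed_lbas}
    return kept, len(blocks) - len(kept)
```

Either way the freed capacity returns to the shared pool; the API path just skips the physical scrub the paragraph above mentions.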
This is similar to the 3PAR Thin Reclamation for Veritas Storage Foundation capability added last month for InServ arrays. 3PAR and IBM are the only vendors to take advantage of Symantec’s Thin Reclamation API so far.
November 10, 2009 9:21 PM
Posted by: Beth Pariseau
solid state drives; strategic storage vendors
SSD supplier STEC’s stock price has taken a dive since the vendor reported last Tuesday that EMC will carry over its 2009 inventory of Flash drives into 2010. Shares have fallen almost $9.00 to $13.18 at today’s close. According to a report from MarketWatch:
Much of the carryover involves STEC’s Zeus IOPS SSD products. EMC makes up about 90% of STEC’s business for the Zeus IOPS drives, and had placed an order for $120 million of the drives for the second half of this year.
STEC officials said that about $55 million of that order has been delivered, and the rest would be shipped before the end of year.
A flurry of class action lawsuits has been filed accusing STEC executives of misleading investors ahead of last Tuesday’s revelation. This all leads me to wonder if the industry has been wrong about SSD adoption overall.
All EMC would say in a statement released through a spokesperson was “EMC is pleased with its SSD demand and growth. In Q4, EMC will introduce unique FAST (fully automated storage tiering) capabilities, which are expected to increase SSD growth and demand even further.”
Does this inventory carryover send a signal about wider SSD adoption in the market, given how dominant STEC’s share is (and EMC dominates its business)? I asked a couple of analysts for their opinions.
“Well, what I have been hearing is that EMC is giving SSD away for free to try to spur adoption, but so far it doesn’t seem to be working — it’s too costly, and too wasteful without some type of FAST capability,” Forrester Research analyst Andrew Reichman responded in an email. “SSD as a performance add-on is not popular in this economy… It’s interesting to see that STEC can’t make a go of this business even though they have a number of the major storage vendors signed up as partners. That says to me that it’s not competition, but the whole category being slow so far.”
Added Taneja Group analyst Jeff Boles, in another email, “while we’re in the midst of an unusual market that likely over-penalizes STEC for perceived risk, while over-endorsing other companies for perceived value, I remain cautious about the speed of SSD adoption.”
But, he added, the newness of SSD could be creating a vicious cycle of perceived risk.
“What the market needs is a good round of commoditization, brought on by integration of some of this intelligence into the storage system itself,” Boles wrote. “At that point, obsolescence will start to look a bit more unusual, and the roadmap for future devices a little more predictable. After all, if your XYZ array had solid state intelligence in it, and you were buying highly commoditized drives that only changed with the density and performance of the flash memory itself, then there seems to be less risk that your flash investment could be rapidly outdated by the next rev of a drive controller.”
As always, the peanut gallery is invited to weigh in.