At a Storage Foundation and Veritas Cluster Server roadmap session at Vision on Thursday, a Symantec exec revealed that the company will be coming out with its own clustered NAS system, based on the next generation of its Storage Foundation Scalable File Server. This will be accomplished by layering a NAS personality onto Symantec’s existing clustered file system.
“We’re going to leverage our file system know-how to deliver next generation object storage for cloud computing,” said Rob Soderbery, senior vice president of the storage and availability management group.
The system will mostly be used as the back end for Symantec Protection Network SaaS offerings, but will also be available to service-provider customers, according to Soderbery. Currently called Symantec Secure Scalable Storage (S4), the new system is slated for an alpha later this year, beta early next year and live availability for SaaS in mid-2009.
By putting S4 behind its backup SaaS, Soderbery said, Symantec would be able to offer users online access to files backed up through SPN or the Backup Exec SPN integration. “The backup use case would blur with the storage SaaS use case,” he said.
Other roadmap items for Storage Foundation and Veritas Cluster Server highlighted in the presentation:
- Heterogeneous clustering between server types, OSes and physical and virtual servers with the rollout of the new VCS One product later this year. This builds on an early adopter version of the product released last year called Veritas Application Director. VCS One will use a policy master, and the goal is to support up to 256 mixed OS nodes for multi-tiered application-based HA and DR.
- Change management through Command Central Storage, also due out later this year. In addition to both proactive and reactive change management analysis for the primary storage environment, the product will also track the impact of changes to the DR plan, and allow for policy-based enforcement of configuration standards.
- Symantec will also be rolling out Veritas Operations Services, a set of Web-based configuration management offerings. Among them is SFPrep, a utility that checks OS versions, patches and other prerequisites when Storage Foundation is installed; it is now in beta testing. It will also allow users to submit “goal builds” for review by Symantec’s engineers, who will tell them whether they’ll work or not, and offer remediation if they won’t. “We want to cut out the cycle of deploy, problem, fix,” Soderbery said. He added that of 800 configurations submitted so far, 25% were problematic.
Ease of use and solving compatibility issues were themes among the roadmap/user feedback sessions at the conference Thursday. Users in a NetBackup roadmap session asked for common management tools to be made available for NetBackup and Backup Exec in mixed environments, and for integration with Active Directory and LDAP.
In another session on upgrades to NetBackup 6.5, however, users and analysts praised a newly available upgrade process for media servers through LiveUpdate. Previously, updates were only made to the master server through the utility. However, users said they hoped that in the future, LiveUpdate could be delivered as a preconfigured virtual appliance, rather than requiring users to set up a separate physical or virtual host on their own to run it.
After Symantec confirmed its acquisition of backup SaaS partner SwapDrive on Wednesday, I sent out some questions to Symantec. Here are the responses I got back from a spokesperson:
How will Symantec integrate SwapDrive into Symantec Protection Network?
SwapDrive offerings are focused on consumers while the Symantec Protection Network is focused on the needs of small and medium businesses. Symantec will continue to offer both. The needs of consumers and businesses can be quite different. We expect, however, that consumers, businesses, and partners will all benefit from knowledge sharing that will take place between the SwapDrive and SPN teams.
Is it true that SwapDrive doesn’t back up open files?
SwapDrive accommodates open files differently across the various implementations. SwapDrive is designed to back up files by automatically shutting them down, backing them up, and then re-opening them transparently to the end user. Further, SwapDrive will implement other “open file” backup techniques as partners request them.
Will SwapDrive add a Mac client?
SwapDrive’s web-based applications, such as SwapDrive File Sharing and WhaleMail, work on all major platforms, including Mac. For example, there are many Mac WhaleMail users.
SwapDrive’s pricing for 2 GB per year is $50, while EMC’s Mozy offers the same amount for free. Any plans to change that pricing?
SwapDrive’s current online pricing will keep pace with the market and the value derived. Our service is more robust and redundant than many others offered in the market today. We will constantly innovate and price for the market and value we provide. Services included in some of our products (ex. WhaleMail for sending large files) are not offered by other low cost providers.
SwapDrive also supports numerous partners who offer storage to consumers via different arrangements. For example, Norton 360 includes 2GB of storage as part of the purchase price (MSRP $79.99).
In keynotes and 1:1 executive briefings at Symantec Vision this week, Symantec officials have opened the kimono about plans for future integration of their products, including integration of software pieces from storage and security units.
It won’t be product integration per se, as has been done with individual backup products NetBackup and PureDisk because changes to product-level code across different disciplines of IT could be an impediment to adoption for users, according to CTO Mark Bregman. “That kind of integration suggests things bolted together, and our approach will be to let the separate products talk to each other,” he said.
Symantec will use IP acquired in its Altiris acquisition, extracting data through Web Services standards and APIs for legacy applications. Some products are already shipping with the necessary Web Services support today, such as Symantec Endpoint Protection and Backup Exec System Recovery (BESR). The overall integration platform, which will be included free in products going forward, is referred to as the Open Collaboration Architecture, or OCA, within Symantec. A small team of engineers has been assigned to build the connectors to it for each product. Currently, applications that support it like BESR can issue reports through the architecture, but taking action is still a ways off, Bregman said.
One of the use cases for OCA discussed by execs is endpoint virtualization using a combination of IP from Symantec’s AppStreams and Vontu. The technology is tangential to storage, but will be Symantec’s way of addressing what it calls the consumerization of IT, i.e. the use of mobile devices for work and personal computing. According to Enrique Salem, AppStreams’ application streaming software would make an application and its data temporarily available on a mobile device. To keep corporate data from floating around on personal devices, Vontu’s data loss prevention software would track data created on the mobile device and clean up the data once the AppStreams session is over. Vontu’s software could also prevent users from forwarding sensitive corporate material elsewhere. OCA underpins all of this, and Salem said Symantec is rolling out the pieces today. It remains unclear if Symantec will prepackage this as a storage security application for mobile devices, but Salem said it can be put together today for users who want it through Symantec’s professional services.
Users around the show say it’s a nice idea, but there are improvements they’d like to see to Symantec’s existing frameworks first. Users of Symantec’s OpenStorage API integration with Data Domain say there’s room for improvement there–with the current version, users must manually select a replicated copy of data from a secondary site in the event of an outage.
Elsewhere, a storage director for a financial company said he’d like to see summary reports from Symantec’s management products that put data into business terms like RTO and RPO (Symantec CEO John Thompson said Symantec would prefer to hook OCA into third party reporting tools like Crystal Reports, because “one organization’s great report is another organization’s pain in the butt.”). This user also said he’d like to see more product-level integration. “Anything with catalog integration is high on our wish list,” he said.
According to a statement issued by Symantec today,
Symantec has acquired SwapDrive, a privately-held online storage company to strengthen the services offerings in the Norton consumer portfolio and to help consumers manage data across their devices. This was a small, targeted acquisition and is a very natural move for us because of our close two-year OEM relationship and existing product partnership on Norton 360.
If reports elsewhere are to be believed, however, “small” and “targeted” are relative terms–the deal is reportedly worth $123 million.
EMC, which made a similar acquisition of Mozy for $76 million last year, is already firing off counter punches, via an emailed statement to press pointing out competitive differences. SwapDrive doesn’t back up open files, for example, doesn’t have a Mac client, and charges $50 per year for 2 GB of backup while Mozy gives that away for free. On the other hand, Symantec already had a field-tested infrastructure for SaaS in its Symantec Protection Network (originally used for hosting accounts in the security division); has been a security company from the start, while EMC has had to assimilate RSA; and already has an established brand in the consumer/SMB space that EMC is trying to penetrate. Symantec previously partnered with SwapDrive for its Norton 360 backup SaaS, while EMC has had to integrate Mozy.
Aside from these positioning differences, I can’t help noticing that EMC and Symantec are looking a lot alike these days. At EMC World and Symantec Vision, both held in Vegas just a few weeks apart, at times it’s been eerie just how similar the company lines from these rivals have begun to sound. Both CEOs are keen to talk about the “consumerization of IT.” Both are interested in supporting access to data from mobile devices as part of that shift (though Symantec appears a bit ahead there with application streaming and data loss protection software that’s shipping today to offer that kind of service, while EMC is still cultivating Pi Corp.’s IP). Both have large, multifaceted backup portfolios they say they plan to integrate at the management layer (EMC also says it plans to integrate repositories while Symantec argues that they should remain separate), and both have used the same “one size doesn’t fit all” line to describe their backup portfolios and strategies.
It’s clear from the way Symantec execs react to questions about these overlaps that they still see EMC as a newcomer in the software space. When I pointed out that both companies are talking about the consumerization of IT in similar terms, Symantec CEO John Thompson interrupted me to retort, “and what consumer experience do they have?” He made a similar comment about backup and archiving integration, pointing out Enterprise Vault archives certain data types already while EMC is still working on integrating file, database and email archiving through Documentum. It’s clear these companies are in each other’s heads, and that their competition is growing fiercer than even the turf wars between EMC and NetApp. With this acquisition, SaaS will be added to the list of their increasingly contentious battlegrounds.
While IBM is upgrading its SAN Volume Controller (SVC) virtualization platform, Incipient is taking one of the major features of its storage virtualization product — data migration — and spinning it off into a separate application.
Incipient today rolled out Incipient Automated Data Migration (iADM), which is designed to automate and manage large data migration projects.
Migration is supported in Incipient Network Storage Platform (iNSP), switch-based virtualization software that competes with EMC Invista and LSI’s StoreAge SVM. It’s no secret that switch-based block virtualization has been a bust compared to IBM’s network-based SVC and Hitachi Data Systems’ array-based virtualization. Incipient isn’t waiting for the market to come around.
[iADM] “is not storage virtualization, it’s process automation for storage,” Incipient marketing VP Rob Infantino said. “This is used with or without storage virtualization. Some customers think storage virtualization is still risky because we’re adding an abstraction layer between the host and storage, but they need data migration capability.”
iADM doesn’t permanently take control of LUNs. While iNSP requires a Fibre Channel switch, iADM sits on a Windows server and connects to the storage fabric via Ethernet. It performs the discovery of devices on a SAN, maps hosts to source LUNs, and does the LUN masking, auto zoning and host reconfiguration. iADM works with array-based data movers, such as EMC’s SRDF and SANCopy. The idea is to make large migration projects go faster and save companies from having to pay huge services fees. Incipient is aiming iADM at shops with more than a petabyte of storage that are looking to migrate hundreds of terabytes. Licenses typically run from $2,000 to $2,500 per TB, depending on how many TBs are migrated.
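To put that per-TB licensing in perspective, here is a minimal back-of-the-envelope sketch using only the $2,000–$2,500 range quoted above; the 300 TB migration size is a hypothetical example, not a figure from Incipient:

```python
# Rough bounds on iADM licensing cost for a migration project,
# based on the quoted $2,000-$2,500 per TB range. The 300 TB
# project size below is illustrative only.
def migration_license_cost(tb_migrated, rate_low=2000, rate_high=2500):
    """Return (low, high) licensing cost bounds in dollars."""
    return tb_migrated * rate_low, tb_migrated * rate_high

low, high = migration_license_cost(300)  # a hypothetical 300 TB migration
print(f"300 TB migration: ${low:,} - ${high:,}")
# prints: 300 TB migration: $600,000 - $750,000
```

At those rates, even a mid-size project runs well into six figures, which is the context for Incipient's pitch that the tool still beats "huge services fees."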
Since the migration requires a few hours of downtime for hosts to recognize the new targets, it’s usually done on weekends. After the migration, customers can pull out the iADM software.
Infantino says the new product doesn’t mean that Incipient is backing off switch-based virtualization. He won’t reveal how many customers Incipient has, but says it does have some $1 million-plus deals with major financial institutions using iNSP. Now it’s working on upgrading iNSP to add remote replication, support for software from VMware, NetApp’s Onaro and Akorri, and greater scalability.
It’s been a while since I’ve been able to spend some quality time behind the keyboard; I’ve been suffering from a gigantic “honey-do” list and it was difficult for me to use some huge work emergency to weasel out of it! In the time I haven’t been blogging there’s been some major storage news: Sun says that SSDs are ready for arrays while Western Digital is reportedly developing 20,000 rpm drives.
I’ve been chomping at the bit to get my hands on an SSD for my desktop. After going SAS, I’m open to the prospect of even higher performance for my desktop disk subsystem, and it’s something I think I’m going to be chasing from now on.
We’re currently rolling out SSDs in a limited deployment for highly available single hard disk bay blades (say that three times fast). IBM has managed to fit a RAID 1 setup in a single drive bay for their line of blades and we like the performance numbers as well as the idea of no moving parts at all in a blade with onboard storage. Not only will we have the higher MTBF of the SSDs but the read performance is crazy!
Three months ago I was talking about how SAS would spell doom for SATA. Well, now I’m ready to eat some crow because in no way did I expect SSDs to come this close to affordability this quickly. Take a look at the non-server market: Lenovo and Apple are already offering laptop models with SSDs exclusively.
SSDs have a lower power and heat footprint and have great read speeds. Write speeds aren’t as good as the read speeds, but slap a couple together in RAID 0 and that issue becomes moot. SSD looks like a shoo-in to be the next big thing. Or does it. . . ?
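The RAID 0 point is simple arithmetic: striping splits I/O across drives, so ideal sequential throughput scales with drive count. A minimal sketch, using made-up round numbers rather than measured drive specs:

```python
# Why RAID 0 striping offsets slower SSD write speeds: ideal sequential
# throughput scales with the number of striped drives. The per-drive
# figures below are hypothetical round numbers, not benchmarks.
def raid0_throughput(per_drive_mb_s, n_drives):
    """Ideal aggregate MB/s when I/O is striped evenly across drives."""
    return per_drive_mb_s * n_drives

ssd_read, ssd_write = 250, 100   # assumed MB/s for a single early SSD
print(raid0_throughput(ssd_write, 2))  # two striped SSDs: 200 MB/s writes
```

In practice controller overhead keeps real-world numbers below this ideal, and RAID 0 doubles the exposure to drive failure, so it is a throughput trade, not a free lunch.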
My take on the possibility of a 20,000 rpm drive is that Western Digital might not like the idea of the next big thing being something that isn’t, well, theirs. They also just released the 10,000 rpm Velociraptor SATA drive, which is in itself something spectacular, since it brings the performance of higher-rpm SAS down to the cheap, ubiquitous SATA controller.
Details are sketchy when it comes to the potential heat and power of the alleged 20,000 rpm drive. It may not even make it to market, and there might not be much place for it with solid state drives delivering even faster performance with less in the way of power and cooling requirements. But me, I’m interested to see a knife fight between traditional disks (and maybe hybrids) and SSDs, since it can only result in me getting a faster storage subsystem, and it may lower prices even more.
In fact, I’m a little annoyed that it’s taken the disk industry this bleeding long to come up with an additional 5,000 rpm. I’m sure some of you out there are in the hard drive industry and have a list of reasons why it’s a hard thing to do. To which I, the jaded technologist/consumer, say, “So?” We live in an age where we have teraflop chips on video cards, where chips in mp3 players have more computing power than the first Space Shuttle, where cars park themselves and where we can see full color photos beamed back from Mars. MARS!! If you ask me, 15,000 rpm has been the ceiling for waaaaaaaay too long.
IDC confirmed Dell’s claim that it has moved into the No. 1 spot in iSCSI SAN market share following its $1.4 billion acquisition of EqualLogic. According to the IDC quarterly storage numbers for the first quarter released today, Dell passed EMC and NetApp to take the lead with 27.7 percent of the iSCSI market. Overall, IDC pegged iSCSI as accounting for 6.1 percent of the overall external disk storage revenue — up from 4.1 percent a year ago and 5.1 percent in the fourth quarter of 2007.
It appears that a good chunk of the market share Dell gained came at the expense of its storage partner EMC. EMC, which co-markets Clariion systems with Dell, slipped from 18.2 percent to 15 percent of iSCSI market share in one quarter. Does that mean Dell customers are buying EqualLogic systems instead of Clariion AX iSCSI boxes? Maybe, although Dell still accounts for about one-third of overall Clariion sales.
NetApp increased its iSCSI share from 18 percent to 20.5 percent to move ahead of EMC while slipping behind Dell. No other major vendor has more than 4.7 percent of the iSCSI market, with the “other” (all vendors except for Dell, EMC, NetApp, HP, IBM, Hitachi, and Sun) category at 29.3 percent – nearly twice as much as “others” have of the Fibre Channel market. The others’ iSCSI share actually came down – it was 40.1 percent in the fourth quarter — reflecting the shift of EqualLogic’s revenue from others to Dell.
It’s a good bet that LeftHand Networks sits in fourth place overall with a large piece of the others’ share. LeftHand was considered a close second to EqualLogic among private iSCSI vendors before Dell scooped up EqualLogic. LeftHand remains private and we don’t know its financials, but marketing VP Larry Cormier said it is picking up 200 to 300 customers a quarter. . .some of those because EqualLogic is no longer independent. “Some shops just won’t buy Dell,” he said.
In any case, with iSCSI drivers such as server virtualization and the eventual emergence of 10-Gig Ethernet fueling interest and the vendor landscape changing, the iSCSI space will be interesting to watch over the next year. . .just as converged Fibre Channel/Ethernet networks may blunt iSCSI’s encroachment into the enterprise.
“Fibre Channel over Ethernet is like a fast car,” said consultant Howard Goldstein of Howard Goldstein Associates Thursday in his session about FCoE at Storage Decisions Toronto. “It looks great, but it probably won’t run as well as you thought or be as cheap as they say it’s going to be.”
Goldstein’s point basically boiled down to: if Ethernet’s good enough to be the transport layer, why bother layering an FC protocol on top of it? He dismissed the common answer to that question, which is that mixing FC and Ethernet will allow users to maintain existing investments in FC systems, saying it’s a myth. “You’re going to have to buy brand new HBAs and Fibre Channel switches to support FCoE,” he said. “Is this really the time to reinvest in Fibre Channel infrastructure?”
Instead, Goldstein pointed out that FC services such as log-in, address assignment and name server, to name a few, could be done in software. “Those services don’t have to be in the switch–Fibre Channel allows them in the server,” he said. He also questioned the need for a revamping of the Ethernet specification for “Data Center Ethernet” capabilities. “Is converged Ethernet a real requirement or a theoretical requirement?” he said. He also questioned whether or not storage traffic was really fundamentally different from network traffic.
However, users at the show said FCoE is still so new they weren’t sure whether or not to agree with Goldstein. “It’s too immature to say right now,” said Maple Leaf Foods enterprise support analyst Ricki Biala. He also pointed out an all-too-true fact: in the end, such technology decisions will be based on equal parts politics and technology. “It’s easier to convince management to buy in if you’re going the way the rest of the market’s going,” he said.
Your thoughts and quibbles on FCoE are welcome as always in the comments.
Solid state isn’t the only thing looming on the horizon in the enterprise storage drive space. Drive makers say small form factor (2.5-inch) SAS is poised to encroach on 3.5-inch Fibre Channel’s turf in storage arrays.
Seagate is eyeing enterprise storage arrays with drives such as the Savvio 10k.3 that it launched this week. At 300GB, the self-encrypting drive offers more than twice the capacity of Seagate’s previous SAS drives. It also supports the SAS 2 interface. SAS 2 includes 6 Gbit/s speed and other enterprise features likely to show up in storage systems by next year.
“300-gig drives will be more attractive to storage vendors, and they’re starting to find the small form factor drives more compelling,” said Henry Fabian, executive director of marketing for Seagate’s enterprise business. “You’ll start to see the small form factor ship in the second half of the year in storage arrays because of higher capacity and lower power requirements.”
Joel Hagberg, VP of business development for Seagate rival Fujitsu Computer Products of America, also sees small form factor SAS coming on strong in enterprise storage. “The storage vendors all recognize there is a shift coming as we get to 300 gigs or 600 gigs in the next couple of years in the 2.5-inch package,” he said. “We’re cutting power in half and the green initiative in storage is increasing.”
As for Fibre Channel, the drive makers agree you won’t see hard drives going above the current 4-Gbit/s bandwidth level.
“Four-gig is the end of the road for Fibre Channel on the device level,” Hagberg said. “All the external storage vendors are looking to migrate to SAS.”
By the way, Hagberg says Fujitsu isn’t buying into the solid state hype for enterprise storage yet. He considers solid state to be a few years away from taking off in storage arrays.
“There’s a lot of industry buzz on solid state, and I have to chuckle,” he said. “I meet with engineers of all storage vendors and talk about the hype versus reality on solid state drives. Every notebook vendor released solid state in the last year. Are any of those vendors happy with those products? The answer is no. The specs of solid state performance look tremendous on paper, but a lot less is delivered in operation.”
Google has revamped its business search site, and rechristened it Google Site Search (it was previously called Custom Search Business Edition). It’s the SaaS version of the Google Search Appliance, but it’s limited to website data because the hosted software can only see public data.
So essentially it’s custom search for e-commerce websites. Almost completely unrelated to storage. . .except when it came to one of Google’s customer references for Site Search: EMC’s Insignia website, which sells some of EMC’s lower-end products online. Prior to implementing the site search, apparently the Insignia site had no search functionality. Visitors had to page through the site manually–including when it came time to look for support documents or troubleshooting tips.
EMC’s webmaster Layla Rudy was quoted in Google’s collateral as saying that sales have gone up 20% since they added search to the site. Moreover, according to her statement, there has been an 85% decrease in customer-requested refunds now that customers can find the correct product in the first place, as well as its associated support documents. What’s especially amazing about this to me is that Insignia is a relatively new business unit, rolled out by EMC within the last three years–it’s not like it was the ’90s, when Google was relatively unknown and site search was a “nice to have” feature of most websites.
Of course, I don’t know how long the Insignia site was up without search, or what the absolute numbers are when it comes to the refund decrease–85% can be 8.5 out of 10 or 85 out of 100. (EMC hasn’t returned my calls on this).
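That caveat about relative versus absolute numbers is worth spelling out: an 85% decrease says nothing about volume without a baseline. A quick sketch with two hypothetical baselines (neither comes from EMC or Google) shows how different scales produce the same headline figure:

```python
# An 85% decrease is a relative figure; without the baseline it hides
# the absolute volume. Both hypothetical baselines below yield "85%".
def pct_decrease(before, after):
    """Percentage decrease from a 'before' count to an 'after' count."""
    return 100 * (before - after) / before

print(pct_decrease(10, 1.5))   # tiny volume: 10 refunds down to 1.5 -> 85.0
print(pct_decrease(100, 15))   # larger volume: 100 down to 15 -> 85.0
```

Either scenario lets a vendor claim the same 85%, which is exactly why the absolute refund counts would be the interesting number here.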
Meanwhile, with EMC getting into cloud computing, I wonder what kind of search — if any — it makes available on backup/recovery or archiving SaaS websites. Right now Google claims its Search Appliance can index onsite and offsite repositories and, unlike the SaaS version, can search protected private data. While there are no plans to make this a feature of the hosted version, service providers can offer hosted search by managing their own appliance in the cloud. Whatever they chose, hopefully it was a Day One feature.