Seagate sent its top two executives packing this morning, and they’ll be followed by 10% of the U.S. staff by the end of the month.
The surprising moves are the latest sign that all is not well with the disk drive maker, which already cut its revenue forecast for last quarter from $3.05 billion to $2.85 billion.
Former CEO and current chairman Stephen Luczo is replacing CEO Bill Watkins, who will stay on as an adviser to Luczo according to Seagate’s news release. What Seagate didn’t put in its release – but added in its SEC filing – is that president Dave Wickersham resigned and will be replaced by current CTO Bob Whitmore.
The SEC filing also confirmed the layoff of 10% of the U.S. workforce, saying the cuts will “impact a broad range of departments, including research and development” and are the result of the troubled economy. Seagate will probably give more details on the executive changes and layoffs when it reports earnings Jan. 21.
Financial analyst Aaron Rakers of Stifel Financial Corp. says the changes show that things might be even worse than anybody thought at Seagate. He says while it’s a good sign that Seagate is making the tough decisions to realign the company after recent struggles, the shakeup could be “a signal that more meaningful negatives are going on within the company.”
Luczo was an investment banker at Bear, Stearns & Co. before serving as Seagate’s CEO from 1998 to 2004. During that period, Seagate went private in 2000 before re-emerging as a public company in 2002. He also sits on the board of storage system vendor Xiotech, which Seagate spun out during his term as CEO.
Sun’s Chief Identity Strategist Sachin Nayyar and I had an interesting discussion today about Sun’s plans to bring together role-based access management with storage provisioning this year.
Nayyar, who was CEO of identity management software maker Vaau when Sun acquired it in late 2007, said that Sun is now looking to integrate role-based identity management software with storage provisioning. So, for example, when a new employee joins a company, provisioning of storage on a shared device could be triggered by a call from the software registering that employee’s identity on the network. When that employee leaves the company, the identity management software could also remove the employee’s data from production storage, migrating it to archival storage or making it a part of the employee’s supervisor’s storage capacity.
Nayyar said the identity management software has some data migration capabilities, so that it could handle that process, or it could integrate with other elements in the environment. Policies could also be set to migrate an employee’s data to archival storage when a project they’re involved with finishes, or a department they’re in is restructured.
“It’s something we already do today with Outlook,” Nayyar said. “We’re not sure on the details with the open storage software, if it would provide some of the migration capability, but our identity software has the ability to move content.”
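In concept, the workflow Nayyar describes is event-driven: an identity event (an employee joining or leaving) triggers a corresponding storage action. The sketch below models that idea in miniature; every class, function, and field name here is my own illustration of the concept, not Sun’s actual software or APIs.

```python
# Minimal sketch of identity-driven storage provisioning, as described in
# the interview. All names are hypothetical, invented for illustration.
from dataclasses import dataclass, field

@dataclass
class StoragePool:
    shares: dict = field(default_factory=dict)   # user -> list of files
    archive: dict = field(default_factory=dict)  # departed user -> files

    def provision(self, user):
        # New employee: carve out a share on the shared device
        self.shares.setdefault(user, [])

    def deprovision(self, user, supervisor=None):
        # Departing employee: fold data into the supervisor's capacity
        # if policy says so, otherwise migrate it to archival storage
        data = self.shares.pop(user, [])
        if supervisor is not None:
            self.shares.setdefault(supervisor, []).extend(data)
        else:
            self.archive[user] = data

def handle_identity_event(pool, event):
    """The hook an identity management system would call on join/leave."""
    if event["type"] == "join":
        pool.provision(event["user"])
    elif event["type"] == "leave":
        pool.deprovision(event["user"], event.get("supervisor"))
```

The point of the design, as I understand it, is that the storage side only reacts to identity events; the policy (archive vs. reassign to a supervisor) lives with the identity software, which is why the approval workflow Nayyar mentions matters.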
There are always political ramifications within a data center’s staff when software from one discipline (identity management is generally part of the security infrastructure) looks to control a task or device in another (in this case, provisioning storage). However, Nayyar pointed out that users across data centers are already integrating with access management software such as Microsoft’s Active Directory. “Every provisioning process has a set of approvals, and the storage admin has to sign off before anything is triggered,” he said. “It’s similar to what’s done today when an account is created with Active Directory: the administrator has to approve it. It’s not a big jump in the identity space.”
Given the challenges that are facing Sun of late and the fact that the idea is still in the “discussion phase” within Sun, as Nayyar put it, it’s probably best to take it with a grain of salt, but as a concept I found it interesting. I wouldn’t be surprised to see similar offerings emerge from other companies with storage and security IP, like EMC and IBM. During a conversation I had with EMC CTO Jeff Nick last month, he emphasized the importance of linking data across repositories to individual users.
I can also see this potentially playing a role in multi-tenant cloud environments, particularly in the consumer and SOHO space, where storage needs to be organized according to an individual client’s identity. The automation involved would also presumably appeal to operators of sprawling cloud data centers. Meanwhile, Sun yesterday purchased a Belgian company called Q-Layer, whose software automates the deployment and management of public and private clouds.
Going against the grain of tech companies, EMC says it met its revenue forecasts last quarter. Still, EMC is joining the legions of corporations who are laying off workers.
EMC made both revelations in a release Wednesday after the market closed. Despite its sunny results, it added to the unemployment gloom by saying it would cut around 2,400 jobs – 7% of its headcount – by October. The layoffs are part of a restructuring that EMC hopes will save it $350 million this year and $500 million next year. According to the release, the restructuring will “consolidate back office functions, field and campus offices; rebalance investments towards higher-growth products and markets; reduce management layers; and further reduce indirect spend on contractors, third-party services and travel.”
It will be interesting to see what impact this will have on EMC products that don’t fall into that higher-growth category.
As for the fourth quarter, EMC said it expects to report revenues of around $4 billion, which would be around a 4% increase from last year and a bit higher than financial analysts expected. It gave no details on which products performed well, but the release comes at a time when storage and overall IT sales forecasts look dim, and it follows announcements from LSI and Emulex that they failed to meet expectations last quarter. EMC will give more details when it officially reports its earnings Jan. 27.
In a note to clients today, financial analyst Kaushik Roy of Pacific Growth Equities wrote that he believes storage vendors Brocade and QLogic also lived up to expectations last quarter, but “we are cautious about NetApp.” Roy also wrote that EMC’s results show “that the end-demand for storage remains relatively healthy. … The storage market appears to be holding up better than other sectors of IT. Compliance and disaster recovery/business continuity remains a big driver for storage spending.”
Like clouds themselves, cloud storage is available in many shapes and sizes. The latest shape is the size of an AC power adapter that networks PCs to online storage.
CTERA came out of stealth today by unveiling its CloudPlug device, which connects to an Ethernet router and USB storage devices to provide backup and file sharing for small offices.
CloudPlug sends data online, and backed up data appears as shared drives on local PCs. While the first version is for the prosumer market or offices up to five people, CTERA CEO Liran Eshel says larger devices will eventually be available that are better suited for businesses.
Eshel calls CTERA’s platform Cloud Attached Storage, and he plans to sell it through service providers. The providers will add their own cloud storage or use third-party services such as Amazon Web Services.
“To use online services you need an application,” Eshel said. “But we put sharing and backup in one device. If you share on a local network, you don’t want to go back and forth with Amazon, it’s very slow. We do the sharing locally, and use Amazon for backup.”
CTERA also offers a management portal for service providers. But these are still early days for Cloud Attached Storage. CTERA has no service providers lined up yet, Eshel isn’t giving pricing details, and all he’ll say about availability is that he expects it sometime this year.
Iron Mountain’s Connected PC backup product will be available for Mac users starting in March, according to a press release the company issued today at Macworld 2009. As with the PC version, Connected for Mac offers automated backup and centralized management of desktops and laptops both inside and outside the corporate firewall, and users can restore their own files directly without helpdesk intervention.
The offering follows EMC Corp.’s Mozy (now part of new EMC subsidiary Decho Corp.) into the market; the online backup service provider launched consumer and Pro versions of its Mac backup over the last year.
Another differentiator for MozyPro in the prosumer/SOHO market has been the ability to centrally manage storage for multiple workstations. Now another company, Rebit, will offer PC-based shared backup to an external hard drive for up to six clients. And so the consumer/prosumer storage space continues to move along at warp speed compared with the enterprise.
In other Mac storage news, ATTO Inc. announced that its iSCSI software initiator for Mac OS X servers, Xtend SAN, has new features, including HeaderDigest and DataDigest with error level 1 processing, used to guarantee the integrity of iSCSI header data. The software already supports features such as Challenge-Handshake Authentication (CHAP), Internet Storage Name Service (iSNS), Login Redirect and iSCSI error handling and recovery.
At the rate we’re seeing vendors burrow into the prosumer and/or Mac markets, I’m expecting to see somebody start offering all of the above, plus new features, by Friday.
Europe-based engineering conglomerate Siemens AG lost a patent-infringement claim against Seagate in U.S. District Court in California a little over a week ago, Bloomberg reported. According to court documents, the judge in the case, James V. Selna, determined prior to the jury trial that Seagate had in fact infringed the patent, No. 5,686,838. According to the Bloomberg report, the patent covers “a key component in a sensor-layer system that measures contacts on hard-disk drives.” The court documents refer to it only as a “type of sensor.”
Selna said that while he had ruled that patent infringement had taken place, he had made no ruling as to whether the patent was “valid, or whether it is enforceable, or whether any infringement by Seagate was willful, or what damages, if any, Siemens is entitled to.”
The jury found that the patent was not enforceable due to prior art by IBM, with which Seagate has a licensing agreement in place. Siemens had sought damages of $160 million, which were not granted.
The last two years have seen plenty of patent litigation among storage companies, including a battle between Sun Microsystems and NetApp Inc. that is still ongoing. However, other patent lawsuits that have made a splash in the storage industry, such as Quantum’s suit against Riverbed, have been settled out of court or have otherwise fizzled, like this one. In the Sun case, at least one of the patents NetApp cited in suing Sun has been taken off the table by the U.S. Patent and Trademark Office due to similar enforcement issues. So far, the lawsuits are looking like key talking points for those who argue that the patent system badly needs reform.
On the heels of the addition of a change management tool to Symantec’s CommandCentral SRM software, NetApp last week released version 5.0 of SANscreen, the change management application it acquired by buying Onaro.
Among the updates, according to an email from Steve Cohen, Product Marketing Manager for SANscreen:
- More Heterogeneous Support – HDS USP, EMC CLARiiON (performance) and EMC Celerra (discovery). SANscreen was built to provide heterogeneous visibility, and this concept remains core to our values.
- Data Warehouse and embedded business intelligence engine based on IBM’s Cognos. This allows rollup of multiple SANscreen sites into a single Data Warehouse for global storage infrastructures, and offers a possible interface to SQL-accessible 3rd party apps, including configuration management databases (CMDBs), for the overall data center. New custom reporting from the data warehouse includes storage chargeback.
- VMware vCenter Plug-in – SANscreen data is now available within vCenter, so server teams can use one tool to manage their environment.
- iSCSI Support – Single console to monitor multi-protocol environments.
While reporting tools have been steadily adding features and are now available from major vendors, this category of products remains a tough sell, according to analysts. For more on that, see our coverage of Symantec’s CommandCentral announcement.
I have just spoken with backup guru Curtis Preston, who confirmed that GlassHouse let him go at 9:30 this morning. He said there was talk of a drive toward profitability and cutting costs.
But I’m wondering if there’s something else behind it. I’m amazed GlassHouse would let someone of Preston’s stature go the week before Christmas, and – according to him – with no advance notice. When I posted the news on Twitter, his former GlassHouse colleague Stephen Foskett also seemed more than mildly surprised.
That said, I am sure Preston will land on his feet, and soon. His visibility in the industry is huge. One of my fellow TechTarget storage bloggers, Taylor Allis, formerly of Sun and now with Capstone Technology Solutions, immediately tweeted back to me saying Curtis should call him if he’s looking. I’m sure that’s only the beginning.
I don’t hear from Avid much, mainly because the vendor is mostly focused on video-editing software. So I took its invitation to tour the demo data center at its Tewksbury headquarters as yet another sign that storage has become far more mainstream than it was when I first joined the industry.
That’s a pattern I’ve been noticing a lot lately: storage has emerged at the center of everything and become much more interdisciplinary. It’s partly because people are starting to think more about the overall data center than its separate spheres, but also because the continued digitization of data and the enrichment of data formats have naturally brought more attention to the problem of where we’re going to put it all.
Avid storage senior product manager Bill Moren agreed when I met with him yesterday. “It’s not so much an accessory as it used to be,” he said. “It’s a big part of the end to end workflow that users are concerned about.”
The video processing world in particular has been among the verticals seeing data absolutely explode, thanks not only to new mechanisms for digital delivery but also to the advent of ever-bigger HD formats for film and video. Avid’s networked storage system, Unity ISIS, was updated Dec. 1 to address that ballooning capacity, adding support for a 10 GbE connection directly to the desktop workstation for faster access (previous versions supported 10 GbE only to the switch), support for 1 TB drives that doubles total system capacity to 384 TB from 192 TB, improved internal bandwidth of 400 MBps per storage chassis (up from 250-300 MBps), and LDAP/Active Directory integration.
The ISIS architecture consists of chassis called Storage Engines. Within each engine is a series of blades containing CPU, memory, and two disk drives. I’m sure you can see where this is going: the blades operate in parallel, dividing pieces of files among themselves according to software algorithms. Data is reassembled for playback using keys issued by a metadata server. Moren said the metadata server is usually a piece of white-box commodity hardware. Some in the industry argue that the metadata node can become a bottleneck. I asked Moren about that, and it kicked off a conversation about what Avid is trying to do vs. what the data center vendors I usually talk to are aiming for.
For people looking to work on real-time media files, Moren said, the amount of storage you can cram into one box is less important than it is for users running big shared repositories of files in the enterprise. What matters most isn’t the burst-rate speed or peak performance of the box either, but rather its ability to deliver data in an almost metered fashion so frames of video aren’t dropped. Moren described the process of Avid’s parallel file system feeding data from the parallelized blades through the metadata node to the client as being “like a metronome.” It’s the predictable steadiness, not the raw speed, that’s most important, he said.
I found that interesting, especially because of all the storage vendors out there looking to capture the video content delivery market. Many of those vendors, however, reference customers in the 3D rendering space, or special effects houses for live-action films, which often work on single frames rather than live streams. This isn’t a distinction I’ve considered before.
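The striping idea Moren described (files divided into pieces across parallel blades, then reassembled from a metadata map) can be sketched in miniature. This is purely an illustration of the concept, not Avid’s actual algorithm; the round-robin placement and tiny chunk size are my own assumptions.

```python
# Toy illustration of striping a file across parallel blades, with a
# metadata map recording where each chunk lives. Not Avid's algorithm.
CHUNK = 4  # bytes per chunk; deliberately tiny for the example

def stripe(data, num_blades):
    blades = [[] for _ in range(num_blades)]
    layout = []  # metadata: (blade, index) for each chunk, in file order
    for i in range(0, len(data), CHUNK):
        blade = (i // CHUNK) % num_blades      # round-robin placement
        blades[blade].append(data[i:i + CHUNK])
        layout.append((blade, len(blades[blade]) - 1))
    return blades, layout

def reassemble(blades, layout):
    # A client walks the metadata map and pulls chunks in order,
    # analogous to playback keys issued by the metadata server
    return b"".join(blades[b][idx] for b, idx in layout)
```

The “metronome” quality Moren talks about would come from the delivery scheduling on top of a layout like this, which is exactly the part a toy model can’t show: steady, metered reads rather than bursts.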
While products that incorporate the standard are still down the road, pNFS is expected to speed data transfer, eliminate bottlenecks and increase the scalability of clustered NAS products. Parallel NFS provides a specification for placing a metadata server outside the data path of servers attached to a multinode storage system. Storage nodes can be held together with another clustered file system, while pNFS exposes the block mapping of files and objects to the client. The client then receives those blocks through multiple parallel network channels and reassembles them for presentation to the user.
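To make that data flow concrete, here is a toy model: a metadata server hands the client a layout mapping blocks to storage nodes, and the client fetches those blocks over parallel channels and reassembles them. This is a conceptual sketch of the flow described above, not the NFSv4.1 protocol itself; the node names and functions are invented for illustration.

```python
# Conceptual model of the pNFS read path: metadata server outside the
# data path, data fetched from storage nodes in parallel. Illustrative only.
from concurrent.futures import ThreadPoolExecutor

storage_nodes = {  # stand-ins for data servers: node -> {block_id: bytes}
    "node-a": {0: b"par", 2: b"NFS"},
    "node-b": {1: b"allel "},
}

def get_layout(filename):
    # Metadata server: returns only the block mapping; it never moves data
    return [("node-a", 0), ("node-b", 1), ("node-a", 2)]

def fetch(block):
    node, block_id = block
    return storage_nodes[node][block_id]

def read_file(filename):
    layout = get_layout(filename)
    with ThreadPoolExecutor() as pool:   # the parallel network channels
        chunks = list(pool.map(fetch, layout))  # order is preserved
    return b"".join(chunks)
```

The design point is visible even in the toy: once the layout is handed out, the metadata server is out of the data path, so adding storage nodes adds bandwidth without adding a bottleneck.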
Some storage administrators said they hope VMware will offer a client for pNFS to help overcome storage I/O issues with the server virtualization software. Eisler also blogged in August with some further clarifications regarding that idea.