Storage Soup


September 18, 2007  10:51 AM

Web 2.0 companies get deeper into data storage, email SaaS

Beth Pariseau

Yahoo bought Zimbra Inc. today for $350 million. The New York Times reports that the acquisition is meant to help Yahoo better compete with Google and its Gmail service, which Google began offering to businesses this year.

We’ve covered Gmail quite a bit, both on the main news site and this blog. We covered the launch of Google Apps for Enterprise and its 10 GB inbox, and then spoke with one early adopter of Apps about how he’d used it to save time and money on email storage. We’ve also discussed some of the “gotchas” with software as a service for enterprises, and fielded Google’s response to those points. Finally, we’ve seen Yahoo peeking over Google’s shoulder a bit, with its announcement of unlimited inbox capacity for its webmail.

Shortly after enterprise storage experts started questioning the security and compliance of Google’s offering, the company went out and bought an enterprise archiving player, Postini. By that time, analysts were remarking that software as a service, particularly for email archiving and backup, was officially back. “People now will say, ‘oh, no one’s going to get rid of Exchange’,” ESG analyst Brian Babineau said at the time. “But it’s a generational thing — newly graduated employees are joining businesses from college already standardized on Gmail, and many corporations are saying, why manage Exchange when employees are already used to this Web-based, outsourced interface?”

It appears Yahoo sees the same writing on the wall. Today, like Google before it, Yahoo purchased a company to bring in some enterprise-level expertise for its email SaaS.

The storage market is already a little bit familiar with Zimbra. It’s an open-source messaging system, so most of the product’s features have little to do with storage, per se. But Zimbra also began announcing archiving customers recently, including an ISP in Dallas, Texas, that plans to offer Zimbra archiving to its customers in local K-12 school districts.

As with Postini, the company that Google acquired, Zimbra’s software has most of the features on the enterprise email archiving checklist, including automatic .pst file discovery and migration, and the ability to index, search and export messages or mailboxes for e-discovery and compliance purposes.

Where it differs from other products is in its ability to support multiple email systems, including Exchange, Lotus Domino and GroupWise, as well as its own email application, and to hold messages from any combination of those applications in the same repository, the company claims. A disadvantage for Yahoo, meanwhile, is that Zimbra’s archiving product is relatively unproven in the market, having just become generally available July 23.

Interestingly enough, Google had an announcement of its own today–the release of Google Presentations, a Web-based competitor to PowerPoint. Clearly, Google is going after Microsoft hard, but over in this little corner of the IT market, I’m having a bit of a chuckle today–in responding to the “gotchas” in a Q&A with Storage Soup back in April, Google Enterprise product manager Rajen Sheth told me the following:

We’re definitely not trying to duplicate Microsoft Office. The way I would think of it is that Office is very well designed for individual productivity–an individual preparing something to present to a group of people. We’re focusing Google Docs and Spreadsheets on collaborative use case scenarios.

We’ll never know whether Google saw an opportunity and changed its mind, or whether the market taking off influenced its decision to release Presentations. But one thing is clear as this trend continues, with reports also out this week that Facebook is contemplating throwing its hat into the application-storage ring as well: Web 2.0 giants are fast becoming the successors to Microsoft and IBM as the dominant force in 21st-century computing. The more news I see like this, the more I’m inclined to side with Babineau–the times, they are a-changin’.

September 14, 2007  7:49 AM

New data backup SaaS players emerge

Beth Pariseau

Once upon a time in the storage market, storage service providers were all the rage. Then, the tech bubble burst and most of them went the way of the dodo bird.

But with storage growth in recent years forcing companies to consider new strategies for managing data, storage service providers are making a comeback. Within that market space, meanwhile, backup and recovery is the most popular area, as users struggle with the cost of protecting more and more data, the distraction of backup and recovery management from core business and IT operations, and ever-increasing regulation.

Naturally, this is the market where the lion’s share of new players are springing up. EMC Corp. and Symantec Corp. are among the heavy hitters that say they’re planning backup SaaS. But there are also some new and emerging vendors that are gaining attention in the market with the return of interest in outsourcing.

One of the companies that’s made its presence known in recent weeks is Nirvanix Inc., which is aiming to be a business-to-business outsourcer for large companies. It’s come out of the gate overtly challenging Amazon’s S3 service, saying it can overcome the performance issues that have been reported by some large S3 users. The service is also offering a 99.9% uptime SLA to customers.

Nirvanix claims it can offer better performance because it is constructing a global network of storage “nodes” based on the way Web content delivery systems work — by moving the storage and processing power closer to the user, cutting down on network latency.

Within each of Nirvanix’s storage nodes is a global namespace layered over a clustered file system, running on Dell servers residing in colocation facilities. These nodes also perform automatic load balancing by making multiple copies of “popular” files and spreading them over different servers within the cluster. With this storage infrastructure, the company is claiming that it can offer users a wider pipe as well as a faster one, allowing file transfers of up to 256 GB. Moving forward, according to CEO Patrick Harr, the company plans to offer an archival node for “cold storage” within 6 months.
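
For readers who want to picture the content-delivery analogy, here is a minimal Python sketch of the two ideas described above: route each request to the lowest-latency node, and replicate “popular” files across servers within a node’s cluster. The node names, latency figures and thresholds are hypothetical, not details of Nirvanix’s actual design.

    # Illustrative sketch only -- not Nirvanix code.
    import random

    NODES = {"us-west": 20, "us-east": 70, "europe": 110}  # hypothetical round-trip times in ms

    def closest_node(latencies_ms):
        """Pick the storage node with the lowest measured round-trip time."""
        return min(latencies_ms, key=latencies_ms.get)

    def rebalance(cluster_servers, access_counts, hot_threshold=1000):
        """Place extra copies of frequently read files on more servers in the cluster."""
        placement = {}
        for path, hits in access_counts.items():
            copies = 3 if hits >= hot_threshold else 1
            placement[path] = random.sample(cluster_servers, min(copies, len(cluster_servers)))
        return placement

    print(closest_node(NODES))
    print(rebalance(["srv1", "srv2", "srv3", "srv4"], {"/video.mp4": 5000, "/doc.txt": 2}))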

One potential issue for the company in comparison to S3 is a lack of financial clout to match Amazon. Building out the storage node infrastructure will be an expensive proposition in comparison to creating software and running a typical data center, and so far, the company says it has received just $12 million in funding, some from venture capital firms and some from research grants. However, it also says 25 customers have already signed up for beta testing, and says one of those customers is supporting 50 million end users.

Base pricing for the service is 18 cents per stored gigabyte per month, a “slight premium” over Amazon’s price according to Harr. The company is hoping that it can increase sales volume and drive down the price.

Meanwhile, on the consumer/SMB side, a company called Intronis LLC is souping up its features in the hopes of gaining traction in the low end of the storage market. Version 3.0 of its eSureIT backup service will allow users to create a tapelike rotation scheme for files, creating backup sets and setting policies for data retention on a weekly, monthly and yearly basis. The company has added a plugin it calls Before and After, which will allow users to create scripts dictating what their computer systems should do before, during or after engaging with the Intronis service — for example, the script can have the user’s machine shut down applications prior to backup and restart them after backup has finished. Another new plugin will allow mailbox and message-level backups and restores of Exchange databases, and adds a text search for email repositories.
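
To illustrate the Before and After idea in the simplest possible terms, the sketch below wraps a backup job with a pre-step and a post-step on a Windows machine. The service name and backup executable are placeholders made up for the example; this is not Intronis’s actual scripting interface.

    # Hypothetical pre/post hooks around a backup job -- names are placeholders.
    import subprocess

    def run_backup_with_hooks():
        # "Before": stop an application service so its files are quiet during backup.
        subprocess.run(["net", "stop", "SomeLineOfBusinessApp"], check=True)
        try:
            # The backup job itself (placeholder path and arguments).
            subprocess.run([r"C:\Program Files\ExampleBackup\backup.exe", "/set:Weekly"], check=True)
        finally:
            # "After": restart the service even if the backup step failed.
            subprocess.run(["net", "start", "SomeLineOfBusinessApp"], check=True)

    run_backup_with_hooks()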

But the biggest new development, and the one that’s taken the company the better part of two years to develop, according to Sam Gutmann, co-founder and CEO, is a feature the company calls Intelliblox, which, like enterprise-level backup services such as Asigra’s, backs up only changed blocks over the wire. The feature uses a set of checksum and hashing algorithms to identify blocks and keep them together with their corresponding files. (An existing feature of Intronis’s service is total separation between the company’s admins and users’ data: each customer is given an encryption key to access its storage at Intronis’s data center, and Intronis says it has no way of reading any of its customers’ data.)
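
For the curious, here is a generic Python sketch of the changed-block technique the paragraph describes: hash each fixed-size block of a file, compare against the hashes recorded on the previous run, and send only the blocks that differ. It illustrates the general approach, not Intronis’s Intelliblox implementation, and the 64 KB block size is an arbitrary choice.

    # Generic changed-block detection with hashes -- illustrative only.
    import hashlib

    BLOCK_SIZE = 64 * 1024  # arbitrary block size for the example

    def block_hashes(path):
        """Return one SHA-256 digest per fixed-size block of the file."""
        hashes = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                hashes.append(hashlib.sha256(block).hexdigest())
        return hashes

    def changed_blocks(path, previous_hashes):
        """Compare current block hashes to the last run; return block numbers to send."""
        current = block_hashes(path)
        return [i for i, digest in enumerate(current)
                if i >= len(previous_hashes) or digest != previous_hashes[i]]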

This use of hashing algorithms also has this blogger wondering if they might also be able to offer fixed-content archiving down the road.


September 13, 2007  2:58 PM

The Bill Gates Dream House and your IT career

Beth Pariseau

This post marks the beginning of a new category here at Storage Soup: Around the Watercooler, a catchall for the technology stories that have nothing to do with storage that we think you’ll find interesting anyway.

 Today there was a press release on the wires entitled “Using the Television to Wash Your Clothes.” According to the release,

The washing machine and the refrigerator are going to start “talking” to the television thanks to a new standard about to be published by the Geneva-based IEC. This new ability to network traditional household appliances with personal computers and audio-visual equipment will offer such possibilities as your television screen displaying the fact that the washing machine has finished washing your clothes or turning on an air conditioner from your personal computer.

One of my closest friends is a big-time tech geek, not only an IT guy in his own right but a sharp follower of consumer technology. The kind of guy who had a TiVo and a Netflix account years and years before they were cool or even well-known. You know the type–many of you out there in readerland are probably the same way.

This friend subscribes to the theory that the personal computer will one day become an appliance in the home like a water heater or an HVAC system–the computer itself, like one of those devices, would sit in the basement, out of sight and out of mind to the homeowner (and, like a water heater or air conditioning unit, would require specialized servicepeople to administer and fix). Also like a water heater or AC, the computer would connect to all the systems in the house, automating and personalizing every other appliance from the toaster to whatever screen / keyboard / hologram combination comes to represent a Web browsing portal.

My friend is the first person I heard describe this home of the future, but he definitely wasn’t first with the idea. In fact, there’s already a prototype of this type of home being lived in daily in our country, and it belongs to none other than Bill Gates.

Of course, Gates is a guy rich enough to have tropical sand barged in to Washington state for his private beach, so some of this is just Cribs-esque because-I-can excess. But Gates has also publicly announced his intention to market versions of his home technology for the masses. One example of technology that could fit into the water-heater computer of the future is described in a Wikipedia article: “visitors are surveyed upon entrance and are given a microchip that sends signals throughout the house to adjust temperature and other conditions according to preset user preferences.”

The ratification of a standard for this type of technology suggests that other people are thinking along the same lines. And in as little as another half-decade, we could be looking back on the “digital home” of this era as child’s play.

Think, also, of the possible career opportunities this represents for those in the IT field.  Putting this much technology into the home could bring the IT guy out of his traditional data center, transforming him into the plumber of the 21st century. It’s already happening to some extent with businesses like Best Buy’s Geek Squad. And hey, before you scoff at that, I know some plumbers too, and they actually tend to make a very nice living.

P.S. Can’t miss pointing out just one more detail from the Wikipedia article on Gates’s house:

The number of building permits needed completely overwhelmed the Medina county clerk’s office, necessitating the move to a new Linux-based computer infrastructure to deal with the volume.


September 10, 2007  1:03 PM

NetApp vs. Sun debate rages on the Web

Beth Pariseau

In case you haven’t heard, NetApp has filed suit against Sun, saying Sun’s ZFS violates its patents. And it’s clear we’ve reached a new age in media when one high-profile company sues another, and a good portion of the sniping and posturing back and forth is coming from CEOs writing on corporate blogs, while having their words propagated and dissected via other Web 2.0 sources.

Dave Hitz, co-founder of NetApp, got things kicked off with his blog post Sept. 5, which was posted two minutes after NetApp’s official conference call began. Hitz’s blog post was also referenced in NetApp’s press call and in NetApp’s supporting materials for the announcement of the suit, and in many cases goes into greater detail than any of NetApp’s supporting documents save for the official copy of the complaints it filed in court. Hitz is also in a unique position to write about the case since he is a co-founder of the company, and one of the actual holders of some of the patents in question.   

After NetApp’s initial splash, though, Sun set tongues wagging anew late Wednesday when it responded to NetApp’s announcement with a counterattack of its own. Once again, there was a formal statement released through the usual channels, but CEO Jonathan Schwartz also posted at length on his blog, providing color similar to that submitted by Hitz.

Of course you know what’s coming next: the rebuttal by Hitz to Sun’s counterclaims.

At some point, the legal wrangling and public posturing go beyond what we in the tech-focused world can or really should puzzle out. Clearly, there are two mutually exclusive sides to the story here, or there wouldn’t be a court case.

But that hasn’t stopped the court of public opinion from swinging into action, and for good reason: the ultimate outcome of this case, while currently beyond anyone’s prediction, has implications for the storage industry ranging from who owns snapshot technology to which product you, the storage user, will choose to deploy in your shop–ZFS, or a NetApp filer?

You know what they say about opinions. And what’s most interesting about this fight’s transfer to the blogosphere is the freedom users, industry experts and even interested parties have to weigh in on the situation, either because it’s a less formal forum or because they can hide behind a pseudonym online.

A sampling of the debate begins, of course, with the commentary on the blog posts. “Why can’t you and Dave Hitz just sit down across a table with a couple of beers (and/or lawyers) and hash this out?” a commenter on Schwartz’s post asks. “Sniping at each other via your blogs isn’t going to impress any customers.”

Other observers are doubtful about Schwartz’s claims that he didn’t know about the suit until after NetApp made its announcement. “You cannot expect anyone to believe that, in your position, you were unaware of NetApp’s suit until a shareholder pointed it out during questioning at today’s analyst event…if it is true, it doesn’t speak well for communication within Sun. I’d hate to think you’d play us all for fools,” wrote Joseph Martins, analyst with the Data Mobility Group, also on Schwartz’s post.

Then there are the comments, largely taking place on other forums like Slashdot, which sketch out the primary positions on this case, since the claims by each company are so contradictory it’s not possible to find a middle view. “It seems as though NetApp was rather nice about this whole patent thing from the get go,” wrote a NetApp supporter on the Slashdot comment thread. “It wasn’t until Sun threatened them that they acted and again acted fairly preferring a cross licensing deal rather than any cash payout in either direction.”

“Sun guy [sic] contradicts himself,” writes another armchair litigant. “‘Never demanded anything’  and ‘always been willing to license’ do not fit together. Licensing means demanding fee. NetApp says they do not use technology covered in Sun patents, still Sun is ‘always willing to license’ it.”

Other readers, however, side with Sun. “In England,” writes another Slashdot user, “What NetApp appears to be doing is called shouting Get Your Tanks Off My Lawn.”

As the initial discussions died down, however, new ideas about the suit, the agendas on both sides, and its effect on the market have emerged. Questions being raised include: How can open source technologies be regulated? What is the ultimate relationship between proprietary and open source products in the industry? Is a patent suit the best way to address them? The result of these discussions has been the beginning of a backlash against both companies.

“The only people that get hurt [are] the consumer[s], who [have] to pay all these pathetic lawyers and their pathetic clients gazillions, either in protection money against this racket, or in court battles over ridiculous things like linked-list file systems and outrageously vague one-click patents,” writes one poster calling themselves MightyMartian.

Another responds, ” I’ve been looking at NAS/SAN boxes, mainly the StoreVault S500, or the higher-end NetApp 270, or a lower end Sun StorageTek 52xx for my work…I hate patents, love ZFS, but not sure which one to order now! Guess I’ll have to give Equallogic another call…”

Now that the initial excitement has died down, I’ve begun to wonder if, for all the bluster, the end result will be a cross-licensing agreement between the two companies. Some previous alliances have been forged out of two companies lining up against one another, realizing what they have in common (including common enemies) and reaching an agreement. The parties involved here certainly sound sincere, and after 18 months of private negotiation, it seems unlikely they would have taken this public if there were still a way to resolve it quietly.

But the skeptical side of me definitely wouldn’t be surprised to learn that the end result of all this pomp and circumstance is that it has drummed up attention for the eventual partnership or even acquisition of IP between the companies. Whether or not that was the plan all along will only ever be known to a few people, and otherwise will be, like the rest of this case, in the eye of the beholder.


September 10, 2007  10:02 AM

Much ado about Microsoft VSS

Maggie Wright

When Microsoft Windows Server 2003 was released about five years ago, there was much ado about its new Volume Shadow Copy Service (VSS) framework. This feature allows administrators to take snapshots of Windows volumes and then restore data from the snapshots. At the time, Microsoft claimed that it provided the backup infrastructure for Microsoft Windows XP and Windows Server 2003 servers, but companies have to date seen only negligible benefits from this technology. That is about to change.

The triangle of requestors, writers and providers that makes up the VSS architecture is coming together to give Windows administrators a powerful new alternative for backing up and recovering Microsoft servers using snapshot technology. This option is potentially as powerful as, or more powerful than, the much-hyped virtual tape library (VTL) and data deduplication technologies, and it may be one that companies already own.

The first side of the VSS triangle, the writer, is the application component. Applications such as Microsoft Exchange and SQL Server now ship with writers that allow third-party software to call into the application and quiesce it. However, these calls only work if the second side of the VSS triangle, the requestor, supports them.

A requestor is an application, such as backup software, that controls the entire snapshot process: pausing the application, initiating the snapshot, restarting the application and then backing up the newly created snapshot. Most backup software products now support VSS, and organizations may already have this feature lying dormant in their backup software or be able to obtain it for an additional licensing fee.
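
As a very rough sketch of what the requestor side looks like, the vssadmin tool built into Windows Server 2003 can create and list shadow copies from a script. Real backup products talk to VSS through its programming interfaces rather than shelling out like this, so treat the following Python as an illustration of the sequence, not a production approach.

    # Minimal illustration of requesting a VSS shadow copy on Windows Server 2003.
    import subprocess

    def snapshot_volume(volume="C:"):
        # Ask VSS for a shadow copy; VSS-aware writers (Exchange, SQL Server)
        # are quiesced as part of creating it.
        subprocess.run(["vssadmin", "create", "shadow", f"/for={volume}"], check=True)
        # List shadow copies so the backup job can locate the newest one.
        subprocess.run(["vssadmin", "list", "shadows", f"/for={volume}"], check=True)
        # ...the backup application would then read from the shadow copy device.

    snapshot_volume()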

The provider, which is the third side of the VSS triangle, actually generates the snapshot. Though snapshots can be taken on Windows XP and Windows Server 2003 themselves, VSS-compatible backup software can also initiate snapshots on storage systems from most vendors, including incumbents like EMC, HDS, HP, IBM and NetApp or, with some scripting, storage newcomers like Compellent, EqualLogic and LeftHand Networks.

Moving the creation of snapshots from the Windows server to the storage system can also remove the server overhead normally associated with backup. Since the volume created by the snapshot can be presented to another server, that server can then back up the data to tape.

The maturation of VSS technology is significant because, with all of the hype about VTLs and data deduplication, everyone seems to have forgotten about this low-cost, or potentially free, option that users may already have available to them. Users willing to invest a little extra time to explore VSS may find that they can pay a fraction of the price of VTLs and data deduplication technology and achieve comparable or better results.


September 10, 2007  9:15 AM

Protecting millions of small files

Carolyn E.M. Gibney

Every week, I visit IT professionals and I often hear the same complaint about dealing with a file server environment that has grown out of control. The problem is that these file servers have millions of small files and customers are looking for ways to better protect this file data.

Disk-based archiving truly fixes areas of the backup process that most disk-to-disk (D2D) solutions do not. Customers are highly frustrated with backup applications stumbling over what I call the “millions of small files” issue, which is primarily caused by the never-ending growth of a standard file server’s data. Most backup applications struggle with this scenario. Customers are counting on D2D to help, and it will… a little. The target disk may be faster, but mostly it is much more forgiving than tape. Tape needs to stream, or be fed a constant flow of data, in order to reach maximum write performance, and millions of small files make it difficult to feed tape drives consistently. Disk backup, on the other hand, will maintain the same write performance no matter how inconsistent the data feed is.

That solves half the backup problem. The other half of the performance problem with backing up millions of small files is that the backup software still needs to walk those millions of files, identifying which ones need to be backed up. This file system walk can be very time-consuming. Then, the backup software needs to update its own database that tracks which files were backed up and where. Imagine adding millions of records to a database every night, as fast as possible. That database gets HUGE in a hurry, can easily be corrupted and, even when everything goes right, is very time-consuming to maintain. Lastly, with most D2D backup solutions you still need to send the entire data load across the network. Even with deduplication solutions, the entire data payload needs to reach the appliance before deduping happens. All of this consumes network bandwidth. Disk-based archiving may circumvent or delay the need to upgrade network bandwidth by clearing this old data out of the way.

Disk-based archiving eliminates the problem of moving most of these millions of files. With disk-based archiving, the “old” files are stored on the archive and no longer need to be backed up. They are safer on disk than they are on tape (thanks to data integrity checking and replication), and they are out of the way. The backup software no longer needs to walk those files to find which ones need protecting or send them across the wire, and they no longer consume disk space on the file server or the D2D backup target. Additionally, since the archive is disk and not tape, you can be more aggressive about what you archive.
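
As a simple illustration of how an archiving tool might find those “old” files in the first place, the sketch below walks a file tree and flags anything that has not changed in a configurable number of days. The share path and the 180-day cutoff are arbitrary examples, not a recommendation from any particular vendor.

    # Walk a file tree and list archive candidates by last-modified time.
    import os
    import time

    def archive_candidates(root, days_unchanged=180):
        cutoff = time.time() - days_unchanged * 86400
        candidates = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getmtime(path) < cutoff:
                        candidates.append(path)
                except OSError:
                    continue  # file vanished or is unreadable; skip it
        return candidates

    print(len(archive_candidates(r"\\fileserver\share")))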

With a classic tape-based archive, customers will wait for data to get very old before moving it to tape. In addition, they will invest in elaborate data movers to provide transparent access to tape. Lastly, data that has stopped changing but is still being referenced or viewed cannot move to tape at all. With a disk-based archive, delivery back to the user is relatively fast, so you can move data to archive disk more aggressively, and there is less need to build elaborate access schemes. Most disk-based archives simply show up as a share on the network, and you can archive reference data, further reducing the amount of data that needs to be protected by traditional backup methods.

A disk-based archive is the perfect complement to D2D backup. It will reduce the investment in disk needed for backup, and an archive strategy may pay for itself on that reduction alone. Because a disk-based archive clears out fixed data (data that has stopped changing), it makes the D2D software modules required by most backup applications cheaper (since they are licensed on stored capacity), and it reduces the capacity needed both on the disk backup target and on the primary (expensive) disk in the file server.

What does this look like in hard cost savings? Disk-based archiving can reduce primary storage requirements (at least a 10x dollar saving: roughly $4 vs. $43 per GB), and it can reduce backup requirements (fixed information is said to occupy, on average, 50% of most enterprise primary disk capacity), saving an additional $6/GB.
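
To put rough numbers on that, purely as an illustration using the figures above: on a file server holding 10 TB of primary data, if half of it has stopped changing, moving that 5 TB (5,000 GB) from $43/GB primary disk to $4/GB archive disk works out to roughly a $195,000 difference in disk spend, and taking the same 5 TB out of the backup stream saves on the order of another $30,000 at $6/GB.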

For more information please email me at georgeacrump@mac.com or visit the Storage Switzerland Web site at: http://web.mac.com/georgeacrump.


September 6, 2007  7:38 AM

1 TB on your desk for $350?

Beth Pariseau

First, a disclaimer: no one here has personally evaluated this product, tested its features, or been able to talk to someone who has (yet). But at face value, an announcement that recently came across our desk could easily get lost in all the Sturm und Drang of storage news this week, and we thought it was at least worth a closer look.

MicroNet’s Fantom Drives G-Force MegaDisk NAS appliance is a 1 TB desktop USB disk enclosure with USB expansion ports for two additional disks. For the $350 starting price, it comes with 1 TB capacity in either RAID-0, RAID-1 or JBOD, and MicroNet’s management software, which allows the MegaDisk to act as an iTunes server (an update also recently added to its higher-end PlatinumNAS product line) or a print server. The software also has a feature that allows the product to work as an unattended download manager for BitTorrent and other large Web-based content management services. Finally, MicroNet is bundling in NTI’s Shadow backup software, which crawls the system looking for file changes in the background without user intervention.

Sound too good to be true? We thought so, too, especially at that price tag.  According to Joe Trupiano, VP of marketing for MicroNet, if you want up to 3 TB capacity (with 1 TB SATA expansion disks, that is), it’ll be between $600 and $900.

Still, Trupiano says that what you see is what you get for the 1 TB/$350 starting price. He explained the price by pointing out that MicroNet is a consumer storage company with many other products in its portfolio, and it ships around 30,000 hard drives per month. That adds up to some deep discounts on disks. “If you went down to the store and wanted a 1 TB disk without all these features, it would run you about $400,” he said. “But disk drive makers practically pay us to ship their product.”

In the grand scheme of things, $300 per TB isn’t the ratio enterprise managers are used to, which, along with the 1 TB capacity, puts this squarely in the consumer marketplace, especially since it’s difficult to expand the box much further, even with the USB drives (the expansion disks cannot be made into a single volume with the capacity of the main enclosure).

But, we know how storage admins like to play with gadgets on their own time, and thought this one would be of interest to the gadget geeks among our readership. If you know of any other interesting consumer storage products, fire away in the comments.


August 31, 2007  12:32 PM

IBM reports atomic storage progress

Beth Pariseau

IBM has reported two discoveries in its ongoing work in nanotechnology, both of which have implications for data storage of almost unimaginably tiny proportions.

According to a Reuters report, scientists at IBM reported yesterday that they have determined how to move the magnetic orientation of an atom, a key step toward using atoms as tiny storage devices. Each atom has a magnetic field that needs to be stabilized somehow before it can be used as the basis for a storage system where “bits” are not magnetized particles, as they are today, but atoms themselves.

IBM scientists in Zurich also announced that they have successfully “switched” the polarity of molecules, another key to computing on an atomic level. Currently, computer systems rely on the ability to “flip” magnetic particles to represent the ones and zeros of binary code. Being able to do the same with molecules, or even atoms, could eventually lead to microscopic computers, and breakthroughs in density on the order of 30,000 movies on an iPod, according to the researchers.

IBM’s not the only organization currently working on theoretical physics that’ll give you an ice-cream headache. Research revealed in July from the University of California Santa Cruz could lead to similar breakthroughs in the stabilization of magnetic fields on conventionally-constructed disk drives, the better to prevent data corruption. Industry experts pointed out that this research is most likely to be used for near-term density breakthroughs.

This research also represents a crucial piece of the puzzle for IBM’s atomic equations — figuring out the polarity of the individual bits or atoms is only half the battle. How those atoms or molecules or bits behave as a group on the surface of a drive is also a hurdle to be overcome before you can carry around a whole Blockbuster Video in your iPod shuffle.


August 31, 2007  12:20 PM

Storage vendors planning to bask in VMworld spotlight

Maggie Wright

It is not often that a non-storage conference distracts from the normal round of storage conferences such as Storage Decisions and Storage Networking World, but that may be the case when VMworld kicks off on September 11. What makes this event unique is that multiple storage vendors are planning to use VMworld as their venue for new product announcements or, in the case of startups, as their coming out party.

Selecting VMworld as a product or company launching point does not so much diminish the value of other storage conferences as reflect the growing importance that VMware is taking on in corporate boardrooms. Storage vendors know that companies are going to need more virtualization technologies, not fewer, if they adopt VMware, so these vendors see VMworld as a perfect opportunity to share in VMware’s spotlight.

There is only one small problem with storage vendors piggybacking on the VMware express: VMware itself is no newcomer. The company is already nine years old, founded in 1998, has had a functioning product since 1999, and its ESX Server product is now in its third generation.

IT managers should exercise some caution because, while VMware offers savings in server consolidation, they cannot automatically extend VMware’s savings and benefits to complementary storage technologies. VMware has spent years developing its technology and building a user and knowledge base. Products from these storage companies may not have reached the same level of maturity.

The good news is that storage virtualization went through a similar round of hype about five to six years ago. Some of the companies that survived that round, such as DataCore Software and FalconStor Software, now have much more mature products that are still around and in use in mission-critical environments.

VMworld is acting as a demarcation point in the future of storage management. Virtualization is no longer something that companies can ignore or minimize – it is critical to the future of enterprise storage management, and storage vendors recognize that sharing in VMware’s spotlight will likely pay huge dividends for them in the coming years. However, IT managers still need to verify, before they spend big money on complementary storage virtualization technologies, that these products can technically and financially deliver on their promised benefits.


August 29, 2007  11:31 AM

Seagate: We are not for sale

Beth Pariseau

A recent New York Times report touched off speculation last week that Seagate was about to be bought out by a Chinese company, rumored to be Lenovo, “raising concerns among American government officials about the risks to national security in transferring high technology to China,” according to the Times report.

The Times report was based on an interview with Seagate CEO William D. Watkins, in which Watkins is quoted as saying there are no plans to sell the company, but that “if a high enough premium was offered to shareholders it would be difficult to stop.”

Since then, Seagate has released a statement through news wires clarifying (repeating?) that there are no plans to sell the company, to a Chinese buyer or anyone else. Might Watkins simply have been speaking generally or hypothetically? Ironically, the episode now has some in the industry eyeing Western Digital as the possible acquisition target for a Chinese company.

I have to admit I’m scratching my head a little about the supposed security threat–even in the original Times report, two contradictory statements about it follow one another. An anonymous industry executive is quoted as saying “I do not think anyone in the U.S. wants the Chinese to have access to the controller chips for a disk drive. One never knows what the Chinese could do to instrument the drive.” But a paragraph later, it’s noted that “China, however, still lags in basic manufacturing skills like semiconductor design and manufacturing.” So…do they have the means to commit dastardly acts of international espionage or not?

Even if it’s not this acquisition, this time, everyone knows China is a fast-rising power in the global economy, and Chinese manufacturers already make quite a large proportion of the products Americans use every day. It seems from my view that at least one instance of this type of acquisition is inevitable. Also, from my point of view–which I will admit is not one of experience in constructing foreign policy–it’s probably better to learn how to work with the situation than against it.

What are your thoughts?

