“Fibre Channel over Ethernet is like a fast car,” said consultant Howard Goldstein of Howard Goldstein Associates Thursday in his session on FCoE at Storage Decisions Toronto. “It looks great, but it probably won’t run as well as you thought or be as cheap as they say it’s going to be.”
Goldstein’s point basically boiled down to this: if Ethernet is good enough to be the transport layer, why bother layering an FC protocol on top of it? He dismissed as a myth the common answer to that question, which is that mixing FC and Ethernet will allow users to maintain existing investments in FC systems. “You’re going to have to buy brand new HBAs and Fibre Channel switches to support FCoE,” he said. “Is this really the time to reinvest in Fibre Channel infrastructure?”
Instead, Goldstein pointed out that FC services such as log-in, address assignment and name server, to name a few, could be done in software. “Those services don’t have to be in the switch; Fibre Channel allows them in the server,” he said. He also questioned the need for a revamping of the Ethernet specification for “Data Center Ethernet” capabilities. “Is converged Ethernet a real requirement or a theoretical requirement?” he said. And he questioned whether storage traffic is really fundamentally different from network traffic.
However, users at the show said FCoE is still so new they weren’t sure whether to agree with Goldstein. “It’s too immature to say right now,” said Maple Leaf Foods enterprise support analyst Ricki Biala. He also pointed out an all-too-true fact: in the end, such technology decisions will be based on equal parts politics and technology. “It’s easier to convince management to buy in if you’re going the way the rest of the market’s going,” he said.
Your thoughts and quibbles on FCoE are welcome as always in the comments.
Solid state isn’t the only thing looming on the horizon in the enterprise storage drive space. Drive makers say small-form-factor (2.5-inch) SAS is poised to encroach on 3.5-inch Fibre Channel’s turf in storage arrays.
Seagate is eyeing enterprise storage arrays with drives such as the Savvio 10k.3 that it launched this week. At 300GB, the self-encrypting drive offers more than twice the capacity of Seagate’s previous SAS drives. It also supports the SAS 2 interface. SAS 2 includes 6 Gbit/s speed and other enterprise features likely to show up in storage systems by next year.
“300-gig drives will be more attractive to storage vendors, and they’re starting to find the small form factor drives more compelling,” said Henry Fabian, executive director of marketing for Seagate’s enterprise business. “You’ll start to see the small form factor ship in the second half of the year in storage arrays because of higher capacity and lower power requirements.”
Joel Hagberg, VP of business development for Seagate rival Fujitsu Computer Products of America, also sees small form factor SAS coming on strong in enterprise storage. “The storage vendors all recognize there is a shift coming as we get to 300 gigs or 600 gigs in the next couple of years in the 2.5-inch package,” he said. “We’re cutting power in half and the green initiative in storage is increasing.”
As for Fibre Channel, the drive makers agree you won’t see hard drives going above the current 4-Gbit/s bandwidth level.
“Four-gig is the end of the road for Fibre Channel on the device level,” Hagberg said. “All the external storage vendors are looking to migrate to SAS.”
By the way, Hagberg says Fujitsu isn’t buying into the solid state hype for enterprise storage yet. He considers solid state to be a few years away from taking off in storage arrays.
“There’s a lot of industry buzz on solid state, and I have to chuckle,” he said. “I meet with engineers of all storage vendors and talk about the hype versus reality on solid state drives. Every notebook vendor released solid state in the last year. Are any of those vendors happy with those products? The answer is no. The specs of solid state performance look tremendous on paper, but a lot less is delivered in operation.”
Google has revamped its business search site, and rechristened it Google Site Search (it was previously called Custom Search Business Edition). It’s the SaaS version of the Google Search Appliance, but it’s limited to website data because the hosted software can only see public data.
So essentially it’s custom search for e-commerce websites. Almost completely unrelated to storage. . .except when it came to one of Google’s customer references for Site Search: EMC’s Insignia website, which sells some of EMC’s lower-end products online. Apparently, prior to implementing Site Search, the Insignia site had no search functionality at all. Visitors had to page through the site manually, including when it came time to look for support documents or troubleshooting tips.
EMC’s webmaster Layla Rudy was quoted in Google’s collateral as saying that sales have gone up 20% since they added search to the site. Moreover, according to her statement, there has been an 85% decrease in customer-requested refunds now that customers can find the correct product in the first place, as well as its associated support documents. What’s especially amazing about this to me is that Insignia is a relatively new business unit, rolled out by EMC within the last three years; it’s not like it was the ’90s, when Google was relatively unknown and site search was merely a “nice to have” feature of most websites.
Of course, I don’t know how long the Insignia site was up without search, or what the absolute numbers are when it comes to the refund decrease–85% can be 8.5 out of 10 or 85 out of 100. (EMC hasn’t returned my calls on this).
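The ambiguity here, that a relative decrease says nothing about absolute scale, is easy to illustrate. Both baselines below are hypothetical, since EMC hasn’t shared the real numbers:

```python
def refunds_after_decrease(baseline, pct_decrease):
    """Absolute refunds remaining after a relative percentage decrease."""
    return round(baseline * (1 - pct_decrease / 100), 2)

# The same 85% relative decrease on two invented baselines:
print(refunds_after_decrease(10, 85))   # from 10 refunds down to 1.5 (a drop of 8.5)
print(refunds_after_decrease(100, 85))  # from 100 refunds down to 15.0 (a drop of 85)
```

Same percentage, an order of magnitude apart in absolute terms, which is exactly why the raw numbers matter.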
Meanwhile, with EMC getting into cloud computing, I wonder what kind of search — if any — it makes available on backup/recovery or archiving SaaS websites. Right now Google claims its Search Appliance can index onsite and offsite repositories and, unlike the SaaS version, can search protected private data. While there are no plans to make this a feature of the hosted version, service providers can offer hosted search by managing their own appliance in the cloud. Whatever they choose, hopefully it’s a Day One feature.
I was intrigued when a colleague sent me a link to an article by Henry Newman referring to a “firestorm” touched off by some remarks he recently made in another article he wrote. The first article addressed the scalability of standalone Linux file systems vs. standalone symmetric multiprocessing (SMP) file systems such as IBM’s GPFS or Sun’s ZFS. His point was that in high-performance computing environments requiring a single file system to handle large files and provide streaming performance, an SMP file system that pools CPU and memory components yields the best performance.
Newman begins his followup piece by writing, “My article three weeks ago on Linux file systems set off a firestorm unlike any other I’ve written in the decade I’ve been writing on storage and technology issues.” He refers later on to “emotional responses and personal attacks.” I’m no stranger to such responses myself, so it’s not that I doubt they occurred, but in poking around on message boards and the various places Newman’s article was syndicated I haven’t been able to uncover any of that controversy in a public place. And I’m not sure why there would be any firestorm.
I hit up StorageMojo blogger and Data Mobility Group analyst Robin Harris yesterday for an opinion on whether what Newman wrote was really that incendiary. Harris answered that while he disagreed with Newman’s contention that Linux was invented as a desktop replacement for Windows, he didn’t see what was so crazy about Newman’s ultimate point: a single, standalone Linux file system (Newman is explicit in the article that he is not referring to file systems clustered by another application) does not offer the characteristics ideal for a high-performance computing environment. “It seems he made a reasonable statement about a particular use case,” was Harris’s take. “I’m kind of surprised at the response that he says he got.”
That said, how do you define the use case Newman is referring to? What exactly is HPC, and how do you draw the line between HPC and high-end OLTP environments in the enterprise? Harris conceded that those lines are blurring, and that, moreover, more and more companies in fields that didn’t consider such applications 15 years ago, like medicine, are discovering image-processing workloads. So isn’t the problem Newman is describing headed for the enterprise anyway?
“Not necessarily,” Harris said, because Newman is also referring to applications requiring a single standalone large file system. “The business of aggregating individual bricks with individual file systems is a fine way to build reliable systems,” he said.
But what about another point Newman raised: that general-purpose Linux file systems often have difficulty with large numbers of file requests? Just a little while ago I was speaking with a user who was looking for streaming performance from a file system, and an overload of small random requests brought an XFS system down. “Well, someone worried about small files does have a problem,” Harris said, though it’s tangential to the original point Newman raised. “But everybody has this problem; there is no single instance file system that does everything for everybody.” He added, “This may be an area where Flash drives have a particular impact going forward.”
Despite rumors that pop up from time to time, Cisco has not taken its Wide Area Application Services (WAAS) product off the market. Cisco is still developing and selling WAAS, and its storage partner EMC sells it too, although there has been a change there. Instead of selling WAAS directly, EMC now sells it through its professional services group.
So, essentially, WAAS becomes another tool in the toolbelt for EMC’s Professional Services. The services folks decide when it fits in an environment and they do the deployment, rather than the customers themselves. According to an EMC spokesman in an email to SearchStorage.com:
In April, EMC modified its go-to-market approach around Cisco WAAS based on customer feedback and to provide differentiated value. EMC was originally selling it without professional services. We received feedback from customers asking us to pair it with professional services so that they could take full advantage of the technology — which they realized had a lot of potential for transforming application delivery, infrastructure through consolidation, etc. Before customers made the investment, they often asked for an assessment to see if it was right for them. Once they purchased it, customers were asking for help implementing it. As a result, we developed a specialized practice within EMC Global Services called EMC Data Center Networking Practice — which is where Cisco WAAS is now offered — and it includes comprehensive professional services.
That same spokesman stressed that this was just a shift in the delivery of the product, rather than a reflection on the product itself. But why were customers asking for help deploying the product and understanding its use cases? There are plenty of products, especially in EMC’s portfolio, that require professional services engagements to get the best results, and companies shift the delivery of products all the time in an effort to boost sales. Still, a mid-stream shift in delivery method, attributed directly to customers’ difficulties with understanding and deploying a product, doesn’t sound like good news for WAAS.
The disk vs. tape debate that has been going on for years is heating up again, given technologies like data deduplication that are bringing disk costs into line with tape.
Or, at least, so some people believe.
The Clipper Group released a report today sponsored by the LTO Program which compared five-year total cost of ownership (TCO) for data in tiered disk-to-disk-to-tape versus disk-to-disk-to-disk configurations. The conclusion?
“After factoring in acquisition costs of equipment and media, as well as electricity and data center floor space, Clipper found that the total cost of SATA disk archiving solutions were up to 23 times more expensive than tape solutions for archiving. When calculating energy costs for the competing approaches, the costs for disk were up to 290 times that of tape.”
Let’s see. . .sponsored by the LTO trade group. . .conclusion is that tape is superior to disk. In Boston, we would say, “SHOCKA.”
This didn’t get by “Mr. Backup,” Curtis Preston, either, who gave the whitepaper a thorough fisking on his blog today. His point-by-point criticism should be read in its entirety, but he seems primarily outraged by the omission of data deduplication and compression from the equation on the disk side.
How can you release a white paper today that talks about the relative TCO of disk and tape, and not talk about deduplication? Here’s the really hilarious part: one of the assumptions that the paper makes is both disk and tape solutions will have the first 13 weeks on disk, and the TCO analysis only looks at the additional disk and/or tape needed for long term backup storage. If you do that AND you include deduplication, dedupe has a major advantage, as the additional storage needed to store the quarterly fulls will be barely incremental. The only additional storage each quarterly full backup will require is the amount needed to store the unique new blocks in that backup. So, instead of needing enough disk for 20 full backups, we’ll probably need about 2-20% of that, depending on how much new data is in each full.
TCO also can’t be done so generally, as pricing is all over the board. I’d say there’s a 1000% difference from the least to the most expensive systems I look at. That’s why you have to compare the cost of system A to system B to system C, not use numbers like “disk cost $10/GB.”
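Preston’s dedupe argument is easy to sketch as a back-of-the-envelope calculation. Every figure below (full-backup size, retention count, quarterly change rates) is invented for illustration and is not taken from the Clipper report or Preston’s post:

```python
# Hypothetical scenario: 20 quarterly full backups retained long-term.
full_backup_gb = 5000        # size of one full backup (assumed)
fulls_retained = 20

# Without dedupe, every full backup is stored in its entirety.
raw_disk_gb = full_backup_gb * fulls_retained

# With dedupe, each full after the first adds only its unique new blocks.
# Preston's estimate: roughly 2-20% of a full, depending on the change rate.
for change_rate in (0.02, 0.10, 0.20):
    deduped_gb = full_backup_gb + full_backup_gb * change_rate * (fulls_retained - 1)
    print(f"change rate {change_rate:.0%}: {deduped_gb:,.0f} GB deduped "
          f"vs {raw_disk_gb:,} GB raw ({deduped_gb / raw_disk_gb:.0%} of raw)")
```

Even at a 20% quarterly change rate, the deduped footprint is roughly a quarter of the raw one, which is why leaving dedupe out of a disk-vs-tape TCO comparison tilts the result so heavily.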
Jon Toigo isn’t exactly impressed, either:
Perhaps the LTO guys thought we needed some handy stats to reference. I guess the tape industry will be all over this one and referencing the report to bolster their white papers and other leave-behinds, just as the replace-tape-with-disk [crowd] has been leveraging the counter white papers from Gartner and Forrester that give stats on tape failures that are bought and paid for by their sponsors.
Neither Preston nor Toigo disagrees with the conclusion that tape has a lower TCO than disk. But for Preston, it’s a matter of how much. “Tape is still winning — by a much smaller margin than it used to — but it’s not 23x or 250x cheaper,” he writes.
For Toigo, the study overlooks what he sees as a bigger issue when it comes to tape adoption:
The problem with tape is that it has become the whipping boy in many IT shops. Mostly, that’s because it is used incorrectly – LTO should not be applied when 24 X 7 duty cycles are required, for example…Sanity is needed in this discussion…
Even when analysts agree in general, they argue.
The Chinese technology market isn’t up-and-coming anymore, it’s already here. And with a billion-plus people looking to participate in a capitalist experiment, combined with cheap (though rising) labor costs and a less regulated manufacturing industry, it’s a force to be reckoned with.
American tech companies these days have few choices when it comes to contending with this market. They can be acquired by Chinese companies, as in the case of Huawei-3Com (or nearly the case with Iomega); they can look to get a slice of the Chinese market by selling products there; or they can have their lunch eaten by the rising superpower. If you’re strictly thinking in business terms, and you’re a sufficiently large company, my guess is option two would be the most appealing.
But the problem is that whenever big companies look to open their technology to the Chinese market, there’s political fallout here at home. Our queasy, at times hypocritical, relationship with China is a tangled web. On the one hand, we are dependent on China for manufactured goods as well as a large market for our own raw materials. On the other hand, some of China’s social and political policies make many Americans cringe.
Increasingly, technology is at the center of these tricky issues. Rolling Stone recently did an interesting story about China’s Golden Shield, an integrated network of physical-security and surveillance technologies being implemented chiefly in Shenzhen to keep an eye on citizens. What was creepy about this article was the part where I started to feel less like I was reading a political article and more like I was listening to a technology briefing:
[Surveillance] cameras…are only part of the massive experiment in population control that is under way here. “The big picture,” Zhang tells me in his office at the factory, “is integration.” That means linking cameras with other forms of surveillance: the Internet, phones, facial-recognition software and GPS monitoring.
Just last week I listened to EMC’s Mark Lewis expound on similar integration between content repositories in the American workplace. But while U.S. businesses clearly see a lucrative market for these technologies, I don’t believe technology vendors set out to let people use their technologies unethically. If anything, I believe they approach it from a decidedly amoral standpoint: neither condoning nor condemning China’s policies, and focusing instead on the bottom line.
The problem, as companies beginning with Yahoo! and Google found out, is that some Americans — particularly politicians — are saying not so fast. Many a company with its eyes on this emerging-market prize has received negative press and even government inquiries as a result.
Cisco is the most recent example. It was called in along with Yahoo and Google last week for a grilling before a Congressional committee on business practices in China, following the leak of some internal slides suggesting it “appeared to be willing to assist the Chinese Ministry of Public Security in its goal of ‘combating Falun Gong evil cult and other hostile elements,’” according to a story in the San Jose Mercury News.
In an AP followup story, Cisco’s director of corporate communications insisted the documents were taken out of context. “Those statements were included in the presentation to reflect the Chinese government’s position,” [Terry] Alberstein said. “They do not represent Cisco’s views, principles or its sales and marketing strategy or approach. They were merely inserted in that presentation to capture the goals of the Chinese government in that specific project, which was one of many discussed in that 2002 presentation.”
This position has its supporters, including Seeking Alpha columnist Kevin Maney:
This was made worse for Cisco by an unfortunate PowerPoint slide that some employee — probably 123 levels down from CEO John Chambers — used in a pitch that implied that Cisco is cheering for Chinese censorship. Such is the danger of technology that anyone can use…
The political grandstanding and berating executives helps nothing. If U.S. tech companies are going to sell their goods around the world, some of it is going to be used in ways many Americans don’t like. So do we want the business — and the jobs and income? Or do we want to make a point? Let’s decide.
Yes, let’s. We must. This will continue to be an unavoidable issue as time goes on. While it’s easy to point out that by the same standards of responsibility, gun manufacturers might be out of business, a colleague of mine also pointed out last week that gun manufacturers (ostensibly) don’t market their products with the express intention that they be used unethically. Some argue the Cisco PowerPoint shows such intentions.
Personally, I like to think there’s some middle ground here. There’s got to be some way to tap into this burgeoning market without participating actively in some of the more nefarious practices (such as voluntarily divulging information on political dissidents to government authorities or expressly designing technologies to be used to target a particular group, including the Falun Gong); there has to be a way we can draw a line between the pursuit of profit and breaches of our fundamental national principles. At least ostensibly, we do it all the time with the same technologies in U.S. hands, and the European Union is even further along in establishing privacy standards along with new data retention practices. There must also be ways to balance these competing interests when it comes to China.
EMC has made a habit of opening the kimono lately, especially at this year’s EMC World where execs divulged detailed roadmap information around backup and archiving software consolidation.
They also had an exhibit set up as part of the show floor called the Innovation Showcase. The showcase displayed products on tap for much farther down the line, including Project Futon, a consumer appliance being developed at EMC’s R&D facilities in China to store digital photos.
Eventually, the Futon software would automatically upload photos to the “Futon Cloud,” according to EMC senior engineer Hongbin Yin. Yin was one of several engineers on hand to demonstrate their prototypes. The goal would be to make small home appliances a “local cache” for multimedia, with long-term storage taking place in the cloud. Futon is also being developed to automatically collect metadata, including the location photos that were taken, and to integrate that information into other applications like Google Maps.
Also on display was a diagram of “Centera with Data Lineage,” which, according to Burt Kaliski, senior director of EMC’s Innovation Network, would allow Centera to archive not just elements of a workflow but the workflow itself, and link documents to related data in spreadsheets and databases.
Senior consulting software engineer Sorin Faibish was showing off his own diagram of “Application-Aware Intelligent Storage.” This would combine artificial intelligence software capable of being “trained” with hardware-embedded VMware ESX servers to automatically apply services like data migration, encryption and replication to data as it comes into the cache on a storage array. The embedded ESX host would run EMC’s RecoverPoint CDP inside, logging and cataloging I/O and indexing data for input into a modeling engine, which would then decide on the proper way to store and protect the data before flushing it to disk.
No time frame was given on any of the prototypes. Faibish’s project would require development of advanced artificial intelligence software using concepts like neural networks, fuzzy-logic modeling and genetic-based learning. “It could prevent commoditization of the storage array,” he said. “But it’s still a dream.”
Busy week for archiving. While EMC was detailing its archiving roadmap in Las Vegas this week at EMC World, IBM was opening an archiving solutions center in Mexico, and U.K.-based Plasmon was launching its latest UDO-based system.
Plasmon’s Enterprise Active Archive (EAA) allows Plasmon’s UDO systems to work with disk for long-term archiving flexibility. EAA gives Plasmon’s archive appliance the ability to search disk and UDO media together, as well as index, classify, migrate and replicate files across systems.
Mike Koclanes, Plasmon’s chief strategy officer, said it wasn’t enough to have UDO drives that can last for 50 years to store data. “It didn’t fit into the IT ecosystem,” he said. “So we had to come up with an archiving appliance. The first thing we had to do was virtualize access to the application so it’s writing as if it’s a file system.” Koclanes said Plasmon intends to support solid state and holographic storage when they become widely available. He sees those technologies as well as SATA drives as complementary to UDO rather than competitive.
“When you store something on UDO, you have a permanent copy,” he said. “You have a UDO copy for DR and you don’t have to do backup any more. We’re not saying don’t use disk, we’re saying you don’t need three or four copies and have to replicate it around and use all that power.”
Meanwhile, IBM today opened a $10 million executive briefing center in Guadalajara, Mexico, dedicated to its archiving practice. IBM intends to use the Global Archive Solutions Center to help customers with their strategies for long-term data retention.
“Customers can come in and learn about best practices and do simulations of archiving with our products and partners’ products,” said Charlie Andrews, worldwide marketing manager for IBM storage.
“We’ve done a lot of research on what it means to have long-term retention of data. Does the media last long enough, how expensive is it, and when you talk about really long term — over 10 years — what happens with applications?” Andrews added. “Sometimes when you switch applications, you can’t read documents.”
Andrews said the archiving center is IBM’s 11th global center but the first to address a “specific solution area” instead of a product line.
Why Guadalajara? “Because we have a strong presence there,” Andrews said. “We’ve been there since 1927. It’s now a very rich high-tech area, it’s called the ‘Silicon Valley of Mexico.’ Also we believe growth in Latin America is significant for us.”
Hewlett-Packard’s storage division has taken its share of heat in recent years. It has often underachieved from a revenue standpoint, been knocked by competitors and analysts for lack of innovation, and been reorganized internally. Now it’s led by Dave Roberson, former Hitachi Data Systems (HDS) CEO and currently VP of HP’s enterprise storage business.
So what was the reaction of the storage group and upper management to a sharp hike in revenue from storage last quarter? Not what you might think.
“If there’s anybody taking a lap around the building, I sure haven’t seen him because we’ve got a lot more work to do to be a participant in the way that HP ought to be a participant in the storage market,” CEO Mark Hurd told analysts on the earnings conference call Wednesday night.
OK, so no laps around the building. But HP did report storage revenue increased 14 percent year over year to just over $1 billion, led by a 17 percent gain in its midrange EVA arrays and a 21 percent improvement in its high-end XP [rebranded HDS] arrays. Still, Hurd says that’s only the start. He expects to attach storage to a higher percentage of ProLiant servers going forward, stronger adoption of new HP storage products and a return on some of the money HP has laid out on storage acquisitions in recent years.
“Listen, we can just do better than this,” he said. “We’ve got new product we’ve announced into the market during the quarter in the storage space that we are excited about. We’ve begun to bring together some of the acquisitions we’ve done and align some of our Storage Essentials software with our platforms. We are doing more work in the channel and it turned into good growth. I mean, for us to get mid-double-digit growth or 14% growth in the quarter is better than we’ve seen.”
I’m guessing the new product Hurd referred to was the ExDS9100 Web 2.0/NAS system HP unveiled with much fanfare this month, although it also launched a new EVA in February. In any case, Hurd emphasized there is still work to be done, and he expects to see it get done: “I don’t want you for one minute to think we are satisfied with it [storage success].”
Hurd has accomplished many of his goals since he arrived at HP three years ago. Now we’ll see if he can complete a storage turnaround.