Even my friends who don’t normally follow the storage business are atwitter over an Engadget report that Buffalo has unleashed a 100 GB behemoth flash drive upon the world. Geeks everywhere are probably salivating to take the thing apart (yes, I’m looking at you, Tory) … unfortunately, they’ll have to wait. The catch is that Buffalo is only releasing the product for now in its home country of Japan.
According to company reps, the $1,000 asking price for the credit-card sized USB accessory makes it less than cost-effective to import right now. (If you just can’t get enough flash memory, there are 64 GB monsters roaming North America.)
The Engadget comments section also contains an interesting discussion of the merits of such a large flash drive. In the Engadget screenshot, the card looks like a behemoth, but the post says it’s about the size of a credit card. Still, it launched a spirited discussion that I think asks some pertinent questions, namely, “would it not be more practical to just buy a $300 travel drive?”
At this juncture, and at this price point, certainly. But Moore’s law waits for no man, and the price of a 100 GB card will come down. Hence the other questions this announcement raises: at what capacity and price point does a mechanical drive become more practical than a solid state drive? How will that equation change over time? It’s something we in the storage market are going to have to examine more closely in the coming year.
Fibre Channel vendors aren’t the only ones pushing the new Fibre Channel over Ethernet (FCoE) standard designed to help Fibre Channel devices take advantage of 10-gig Ethernet. Intel is also getting into the game, with an FCoE Linux initiator.
Intel this week released an open source FCoE initiator that it will maintain on http://www.open-fcoe.org/. The FCoE initiator will work the way iSCSI initiators work on current IP SANs. By going open source instead of developing the initiators for its own products, Intel hopes to accelerate the availability of FCoE by getting feedback from the Linux community. Intel storage planner and technologist Jordan Plawner said the goal is for Linux servers to ship FCoE-ready, just as they ship with iSCSI initiators today.
“We believe 10-gig Ethernet provides an opportunity to converge SAN and LAN traffic,” Plawner said. “We’ll continue to support iSCSI, but FCoE makes it easier to connect Ethernet into Fibre Channel SANs.”
That’s the party line for Fibre Channel vendors, and one that iSCSI SAN proponents dispute. Like iSCSI vendors, Intel is looking at it from the Ethernet side – but Plawner said FCoE will be better suited than iSCSI to take advantage of the coming Enhanced Ethernet spec. Enhanced Ethernet is a new version in the works that boosts Ethernet’s performance to make it more suitable to run storage.
“It’s much easier to adopt FCoE for Enhanced Ethernet,” Plawner said. “iSCSI is Ethernet end to end, so you would need a completely new subnet because you need Enhanced Ethernet on every node. With FCoE, just the first server and first top-of-rack switch need Enhanced Ethernet.”
Intel is looking to put FCoE Linux initiators on adapter cards that will work with FCoE switches in 2008. Plawner says he expects FCoE-enabled switches from Brocade and Cisco in the second half of next year, and he thinks companies will deploy FCoE in their networks by the end of 2008.
Plawner’s time frame is even more optimistic than that of some Fibre Channel storage vendors backing FCoE. Brocade execs say they don’t expect adoption until 2009, and they don’t think wide-scale adoption will arrive before 2010. But Brocade pledges to support FCoE in the DCX backbone director it will launch next year. Cisco’s FCoE switches are expected from Nuova Systems, which is 80 percent owned by Cisco. Nuova has yet to give product details, but industry sources say it will likely have FCoE switches or cards that plug into Cisco MDS switches early next year.
Here is a more detailed explanation of how Fibre Channel and Ethernet can converge.
Up until now you (corporate IT) have not had to worry about video surveillance. That job belonged to the security guys, the ones in uniforms who pretty much kept to themselves. But be prepared. If you are not already deeply involved in video surveillance equipment RFP creation, acquisition, installation and management, you will be very soon.
The world of video surveillance is changing so rapidly that the user and the traditional supplier are both in a state of frenzy. It is within this transformation that the role of IT is becoming increasingly critical. The reasons for the increase in video surveillance are pretty easy to understand. Post 9/11, enterprises as well as governments are adding video surveillance to the security equation or expanding what they already have. Of course, casinos and banks have always been the leading users of video surveillance, but now everyone is in the game. On a typical day, a person living in a city may be videotaped in five or more places, as he drives to work (and passes through specific traffic lights), parks his car in the company parking lot, enters the building, makes a trip to the bank at lunch, grabs a couple of items at the local K-mart and heads home. There are all kinds of privacy issues that can be debated, but I am staying away from that. At least for now. Right now, I am more interested in the technology and IT’s increasing role in video surveillance.
Traditional video surveillance equipment was not designed to deal with this onslaught and is gasping for air. It is being replaced almost completely with IP-based equipment. That’s where you come in. Until now, most video surveillance equipment was based on CCTV (closed circuit TV), which basically meant the cameras, which recorded analog video, were hooked up via coaxial cable to the central point, where the video was taped on VCRs. Later, DVRs converted the analog signal to digital at the central location before storing it. But, these technologies cannot deal with the onslaught of data from more and more cameras and the fact that cameras are increasingly adding higher resolutions.
The latest crop of cameras records video in a digital format and compresses it using MJPEG or MPEG before transmitting it over a standard IP network to a central location that stores the data on scalable disk arrays. Once in the realm of IP, all the goodies we are used to in IT become available to an industry that still thinks of guards manning physical structures. Centralized management becomes feasible, and data can be accessed from multiple locations and replicated when appropriate. Another level of sophistication is being added at the end points. Now cameras can be activated when they detect motion or switch into a higher resolution if certain criteria are met. Video analytics allow software to recognize facial characteristics. Searches can be conducted for specific objects or people. You get the idea. It is like James Bond gadgetry becoming available to regular folks. But that is reflective of the world we live in.
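To see why scalable disk arrays enter the picture, here is a rough capacity sketch. The per-camera bitrate and retention period are illustrative assumptions, not figures from any vendor or standard:

```python
# Rough sizing sketch for IP video surveillance storage.
# Assumes a compressed stream of ~1 Mbit/s per camera (hypothetical;
# actual bitrates vary with codec, resolution, and motion).

def storage_tb(cameras, mbit_per_sec=1.0, days=30):
    """Terabytes (decimal) needed to retain `days` of footage."""
    bytes_per_camera = mbit_per_sec * 1e6 / 8 * 86400 * days
    return cameras * bytes_per_camera / 1e12

# 100 cameras with 30-day retention at ~1 Mbit/s each:
print(round(storage_tb(100), 1))  # → 32.4
```

Even at this modest bitrate, 100 cameras consume tens of terabytes a month, and bumping to higher-resolution streams multiplies that directly, which is why the old VCR/DVR model cannot keep up.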
I think you (IT) need to be prepared to play a major role in this transformation that is occurring. You are the resident experts in storage and, at this point, pretty well up on IP technologies as well. Video surveillance simply becomes another application you have to support. So, if you are not already deeply involved in the selection and day to day management of the video surveillance equipment, it is only a matter of time. Security people who used to make decisions on such purchases without any consultation, will now insist on your involvement. You should gladly offer to help.
Another important thing to realize is that the type of storage you end up selecting for these applications will very likely be different than storage for other applications. For video surveillance, the attributes that matter for storage include cost effectiveness (dirt cheap), high scalability across both capacity and performance (you cannot afford to create islands of storage), a low entry price point, cost-effective availability (mirroring may be too expensive), protection from disk drive or nodal failure and, most importantly, it needs to be IP-based. Everything else in this environment is IP-based, so making storage IP-based makes it easier to understand and manage. FC storage would bring in a level of complexity that is unnecessary here. Also, legacy architectures that have grafted on an IP (iSCSI) interface would not cut it here, because they would not meet the other requirements above. Storage players that I believe merit consideration include Pivot3, Intransa, LeftHand Networks and, to a lesser degree, EqualLogic (their price point may be too high for this application). There are other inexpensive storage offerings, such as those from Nexsan or Xyratex, but if an architecture does not allow clustering and presentation of a single system image as it scales, it misses a criterion that I consider absolutely necessary for this application. However, you may want them in the initial mix as you start the evaluation process. I am sure you have enough on your plate without adding yet another storage-hungry application. But the way the winds are blowing, you either proactively plan for this or you will get pulled in kicking and screaming.
Update 12-19-07: Data Mobility Group’s Robin Harris has a very interesting take of his own on this phenomenon over on his blog, StorageMojo.
Did you know Sun now has a Chief Gaming Officer? That EMC demonstrated its latest NAS product’s interface with the XBox 360 at its most recent EMC Innovation Day? That Cisco’s next big business plan involves not just the virtual data center but the digital home? That Seagate has its sights for expansion set on … automobiles?
SMBs have been a focus for still more storage companies, which have been busily overhauling low-end product lines. But they haven’t been all that successful. So why are all the big boys suddenly focused not only on small businesses, but on homes as well?
For those of you still following the NetApp / Sun soap opera with popcorn at the ready, here’s the latest: the whole case, including all suits and counter-suits between the two parties, will be heard not in Texas, where NetApp originally filed the suit, but by a “mutually agreed-upon judge” in Southern California, where both companies are based. The first matter before that judge will be a re-examination request on the patents NetApp claims Sun violated.
Here’s where blogs come in to play again, as they have, frequently and at times weirdly, throughout this case. According to a letter Sun sent to members of the press today:
Reexams have been filed on the NetApp WAFL patents that purportedly cover concepts such as copy on write, snapshot and writable snapshot. There is a significant amount of prior art describing this technology that was not in front of the US patent office when it first examined these patents. In just one example, the early innovation by Mendel Rosenblum and John Ousterhout on Log Structured File Systems, applauded in a NetApp blog: http://blogs.netapp.com/dave/2007/09/vmwares-founder.html as being inspirational to the founders, was not considered by the patent office in the examination of the NetApp patents.
With this notice, Sun is hoping we’ll think they have NetApp right where they want them. But if there’s one irrefutable truth that’s come out of this whole saga so far, it’s that outside the offices of a few key people, nobody really knows what’s going on in this feud.
I will say, however, that the change of venue does seem like a concession on NetApp’s part, one that Sun, of course, isn’t letting slip by: “We are pleased that Network Appliance agreed to Sun’s request and retracted its imprudent choice of venue for this litigation.” (Prior to the change in venue, NetApp had pointed out that a number of cases like this one have been tried in Texas, including IBM’s suit against Amazon, regardless of where the parties in the suits have been based.)
Who knows what the new year will hold in store for these two. But we suggest stocking up on your Orville Redenbacher’s over the holidays.
When you buy a product or service how much of your purchasing decision is influenced by brand? And how much of that influence is because of that brand’s size?
I’ve taken a hard look at my purchasing habits, and I love to talk about brands with my friends. My eyes were opened as to how easily swayed I was by big names and the perceived strengths of a brand. How much did and does brand really mean to me? Well, I own a Sony TV even after they put rootkits on my CDs … twice … and had a bunch of exploding batteries. I own a Microsoft Xbox and Xbox 360 even though they overheat and shut down (the red rings existed on the original Xbox too). I own two … yes, two … Chevy vehicles (Corvette runs in the bloodlines, man, I’m tellin’ you!!!). These and others were pointed out to me by people who call themselves my friends in about as brutal a way as they possibly could; you know what I mean if you’ve ever taken a friend clothes shopping.
With every effort not to sound egotistical, I consider myself an intelligent buyer. I make what I like to believe are intelligent, well-informed buying decisions (thank you, Google!!) in my personal life, but even more so in my professional life. I get paid to make intelligent decisions about vendors and products that get brought into our environment, not to mention I have to live with them while they are here, so I tend to take my professional recommendations very seriously. I research, test, research some more, and, especially considering I’m suggesting that my salary, bonus or both be spent on said purchase, I tend to be really rigorous.
A little background on my brand history: I purchased, ran, and actually liked IBM’s OS/2. I bought a Panasonic DVD-RAM drive. I own the Sega Dreamcast, which was arguably a better machine than the Sony PlayStation 2; not only that, Sega was a household name when it came to video games.
I can go on. Remember the Apple Newton? Yup, I had one. How about Psion!? Had one of those too (the Series 5mx). All of the above, on their own merits, were technically superior to their competition; they had great brand names behind them, and yet … they all got destroyed in the marketplace. The one that surprised me the most was Sega. Do you remember all those commercials that screamed SEGA! at the end? They got it, they knew brand was key, then they blew it. I’ll save that one for another blog :). Meanwhile upstarts like Microsoft, Palm, Plextor and Sony went on to dominate their respective markets.
Why was that (aside from the fact that I owned one)?
And what does all this have to do with storage, you are probably asking yourselves. Well… how do you purchase storage products? Does brand play a part in your decision, or do you buy based strictly on technical specifications?
Another, even harder question, how do you sell an unknown brand to your supervisor? While you may be comfortable with buying from a relatively unheard-of brand, how does the person signing the check feel about it? Do you feel comfortable putting your reputation on the line, based on someone else’s reputation or lack thereof?
Take a look at the examples I mentioned above (Apple, IBM, Psion, Sony, Panasonic, and Sega) and how they all failed. They were the number one or two brand names, everyone knew who they were (Psion mostly in Europe), and yet their products failed miserably. Can we be so sure that the adage “you can’t get fired for buying IBM” is true?
I’ve heard comments like: “My boss never lets me buy the incumbent, because the number two player will work harder for our business,” or “I want one number to call, one throat to choke, buy all our stuff from one place whether it’s the best of breed or not, they didn’t get to be number one by accident”.
Personally speaking, brand still plays a role in my purchasing decision. I’m not nearly as obdurate (great SAT word submission) about brand as I used to be, though. I do make a pointed effort to research what is available. I lean on my peers, and I try to make it out to trade shows geared to the specific segment I’m looking at. I know that keeping up with all the new companies is pretty much a full-time job on its own, but I do try to make that job a part of my job. With storage exploding the way it has in the last few years, it has become even more difficult to keep track of all the companies offering storage products and solutions, but it’s equally important, as they keep adding features to their products.
Currently we are evaluating vendors for multiple storage-related projects. Whether or not all the projects will get funded is another issue, but the fact is, whittling the field down is difficult when management doesn’t have as tight a grasp on the players. Gartner, Gomez, IDC and others do a decent job of providing executive-level synopses of who the vendors are and what segments and sub-segments they target, but that doesn’t go far enough to help me when I have to sell an unknown vendor in a market space that hasn’t even been fully fleshed out yet, much less reached any sort of maturity. I have to educate on not only the product but the vendor as well. Sometimes it is a fight there is no way to win.
Some will raise arguments about trust between upper management and their workforce, but we all know they only trust consultants ;-). I’ll elaborate more about the specific issues I faced with the various projects in a future post, but remember, brand and perception are inextricably tied. Our industry depends heavily on perception. If you don’t believe me, throw this statement out to 3 different execs: (*I know I’m starting a bar fight somewhere*) “hard drives are a more reliable, cost-effective solution than tape as a backup medium, fact or perception?”
I think a reference to The Matrix Reloaded (the second one) when Neo talked with the architect is a great mnemonic (pun intended), or maybe example is a better word. Either way the basic point is that everything is about choice, and convincing Neo to choose 23 people and making a new Zion is like choosing the incumbent instead of Trinity, the upstart vendor.
The glitz and glamour of new product releases tend to overshadow the rather mundane task of performing firmware upgrades on storage systems. However, administrators who take the time to keep their storage systems current with the latest patches may find they can avoid some FC SAN “gotchas” and discover hidden gems that vendors are packaging in their latest firmware releases.
Prompting my thoughts on this topic was a recent conversation with a storage architect. He recently inherited a FC SAN where the storage systems were two major firmware releases behind. The older code on these storage systems was becoming a problem: other devices on the SAN (switches, virtual tape libraries, and servers) had newer firmware with new features, but in order to take full advantage of those features, the storage systems also needed newer code.
I discussed this topic with EMC, partly because the storage systems in question were EMC Clariions, but also because I know from personal experience that EMC releases firmware updates on a fairly regular basis.
In the case of its Clariions, EMC comes out with a major release every 9 to 12 months that includes major new functions. For instance, its December 2006 code release for the Clariion included a new proactive hot spare feature for improved high availability and a Quality of Service feature as a licensable add-on. Its August 2007 Clariion major release added new security features as well as iSCSI enhancements like native replication.
Another interesting feature included in the update is the Software Assistant. This tool scans the Clariion prior to starting a firmware upgrade and provides recommendations as to which code an administrator should load on the system. The Software Assistant also does a high availability check prior to actually starting the upgrade to confirm that the firmware upgrade can be completed without unexpectedly taking the system offline.
EMC recommends to customers that they install major firmware releases for its Clariions shortly after they are released (within 3 to 4 months).
However, there is a more pressing reason to ensure that firmware code is current: firmware upgrades must be applied sequentially. If a Clariion system is two generations back, customers may need to upgrade to the intermediate release before upgrading to the newest one. Though this is generally not a big deal, it adds to the time needed to perform the firmware upgrade and makes it more difficult to back out of an upgrade should something go awry.
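The sequential-upgrade constraint is easy to picture as a planning step. The release labels below are made up for illustration, not actual Clariion code names:

```python
# Sketch of planning a sequential firmware upgrade path: a system
# several releases back must step through each intermediate release.
# Release labels are hypothetical placeholders.

RELEASES = ["R22", "R24", "R26"]  # oldest to newest (made-up labels)

def upgrade_path(current, target, releases=RELEASES):
    """Ordered list of releases to apply, one major step at a time."""
    i, j = releases.index(current), releases.index(target)
    if j < i:
        raise ValueError("cannot downgrade along this path")
    return releases[i + 1 : j + 1]

print(upgrade_path("R22", "R26"))  # → ['R24', 'R26']
```

A two-generations-back system needs two full upgrade windows instead of one, which is exactly the extra time and back-out complexity described above.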
Over the last few weeks, storage insiders have been abuzz with speculation that a merger between HP and Symantec is imminent. Whether such talks are occurring, I cannot definitively say, but if a merger does occur, the whole corporate world might as well kiss goodbye any hopes it had of creating and managing a heterogeneous storage environment.
Obviously, I’m exaggerating a bit. Kissing heterogeneity goodbye won’t happen the day such a deal is signed (if it occurs), and it probably won’t ever completely happen. HP and Symantec will likely both pledge that heterogeneous support will remain part of their product roadmaps. And, it’s likely that is true. However, one can almost bet that when it comes time to prioritize which storage products are tested first in conjunction with future releases of Symantec’s Veritas storage software that HP’s storage products will find their way to the head of the line.
More disconcerting is what Symantec’s acquisition by HP (or whoever they are acquired by or merge with) would mean for the future of heterogeneous storage environments in general. At one time, Symantec was on the vanguard of supporting an enterprise heterogeneous storage environment. Yet, now no one is really shocked or even appears overly concerned when Symantec is mentioned as a candidate for an acquisition or merger by what is traditionally considered a storage hardware vendor.
This mindset is testimony to changing user concerns and priorities. It used to be that storage hardware was the primary cost in a user’s data center. Not anymore. Now, it is the management of the storage hardware — even if a user buys all of the hardware from the same storage vendor.
Managing storage hardware from multiple vendors has become a mind-boggling exercise. While at one time it may have been worthwhile to spend the extra time and money to verify that an HP-UX server worked with an IBM storage system, it is questionable whether that is still the case. Instead, I sense an increased willingness on the part of users to pay a premium to buy all of their storage hardware and software from one vendor and avoid checking the multiple support matrices that heterogeneous environments require.
The looming acquisition or merger of Symantec, regardless of by whom, signals the re-emergence of an old systems management philosophy. Companies no longer want a one-trick pony for their storage management needs, even if that one-trick pony manages heterogeneous storage environments. Instead, more companies appear to want a return to simpler times, when they bought all of their storage hardware and software from one vendor and it all worked nicely together. Let’s just hope that if companies revert to this philosophy, it works better this time than it did in the past.
This was our first Storage Decisions conference in the hilly city built on a fault line, and that meant a fresh crop of Storage Decisions attendees and happenings.
Sun held a “trends and innovation” dinner for press and analysts on Monday night, concurrent with the show (it wasn’t affiliated). About two dozen Sun execs and their audience sat down to a gourmet repast at San Francisco’s trendy Absinthe restaurant. Sun reps included Chief Technology Officer and Executive Vice President of Research and Development Greg Papadopoulos, CIO Bob Worrall, Executive Vice President of Systems John Fowler, and distinguished engineer Subodh Bapat.
As always, Sun was articulating grand visions of the future. “The storage marketplace is about to undergo its most rapid set of changes possibly ever–it will change the economic fortunes of a number of companies,” Fowler predicted (Sun is hoping this will hold true in a positive direction for its storage products). Cost per capacity will be “one-tenth of what you see today.”
Like fellow large players IBM and EMC, both of whom have recently acquired storage service-provider companies, and Symantec, which is preparing a software-as-a-service (SaaS) backup offering, Sun is keen on outsourcing as well. Eventually, according to Bapat, there will only be a few “really big computers” in the world, run by companies like Microsoft and Google in “mega data centers” like Google’s famed farm of PCs. Sun would also like to become a service provider itself, but its real focus is on selling equipment into those service-provider data centers. Sun was already part of a similar build-out in the telecom industry in past years, though it was also pointed out that companies like Google have done their build-outs just fine without Sun.
Meanwhile, new “mega data centers” are beginning to spring up, including a new 500,000 square-foot, 50-megawatt behemoth being built for a national lab set to open next year, according to Bapat. “50 megawatts is bigger than a small city would consume,” Bapat said. “Utilities are going to become a real problem.”
Bapat also predicted that within the next year, a major data center failure will “cause major national effects, and bring forward the importance of data centers as national assets.”
Sun loves to look out 15 years, but ask about the next 15 months and it’s a trickier question. Sun’s recently announced partnership with Dell is part of its attempt to position itself better in the market; Sun will also be going after service provider customers such as SmugMug, according to Worrall, and developing server-farm products with its partners at research universities. How that’ll translate into specific products and sales remains largely unclear.
Sun is on to something when it comes to Fowler’s prediction about the pace of change over the next year, according to Taneja Group founder Arun Taneja. “We’re in such a vibrant market right now,” he said. “I have never seen so much change and innovation happening all at once, ever.”
Some visuals from the show floor:
Everybody’s favorite user-blogger Tory Skyers was Mr. Storage Decisions this year, presenting on the storage issues presented by new mobile devices and participating in a user panel on storage management. Skyers warned users not to overlook the trend toward iPhones and home servers. “An executive buys a home server, plugs in his laptop at home, and the home server asks, ‘wanna back it up?’ Then his kid comes home with the Trojan du jour and suddenly your company’s data is in the Eastern bloc somewhere.”
Data leakage flows both ways in the mobile world, he added, with mobile devices blurring the line between personal and corporate data repositories. “So mp3s and AVIs and maybe even that Trojan find their way to the laptop, which finds its way to your data center, which finds its way to your SAN and your network.” Tory gave some how-tos on controlling that flow of information on both sides of the equation, including “using social networks in your work environment to enforce policy”–specifically, a “Page of Shame” for violators of company storage policies pertaining to mp3s and the like, and strategically placed rumors of “someone getting busted” for violating policies. He recommended tools like Desktop Authority and PowerFuse for content filtering and for executable monitoring for contraband files, using open-source and free Microsoft tools to create document templates for data classification, and SurfControl Mobile Filter to restrict access to websites and protocols on company machines even when users are off the network and VPN. Desktop Authority and PowerFuse will also restrict which mobile devices can be plugged into a corporate machine: a USB mouse will get through, but not a thumb drive or iPod.
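As a toy illustration of the contraband-file sweeps such tools perform, here is a minimal sketch; this is a stand-in under my own assumptions, not how Desktop Authority or PowerFuse actually work:

```python
# Minimal sketch of a contraband-file sweep: walk a directory tree and
# flag files whose extensions are on a blocklist. Real products layer
# policy enforcement and reporting on top of this basic idea.
import os

CONTRABAND = {".mp3", ".avi"}  # illustrative blocklist

def find_contraband(root):
    """Return sorted paths under `root` with blocklisted extensions."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in CONTRABAND:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Pointing a sweep like this at user shares is the SAN-side counterpart to blocking the files at the USB port in the first place.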
“This is a better alternative to sealing your USB ports with epoxy,” something Tory said he’d been asked to do before (by an exec who then realized he had no way to plug a mouse into a $2,500 machine).
In the course of his presentation, Tory also referenced the following tidbit from CNN: customs and border guards can confiscate anyone’s laptop without any grounds for suspicion and copy all the information held within it. Terrifying.
Some more visuals from around the conference:
On Wednesday users gathered for a peer discussion on virtualization that turned up some interesting things, including–be still our hearts–an actual, living, breathing Invista user (we wanted to take his picture). Very few of those present have actually deployed storage virtualization, and those considering storage virtualization tools were also in the minority among this group. “I’m wondering what the benefits are that other people have seen to virtualization, what the return is,” said one user.
The majority of users saying they’d begun virtualizing are using HDS. Almost all users with storage virtualization in place said they used it to front other arrays from the same vendor, with the exception of migrating data from decommissioned storage. “You just don’t want to get into finger-pointing with the different vendors,” according to one attendee.
Three months after filing to become a public company, NAS vendor BlueArc has pushed back its scheduled IPO until 2008, according to industry sources.
Citing SEC regulations, BlueArc declined to comment on its IPO schedule. But several industry and financial analysts familiar with the company say its bankers have decided to hold off on going public. BlueArc filed for its IPO Sept. 7, and it normally takes a company about three months to begin trading shares as a public company. But BlueArc has yet to set its expected share range or go on the roadshow that precedes an IPO. There is usually a gap of at least two weeks between setting the share range and the IPO, which means BlueArc would run smack into the holiday season if it tried to go public by the end of 2007.
It’s not clear why BlueArc decided to wait, but it’s likely that the company and its bankers anticipate a lower price for shares than they originally expected if they go public now.
“I’m sure market conditions haven’t helped,” said a banker for a securities firm who is not involved with BlueArc’s IPO.
A financial analyst who follows storage said investors “are doing more due diligence on IPOs now,” and said it could hurt BlueArc that the stock prices of storage systems companies Isilon and Compellent have dropped drastically since their recent IPOs. The analyst said BlueArc might want to show another quarter of solid growth to help its case. There is also the possibility of an acquisition. IP SAN vendor EqualLogic had filed for an IPO before Dell scooped it up for $1.4 billion last month. Storage insiders agree that Hitachi Data Systems is the most likely suitor because it has an OEM deal to sell BlueArc NAS systems and an equity stake in the company.
Even without BlueArc, 2007 was an active year for storage IPOs. Pure storage companies Compellent, 3PAR and Data Domain all went public, along with EMC’s partial spinoff of server virtualization company VMware and InfiniBand suppliers Mellanox and Voltaire.