After enjoying the last couple of hours of 2009 with my family, I thought how fitting it would be to end the year with a post!
I’ve been incredibly busy this year and my lack of posts really shows it; one would think I forgot my login or something. In that time, however, there has been no lack of great topics to talk about. Here are a couple that lit my candle in 2009:
Consumer computing is fast approaching the level of enterprise computing, making corporate citizens more computer savvy and making IT management work harder to keep things humming along. Mark my words: you are going to see quite a few work-slash-home networking products come to market in 2010, specifically data protection and storage products that tout “office integration” or “workplace integration”.
The mobile computing and storage space, and the rate at which consumer mobile devices are making inroads into the datacenter, is something I’m paying close attention to. Specifically, the Android OS and the Nexus One and Droid hardware: these devices are significant to enterprise computing because they take the whole idea of a netbook to another level!
If you remember the Toshiba Libretto, these new devices are what the Libretto could have been. The phones are fast and give the savvy user the ability to essentially replace their office with a hand-held device. And for those with super-security-conscious IT departments, there are companies like Good Technology, whose “Good for Enterprise” lets an administrator remotely wipe Exchange data held in a fully encrypted container on a Droid, so “security” can’t be used as a reason not to support the platform.
Take this a step further: I’m sure you’ve been asked at least once already to store backups of a user’s phone to tape, or better still, seen a backup of a user’s phone on their shared drive. If you haven’t yet, you’d better get ready for it!
Virtualization has been rampant, and I predict it will be in my toaster within the year, allowing me to virtually toast multiple slices of bread simultaneously and store trend info on how many times I’ve burned my Eggos on SSD. While I’m being flippant, we may actually see a hypervisor-capable toaster or fridge or washer, and apparently I’m not the only one who thinks so. In an article on a New York Times blog, Sehat Sutardja was quoted as saying: “[Virtualization] will become pervasive…It will be used in everything from TVs to IP phones to digital picture frames to washing machines.”
If Android is in a washing machine, then I have Linux and everything that is available to Linux in that washing machine … just think of the Folding@home scores you could rack up if we linked the neighborhood washing machines together! And think about all the data that will need to be stored when they start tracking wash cycles of a particular garment via RFID!
On a more serious note, the age of full operating systems on small to midsized branch-office network-attached storage devices, as well as on smarter switches and other infrastructure gear, is upon us. Microsoft is not standing still: Windows 7 is small and much faster than its predecessors (why do I feel like that’s a paraphrase of the Architect from The Matrix?) and is definitely a viable OS for these devices. Then there’s raw Linux, and Moblin is making its way onto the stage alongside Android, among others. And remember, all these things have one thing in common: they need somewhere to store the data they produce.
Speaking of virtualization, the march of development in the virtualization management software space is going to pick up steam in 2010, and there are going to be some casualties. The winner will be the one that allows truly heterogeneous management of my virtual data center from the storage up, and after taking a look at Cisco’s offerings I’m going to be paying very close attention to what they do. I’ve been digging really deep into vSphere, and it’s jam-packed with goodies. Orchestrator is a little gem: properly executed, it can add a good bit of speed and agility to any rapid provisioning initiative you may have. BUT be careful: with a poorly orchestrated (you knew that pun was coming, didn’t you?) workflow, that shiny new NAS with 400 TB of storage will be gone in a day.
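On that cautionary note, the guard rail I’d want in any provisioning workflow can be sketched in a few lines. This is purely illustrative Python, not actual Orchestrator code (Orchestrator scripts its workflows in JavaScript); the capacity figures and function names here are my own invention:

```python
# Illustrative quota guard for a rapid-provisioning workflow.
# All names and numbers are hypothetical, not any vendor's API.

TOTAL_CAPACITY_TB = 400   # the shiny new NAS from the example above
HEADROOM_TB = 40          # never provision into the last 10%

def can_provision(requested_tb: float, allocated_tb: float) -> bool:
    """Approve a request only if it leaves the reserved headroom intact."""
    remaining = TOTAL_CAPACITY_TB - allocated_tb
    return 0 < requested_tb <= remaining - HEADROOM_TB

# A runaway workflow asking for the whole array gets refused
print(can_provision(10, 100))   # True: plenty of room left
print(can_provision(400, 0))    # False: would consume the headroom
```

The point isn’t the arithmetic; it’s that the check runs before the workflow touches the array, so one bad loop can’t drain the pool overnight.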
Enterprise storage has continued to move forward at a blistering pace, with drives breaking the 2 TB mark and some serious performance increases in the form of SSDs, SATA III, and Fusion-io putting flash directly on the bus. Price is what I watch in this space: SSD prices will keep dropping while performance continues to climb, and we will see a proliferation of end-to-end solutions mixing the two, a la Exadata and the Sun 7000 line. Take a look at what Fusion-io is doing in the high-end gaming market! It’s funny, but the consumer machines of today look more and more like the specialized workstations and servers of yesterday.
I see some things we really dropped the ball on last year, too. Convergence really isn’t here yet. The drive to make one device the “media hub,” and then back all that stuff up, is getting there but hasn’t quite caught on yet. I think once it gets closer it could drive an entire wave of datacenter build-outs to handle it. I can also see telcos getting into the act more aggressively, offering storage services at their major POPs to enable some of these consumer products to work properly. That has some unintended but positive side effects for small and medium businesses, because they will have ready access to fast, reliable online storage. Well, at least in theory. I’m still waiting for it to happen!
Cloud storage also hasn’t really shaped up to be the game changer I thought it was going to be. I like the idea of not owning infrastructure and I’m a really big fan of the rapid provisioning/de-provisioning model, but I just don’t see the bandwidth needed for that to work here in the US the way it really should. In Korea and other places that have deployed network infrastructure recently, I see cloud as a viable model, but not here.
With that, folks, I’m back and rarin’ to post!!!
A website went up today taking registrations for a web conference being put on Jan. 26th by NetApp Inc., VMware Inc., and Cisco Systems Inc., a coalition that looks similar to the VCE alliance announced by VMware, Cisco and EMC Corp. last October. Except in this case, the storage player is EMC’s archrival NetApp.
According to the site, the webcast will cover “what we’re introducing to help you imagine and achieve virtually anything with one elegant solution.” It will feature Tony Bates, senior vice president and general manager of Cisco’s Service Provider Group; Tom Georgens, CEO of NetApp; and Paul Maritz, CEO of VMware.
This looks like another in a line of “stacks” we’ve seen put together by large vendors and their partners in the last six months or so; just yesterday, HP and Microsoft disclosed an alliance that will also focus on infrastructure bundles to support virtual servers.
The Cisco-NetApp-VMware troika also serves as a reminder that none of these alliances are exclusive, and we’ll see vendors making deals with enemies of their partners. When it comes to storage relationships, don’t expect monogamy.
There’s been some commotion this week about the announcement from Google that you can now use Google Docs to upload and manage any file type, with support for uploads up to 250 MB in size, total free storage space of 1 GB, and additional storage for $0.25 per GB per year. With those prices, Google may be offering the cheapest cloud storage capacity available anywhere.
Stephen Foskett, consulting director for enterprise cloud storage player Nirvanix, has tracked this closely and has a pretty good rundown of the discussion about whether or not this is actually Google’s long-rumored GDrive storage service. Personally I think the point is moot — whether it’s called GDrive or not (Google is adamant in public statements that it is not), this is still an online file storage service, complete with a file sync option through a partnership with Memeo. Same dif.
Foskett also points out that for enterprise users to get support, the cost is $3.50 per GB per year, “much more in line with existing offerings” from Amazon, Rackspace and others.
So, what’s the big deal? For one thing, despite the fact that new cloud computing companies seem to be popping up like mushrooms, household brands go a long way in getting people’s attention. Google’s approach may not break new ground for cloud file storage, but it will gain cachet simply because it’s Google.
For consumers, the fact that this is being done through Google Docs and at such a cheap price may take some share away from Amazon’s S3, which requires either API integration or a third-party interface to provision its storage buckets and charges the same prices for capacity regardless of the type of user. S3 also charges for bandwidth while Google Docs doesn’t.
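To put the quoted prices in perspective, here’s a back-of-the-envelope comparison using only the figures cited above. One assumption on my part: that the 1 GB free tier applies before the paid rate kicks in, which Google hasn’t spelled out.

```python
# Annual cost at the rates quoted above: $0.25/GB/year consumer,
# $3.50/GB/year with enterprise support. The free-tier treatment
# is my assumption, not confirmed by Google.

FREE_GB = 1
CONSUMER_RATE = 0.25      # $/GB/year
ENTERPRISE_RATE = 3.50    # $/GB/year

def annual_cost(gb: float, rate: float) -> float:
    billable = max(0.0, gb - FREE_GB)
    return billable * rate

print(annual_cost(100, CONSUMER_RATE))    # 24.75 -- pocket change
print(annual_cost(100, ENTERPRISE_RATE))  # 346.5 -- Amazon/Rackspace territory
```

At consumer rates, 100 GB for a year costs less than a couple of movie tickets, which is exactly why the price is getting attention.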
Dell is taking a bottom-up approach to 6 Gbps SAS, beginning with its low-end PowerVault MD1200 and MD1220 direct-attached storage (DAS) systems.
Dell launched the two 6-gig SAS systems today along with three 6-gig SAS controllers for storage and Dell servers.
The new generation of SAS systems has double the bandwidth of the 3 Gbps SAS that has been on the market since SAS began replacing parallel SCSI in 2005.
Last year, Hewlett-Packard rolled out its StorageWorks D2000 external arrays with 6 Gbps SAS.
Dell senior storage product manager Howard Shoobe wouldn’t say when he expects 6 Gbps SAS support for Dell’s EqualLogic iSCSI SAN or the CLARiiON storage arrays it co-markets with EMC, but many people in the industry believe 6-gig SAS will threaten Fibre Channel as the dominant high-end disk interface.
“With 6-gig, we see SAS becoming more compelling,” Shoobe said. “This is an important step and the foundation for the next generation of storage products.”
The MD1200 is a 2U box that holds 12 3.5-inch drives or a combination of 3.5-inch and 2.5-inch drives. It expands to 96 drives with additional enclosures. Dell positions the MD1200 for applications such as disk backup, email, and streaming media.
The MD1220 is also a 2U system but holds 24 2.5-inch SAS drives and expands to 192 drives with eight additional enclosures. Dell sees the MD1220 being used for more I/O-intensive applications such as large databases and Web serving.
Both systems also support SAS interface SSDs from Pliant Technology. The PowerVault MD1220 costs $5,637 and the MD1200 is $5,145.
The MD1200 and MD1220 use the new PERC H800 6-gig SAS controller, which supports redundant pathing and I/O load balancing. With redundant pathing, both cables from the controller connect to the DAS system, so if one cable is disconnected the system continues to run. Dell is also bringing out PERC H700 and H200 controllers for 11G PowerEdge servers.
Last week, we wrote about some early storage industry predictions for 2010 from analysts and users, but more industry experts have since checked in with their outlooks for the coming year. Here are a few of the themes from these latest predictions from Symantec Corp., Enterprise Strategy Group (ESG) and Forrester Research:
SMB vs. Enterprise: opinions vary
One of the more interesting common topics explored by these reports is the outlook for technology adoption among different sizes of business. Beyond that similar focus, however, there are some significant differences in how different organizations see the market developing.
According to Symantec’s State of the Data Center survey, midsized enterprises are “leading the way” in adoption of new technologies like cloud storage. According to Matthew Lodge, senior director of product marketing for Symantec, the midsize enterprise, defined by Symantec as 2,000 to 9,999 employees, has a “more intense” data center than its larger counterparts. “They’re deploying more applications [than larger companies] and expecting major changes in 2010,” Lodge said. He says new applications are driving more stringent availability requirements, while staffing remains tight at these organizations.
ESG, meanwhile, has a different view of what makes a midsized business in its preliminary 2010 spending research, according to research director John McKnight. “Generally speaking…[midsized organizations according to ESG’s definition] almost always lag the enterprise in adoption of all new technologies,” McKnight wrote in an email to Storage Soup. “There’s a lot of conventional wisdom that cloud/SaaS adoption will be led [by] smaller organizations. That may be true in the small business segment (i.e., the “S” in “SMB”) but our data has never shown much of a discrepancy between midmarket and enterprise when it comes to cloud.”
Pent-up demand? Capex vs. Opex
While ESG and Symantec surveyed enterprise storage managers, The Forrester report takes a higher-level view of the overall global IT market, predicting that
The US IT market will grow by 6.6% in 2010 (twice the 3.1% growth in nominal GDP), following a drop of 8.2% in 2009. The global IT market will rise in 2010 by 8.1% in U.S. dollars, and by 5.6% in local currencies. Growth will start slowly in 2010 but pick up steam later in the year, with computer equipment (especially PCs and storage) and software leading the way, and IT consulting services following.
Again, there are differences among the 2010 storage outlook reports when it comes to how much spending might rebound from 2009 levels, as well as whether the rebound will be focused on capital expenditures (capex) or operational expenditures (opex). While the IDC 2010 predictions we reported on last week say IT will undergo “a shift away from capital cost efficiencies to operational cost efficiencies” and develop “a business-level bias in most companies toward virtualized and/or services-oriented offerings for storage solutions,” from ESG’s perspective that transition happened last year.
While IDC predicts continued spending constraints leading to evaluation of services offerings among storage pros, ESG sees justifications for purchases this year trending to “more than just cost-cutting,” according to ESG analyst Mark Peters. “We’re seeing the beginning of a shift toward getting back to business in IT,” he said. “There’s a bit of a softening and cautious optimism.”
According to the preliminary ESG research report, of approximately 500 respondents, 52% said IT spending would increase from the previous year, as opposed to 43% in 2009.
Meanwhile, Symantec’s survey respondents still see budget constraints in 2010, particularly when it comes to operational costs like staffing. “Half of all enterprises are somewhat/extremely understaffed,” Symantec’s report reads. “Networking, virtualization and security [are] the most understaffed. [The] biggest issues [are] Budget and finding qualified applicants. 76% have the same or more open job requisitions this year.”
Unified or smart computing — another word for efficiency?
Forrester’s report identifies a set of concepts it calls Smart Computing as leading the next wave of tech innovation this year, described as “leading-edge technologies like service-oriented architecture (SOA), server and storage virtualization, videoconferencing, unified communications platforms, business intelligence and analytics, and new process apps built on digital business application principles.” Like ESG, Forrester forecasts a return to business priorities as opposed to simpler cost-cutting justifications for purchases:
As 2010 progresses and the memories of the 2009 downturn fade, CIOs will start to pay attention to the ways that a new generation of technology can help achieve better business results. In this period of tech innovation and growth around Smart Computing, reference clients and “low total-cost-of-ownership” marketing become less important than media coverage and buzz in the blogosphere about how your solutions have helped companies achieve breakthrough business results. Companies will want technologies that put them ahead of competitors, not technologies that are the same as what competitors are using.
ESG uses the term “unified computing” to describe the server / software / storage IT stacks offered by larger vendors last year. According to McKnight’s email,
one of the only areas in our 2010 survey where the midmarket shows increased interest compared to enterprises (to tie this question to the one above) is in the area of unified computing – 17% of midmarket respondents put unified computing (i.e., integrated server/storage/networking stack) in their top 10 list of 2010 IT priorities compared to 11% of enterprises. Again, there’s a fair bit of conventional wisdom out there that says that these new unified computing platforms will be most attractive to large enterprises and service providers, but this data actually corroborates discussions ESG has had with a number of end-users, most of whom have indicated that they view the real niche for unified computing as being smaller organizations or smaller locations (i.e. ROBOs) of big firms.
Disaster Recovery–IT’s New Year’s Resolution
The social networking apps I use to communicate with friends have been full of complaints this month from regular gym-goers about the “New Year’s Resolution” crowd they expect to vanish by February. I’m reminded of that kind of recurring but short-lived New Year’s resolution when, every year at about this time for at least the last three years, the discussion has turned to disaster recovery.
Eighty percent of respondents to Symantec’s data center survey said they were confident in their DR plan, but as with Symantec’s SMB survey last September, that confidence isn’t necessarily supported by facts. The data center survey found that one-third of respondents either do not have a documented DR plan or haven’t re-evaluated that plan in the last 12 months, and that a significant proportion of plans don’t address cloud computing (41%), remote offices (28%) or virtual servers (23%).
So why this perennial focus on disaster recovery seemingly without significant improvement or change in practice? “The inference we’re making is that it has to do with staffing problems,” Symantec’s Lodge said. “It increases expectations on the vendor side to help with [automating DR].”
“Most organizations don’t have a formal DR plan that’s regularly tested,” ESG’s McKnight adds. “There’s a difference between DR as a business priority as opposed to how it translates into specific formal processes and technologies. Having a plan and testing and improving on that plan are different questions.”
Emulex offered the first hard evidence that storage spending picked up at the end of 2009, a year in which storage spending overall dropped significantly. And its HBA rival QLogic served up the second piece of evidence.
Emulex Monday evening said revenue from the fourth quarter would be around $107 million to $108 million, way above its previous forecast of $88 million to $92 million. The new forecast is slightly below Emulex’s fourth-quarter 2008 revenue of $109 million and up about 26% from the third quarter of 2009. Emulex said its HBA business grew about 28% from the previous quarter and its embedded switches revenue increased more than 20%.
Because Emulex HBAs and switches are sold inside storage systems from large vendors – its biggest OEM customers are IBM, Hewlett-Packard and EMC – its better-than-expected sales are seen as a sign of an industry-wide uptick.
“It seems customers flushed whatever little budgets they had for 2009,” Wedbush Securities analyst Kaushik Roy wrote today in a note to clients. “Emulex revenues are highly correlated to spending on storage systems and storage area networks, and our checks lead us to believe that the strength was fairly even across server and storage systems vendors.”
Today, QLogic said it too exceeded its previous forecast last quarter. QLogic’s updated guidance of revenue in the range of $147 million to $149 million represents a 12% to 13% surge from the previous quarter, and is above its earlier forecast of $134 million to $140 million.
Even before the HBA vendors updated their forecasts, Wall Street analysts were predicting a thaw in storage budgets.
A note issued by RBC Capital Markets analyst Amit Daryanani last week forecast a storage spending increase of around 7% to 8% this year – higher than overall IT spending – following an 8% drop in 2009.
“Given most organizations deferred much of the storage spend last year, we expect a relatively stronger 2010,” Daryanani wrote. “We expect IT storage spending to continue to shift toward networked storage at the expense of direct-attached storage. In addition, we believe network attached storage (NAS) will continue to grow at a faster clip than storage area network (SAN), given the relatively bigger surge in unstructured data creation.”
Dot Hill Systems has ambitious plans for software it will acquire from Cloverleaf Communications for $12 million.
Dot Hill executives discussed those plans during a conference call Thursday night. The short answer to what Dot Hill expects to do with Cloverleaf’s Intelligent Storage Network (iSN) is: everything. That’s a stark difference from what Cloverleaf did with the software, which was next to nothing as far as sales go.
Dot Hill execs say iSN is capable of delivering heterogeneous storage virtualization, unified SAN/NAS storage, thin provisioning, synchronous/asynchronous replication, snapshots, CDP, automated storage tiering, and migration of data from any array to any array. Except for data deduplication, Dot Hill hails iSN as a solution for just about any data protection and management needs.
“This allows us to compete pretty well with just about anything in the industry,” Dot Hill CEO Dana Kammersgard said.
As for specific products, Kammersgard says Dot Hill will have an in-band storage management appliance, a bundled appliance with Dot Hill storage, and a data migration appliance based on iSN lite over the next six months to a year. It plans to follow with a unified storage appliance and standalone software package within one and two years, followed by fully integrated unified modular storage platforms.
Besides making Dot Hill more of a software play, Kammersgard says the acquisition will accelerate the vendor’s plans to enter the channel with appliances. Dot Hill currently sells storage arrays almost exclusively through OEM partners Fujitsu, Hewlett-Packard, NetApp, and Sun.
Dot Hill execs say it would’ve taken three years and cost $30 million to $40 million to develop similar software in-house. Buying Cloverleaf costs $2.5 million in cash and $9.5 million in stock. They expect the deal to close within a few weeks.
Kammersgard says iSN will make Dot Hill more competitive with storage systems vendor Compellent as well as management software vendors FalconStor, DataCore and LSI’s StoreAge platform.
But by Dot Hill’s accounting, Cloverleaf was sputtering along with 25 customers and $1 million in revenue in 2009. If iSN is so good, why was it largely ignored in the market? Kammersgard says Cloverleaf devised the software for large enterprises and specialized customers such as the military defense industry, and that narrowed its market. He says Dot Hill will “scale the software down” to make it a better fit for mainstream storage users.
So now we know what Dot Hill will be up to over the coming months: its developers will scale down its new software while marketers scale up expectations.
Storage systems vendor Dot Hill acquired privately held storage virtualization software maker Cloverleaf Communications Inc. this week for $12 million in cash and stock as part of its plan to focus more on the management of its arrays.
Cloverleaf’s Intelligent Storage Network (iSN) products are designed to manage heterogeneous storage environments as well as disaster recovery under one user interface. The transaction will consist of $2.5 million in cash, plus $9.5 million in Dot Hill stock.
According to a Dot Hill press release, Cloverleaf was formed in 2001 out of Elta Systems/Israel Aerospace Industries and was VC funded to the tune of $43 million. Dot Hill CEO Dana Kammersgard said in the release that the move signals a shift for Dot Hill, up to this point primarily a provider of storage hardware systems to OEMs, to intensify its focus on the storage software market.
“This acquisition is in line with our previously stated objectives to transform Dot Hill into a storage software and solutions focused company,” Kammersgard was further quoted as saying in the release. “Cloverleaf’s products accelerate this plan by as much as two years and with the breadth of features they provide, we believe we can compete very well with both the virtualization appliance companies as well as with the newer storage companies who bundle similar features that operate solely on their own array products.”
Dot Hill sells its storage arrays through OEM deals with Fujitsu, Hewlett-Packard, NetApp, Sun, and others, so it probably senses an industry-wide shift to add virtualization features to its systems. But you have to wonder how much of an opportunity Cloverleaf management and investors believed existed for their software. They sold out for barely one quarter of the amount invested, and most of what they got is tied to the value of Dot Hill stock.
Dot Hill will hold a conference call Thursday to discuss the acquisition, which is expected to close mid-month.
DataCore kicked off 2010 with updates to its SANSymphony and SANMelody storage virtualization software, adding support for logical volumes up to 1 PB and the Asymmetric Logical Unit Access (ALUA) standard.
The expansion of logical volume support from 2 TB to up to 1 PB is made possible by DataCore’s move to 64-bit support in the latest major release of its storage virtualization software. The 64-bit support lets the software address much more storage capacity in a single volume than previous versions could.
DataCore director of product marketing Augie Gonzalez said that as with last year’s 1 TB “mega-cache” support, the logical limit is beyond where most customers will be looking to stretch today. But the previous 2 TB limit had grown impractical for making RAID sets out of the latest 1 TB and 2 TB SATA disks.
“The logical volume expansion and thin provisioning allows users to say, ‘I don’t care how big the volume will be in the future’,” Gonzalez said. “Rather than defining LUNs up front and then having to make changes later, you can immediately set up a large volume and expand the storage with no application or infrastructure changes.”
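The old 2 TB ceiling, incidentally, is exactly what you’d expect from 32-bit block addressing over 512-byte sectors. That mapping is my inference; DataCore hasn’t detailed the internals. A quick check:

```python
SECTOR_BYTES = 512           # the classic logical sector size

# 32-bit addressing: 2**32 sectors tops out at 2 TiB per volume
limit_32 = 2**32 * SECTOR_BYTES
print(limit_32 == 2 * 1024**4)   # True: the familiar 2 TB wall

# 64-bit addressing puts the ceiling far beyond DataCore's 1 PB figure
limit_64 = 2**64 * SECTOR_BYTES
print(limit_64 > 1000**5)        # True by a huge margin
```

As Gonzalez notes, that 64-bit ceiling is well past where customers will stretch today; the 1 PB figure is a product limit, not an addressing one.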
A DataCore service provider customer says adding ALUA support will improve management in his storage environment. Joseph Stedler, director of data center engineering for cloud computing and managed IT service provider OS33, said he uses DataCore’s SANSymphony software to host back-end storage for his SMB customers. Right now SANSymphony is running on IBM System x servers in front of IBM DS3400 arrays and Xiotech Emprise 5000 storage devices, mirroring between redundant sets of the tiered hardware. The logical volume expansion will be especially helpful in cutting down on backup administration overhead, Stedler said.
“With two terabyte volumes, we had to present things in 2 terabyte chunks to our Veeam [backup] server,” he said. “With a larger primary volume we could have fewer backup targets to manage.”
Stedler said the addition of ALUA support will be even more important for creating multipath I/O in OS33’s VMware environment. “ALUA solves a major pain for everybody running DataCore with VMware,” he said. “The way VMware understood it before was active-passive only. DataCore was able to do active-active failover but with VMware you’d have to run multipathing with the most recently used path. The new release is fully compliant with ALUA, so VMware can view it as active-active.”
Stedler said he’s looking forward to the addition of more granular scripting capabilities for the software in future releases.