The early message from EMC executives at EMC World is, “If you’re not using a private cloud yet, you’re late.”
That’s how EMC chief marketing officer Jeremy Burton opened the first day’s keynote session.
“If you have not begun the journey to the private cloud, what are you doing? You’re late,” said Burton, adding “cloud meets big data is what EMC World is all about.”
EMC CEO Joe Tucci declared “the cloud wave is the biggest and most disruptive change we’ve seen yet in IT” during the early moments of his keynote.
And just in case anybody gets the idea that EMC has demoted another of its lynchpin technologies, Tucci said “virtualization is the key to the cloud. This is the year when most, if not all, mission critical applications get virtualized and run on cloud topologies. This will be the year when all IT professionals will understand the opportunities [presented by the cloud].”
Tucci also predicted a hybrid cloud combining private and public clouds will become the “de facto enterprise model” because it offers, among other things, “incredible levels of agility.”
Tucci gave a brief preview of EMC’s product launches at the show, including a new VPlex active-active storage system with asynchronous mirroring, some Isilon scale-out NAS tweaks, Hadoop support for Greenplum appliances and something he called Project Lightning. Tucci called Project Lightning “a top secret” and said EMC president Pat Gelsinger would address it during his keynote later today. Sources say Project Lightning involves flash solid state storage.
NetApp closed its $480 million acquisition of LSI’s Engenio storage system division today and hit the ground running with three new systems based on Engenio technology.
NetApp rechristened Engenio storage as the NetApp E-Series aimed at organizations with large data sets and high performance block data needs. It launched the E5400 midrange system for OEMs as well as a Hadoop storage system for analytics and a full-motion video system for battlefield intelligence.
NetApp also said it has commitments from two of Engenio’s largest OEM customers – IBM and Teradata Corp. – to transfer those OEM relationships to NetApp.
The E5400 is a 4U, 60-drive dual-controller system that supports 120 TB of raw capacity and 6 GBps of sustained throughput. It uses 3.5-inch 2 TB nearline SAS drives. The E5400 is part of the family that IBM sells as its DS5000 through an OEM deal it had with LSI.
NetApp’s E-Series consists of the E2600 entry level storage, the E5400 midrange platform and the E7900 for high performance computing (HPC).
NetApp’s Hadoop system is aimed at the big analytics segment of Big Data – a term vendors talk about so much today that it can still confuse their customers. NetApp divides Big Data into Big Content and Big Analytics. The analytics segment deals with running complex queries on data, much of it in structured data warehouses. The Hadoop storage system is built on the NetApp E2600, which Engenio was selling before the NetApp acquisition. The E2600 has a base configuration of 16 to 32 nodes in a shelf.
Hadoop is a free, Java-based programming framework that supports the processing of large distributed data sets. The E2600 is designed to help customers build Hadoop systems quickly to ingest large data blocks. “Hadoop tends to be the Grand Central Station of enterprise data warehousing these days,” said Val Bercovici of NetApp’s CTO office.
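For readers unfamiliar with Hadoop, its MapReduce model can be illustrated with the canonical word-count example. The sketch below simulates the map, shuffle and reduce phases locally in plain Python rather than running on an actual Hadoop cluster; it is only meant to show the programming model.

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def reducer(word, counts):
    # Reduce phase: combine all values emitted for a single key.
    return word, sum(counts)

def map_reduce(lines):
    # Shuffle phase: group mapper output by key before reducing.
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

print(map_reduce(["big data big analytics", "big queries"]))
# → {'big': 3, 'data': 1, 'analytics': 1, 'queries': 1}
```

On a real Hadoop cluster the same mapper and reducer logic would run in parallel across many nodes, with the framework handling the distribution and shuffle.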
Bercovici said NetApp chose the Engenio platform rather than NetApp’s FAS series because the Engenio systems are block-based and designed more for streaming I/O to low-cost SATA drives, making them a better fit for Hadoop use cases.
NetApp’s Hadoop launch comes on the same day that NetApp rival EMC rolled out its GreenplumHD Data Computing Appliance with Hadoop at EMC World.
The NetApp full-motion video system is designed for government intelligence agencies that need to store image data from unmanned drones and satellites on the battlefield.
NetApp welcomes most Engenio employees, but not Permabit dedupe
NetApp VP of product and solutions marketing Chris Cummings said most of Engenio’s 1,100 employees will join NetApp but didn’t have an exact number.
It doesn’t appear that NetApp will carry over Engenio’s fledgling OEM deal with Permabit for primary data deduplication. Cummings said that deal was struck when LSI was looking to make its systems more focused on general IT use, but NetApp intends to focus Engenio storage on customers with large bandwidth needs. He said NetApp will likely leave it to OEM partners to add data reduction and sell Engenio as part of solutions behind NetApp’s V-Series, which already provides primary dedupe.
Data archiving is often incorrectly assumed to be an extension of backup. That’s mainly because earlier technology limitations necessitated the use of backup software to store data on devices.
Archiving is the storing of information that is not frequently accessed but must be retained. An archiving system’s defining characteristics are speed of access, the cost of storing the information over decades, and the ability for applications to access data in context.
There are two primary archiving use cases. Many software programs are available to move and manage the information on an archive system, with features that take into account those two use cases.
The first archiving use case is to store information that has reached a known point in a process or business. The known point could be when a project finishes but the project information must be retained for a future requirement. An example is a construction project where materials such as schedules, product information and sub-contracts must be retained after the project finishes. The information still needs to be available and may be accessed intermittently, but it has a low probability of access.
There are many examples of this first archiving use case, and they share a similar operational characteristic: the information, typically a set of files, needs to be moved to the archiving system and protected with multiple copies once archived. Archiving software used in this case must maintain indexes and access controls over the information in addition to handling the movement to the archiving system.
The second primary archiving use case is to move data that is no longer active off high-performance, expensive storage. Moving the data to an archive system frees up space for critical data on the primary storage tier and lessens the need to purchase additional capacity. Protecting the data at the time of archive according to the data protection requirements also takes that data out of the regular backup process, reducing backup windows.
Sophisticated software lets organizations automate the selection and movement of the inactive data while providing seamless access to the information on the archive system.
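The selection-and-movement logic such software automates can be reduced to a simple policy: find files whose last access time is older than a threshold and relocate them to the archive tier while preserving their directory context. The sketch below is an illustrative toy, not any vendor's product; the mount points in the example call are hypothetical.

```python
import os
import shutil
import time

def archive_inactive(primary_root, archive_root, age_seconds=365 * 24 * 3600):
    """Move files whose last access time is older than the threshold
    from the primary tier to the archive tier."""
    cutoff = time.time() - age_seconds
    for dirpath, _, filenames in os.walk(primary_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.stat(src).st_atime < cutoff:
                # Mirror the directory layout on the archive tier so the
                # data stays addressable in its original context.
                rel = os.path.relpath(src, primary_root)
                dst = os.path.join(archive_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

# Example (hypothetical mount points):
# archive_inactive("/mnt/primary", "/mnt/archive")
```

Commercial archiving software layers indexing, retention controls and transparent recall on top of this basic movement step.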
Both use cases for archiving provide valid justification for archiving storage systems and software. More information on archiving can be found on the Evaluator Group site.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Ready or not, 16 Gbps Fibre Channel is coming.
Emulex and Brocade this week said they have 16-gig FC devices being qualified by storage and server vendors, although there doesn’t appear to be any screaming demand from organizations for more than 8-gig FC support. The aggressive move to 16-gig FC is a sign that mass adoption of Fibre Channel over Ethernet (FCoE) is still a long way off and pure FC has plenty of life left.
Emulex got the ball rolling Monday with its XE201 I/O Controller, a converged adapter that supports 16-gig FC along with FCoE, 10-Gigabit Ethernet (GbE) and 40 GbE. Emulex will demo the XE201’s 16-gig capability at EMC World next week.
Brocade Tuesday rolled out what it calls the first end-to-end 16-gig FC platform of products. They include the DCX8510 Backbone SAN switch with up to 384 16-gig ports at line-rate speeds, the 6510 edge switch with 24 or 48 ports, and the 1860 Fabric Adapter that supports FC, Ethernet and FCoE. Brocade is also adding 16-gig FC support to its Network Advisor 11.1 unified management software and Brocade Fabric Manager 7.0.
Brocade said the new switches and software will be available this quarter. However, its OEM partners probably won’t complete qualifications before August. EMC and Hewlett-Packard are already qualifying the 16-gig products with other storage vendors to follow soon.
Emulex VP of marketing Shawn Walsh said he expects OEM certification for the second half of this year for Emulex’s 16-gig adapter.
It’s no surprise that Brocade is pushing the faster FC. Brocade has always been less bullish on FCoE than its FC switch rival Cisco, and Brocade picked up significant market share gains by beating Cisco to 4 Gbps and 8 Gbps FC gear. During its tech summit day Tuesday, Brocade execs said the market agrees with their take on FC and FCoE.
“FCoE adoption has been modest,” said Jason Nolet, VP of Brocade’s data center and enterprise networking group.
“Our customers say they want to stay with Fibre Channel,” Brocade CTO Dave Stevens added.
Emulex’s Walsh agreed with that. “Ten-gig [Ethernet] adoption is happening fast, but there’s still discrete networks,” he said. “Customers are not going to throw away what they have today. One of the big questions we get is, ‘What is Emulex’s commitment to 16-gig Fibre Channel?’”
It’s almost certain that more vendors will have 16-gig FC products by the Fibre Channel Industry Association (FCIA)’s October 16-gig plugfest.
If history is any indication, Cisco will trail Brocade by six months to a year with 16 Gbps FC. Emulex’s main adapter rival QLogic isn’t commenting on its 16-gig roadmap but is expected to support it this year.
Brocade execs point to virtualization and the cloud as drivers of the faster technology. However, there will be a price premium to move from 8-gig to 16-gig. That was also the case with the move from 4-gig to 8-gig, and that transition was slower than the moves from 1-gig to 2-gig and from 2-gig to 4-gig when there was no price hike for the higher bandwidth. While the first two transitions took about two years each for the higher bandwidth to become dominant, the move to 8-gig took about three years.
In a blog post this week, Wikibon senior analyst Stuart Miniman advised organizations to pursue 16-gig FC and converged networks on their internal schedules rather than according to vendor roadmaps.
“Most users can take a slow and deliberate approach to the adoption of new generations of speeds,” Miniman wrote. “ … customers can support both FC and Ethernet and consider the migration on internal schedules rather than on the pace that the vendor community may want to push or pull. For equipment refresh cycles that start in 2012 or later, consider looking for adapters that can support the latest of both FC and Ethernet.”
Hewlett-Packard today said it will launch its next-generation EVA midrange storage system in June, and denied that it will stop selling its Hitachi-manufactured P9500 XP enterprise platform.
For now, HP isn’t giving much detail on the P6000 EVA except to say it will have a 6 Gbps SAS back end, 2.5-inch SAS drives and 8 Gbps Fibre Channel connectivity. The vendor is offering an early access program to customers ahead of the official launch at the HP Discover user show June 6-10.
“We want to let folks know where we stand,” said Craig Nunes, marketing director for HP StorageWorks. “A quarter ago, there was a lot of speculation about when it [the next EVA] is going to come. We’re trying to be as proactive as possible.”
HP is also sending a message that it will continue to develop the EVA line as well as continue the P9500 that comes from an OEM deal with Hitachi Limited.
There has been speculation in the industry that HP would drop either the XP, EVA or both product platforms since it acquired 3PAR for $2.35 billion last year. HP executives have maintained they will keep both the XP and EVA, but StorageNewsletter.com posted an item Monday citing unnamed sources saying HP would stop selling the P9500 XP and replace it with high-end 3PAR arrays.
The Storage Newsletter story drew vehement denials from HP, with HP StorageWorks VP of marketing, strategy and operations Tom Joyce telling the newsletter “HP is in no way discontinuing the XP business relationship with Hitachi, Ltd. of Japan. The XP, and its newly named P9500 successor, are very successful mission-critical storage products for HP.”
HP’s storage blogger Calvin Zito added in a blog post Monday, “I saw a story on a small storage news website today claiming that HP would no longer OEM the XP Disk Array from HDS [Hitachi Data Systems].
“The story is wrong. Period.”
Joyce told StorageSoup that the industry speculation has created confusion with HP EVA and XP customers about the vendor’s roadmap.
“When you do something as publicly visible as making the investment HP made in 3PAR, it begs the question, ‘What does this mean to existing products?’” he said. “The thing about a free press is people are free to write what they want. But we’ve been consistent since we bought 3PAR that we would introduce a new EVA in the first half of 2011. We said the P9500 will stay a critical part of our product line. Over a period of time customers will like to have alternatives. Some customers will say ‘I’ll use 3PAR for something I used to use XP for,’ but 3PAR will never replace some of the things XP does well. We will never add mainframe connectivity for 3PAR.”
Joyce said EVA’s selling point is simplicity compared to other FC storage systems. “Folks buy EVA because of ease of use right out of the box,” he said. “It’s simple to use and you can run it with a lot less people.”
UPDATE: According to documents uncovered by SearchStorage ANZ’s Simon Sharwood, HP will deliver a P6300 and P6500 EVA. The P6500 will be a higher end version with faster processors, more cache, greater maximum capacity and so on. But the more interesting parts of the upgrade are the software and management features – mainly the reservationless dedicate-on-write thin provisioning that HP has in its 3PAR and LeftHand platforms. The new EVA will also have dynamic LUN migration, and new remote replication and clustering capabilities.
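Dedicate-on-write thin provisioning means a volume can advertise far more capacity than is physically reserved: physical blocks are dedicated only when a virtual block is first written. The class below is a conceptual toy model of that idea, not HP's, 3PAR's or LeftHand's actual implementation.

```python
class ThinVolume:
    """Toy allocate-on-write thin provisioning: a physical block is
    dedicated to a virtual block only on its first write, with no
    up-front reservation."""

    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks   # advertised (virtual) size
        self.map = {}                          # virtual block -> physical block
        self.store = {}                        # physical block -> data
        self.next_physical = 0

    def write(self, vblock, data):
        if not 0 <= vblock < self.virtual_blocks:
            raise IndexError("virtual block out of range")
        if vblock not in self.map:
            # Dedicate-on-write: physical capacity is consumed only here.
            self.map[vblock] = self.next_physical
            self.next_physical += 1
        self.store[self.map[vblock]] = data

    def read(self, vblock):
        # Never-written blocks read back as zeros, as on a real thin LUN.
        pblock = self.map.get(vblock)
        return self.store[pblock] if pblock is not None else b"\x00"

    @property
    def allocated(self):
        return len(self.map)

vol = ThinVolume(1_000_000)    # the LUN advertises a million blocks...
vol.write(0, b"a")
vol.write(999_999, b"b")
print(vol.allocated)           # → 2  ...but only 2 physical blocks are dedicated
```

A real array does this at extent granularity with free-space reclamation and metadata persistence, but the mapping principle is the same.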
The popularity of storage systems that support tiering within the box, driven primarily by NAND flash solid state drives (SSDs), has created confusion about the meaning of storage tiering. While I was speaking to directors and CIOs of IT operations at a recent event, questions came up that illustrated this confusion.
The common understanding of tiering for many in IT is what is called external tiering today. This includes different storage systems with different performance, capacity, and — more importantly — cost characteristics. These tiers are called tier 1, tier 2, tier 3, and sometimes even a tier 4. These tiers may include a SAN with Fibre Channel or high-performing SAS drives, another box with capacity SATA drives for archiving or backup, and even a tape library.
When DRAM solid state storage became more generally available as an external device, it was marketed as tier 0 of the storage hierarchy. Most of IT still looks at tiering that way.
“Inside the box” tiering is not new but the use of NAND Flash SSD has crystallized the focus on this type of tiering. We’ve recently seen various types of approaches and characteristics that provide real differentiation among the product offerings here. Evaluator Group covers these tiering products and their differences on our web site. The materials from the vendors for these solutions highlight the value of the systems’ tiering but do not address the new way of looking at tiering. These systems can include SSDs, high-performance spinning disk and capacity disk in one box.
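At its core, in-the-box tiering is a placement decision: the system tracks how often each extent is accessed and keeps the hottest extents on the small SSD tier while colder extents sit on capacity disk. The sketch below is a deliberately simplified toy model of that decision, not any vendor's algorithm; the extent names are made up.

```python
def place_extents(access_counts, ssd_slots):
    """Rank extents by access frequency and place the hottest on the
    SSD tier; everything else stays on capacity disk (HDD)."""
    hottest = sorted(access_counts, key=access_counts.get, reverse=True)
    ssd = set(hottest[:ssd_slots])
    return {ext: ("ssd" if ext in ssd else "hdd") for ext in access_counts}

# Hypothetical per-extent access counts gathered over a monitoring window:
counts = {"ext1": 900, "ext2": 5, "ext3": 450, "ext4": 2}
print(place_extents(counts, ssd_slots=2))
# ext1 and ext3 land on SSD; ext2 and ext4 stay on HDD
```

Real implementations differ in the granularity of the extents, the length of the monitoring window and how aggressively they migrate data, which is where much of the product differentiation lies.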
For those in the storage world, the difference is so well understood that vendors don’t often place it in context for the IT world in general.
This background on tiering brings us back to my regular interactions with IT directors and CIOs. The evolution has left a lack of clarity in the terminology, and not everyone is at the same level of understanding. Consequently, putting the discussion in context is necessary to ensure the conversation follows the same path with different people. It is probably too late to change the descriptive terminology, and given enough time the base understanding will shift to where this becomes less of a problem. In the meantime, the education sessions offered at conferences such as SNW and Storage Decisions, and more in-depth classes from organizations such as the Evaluator Group, work toward raising awareness of the difference.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Last week was a dark one for cloud storage vendors. The week began with confirmation that Iron Mountain had closed its file and archiving cloud services and ended with news that startup Cirtas laid off most of its staff.
So is this a sign that cloud storage isn’t catching on, or the beginning of consolidation in the market? It’s certainly not yet a sign of doom for cloud storage. Every major storage vendor has public and/or private cloud products and services, and they’re pushing them hard. Iron Mountain and EMC – which closed Atmos Online last year – still have other cloud offerings.
And whenever several startups tackle a market, at least a few will fail. Cirtas is among a bunch of companies that began offering gateways to create hybrid storage clouds within the last year or so.
The CEOs of two of the cloud gateway vendors, Nasuni and StorSimple, say it was a coincidence that Iron Mountain and Cirtas failed in the same week. Nasuni’s Andres Rodriguez and StorSimple’s Ursheet Parikh agree that the two failed for different reasons – Iron Mountain was too expensive at 65 cents a gigabyte, and Cirtas’ SAN cloud gateway lacked enterprise SAN features.
People in the industry say some of the investors who sank $22.5 million into Cirtas in January were unhappy with the product’s lack of success, and Cirtas is looking to either revamp its product or sell off its technology.
Rodriguez and Parikh said Iron Mountain was fighting a losing battle against larger service providers Amazon, Google, Microsoft and AT&T. Rodriguez predicted that other smaller cloud storage providers will also falter.
“No one buys technology, and the cloud is no different,” Rodriguez said. “To turn it into a real business, you better have something that works and offers a compelling value proposition. Iron Mountain learned the hard way that offering the back-end storage is a game for giants. It is simply not possible to compete with the economies of Amazon, Microsoft or Google. I believe it is just a matter of time before other service providers follow [Iron Mountain].”
Parikh said Iron Mountain had trouble adapting its core business model to the cloud.
“Iron Mountain knows how to manage tapes, but doesn’t know how to manage services,” StorSimple’s Parikh said. “Even at 65 cents a gig, it was losing money.”
As for Cirtas, Parikh said its Bluejet Cloud Storage Controller lacked the redundancy needed for a SAN product.
“You cannot deliver block storage without dual controllers,” he said. “If it were a file product like Riverbed [Whitewater], you can use it for backup. High availability and reliability has to be table stakes for block storage. No one gets fired for buying expensive storage. When a storage guy gets fired, it’s for losing data.”
Rodriguez said Cirtas took the wrong approach by trying to sell iSCSI SAN storage in the cloud. “Blocks-to-the-cloud is not technically sound,” he said. “From a business perspective, arrays are the hardest possible place to enter a data center with new technology. People are religious about their arrays. I don’t know exactly what happened, but I know [Cirtas] was having problems in the field.”
The Nasuni Filer is a virtual NAS appliance that turns commodity hardware into a NAS that moves data off to the cloud. The StorSimple Appliance is used mostly to move data from Windows file servers, Microsoft Exchange, SharePoint and VMware to the cloud. “We provide the appliance, and the customer buys the cloud directly with a separate contract with Amazon, Microsoft or AT&T,” Parikh said. “We reduce the cloud provider bill, and that more than pays for our appliance.”
Rodriguez said cloud storage gateways need to be reliable and low cost to succeed.
“This is a business not unlike the hard drive business,” he said. “It’s a total commodity play. It is about quality at a low cost. Nasuni repackages that raw commodity into something businesses can run on.”
Seagate’s $1.375 billion acquisition of Samsung’s hard drive business today also strengthens Seagate’s solid state drive (SSD) hand by extending the NAND flash partnership between the two vendors.
Samsung sells hard drives for PCs and consumer electronics, so the SSD part of the deal is the key piece for enterprises.
“This provides Seagate with an important source of leading-edge NAND supply and early visibility into the next-generation of NAND,” Seagate CEO Steve Luczo said on a conference call to discuss the deal.
Seagate and Samsung announced a NAND partnership last August. Seagate launched its first enterprise SSDs in March with its Pulsar.2 multi-level cell (MLC) and Pulsar XT.2 single-level cell (SLC) drives using Samsung NAND chips. Seagate was slow entering the SSD market, but Luczo said those Pulsar products should be coming into the market through storage system OEM partners soon, and Seagate is on schedule for its next generation of SSDs in partnership with Samsung. Still, Seagate’s OEM customers wanted assurances that the Seagate-Samsung arrangement would remain intact.
“This addresses an issue that customers have raised,” he said. “While they have a lot of confidence for Samsung and Seagate to design flash products, there always has been a bit of a concern that without a formal supply agreement, what was the whole package going to look like?”
Seagate also sells hybrid systems with SSDs and hard drives.
The Samsung deal, which Seagate expects to close around the end of the year, is the second major disk drive merger in barely a month. Western Digital said in March it intends to buy disk drive rival Hitachi Global Storage Technologies (HGST) for $4.3 billion.
Luczo said the consolidation reflects the growth of storage capacity worldwide and the need for investment in new storage technologies.
“Demand for storage is accelerating,” he said. “Petabyte growth is very strong and has been for the last six to eight quarters, even in an economy that has been lackluster.”
Recent storage system launches – such as EMC’s Isilon roll-out last week – included new generations of Intel processors that provide greater processing power and increased performance. This has become a familiar pattern — the latest generations of processors become available, and soon after storage system announcements follow with the incorporation of the new processor technology.
That marks a major change in storage system architectures. When I started in storage system development, custom microprocessors were designed with special characteristics for handling data movement. Assembler-level languages were written for each custom processor, and the surrounding data flow was developed to create the heart of the storage system. It was a badge of honor to have created the custom processor and the programming language for a generation of storage systems.
Over time, the use of special purpose but standard processors became common, as did more general purpose compilers and debuggers. These processors evolved into a separate business with offerings such as the AMD 29000. This took the custom processor design and programming languages and tools away from storage system development, but left the storage application software and the logic around moving the data and the interfaces.
As the progression continued, the use of commodity storage processors along with the support logic chips became prevalent. This has reduced the design demands and provided greater economies for components. Many systems today even use standard or nearly standard motherboards of the type found in servers for the underlying hardware in storage systems. That leaves storage system design to the application software and the hardware configuration. In some cases, the storage application can run within a virtual machine on a physical server.
The move toward commodity hardware was enabled by tremendous advances in processor speed and functionality. The investment in server/PC technology can be leveraged in storage, which alone could never sustain the investment required to make those advances. The other side of the coin is that continual advancement requires the storage system hardware to change along with each new server generation, which means a particular hardware configuration will be offered only as long as the underlying server/PC hardware is in new production. That’s about a 12-month cycle at best. New (meaning updated) versions of the storage system hardware will appear on a fairly regular basis by necessity.
This is mostly good for the IT customer. Newer, faster, and less expensive storage systems will continue to be delivered. But it also means that support of the storage systems will have a finite life. There will be a time when it is necessary to upgrade the storage system when a replacement controller is no longer available because the spares stock has been depleted and no more are being manufactured. This causes a bigger concern when planning the lifespan for those systems, the amortization of the investment, and the problematic migration of data.
The updated processors in storage systems such as the one from EMC Isilon are usually accompanied with other advances such as new or improved features delivered with the storage application and newer interface support. So despite the shorter hardware lifecycle, the IT customer does benefit compared to the costs and progression of technology from custom designs of the past.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Storage Networking World always provides a chance to see what vendors want to inform the world about from their product perspective. It also includes presentations focused on education and general information for storage managers. These two perspectives often contrast, showing that the adoption of new technologies in data centers often runs at a different pace than the delivery of new technology.
The vendor offerings are interesting to look at from a quantitative perspective: what is popular as a focus area? At last week’s SNW in Santa Clara, Calif., there were two distinct areas of popularity (quantitatively): solid state devices (SSDs) and anything to do with “cloud.” The cloud is being liberally attributed to many different products and operational environments. This does not mean there were no storage systems and management software solutions presented, but they were greatly outnumbered by SSD and cloud offerings.
These products represented the popular ones at the event, and certainly many of the offerings were new. The “next” from vendors was a set of novel offerings that may become the popular products or solutions of the future. More of the new technologies seem to be unveiled at SNW in the fall rather than the spring.
Fitting into the new category but not really representing particularly new products were integrated or coordinated solutions meant to replace or help manage point products. This is recognition that there is another level of opportunity beyond delivering a new or novel product. That opportunity is in integrating solutions into the operational workflow in an IT environment.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).