May 31, 2011 1:28 PM
Posted by: Randy Kerns
removable media
Archiving has the potential to change individual behavior, depending on how it is implemented and introduced to data owners. An archiving system can prompt data owners to alter the way they handle their data, and this could unintentionally circumvent the reasons for the archiving system.
To illustrate this, let’s take a real example from a company that will not be identified here, and a specific user at that company. The company instituted a new policy: data stored on primary storage systems would be archived if it had not been accessed in six months. After data was moved, a user had to submit a request form to access it. There was no time guarantee for the restoration, and early retrievals for others had taken two weeks, including the time it took to process the request and retrieve the data. (The processing of the request was suspected to be the time-consuming part.)
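A policy like this usually reduces to a last-access-time sweep of the file system. Here is a minimal sketch of that kind of scan in Python; the function name and the six-month threshold are illustrative, not details of the company's actual archiving tool:

```python
import os
import time

AGE_LIMIT = 180 * 24 * 3600  # roughly six months, in seconds

def find_archive_candidates(root):
    """Return paths whose last-access time is older than the age limit."""
    cutoff = time.time() - AGE_LIMIT
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    candidates.append(path)
            except OSError:
                continue  # file vanished or unreadable; skip it
    return candidates
```

Note that a sweep like this only sees volumes it is pointed at, which is exactly why data stashed on unscanned storage escapes the policy.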
This is where behavior modification came in. Realizing it was not easy to get data back, the user turned into a data hoarder. Some data believed to be vital to refer to in the future was stashed elsewhere – on storage areas that were not scanned for archiving. Other data was copied to removable media and put in a desk drawer. This was done to avoid the delay in getting data back when needed, and the potential inability to get the data at all due to technical or administrative breakdowns.
Obviously, data hoarding defeated some of the reasons for archiving. The data did get moved off primary storage but was placed on other tiers besides the archiving tier. Other data was copied and put into a desk drawer, which could become a security risk.
The real issue is the way archiving systems work and how the process is presented to the data owner. The archiving systems need to allow the owner to see his data and access it when necessary in a reasonable amount of time. There has to be some confidence that data just won’t go away.
Creating a successful archiving environment requires systems that meet the need and an approach that works for the company and users. If not, behavioral change is a probability. Data hoarding is a basic instinct.
There’s more on archiving here.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
May 26, 2011 12:52 PM
Posted by: Dave Raffo
netapp; emc; unified storage; flash
NetApp CEO Tom Georgens says the battle for storage supremacy is increasingly becoming a two-horse race between his company and EMC.
During NetApp’s earnings conference call Wednesday, Georgens painted large acquisitions such as Hewlett-Packard’s buying 3PAR and Dell grabbing Compellent last year as attempts by those vendors to catch up on innovation. Despite those deals, NetApp and EMC continue to take market share.
NetApp’s product revenue for last quarter increased 27% year over year – at least twice the storage revenue growth of server vendors HP, Dell and IBM. EMC, which has more overall revenue than NetApp, grew its storage product revenue around 17.5% over last year in its most recent quarter.
“Nobody buys storage from a server vendor unless they also buy their servers,” Georgens said. “And they’ve all lost a lot of ground, so they’re effectively re-loading by acquiring companies out there that actually have innovated.
“Companies like NetApp and EMC – as much as I hate to say it – are actually going to be the innovators because I think we’re going to be the only guys who are able to sustain the level of investment to stay competitive.”
He did say some customers still prefer best-of-breed products, which is why NetApp is working on integrated stacks such as its FlexPod architecture with Cisco and VMware, and partnering with others such as Microsoft, Citrix, SAP and Accenture.
Georgens said EMC’s recent VNX unified storage system hasn’t changed the competitive dynamic between EMC and NetApp, but “it actually creates separation of us and EMC from the rest of the pack … it’s going to be more and more difficult for the rest of the storage industry to keep pace.”
He said EMC’s recent Project Lightning announcement involving server-side flash in a PCIe card underscores the importance of flash in storage, a technology NetApp has embraced with its storage-side Flash Cache card.
“I expect there will be flash in large quantities deployed in servers,” he said. “And I think there will be various ways of utilizing it, some simply as cache, some as permanent storage. I expect flash to be a big deal, and I think the Flash Cache has been very effective for us. And we’ve got aspirations to do more with it over the next couple of years.”
One server vendor NetApp is watching closely is IBM, because it is both a partner and competitor. IBM sells NetApp’s flagship FAS storage systems as well as the Engenio storage line that NetApp acquired from LSI this month for $480 million. But IBM also internally develops storage products that compete with NetApp’s.
“IBM has aspirations to have products in this space, and they’ve had that all along,” Georgens said. “The desire by their internal groups to make their own products makes the positioning very complicated. And are we happy with the positioning? No. On the other hand, our engagement with IBM’s customer facing groups, the people who actually have to put solutions in front of customers, is actually exceptionally strong. If we continue to out-innovate them and introduce products to market faster, then we’ll preserve the business.”
As far as more acquisitions, Georgens said NetApp is more likely to make smaller ones such as it did with Akorri and Bycast over the past year than large deals such as Engenio, but “when it’s the right transaction at the right price and appropriately strategic, then we’ll move ahead.”
Georgens said NetApp would refrain from selling Engenio products through its own channels in competition with those systems sold by OEM partners such as IBM and Dell. He said the relationship between NetApp and Engenio’s OEM partner Teradata could help NetApp move into the data warehousing market, because the two vendors do not compete and have complementary technologies. “I’m optimistic about that one,” he said of Teradata.
May 20, 2011 12:58 PM
Posted by: Dave Raffo
symantec; clearwell; discovery; archiving
Symantec took a big plunge into the e-discovery market Thursday night when it acquired Clearwell Systems for $390 million, giving it technology that will complement its Enterprise Vault email archiving application as well as other products in its backup, security and data management portfolios.
Clearwell’s e-discovery suite handles the collection, identification, preservation, processing, review, analysis and production of data. On a conference call to discuss the deal, Symantec CEO Enrique Salem said Clearwell’s suite of products goes far beyond the Discovery Accelerator module for Enterprise Vault.
“Our Discovery Accelerator was a bolt-on technology to Enterprise Vault,” Salem said. “We clearly saw that we had a part of the discovery process, but not the breadth that Clearwell is doing end-to-end from the initial stage of gathering the information through the final production. We needed to add the significant steps of analysis and review. That process is where we found the Clearwell technology excels. This extends our ability to preserve it, analyze it, and review it.”
Symantec acquired the Enterprise Vault product when it bought Veritas, which had acquired Enterprise Vault vendor KVS in 2004. Salem said that since then, customer focus for email archiving and management “has shifted from quota management to help me with the discovery process.”
Clearwell’s eDiscovery product has four modules: Legal Hold, Identification and Collection, Processing and Analysis, and Review and Production. It provides features such as transparent keyword and concept search, legal hold management workflow, the ability to collect information from multiple data sources, advanced processing capabilities such as deduplication, filtering and content analysis, linear and non-linear review, and a reporting and auditing trail for case management.
The Clearwell product portfolio and its team will become part of Symantec’s Information Management Group (IMG). Symantec intends to continue selling Clearwell’s products on a standalone basis as well as in connection with Enterprise Vault. Clearwell and Symantec already have a technology partnership that lets customers use the two vendors’ products in an integrated fashion. Brian Dye, Symantec’s VP of product management for the IMG, told StorageSoup that Clearwell technology would also be used with Symantec’s NetBackup, Data Loss Prevention (DLP) and Data Insight products.
“This is about a broader strategy to secure and manage information,” Dye said. “It’s not just about combining Clearwell with Enterprise Vault.”
Enterprise Strategy Group analyst Katey Wood said eDiscovery software has become more important over the past few years following changes to the Federal Rules of Civil Procedure (FRCP) that require IT to become more involved in requests for legal information. The rapid growth of electronic data makes good archiving and discovery capabilities even more important.
“It’s become more and more expensive to outsource that to service providers and have attorneys review all things that are relevant,” she said. “There’s a big movement to bring software in-house so companies can do the process themselves. They can cut down and filter information before sending it off to lawyers.”
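The cull-before-review step Wood describes boils down to deduplicating exact copies and keyword-filtering what remains, so only plausibly relevant documents reach the lawyers. A toy sketch of the idea (a hypothetical function, not Clearwell's implementation, which also does concept search and near-duplicate analysis):

```python
import hashlib

def cull_for_review(documents, keywords):
    """Drop exact duplicates, then keep only documents matching any keyword."""
    seen = set()
    relevant = []
    terms = [k.lower() for k in keywords]
    for doc in documents:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate of something already considered
        seen.add(digest)
        if any(t in doc.lower() for t in terms):
            relevant.append(doc)
    return relevant
```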
Gartner placed Clearwell in the leaders category of its eDiscovery Magic Quadrant released last week, with Symantec going in the challengers section.
Clearwell claims more than 400 customers and around 200 employees, and said it had around $56 million in revenue over the 12 months ending March 31. The deal is expected to close by September.
Clearwell also makes its eDiscovery technology available through service providers, and Symantec executives said they would probably continue down that path.
In 2009, EMC acquired Clearwell rival Kazeon for a price that was reportedly below $100 million.
May 18, 2011 4:13 PM
Posted by: Randy Kerns
storage buying decisions; vendor shows; EMC World; Symantec Vision; storage efficiency
This is the season for vendors of storage systems, servers, software and networking equipment to hold events to show off their products and discuss their strategies. EMC World and Symantec Vision recently wrapped up, with more shows to follow.
These events really do display the results of the hard work of talented people. Some products are organically developed while others result from the acquisitions of other companies and their talented people.
The messages are typically about new product capabilities, which may be iterative refinements to the current products or new products entirely. It is easy to get enamored with these – the result of both effective marketing and our insatiable desire to keep up with the next new thing.
But for IT organizations, the focus needs to be dialed back a bit to look at requirements, both now and over the next three to five years. Some of the new things fit these requirements while others will not. Their importance is weighed by how well they meet specific requirements, using a set of evaluation criteria and a defined decision process.
Vendor materials – marketing collateral, specification sheets, case studies and other reference information – are useful in providing a foundation for understanding a product or solution. There really is no substitute for having a detailed knowledge base to work from, however.
There are two elementary starting points when looking at new products or technologies to meet the needs of an IT operation. You must ask if there is a capability that can only be done by using this product/technology, and if there is an economic advantage with a particular solution. There are unique products that hold their place because of what they bring. But for other products, there are competitive alternatives and sometimes tradeoffs that bring more considerations into the decision.
Economics for storage systems, including evaluating storage efficiency of the individual elements (see an explanation of this at Evaluator Group’s web site), are the most important considerations besides the fundamental requirements of storing information and providing access. Economics include the gains from doing things faster or more efficiently. If they can be quantified, then the decisions can be made and defended with more confidence.
It is necessary at times to remember that storage decisions need to be rooted in sound judgment, made with the best information and counsel available. The value provided by new product generations and new technology improves the choices.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
May 18, 2011 1:01 PM
Posted by: Dave Raffo
data deduplication
Quantum is preparing to add source-side data deduplication to its DXi disk backup platform, which currently performs target-side dedupe. Adding source-side dedupe will help Quantum take on EMC, which sells separate products – Avamar and Data Domain – for source and target dedupe.
Quantum CEO Jon Gacek revealed details of the vendor’s product roadmap over the next year Tuesday during its quarterly earnings call. Along with source dedupe, Quantum will add a NAS interface to its DXi 6700 midrange backup system and enhance the DXi software’s capability for protecting data in virtualized environments. Gacek said Quantum is also planning to deliver its StorNext archiving software on appliances and upgrade its Scaler i6000 enterprise tape library with improvements for archiving, high availability and security.
After the earnings call, Gacek disclosed a little more about Quantum’s product plans to StorageSoup. He said the DXi source-based dedupe would consist of client software running on servers that would dedupe data over the wire to improve performance and require less bandwidth. “We have a competitor that sells that as two products,” Gacek said, referring to EMC. “We’ll sell it as an integrated solution. One product is better than two.”
Quantum’s source-side dedupe follows the EMC Avamar model. While EMC recently improved Avamar’s performance when used with its Data Domain target dedupe, customers still must buy both products to get source and target dedupe.
The inclusion of source and target dedupe in one product is not unique — most major backup software applications support both. But Quantum is most focused on competing with Data Domain, the giant in the disk backup market.
Quantum still lacks global deduplication, which dedupes across multiple nodes as if they were one node. Data Domain added that feature across two nodes last year with its Global Deduplication Array.
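The over-the-wire approach Gacek describes is typically hash-based: the client fingerprints each chunk of data, asks the target which fingerprints it already holds, and ships only the new chunks. A toy sketch of that scheme, with fixed-size chunking for simplicity (illustrative only, not Quantum's or EMC's implementation):

```python
import hashlib

CHUNK_SIZE = 4096  # real products use variable, content-defined chunking

def chunk_hashes(data):
    """Split data into fixed-size chunks and fingerprint each one."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

class DedupeTarget:
    """Stands in for the backup appliance: stores chunks keyed by hash."""
    def __init__(self):
        self.store = {}

    def missing(self, hashes):
        return [h for h in hashes if h not in self.store]

    def put(self, h, chunk):
        self.store[h] = chunk

def source_side_backup(data, target):
    """Send only the chunks the target has not already seen; return how
    many chunks actually crossed the wire."""
    fingerprints = chunk_hashes(data)
    wanted = set(target.missing([h for h, _ in fingerprints]))
    sent = 0
    for h, chunk in fingerprints:
        if h in wanted:
            target.put(h, chunk)
            wanted.discard(h)  # send each unique chunk only once
            sent += 1
    return sent
```

A repeat backup of unchanged data sends nothing but fingerprints, which is where the bandwidth savings come from.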
The midrange DXi 6700 is currently a Fibre Channel virtual tape library (VTL) interface device. Quantum will add multiprotocol support, just as it has for its enterprise DXi 8500 system that launched last year. The DXi 6700 will support VTL, NFS, CIFS, and Symantec OpenStorage (OST) interfaces. Quantum’s other midrange and SMB DXi devices are NAS-only.
With StorNext, Quantum is following the path Symantec recently set by offering its FileStore software on an appliance. But Gacek said Quantum will have several appliance choices for different markets and use cases. He said the goal is to make StorNext easier to implement than it is now as a software sale, which requires customers to set up their own hardware based on their workloads. “We’ll flavor the appliances to go after different types of customers,” Gacek said.
Gacek, who replaced Rick Belluzzo as Quantum CEO April 4, will also revamp the company’s sales force to assign sales teams based on customers’ size and industry instead of by geography and specific product.
Gacek said Quantum’s goal is to provide alternatives for customers and channel partners to EMC’s Data Domain disk backup and Oracle/Sun tape libraries.
EMC sold Quantum’s DXi software before acquiring Data Domain, and Quantum is still looking to recover from the revenue it lost when EMC dropped its OEM deal.
Quantum grew its branded disk and software revenue 38% to $113 million for the fiscal year that ended in March, yet its overall revenue of $672 million fell one percent because of the loss of OEM revenue. Its $165 million in sales last quarter was slightly up over the previous year, however.
May 16, 2011 7:32 PM
Posted by: Dave Raffo
ssd; mlc; slc; nand flash; sandisk; pliant technology
It’s no secret that there are more solid state drive (SSD) vendors in the market today than can possibly survive, and there is bound to be consolidation through acquisition and some companies folding. One step toward consolidation came today when SanDisk acquired startup Pliant Technology for $327 million, giving SanDisk entry into the enterprise market and providing more resources for development of Pliant’s technology.
SanDisk makes NAND flash products for the retail and consumer markets, mostly smart phones, digital cameras and tablets. Pliant makes enterprise multi-level cell (MLC) and single-level cell (SLC) SAS-interface SSDs, including those used by OEM partners Dell, LSI and Teradata in storage systems and servers. There is no overlap in products between SanDisk and Pliant.
SanDisk’s Chief Strategy Officer Sumit Sadana said the vendor took a look at all the enterprise flash providers before acquiring Pliant because of “the technology that Pliant brings to the table, its credibility with tier one storage OEM customers, and its ramping revenue. Pliant is a leader in MLC-based flash, and that allows customers to gain access to this technology in the enterprise space at a lower cost than SLC. We think that’s the winning approach for the future.”
Sadana said SanDisk will continue to sell Pliant’s SLC flash, but will concentrate on MLC. “We’re not opposed to SLC-based solutions, but we will be heavily driving MLC to dramatically increase the presence of flash in the enterprise,” he said.
He said SanDisk will also continue Pliant’s strategy of selling through storage OEMs. Dell sells Pliant SSDs in its PowerVault storage arrays and PowerEdge servers. Pliant SSDs are also used in storage enclosures sold by LSI and Teradata’s data warehouse systems. Pliant has also been pursuing OEM deals with other major storage vendors.
Sadana said most of Pliant’s 80 employees will join SanDisk, including its founders – president Mike Chenery, chief architect Doug Prins and CTO Aaron Olbrich. Pliant CEO Richard Wilmer, who joined the company in March, will stay on through the transition period but Sadana said Pliant VP of marketing Greg Goelz will run the enterprise business and report to SanDisk VP Sanjay Mehrotra.
Goelz said Pliant, which had $57 million in funding, first approached SanDisk about a strategic relationship regarding SanDisk’s NAND chip. “We were also looking at raising money to fuel 2011 growth,” he said. “They looked at us and said, ‘We want to be in the enterprise, you guys are doing well …’”
He said Pliant did not have to be acquired to survive, but it can benefit from being part of a larger established vendor.
“SanDisk wants to be in the enterprise space and we know one or two things about the enterprise space,” Goelz said. “They’re a technology leader in NAND, and it’s a good natural fit.”
Jim Handy, SSD analyst at Objective Analysis, agreed that the two companies are a good fit because Pliant includes enterprise data protection features such as triple-redundant protected metadata and support for the T10 Data Integrity Field standard.
“It’s interesting that SanDisk wasn’t paying the slightest bit of attention to the enterprise before,” he said. “And Pliant is a serious player in the enterprise. Pliant is a company that understands enterprise storage rather than a company that understands flash memory. Most companies trying to get into flash memory don’t understand things like end-to-end data protection. That’s completely alien to people in the flash memory market.”
Handy said SanDisk’s challenge is to adopt an enterprise business model. “It’s a consumer oriented company,” he said. “This will be a new deal for them, not just because of the technology but also because of the sales challenges and the new approach to selling.”
May 13, 2011 6:02 PM
Posted by: Dave Raffo
EMC World; FCoE; Isilon; big data; symmetrix vmax; centera; primary deduplication
LAS VEGAS, Nev. — Notes, quotes and anecdotes from EMC World 2011:
All of EMC’s talk about the cloud this week brought up the inevitable questions about public trust in the cloud, especially after high-profile outages like the one suffered by Amazon EC2 a few weeks ago.
EMC CEO Joe Tucci said EMC has already solved the kinds of problems that caused the Amazon crash and other cloud glitches.
“Customers have to be able to recover their data quickly, and that’s what EMC does for a living,” Tucci said. “The Amazon issue had to do with the way data is recovered. Obviously we’ve been doing this for a long time. It took Amazon a long time to recreate lost data and some of that data won’t be recovered. We keep extra copies. If you lose it, we say, ‘OK, it’s lost, but we have an extra copy.’ We also give the option to encrypt data on storage, so if data gets stolen, you have encrypted data.”
Support claims rankle some customers
EMC held a Big Data Summit during the show, consisting of about 30 Isilon customers. The customers were asked what EMC can do better and while most offered suggestions for new product features, a few raised customer support issues. One long-time EMC customer who purchased Isilon storage after EMC acquired the NAS vendor in January took issue with EMC president Pat Gelsinger’s comments that EMC support is consistently among the best in the industry.
“Your support is ugly and as you make acquisitions, it gets uglier,” said the customer, an information services director at a telephone company. “That’s a concern I have with Isilon. I like what I hear from [other users] that you don’t have to touch it for six months, but the day I touch it I need support.”
He talked about spending eight hours on the phone with support once for a Clariion problem. He said he spoke with EMC’s support center in Ireland from 2 a.m. to 8 a.m., and then was moved to another call center before the problem was fixed. “A problem that should’ve taken 20 minutes to fix took eight hours,” he said. “Around 8:15, my EMC salesman called and said he wanted to take us to dinner. I said, ‘You can take us to dinner, but bring a truck and take this thing out of here.’”
Gelsinger was gone by then, leaving EMC global marketing CTO Chuck Hollis to take the brunt of the complaints. “When I hear stories like that, I want to cry,” Hollis told him.
Another customer said she found VMware support lacking for customers using EMC storage, even though EMC is the majority owner of VMware. Hollis pointed out VMware is a separately run company, but said EMC recently spent $60 million to hire VMware specialists inside EMC.
Symmetrix sets the table for FCoE
Fibre Channel over Ethernet (FCoE) support for EMC’s VMAX seems like a minor enhancement now, but EMC’s chief strategy officer for enterprise storage Barry Burke said he expects FCoE to become the dominant protocol over time. “I think at the end of the day, FCoE wins out,” he said. “It just doesn’t happen overnight. A lot of customers ask about it. The problem is, there aren’t a lot of storage targets that support it. Now we have FCoE support across the board.”
EMC is expected to add support for SAS drives to VMAX later this year, but Burke said “there’s not a rush” to phase out Fibre Channel drives because there is still customer interest. He said EMC has also advised its Symmetrix DMX customers not to expect any software upgrades as all of its development is going into the newer VMAX platform.
Centera’s still alive
EMC has made a lot of product noise this year with its massive January launch, a February investors event and this week’s EMC World. But its object-based Centera archive system hasn’t had any updates, and barely a word was spoken about it during those three events. That leads me to believe that Centera is either a perfect product that needs no further development or it’s about to be put to rest.
Neither is the case, according to Jon Martin, director of product management for EMC’s Cloud Infrastructure Group.
“Despite what you might hear from our competitors, Centera is not end of life,” Martin said. “In the second half of the year we’ll have a new release addressing some customer requests for features.”
Martin did admit Centera is “mature technology – not a lot of revolutionary features we can add” but said it still has a role despite EMC’s newer and shinier Atmos object-based cloud storage platform. He said Centera has advanced compliance and data retention capabilities that Atmos is not built to address.
Will big data, SSDs push primary dedupe?
Permabit CEO Tom Cook was among the attendees at EMC World, and said the big data theme made him optimistic about his company’s Albireo primary data deduplication technology. “There’s an opportunity to Data Domain this space,” he said, referring to the ability to do for primary data what Data Domain has done for backups.
Cook said besides helping to offset customers’ rapid data growth, primary dedupe can help make solid state drives (SSDs) more economical by reducing the amount of data that goes on them.
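The economics Cook is pointing to are simple: if primary dedupe removes redundancy at some ratio, the effective cost per logical gigabyte of flash drops by that ratio. A back-of-the-envelope helper (the prices and the 4:1 ratio below are illustrative, not figures from Permabit):

```python
def effective_cost_per_gb(raw_cost_per_gb, dedupe_ratio):
    """Effective media cost once dedupe stretches each physical gigabyte
    across dedupe_ratio gigabytes of logical data."""
    if dedupe_ratio <= 0:
        raise ValueError("dedupe ratio must be positive")
    return raw_cost_per_gb / dedupe_ratio

# Illustrative: flash at $10/GB with 4:1 primary dedupe stores logical
# data at an effective $2.50/GB, much closer to fast-disk economics.
print(effective_cost_per_gb(10.0, 4.0))
```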
Cook also said his OEM deals with BlueArc and Xiotech should result in those vendors delivering products with Albireo this year. He said it was Permabit’s decision to end its deal with LSI after NetApp acquired LSI’s Engenio storage division because its other OEM partners consider NetApp a major competitor.
Rainfinity resurfaces
Remember Rainfinity? EMC acquired the file virtualization vendor in 2005 only to phase out its product line. Now Rainfinity technology has resurfaced in the Cloud Tiering Appliance EMC launched this week. The CTA migrates files from storage devices to the cloud, from hardware of EMC competitors onto EMC storage, or from one EMC system to another.
EMC to do more with Hadoop
EMC’s support of Hadoop on its Greenplum appliances is likely the first step in the storage vendor’s deeper involvement with the Hadoop open source community. “We have work to do to establish ourselves as a credible player in the Hadoop community,” EMC chief marketing officer Jeremy Burton said. “We have to contribute.”
EMC executives say they can add technology to provide fault tolerance, mirroring and high availability to Hadoop. The work with Hadoop will be done through the Greenplum team, Burton said.
May 11, 2011 12:04 PM
Posted by: Randy Kerns
For years, storage systems have evolved with the addition of high-value features. These features have become differentiating characteristics between storage systems and, in many cases, requirements for any sophisticated storage array. They include:
• snapshot point-in-time copies
• remote replication with synchronous and asynchronous transfers and support for consistency groups
• thin provisioning
• data compression
• tiering across multiple storage device types
• caching with multiple levels
• cloning of volumes
• a number of others not as prevalent across different storage systems
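To make the first item on the list concrete, a point-in-time snapshot is commonly implemented with copy-on-write: the first overwrite of a block after a snapshot preserves the old block off to the side, so the snapshot can still serve the original data. A toy model of the mechanism, simplified far beyond any real array:

```python
class Volume:
    """Toy block volume with copy-on-write, point-in-time snapshots."""
    def __init__(self, nblocks):
        self.blocks = [b"\x00"] * nblocks
        self.snapshots = []  # each snapshot: {block_index: preserved_data}

    def snapshot(self):
        """Create a snapshot; costs nothing until blocks are overwritten."""
        self.snapshots.append({})
        return len(self.snapshots) - 1

    def write(self, index, data):
        for snap in self.snapshots:
            if index not in snap:             # first overwrite since that
                snap[index] = self.blocks[index]  # snapshot: keep old block
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        """Read a block as it was at snapshot time."""
        snap = self.snapshots[snap_id]
        return snap.get(index, self.blocks[index])
```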
Most of these features have traditionally been extra cost items in storage systems. The costs are normally additional licensing charges, either per storage system or capacity-based. The vendor justifies the extra charges because of the effort to develop the feature and the additional costs incurred for the support. With this method, the customer does not pay for unused features.
However, a change is underway: storage vendors are starting to include some of these features in the base product as part of the base price. This is being done for competitive reasons. When buying features a la carte, customers often feel charges are just being piled on; the additional charges can raise the price significantly, and that frustrates customers. Vendors who include the features in the base price argue that they deliver more value that way, and the single price has become a differentiator for them.
Interestingly, I have run into customers who have asked if they could get a price break if they didn’t use one of the included features. I attribute this to the conditioning that they have to pay extra for features. I heard a salesman respond that a customer who buys a car wouldn’t expect to pay less if that person didn’t intend to use the back seat. These features are just included in the base product now.
Whether this becomes the standard for storage systems is not clear, but that is certainly the direction vendors are heading today. There are still high-value features that carry an extra charge, but a number of system capabilities that were once add-on features are now included. Over time, competitive pressures will probably continue to drive more features into the base product. While this may be good for the IT customer, it cuts into a vendor’s profitability. But it is probably inevitable. We spend time evaluating the features of the different storage systems and write about them at the Evaluator Group web site.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
May 9, 2011 6:40 PM
Posted by: Dave Raffo
solid state storage; PCIe flash
LAS VEGAS, Nev. – The first big storage product launch at EMC World is “Project Lightning,” the code name for a PCIe flash-based server cache device designed to accelerate the performance of its storage systems. It was one of several solid state drive (SSD) products EMC previewed Monday at the show. Project Lightning is scheduled to ship in the second half of the year.
The vendor said it would qualify multi-level cell (MLC) SSDs for its storage arrays this year, and deliver all-flash Symmetrix enterprise systems by the end of next month and an all-flash VNX unified system this year. EMC also created a flash business unit to develop new technologies and manage partner and supplier relationships.
EMC began shipping single-level cell (SLC) flash on its Symmetrix systems in 2008, and claims it has shipped nearly 14 PB of flash capacity in storage arrays since last year. The vendor said half of its VMAX and VNX system orders now include flash capacity.
EMC has not shipped PCIe flash or MLC, which is lower-cost flash than SLC but comes with performance and reliability tradeoffs. EMC rival NetApp ships a PCIe-based Flash Cache product in its storage arrays that is the main focus of its SSD strategy.
Storage vendors have increasingly turned to MLC to try and spur SSD sales, which still meet resistance because of the price.
“EMC will enter the server flash business,” EMC president Pat Gelsinger said of Project Lightning during his keynote address today. “We’ll take flash and put it in the server so it acts as server DAS or server cache.”
During a media briefing after his keynote, Gelsinger would not say if EMC is working with Intel on Project Lightning. According to leaked Intel roadmaps, the chip vendor is working on PCIe flash products. “This is an EMC product,” Gelsinger said. “Obviously Intel is one of our most core technology partners, but Intel hasn’t announced its plans yet so it would be premature to announce any partnerships.”
Gelsinger also said EMC would integrate its FAST automated tiering software with the PCIe card to optimize data placement.
Gelsinger pointed out Project Lightning is server-side flash, while NetApp’s Flash Cache is in the storage. “We will offer flash in the server as DAS or cache and build a connection to extend flash intelligence into the storage array or server,” he said. “Flash Cache is just storage-side cache, it’s largely what FAST already does.”
EMC rolled out a few other products today:
The Isilon NL-108, the biggest version of its nearline NAS platform, holds 108 TB with 36 3 TB drives in one system and can scale to 15.5 PB in a 144-node cluster. Isilon also rolled out SmartLock, policy-based retention software designed to prevent accidental deletion of data.
Atmos 2.0 has improved performance for object ingestion, GeoParity software for protecting data spread across Atmos deployments, GeoDrive software that lets Windows users move data to the cloud, and an Amazon S3 interoperability API that lets S3 customers use Atmos.
Three GreenplumHD appliances integrate Hadoop open source software for distributed applications with large data sets. The appliances run Hadoop as a virtual appliance or on a Greenplum hardware appliance.