The software has been generally available since the end of 2009 as part of the EMC Select program. Customers interested in linking critical database and other application transactions to the performance boost available from SSDs can have Precise’s software create a list of “suggestions” for which volumes and transactions would benefit most from flash storage. An integration between Precise and Symmetrix Management Console can then ‘hand off’ that list of suggestions to FAST, which will perform the migration to higher tiers of storage accordingly. In the ‘handoff’ scenario, the storage manager would manually approve the data movement suggested by Precise.
“EMC offers some application performance management through its Ionix IT Operations Intelligence products, but that monitoring is focused on the network rather than transactions,” said Precise’s EVP of Products and Marketing Zohar Gilad.
Of course, EMC FAST is far from the only automated storage tiering software currently available. Gilad said integration with other vendors’ storage tiering software is on the roadmap, but declined to disclose who else Precise might be working with.
Three array partners, 3PAR, IBM and HDS, had already integrated with the API prior to this announcement. This week, Symantec is announcing support for EMC’s Clariion (Symmetrix to follow by the end of the year), HP’s XP24000 (EVA to be determined), as well as arrays from Fujitsu, NetApp and Compellent (the latter still to come later this month).
The API helps Veritas File System (VxFS), which underpins Storage Foundation, and storage arrays communicate which blocks have been deleted and can be freed up and put back into the storage pool. Symantec’s Thin Optimization can also help users perform “thin migrations” from a thick-provisioned to a thin-provisioned array.
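Symantec doesn’t detail the API’s actual calls here, but the underlying idea is the file system explicitly telling the storage layer which blocks it no longer uses. That is also the idea behind Linux’s FITRIM ioctl (the mechanism the fstrim utility wraps), so as a rough sketch of that analogous native mechanism, not of Symantec’s API, the following asks a mounted file system to hand its unused block ranges back to a discard-capable (thin) device. The mount point, and the assumptions of root privileges and discard support, are illustrative.

```python
import fcntl
import os
import struct

# FITRIM = _IOWR('X', 121, struct fstrim_range). The kernel writes back how
# many bytes it actually told the device to discard. This is the Linux-native
# analogue of thin reclamation, not Symantec's Thin Reclamation API.
FITRIM = 0xC0185879

def reclaim_free_blocks(mountpoint, minlen=0):
    """Ask the file system at `mountpoint` to discard all of its unused block ranges."""
    # struct fstrim_range { __u64 start; __u64 len; __u64 minlen; }
    request = struct.pack("QQQ", 0, 2**64 - 1, minlen)
    fd = os.open(mountpoint, os.O_RDONLY)
    try:
        response = fcntl.ioctl(fd, FITRIM, request)
    finally:
        os.close(fd)
    _, trimmed_bytes, _ = struct.unpack("QQQ", response)
    return trimmed_bytes

if __name__ == "__main__":
    # "/mnt/thin-volume" is a hypothetical mount point; requires root and a
    # file system and device that both support discard.
    print(reclaim_free_blocks("/mnt/thin-volume"), "bytes reported reclaimable")
```

The sketch only illustrates the shape of the exchange the article describes: the host reports which extents are free, and the storage returns them to its pool.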
Some of the array vendors offer similar features natively — HDS, for example, offers what it calls Zero Page Reclaim, and Compellent offers Thin Import. Symantec claims its approach is more efficient than these native tools. “The reality is that typically unused storage is deleted or overwritten, but not written over with all zeroes,” said vice president of product management Josh Kahn. “That makes things like zero-page reclaim less efficient.”
Kahn said Symantec is testing the Thin Reclamation API to get some statistics on the amount of additional capacity users can reclaim with it, but doesn’t yet have those specific numbers.
“The big advantage to this is migrating data from thick to thin, and it’s a pretty efficient tool for heterogeneous environments,” said Enterprise Strategy Group (ESG) analyst Bob Laliberte. “Whether it’s efficient enough to use in tandem with a storage vendor is a question of whether you’re an existing Symantec shop.”
As we were soliciting suggestions for consumer items, however, some enterprise selections also found their way in, many of them quite clever.
So, without further ado, the things storage pros are hoping for that you won’t find in any store.
“How about an [IBM] XIV that I can attach a Redundant Array of Solar Panels (RASP) to and heat my Ice Fishing house while storing some photos and videos of the fish that got away?” quipped StorageIO analyst Greg Schulz.
Added Wikibon’s David Vellante:
Wikibon security analyst Michael Versace had an even more ambitious gift idea:
I’d buy the RSA Security/VCE vision for how to secure the journey to the cloud. I see the current strategy as best-in-class. I’ve heard, read, and dug into the strategy for those that purport to have a cloud security framework, and other than RSA, they are all either as clear as mud, after-the-fact, completely proprietary, or marketware. Questions for RSA/VCE remain around delivery, strength, messaging, and priorities in their roadmap.
Finally, Enterprise Strategy Group’s Bob Laliberte treated us to a rendition of the 12 Days of Storage:
12 Virtual desktops
11 Snapshots of your data
10 Gig Ethernet
9 FCoE Converged Network Adapters Converging
8 Backups that actually work
7 Endpoint security devices
6 Types of deduplication
5 Token Rings
4 internal private clouds
3 tiers of storage
And a Policy engine with a decision tree
You can read the rest of our (perhaps more attainable) gift suggestions here.
P.S. Another holiday gem came our way via Quantum Corp. this morning — an original song they recorded to bolster their competitive claims. They’ve given us permission to post it here online — think of it as our gift to you. Click here to hear the Quantum song.
P.P.S. UK storage pro Martin Glassborow also has a series of posts at his Storagebod blog detailing wish lists for specific vendors, including HP and EMC, as well as a roundup of vendor bloggers’ wishes…
EMC competitors Hitachi Data Systems and Hewlett-Packard say they will have something similar, and they deny EMC has beaten them to the punch because FAST will not offer block-level data movement until the second half of 2010 at the earliest. IBM is working on similar data placement software.
HDS’s Hu Yoshida pointed out that HDS already offers Tiered Storage Manager, which moves volumes among tiers of storage. It doesn’t work at the sub-LUN level experts have said will be necessary to spur enterprise SSD adoption, but then again, Yoshida said, neither does FAST version 1.
“EMC has been doing this for some time,” Yoshida pointed out, citing the SymOptimizer software that was available for the DMX3 and DMX4, which one EMC customer said caused his array to lock up when it performed migrations (EMC says FAST does not lock the entire array, just the devices where data is being swapped).
Yoshida also said he wondered why EMC bothered with FAST 1. “It’s usually easier to start with the smallest unit, the chunk or page, than the LUN,” he said. “There’s also more benefit to customers with dynamic provisioning, and moving less data around means the performance overhead on the system’s internal bandwidth is lower.”
Yoshida said HDS is working on page-level automated tiered storage movement for next year.
Kyle Fitze, marketing director for the HP storage division, said HP will also have something similar to FAST next year. “We think that having policy-based automated migration is a key component for driving adoption of solid-state drives, which are still a very small single-digit percentage of the overall volume of disk drives shipped,” Fitze said. Like HDS, HP has offerings that can move data nondisruptively today, but not automatically according to policy.
Also, at the high end, HP’s and HDS’s disk arrays are one and the same under the covers — the HDS USP-V, which HP OEMs as the XP. It will be interesting to see if the automation each exec was talking about will come from one place or two…
But for years now, VMware and EMC have walked a tightrope between EMC’s ownership of the server virtualization company and VMware’s cooperation with EMC’s competitors. VMware has been careful not to favor EMC storage integrations, for example, over competitors like NetApp.
However, analysts see that picture beginning to shift after two pieces of news from EMC to kick off VMworld. The first, pre-announced last week, is that EMC is now an official reseller of VMware’s AppSpeed application as part of its Ionix data center management portfolio. That’s part of a wider emphasis at this year’s show on improving virtualized application performance and reliability using reporting and monitoring tools. Virtualization has clearly moved beyond “why” and “how” into “now what?”
This morning at the show, EMC took a step further into VMware’s world by disclosing its acquisition of FastScale, a privately held Santa Clara, Calif., firm that makes software for application image management (AIM).
Taneja Group senior analyst Jeff Boles said he’s intrigued that EMC — rather than VMware — acquired a company with technology that lets you run more virtual machines on your hardware.
“EMC has kept its distance in some ways from VMware, but I’m under the impression that tide is changing,” Boles said.
FastScale is not a storage product, but Bob Quillin, senior director of product marketing for EMC’s resource management group, wrote in his Infrastructure 2.0 blog of FastScale’s impact on the storage infrastructure: “FastScale increases the relevance to and alignment with VMware by maximizing the density of VMs that can be run on an ESX (up to 3X the VMs), decreasing memory and disk usage, and thus enabling the most optimal platform for tier-one application delivery.”
Another EMC blogger, Chuck Hollis, put it this way: “From a storage perspective, maybe we ought to call it ‘pre-dupe’ rather than ‘de-dupe’? Compared against what can be done with ordinary disk-based deduplication, we’re now able to go so much farther in terms of footprint reduction — not only on disk, but in memory where it ‘really’ counts.”
Said Boles: “This is a key space to watch — this kind of optimization can have significant implications for the storage infrastructure.”
The version launched today includes virtual machine-level granularity for identifying resource contention, CPU and memory utilization, and CPU efficiency within the physical host; previously, Akorri performed analytics by physical server rather than by VM.
On the storage front, this new release adds support for 3PAR storage arrays to existing support for HDS and NetApp. BalancePoint can also now map storage switch infrastructures made up of Brocade and Cisco switches, analyze storage switch performance, and identify overutilized and underutilized SAN switch ports.
The product most comparable to Akorri that storage users would be familiar with is NetApp’s Onaro, which is also deployed without host agents, also provides SAN switch performance analysis, and maps virtual machine-level relationships to the underlying storage infrastructure. However, Akorri includes more server-specific analytics, while Onaro focuses on the storage infrastructure. Another storage-focused monitoring tool, Symantec’s CommandCentral Storage, also offers virtual machine-level analytics, but requires host agents. CommandCentral Storage can also be rolled up into Symantec’s overall data center management framework for cross-domain support.
In recent years, widespread VMware deployment has called for better analytics in IT to help smooth out performance bottlenecks and resource contention within the physical infrastructure. Until recently, however, deployment of storage resource management (SRM) tools has been sluggish, although analyst research this year suggests that the economic downturn has more users looking to analyze and improve existing assets for better storage efficiency using these tools. This research also suggests that storage teams are increasingly cooperating with other IT departments, especially networking, to better optimize data center performance.
Hurley told me she has spent most of her time since becoming CEO last May getting Bocada’s house in order. “We went through our recession already,” she said, adding that the vendor rebounded to reach profitability by the end of last year. Hurley said that was mostly the result of improving internal business processes.
With that internal makeover complete, Hurley said, Bocada will update its Bocada Enterprise software June 30 and again later this year. She hopes the two-phase approach to breaking up the monolithic software into a modular front end will help attract more channel sales and improve workflow within the product.
Bocada Enterprise 5.4 will add “policy mining,” which will allow the software to understand each policy for every backup server client, when that policy changed, and how that has impacted backup job failures or error reports. This version will also begin the modularization process by more clearly delineating the workflow between each of the services it provides, from healthcheck to problem management to change management. “Today we leave the customer to navigate the workflow themselves,” Hurley said. “They have to know where they have to go next. Our next update will move them through to the next step.”
The second update, planned for later this year, will separate the front end into sections that can be sold and deployed separately, though the back end will remain the same. The customers Bocada has in mind for this are service providers who may need to offer a combination of services to customers and issue service level agreements (SLAs) for each service. Advanced modules are also planned for generating SLAs and thresholding, i.e., “If this keeps happening, 30 days from now you might not meet your SLA,” explained Hurley.
Other products that began as backup reporting tools, such as Aptare’s StorageConsole, have broadened their capabilities to include storage resource management (SRM). But Hurley said Bocada plans to stick to its knitting in the data protection space. “To me, even addressing everything in data protection is hard — we don’t want to dilute that value by also having to go and look at how much capacity you have on Clariion,” she said.
Bocada may have picked a good time to re-enter the reporting software market; TheInfoPro’s Wave 12 Storage Study showed that capacity planning and reporting shot to #1 on the list of priorities for storage professionals during the economic downturn.
As a sidebar to that story, though, I also had a pretty interesting conversation with the CEO of Aptare, Rick Clark, about how business has been during the global economic crisis. The consensus among analysts in this market has been that while the need for better capacity management is real, organizations have generally not been willing to pay for it. However, Clark said Aptare has more than 300 customers, and that sales the last three quarters have been the strongest in the company’s history.
For more on this topic, check out our podcast interview with Clark about how the company has grown in recent months.
Among the updates, according to an email from Steve Cohen, Product Marketing Manager for SANscreen:
While reporting tools have been steadily adding features and are now available from major vendors, this category of products remains a tough sell, according to analysts. For more on that, see our coverage of Symantec’s CommandCentral announcement.
The Shoah Foundation, founded by Steven Spielberg after Schindler’s List to preserve Holocaust survivors’ narratives and now part of the University of Southern California, has conducted interviews with thousands of survivors in 56 countries. The Foundation has 52,000 interviews that amount to 105,000 hours of footage.
CTO Sam Gustman says the footage was originally shot on analog video cameras, then converted to Digital Betacam and MPEG files for online distribution. It currently amounts to 135 TB. However, the Foundation is converting the footage to Motion JPEG 2000, which will create bigger files, about 4 PB of data, Gustman estimated. Two copies of each video will be kept, bringing the total to 8 PB.
Gustman says the Foundation received a $2 million donation of SL8500 tape libraries, Sun STK 6540 arrays and servers from Sun Microsystems in June. The Foundation runs an automated transcoding system on those servers, which uses the 140 TB of 6540 disk capacity as workspace. Sun’s SAM-FS software will automate the migration of data within the system, first to the 6540 and then to the SL8500 silo for long-term storage.
We’re hearing a lot in the industry these days about rich-content applications such as this one moving to clustered disk systems, but Gustman said disk costs too much for the Foundation’s budget. He sees the potential for an eventual move to disk storage, but “disk is still too expensive, four to five times the total cost of ownership, mostly for power and cooling.”
Another advantage to the T10000 tape drives the Foundation plans to use is that they eliminate having to migrate the entire collection to disk during copying, transcoding and technology refreshes. A T10000 drive can make copies or do conversions directly between drives in the robot, and the virtualization layer SAM-FS provides means that can happen transparently.
However, as CTO of an organization charged with the historic preservation of records, Gustman agreed with others I’ve talked to about this subject that there’s still no great way to preserve digital information for the long term. “The problem with digital preservation right now is that you have to put energy into it; you can’t just stick it in a box and hope it’s there 100 years from now,” he said. “Maybe there’ll be something eventually that you don’t have to put energy into, but it doesn’t exist yet.”