In the last few years we’ve seen advances in storage technology that have tremendous potential for IT customers. Some of these are enabled by investments made in developing flash solid-state drive (SSD) technology and adapting it to enterprise storage systems.
These new storage technologies improve the processes of storing and retrieving information, reduce costs, and lead to greater storage efficiencies and capabilities.
Notable storage technologies that have been delivered relatively recently to customers include:
• Tiering products with the ability to place or move data based on a probability of access to maximize the performance of the storage system.
• All-SSD storage systems designed for near-zero latency to provide the highest level of performance in a storage system.
• Faster networks and network convergence, where higher-bandwidth pipes move data and give organizations the ability to standardize on one type of infrastructure.
• SSD add-in technology where the non-volatility and performance of SSDs can be exploited in a direct manner.
• Forward error-correction technology as a new way to protect data from a failure.
• Scale-out NAS systems to address enterprise demands for storing rapidly growing volumes of unstructured data.
There’s another type of product that has not received as much attention, perhaps because it is not simply a matter of using a new technology. That is the process of automating information management to make it easier for administrators to understand the value of data. This area may be more difficult to master, but new developments may yield greater economic value – primarily in operational expense – than some of the newer technologies.
An example of automated management would be what is called “Active Management.” This Active Management is similar to Storage Resource Management (SRM), but besides monitoring storage resources it takes automated actions based on conditions and rules defined by default or through customization by the storage administrator.
Yet despite their clear benefits, the adoption rate for these technologies and processes will be slow. That’s mainly because IT is conservative in adopting new concepts and technologies. This does not mean that the technologies won’t be embraced and be successfully deployed. It will just take longer.
The excitement around a new technology needs to be kept in perspective with how long it takes IT to deploy it successfully. The adoption rate reflects IT’s inertia and conservative approach to the critical task of storing data.
The long adoption cycle can kill start-up companies that can’t raise the investment required to reach profitability. Many investors have an aggressive profile that defies the reality of adoption rates. Larger companies can weather the conservative adoption cycle more successfully, but with much internal angst.
But no matter how great a new storage technology may be, customers need to understand it well enough to make sound investment decisions before it takes hold in data centers.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Startup FlashSoft, which was first out with flash caching software, is looking to stay ahead of impending competition with a major upgrade to its FlashSoft SE barely four months after making the software generally available.
FlashSoft today said FlashSoft SE 2 is in beta, with support for Linux, larger solid-state drives (SSDs) and read-only cache. FlashSoft SE improves application performance with solid-state storage by turning SSD and PCI Express (PCIe) server flash into a cache for the most frequently accessed data.
FlashSoft developers have been busy. CEO Ted Sanford claims 800 feature enhancements in version 2. Besides Linux support, the biggest improvements are an increase in the maximum cache from 256 GB to 1 TB (it now can accelerate up to 10 TB of active data), and the ability to detect a failing SSD, issue an alert, and switch the cache to pass-through mode to avoid writing data to any SSD without a redundant backup. FlashSoft SE 2 also has a new GUI that provides runtime data and statistics through the Windows Management Instrumentation (WMI) API.
The software now supports read-only caching as well as read-write caching. Sanford said read-write caching works best for most applications, but apps with large read requirements such as video can benefit from read-only. FlashSoft also increased its read cache performance for large files and data sets.
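FlashSoft’s code isn’t public, but the behavior described above (cache normally, then alert and fall back to pass-through mode when the SSD degrades) can be sketched in a few lines. The following is an illustrative toy, not the product’s implementation; the `ssd` and `disk` objects are hypothetical block stores with `read`/`write` methods:

```python
from collections import OrderedDict

class FlashCache:
    """Toy write-through read cache with a pass-through mode.

    Illustrative only. `ssd` and `disk` are hypothetical block
    stores keyed by block ID.
    """

    def __init__(self, ssd, disk, capacity):
        self.ssd, self.disk = ssd, disk
        self.capacity = capacity          # max cached blocks
        self.index = OrderedDict()        # cached block IDs in LRU order
        self.pass_through = False         # set True when the SSD is failing

    def read(self, block_id):
        if not self.pass_through and block_id in self.index:
            self.index.move_to_end(block_id)      # refresh LRU position
            return self.ssd.read(block_id)        # cache hit
        data = self.disk.read(block_id)           # cache miss or pass-through
        if not self.pass_through:
            self._cache(block_id, data)
        return data

    def write(self, block_id, data):
        self.disk.write(block_id, data)           # write-through to disk
        if self.pass_through:
            self.index.pop(block_id, None)        # never touch a failing SSD
        else:
            self._cache(block_id, data)

    def on_ssd_failure(self):
        """Alert hook: stop using the SSD; all I/O falls back to disk."""
        self.pass_through = True

    def _cache(self, block_id, data):
        if block_id not in self.index and len(self.index) >= self.capacity:
            self.index.popitem(last=False)        # evict least recently used
        self.ssd.write(block_id, data)
        self.index[block_id] = True
```

In pass-through mode every I/O goes straight to the backing disk, so a dying SSD can be taken out of service without risking data that exists nowhere else.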
In addition, FlashSoft upgraded FlashSoft SE-V for virtual servers to 2.0, adding support for virtual servers running on Linux and Windows.
FlashSoft SE pricing starts at $2,000 and depends on the number of servers accelerated and the type of SSDs used. FlashSoft sells FlashSoft SE as standalone software, but Sanford said he is working on partnerships to bundle the software with SSDs.
Sanford said the new features were mostly suggestions from early customers and organizations evaluating FlashSoft SE software. FlashSoft is probably also hearing footsteps from other flash cache competitors entering the market.
EMC’s PCIe server-side cache “Project Lightning” product that uses EMC’s FAST software to improve performance is due out by the end of the year, as is Fusion-io’s ioCache, which uses software that Fusion-io acquired from early FlashSoft competitor IO Turbine. SSD vendor STEC is sampling its EnhanceIO SSD Cache Software to partners and early customers, startup VeloBit is in beta with its SSD caching software, and Nevex and perhaps other startups are on the way with competing products.
“We feel like it’s a significant accomplishment that we’re shipping a 2.0 release in advance of anybody else shipping 1.0,” Sanford said. “We believe we will continue to advance the art. Leveraging flash as cache in existing servers or being added to new servers coming out is a large market. When you can improve an application’s performance by three to five times, that’s a fundamentally strong value proposition. There will be room for significant success for several companies.”
DataCore’s SANsymphony-V recently added automated tiering, but its tiers stopped short of the cloud. Today, DataCore addressed that omission through a partnership with cloud storage gateway vendor TwinStrata.
SANsymphony-V virtualizes storage across pools of heterogeneous systems, adding management features such as thin provisioning, RAID striping, asynchronous replication, and snapshots. The new tiering feature lets customers dynamically move disk blocks among different pools of storage devices.
Beginning in late October, when a customer purchases SANsymphony-V — which DataCore calls a “storage hypervisor” — it will include a 1 TB version of TwinStrata’s CloudArray virtual appliance at no extra cost. That lets DataCore customers move data off to the cloud, although they need a subscription with a cloud storage provider such as Amazon S3 or Nirvanix. A DataCore customer can also go beyond a 1 TB gateway by upgrading the appliance through TwinStrata, which charges $4,995 for unlimited capacity. The CloudArray software deduplicates, compresses and encrypts data before moving it to the cloud.
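The pre-upload pipeline described here (deduplicate, compress, then ship to the cloud) can be illustrated with a toy content-addressed store built from Python’s standard library. This is a sketch of the general technique, not TwinStrata’s implementation, and it omits the encryption step:

```python
import hashlib
import zlib

class DedupeStore:
    """Toy dedupe-and-compress stage, loosely modeled on what a cloud
    gateway does before upload. Illustrative only."""

    def __init__(self):
        self.chunks = {}     # fingerprint -> compressed chunk (what gets uploaded)
        self.manifest = []   # ordered fingerprints that reconstruct the stream

    def put(self, chunk: bytes) -> bool:
        """Ingest a chunk; return True only if it is new and must be uploaded."""
        fp = hashlib.sha256(chunk).hexdigest()
        self.manifest.append(fp)
        if fp in self.chunks:
            return False                       # duplicate: nothing to send
        self.chunks[fp] = zlib.compress(chunk) # compress before upload
        return True

    def get_stream(self) -> bytes:
        """Reassemble the original stream from the manifest."""
        return b"".join(zlib.decompress(self.chunks[fp]) for fp in self.manifest)
```

Duplicate chunks cost only a manifest entry, so repetitive data such as backups consumes far less cloud capacity and upload bandwidth than its raw size suggests.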
TwinStrata also sells its cloud gateway as an appliance. DataCore CEO George Teixeira said SANsymphony-V will also work with the hardware appliance, using it as cache to speed backups.
“Now we’ve allowed a cloud tier to be part of our storage hypervisor,” Teixeira said. “When data gets to a lower tier, it can be put on an iSCSI device that is actually a cloud disk.”
There are other cloud gateway products on the market, but Teixeira said he picked TwinStrata because it supports iSCSI while most of the others are for files and backup. As the name implies, SANsymphony-V is a SAN application, so it required a block storage gateway.
“Most of them [gateways] are doing things at the file system,” Teixeira said. “These guys [TwinStrata] present an iSCSI disk. We can use storage virtualization across all disk and this looks like another disk we are auto-tiering, so it plays into our model. This doesn’t require a lot of thinking on the part of the customer. It’s just another tier, and you can choose to pick a pay-as-you-go model.”
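Teixeira’s “just another tier” framing boils down to a simple promotion and demotion policy. The sketch below is an invented toy, not DataCore’s algorithm; the tier names and the one-hour demotion threshold are assumptions for illustration:

```python
import time

# Hypothetical tiers, hottest first; the "cloud" tier is just another
# block device from the auto-tiering engine's point of view.
TIERS = ["ssd", "fc_disk", "sata_disk", "cloud"]

class AutoTier:
    """Toy block auto-tiering policy. Illustrative only."""

    def __init__(self, demote_after=3600):
        self.demote_after = demote_after   # seconds of inactivity before demotion
        self.blocks = {}                   # block_id -> (tier_index, last_access)

    def touch(self, block_id):
        """Record an access and promote the block one tier if possible."""
        tier, _ = self.blocks.get(block_id, (len(TIERS) - 1, 0))
        self.blocks[block_id] = (max(tier - 1, 0), time.time())

    def demote_cold(self, now=None):
        """Move blocks idle longer than `demote_after` down one tier."""
        now = now if now is not None else time.time()
        moved = []
        for block_id, (tier, last) in self.blocks.items():
            if now - last > self.demote_after and tier < len(TIERS) - 1:
                self.blocks[block_id] = (tier + 1, last)
                moved.append((block_id, TIERS[tier + 1]))
        return moved
```

Because the cloud sits at the bottom of the tier list, cold blocks migrate there through the same mechanism that moves them between local disk classes, which is why no extra thinking is required of the customer.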
Although TwinStrata can handle primary storage, Teixeira said he expects his customers to use the cloud mostly for backup and archiving. “I think we have some ways to go to get to primary storage in the cloud,” he said. “We’re looking mostly at backup, archiving and scratch storage. We think most production data will stay on-premise, but why not put non-production and backup data on the cloud?”
TwinStrata is looking to use partnerships to get its gateway into the market, even if it has to give it away at first. Last month the startup began offering a free 1 TB CloudArray appliance to Veeam Backup & Replication customers who want to back up to the cloud.
Gartner research director Gene Ruth said organizations are interested in moving to the cloud, but are still looking for the best way to go about it. He said storage virtualization and gateways are two potential starting points.
“It’s hard for people to get their arms around the idea that they’ll put all their data out on the cloud, but they know they have some data they can put out there,” he said. “I see storage virtualization in general as one of those building blocks to help consolidate a cloud storage environment around a common provisioning point. It’s not the end-all, but it’s a good start.”
Ruth said he also sees gateways as key enablers of cloud storage, and not just as standalone devices.
“It seems a pretty obvious step for major vendors to put that [gateway] functionality into disk arrays and file servers,” he said. “I don’t think it’s that difficult to add a gateway to their arrays.”
Large companies are represented by many people filling various roles to the outside world. These people are trying to accomplish different things. A marketing person is trying to create information that will highlight the company and give customers a reason to consider or continue with its products. Sales people are on the front lines with the customers, trying to influence them with the company’s offerings. Press and analyst relations people try to pass along useful information and increase the awareness of the company and its products.
At any given time, all of these people serve as the face of the company they represent.
This is no different in the storage industry than in other industries. But because information is stored for a long time, customers may not interact with storage companies as frequently. That means their impressions could have a longer-term impact.
As an example, I was working with an IT director of a major company recently and the director was upset with what a salesperson had done in engaging the CIO of the company. The IT director did not have a problem with the specific person, but with the company. This director vowed to not buy a solution from that company for the problem we were trying to solve. It didn’t matter whether the sales person was using company-endorsed sales practices or not. A new barrier had been created.
This isn’t an isolated incident. I find that the negative perception caused by an individual can affect a sale even when the customer knows the vendor representative doesn’t represent the entire company. The customer might consciously or subconsciously associate the vendor with one person, and the way that the customer has been treated may affect all future relations with that vendor. That could impair the customer’s ability to make objective decisions. But that is human nature, and overcoming it requires additional thought and effort.
The lesson here is that at any moment during external communications, the company employee must remain keenly aware that he or she is the “face of the company” right then. Saying outrageous things or taking a condescending attitude ultimately reflects on the company. It only takes one bad encounter to give someone a negative view. In the case of storage, this view could last for years.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Check out our Storage Headlines podcast, where we review the top stories of the past week from SearchStorage.com and Storage Soup.
Here are the stories we covered in today’s podcast:
(0:18) Violin Memory launches all-flash storage for the enterprise
(1:24) FalconStor founder Huai found dead
(2:17) Arkeia adds dedupe, SSDs to backup appliances
(3:12) QLogic takes another whack at converged storage networks
(4:01) Storage Products of the Year 2011
QLogic is taking the stance that having multiple personalities is the sane way to approach converged storage networking. With Fibre Channel (FC) remaining the dominant protocol and Ethernet becoming a better candidate for SANs, QLogic has new gear that supports the latest flavors of both.
The storage networking vendor updated its product platform to 16 Gbps Fibre Channel this week, including a switch that supports FC and 10 Gigabit Ethernet (10GbE) ports to give it what QLogic calls “dual personalities.” QLogic also launched its 8300 Series Converged Network Adapter (CNA) that supports Ethernet, Fibre Channel over Ethernet (FCoE) and iSCSI, and the 2600 Series 16 Gbps FC HBA.
The Universal Access Point 5900 (UA5900) can be configured to run 16 Gbps Fibre Channel or 10 GbE traffic. Customers can start with 24 device ports and grow to 68 ports by adding licenses. Four of the ports can be used as 64 Gbps Fibre Channel trunking ports, and the switches can stack to 300 device ports. The UA5900 can be a Fibre Channel or Ethernet edge switch, and — with a Converged Networking license – it can serve as a top-of-rack FCoE switch to compete with Brocade’s 8000 and Cisco’s 5548UP devices.
QLogic also said it would bring out an intelligent storage router – called the iSR6200 – with support for Fibre Channel, FCoE and iSCSI. The router is designed for SAN-over-WAN connectivity.
The UA5900 and adapters are expected to ship through QLogic’s OEM and channel partners in early 2012, with the iSR6200 expected late next year.
QLogic was one of Cisco’s early allies in delivering FCoE gear years ago, and is on its third generation of converged networking devices. But FCoE has gained little adoption and Fibre Channel isn’t going away. QLogic execs say they expect Fibre Channel to remain strong while FCoE is a longer term item for many organizations. “We expect over the longer period, FCoE will gain momentum,” QLogic director of product marketing Craig Alesso said. “But Fibre Channel is still the workhorse for most enterprises.”
When FCoE does gain momentum, what role will hardware adapters play? Intel has launched software FCoE initiators that use host processing power and work with any network adapters. Intel’s plan is to eliminate the need for CNAs, but Alesso said QLogic’s adapters will have a big role in running FCoE. He maintains that CNAs are better suited for I/O processing and server CPUs should be used for applications.
“People can run FCoE initiators, but there’s a [performance] cost,” he said. “We free up servers to do what customers want to do with servers – run multiple virtual machines and multiple applications. The CPU should be used for running applications, not the I/O. We should run the I/O. Also, with [software] initiators, you lose management. You don’t have the common look and feel among management utilities.”
Arkeia Software CEO Bill Evans has watched Symantec roll out a steady stream of backup appliances over the last year, and he asks, “What took so long?”
Arkeia began delivering its backup software on appliances four years ago, and this week launched its third generation of appliances. They include the data deduplication that Arkeia added to its software a year ago, solid state drives (SSDs) to accelerate updates to the backup catalog, and up to 20 TB of internal disk on the largest model.
“Since 2007, we’ve been telling everybody that appliances would be big,” Evans said. “Symantec has validated the market for us.”
Evans said about 25% of Arkeia’s customers buy appliances. Because they take less time to set up and manage, he said appliances are popular in remote offices and among organizations without much IT staff.
The new appliances are the R120 (1 TB usable), the R220 (2 TB, 4 TB or 6 TB), the R320 (8 TB or 16 TB) and the R620 (10 TB or 20 TB). The two smaller models include optional LTO-4 tape drives, while the two larger units support RAID 6 and 8 Gbps Fibre Channel to move data off to external tape libraries. They all include Arkeia Network Backup 9 software and built-in support for VMware vSphere. Arkeia’s progressive dedupe for source and target data is included with the R320 and R620, and optional with the R220. Pricing ranges from $3,500 for the R120 to $47,000 for the R620 with 20 TB.
The R620 includes 256 GB SSDs, enough to manage the backup catalog. “We would never put backup sets on SSDs, that would be too expensive,” Evans said. “But it makes sense to use SSDs to manage our catalog, which is a database of our backups. The catalog is random, and updating the catalog could be a performance bottleneck.”
“If we were simply a cloud gateway and combined SSDs and disk in a single package, we wouldn’t know what incoming data should live on SSD and what should live on disk. It all looks the same. Because we wrote the [backup] application, we could say ‘this data lives on disk and this data lives on SSD.’”
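The distinction Evans draws (the application knows what each write is; a generic gateway sees only anonymous blocks) can be reduced to a tiny routing policy. The names below are invented for illustration and are not Arkeia’s code:

```python
# Toy application-aware data placement: the backup application tags each
# write with its type, so small random catalog updates go to SSD while
# large sequential backup streams go to cheaper spinning disk.
PLACEMENT_POLICY = {
    "catalog": "ssd",       # random-access database of backups
    "backup_set": "disk",   # bulk sequential data; SSD would be too expensive
}

def route_write(record_type: str) -> str:
    """Pick the device class for a write based on application knowledge.

    A gateway without that knowledge sees only undifferentiated blocks
    ("it all looks the same") and cannot make this distinction.
    """
    return PLACEMENT_POLICY.get(record_type, "disk")
```

The policy table is the whole point: only the software that created the data can populate it, which is the advantage Evans claims over a generic SSD-plus-disk box.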
For disaster recovery, the appliances can be used to boot a failed machine by downloading software from a backup server to the failed machine. The appliances can also replicate data to cloud service providers.
FalconStor founder ReiJane Huai, who stepped down as CEO last year after disclosing accusations of improper payments to a customer, was found dead from a gunshot Monday outside his Old Brookville, N.Y. home. Police have told New York newspapers his death was an apparent suicide.
Huai, 52, also served as CEO of Cheyenne Software before leading FalconStor for a decade. He resigned and was replaced as CEO by Jim McNiel when government agencies began investigating the vendor’s accounting practices.
According to newspaper accounts, Huai was found shot in the chest Monday morning. In a statement to Newsday, a FalconStor spokesman called Huai “a visionary and a leader” who was “admired and respected by a great many people.”
Huai came to the United States from his native Taiwan in 1984 to study computer science at New York’s Stony Brook University. He joined Cheyenne Software in 1985 as a manager of research and development for its ARCserve backup product, worked at AT&T Bell Labs from 1987 to 1988, and returned to Cheyenne as director of engineering in 1987. He became Cheyenne CEO in 1993 and sold the company to CA in 1996 for $1.2 billion. After a brief stint at CA, Huai founded FalconStor in late 2000, and held its CEO and chairman titles until last September.
Huai resigned from FalconStor last Sept. 29 after he disclosed that improper payments were allegedly made in connection with licensing of FalconStor software to a customer. The company began an internal investigation at the time, and so did the New York County District Attorney, the U.S. Attorney’s Office for the Eastern District of New York and U.S. Securities and Exchange Commission (SEC). None of the investigations have released any findings.
FalconStor has received subpoenas from the SEC and the U.S. Attorney’s Office. The SEC subpoena seeks documents relating to the vendor’s dealings with the customer in question, while the U.S. Attorney’s Office grand jury subpoena sought documents relating to some FalconStor employees and other company information.
FalconStor executives have said in public statements and SEC documents that the company is cooperating with both investigations.
Two class action lawsuits were also filed against FalconStor last year, alleging the company made false statements by failing to disclose weak demand for its products and improper payments to a customer. Huai was named in those suits along with FalconStor CFO James Weber and board member Wayne Lam.
DataDirect Networks (DDN) today launched a new member of its Storage Fusion Architecture (SFA) family of high-performance computing (HPC) arrays, and quickly pointed out a large customer deal involving the new system and IBM’s General Parallel File System (GPFS).
DDN claims the SFA10000-X can handle mixed workload read-write speeds of 15 GBps with solid-state drives (SSDs). It holds up to 600 drives for a maximum capacity of 1.8 PB in a rack. DDN aims the system at Big Data (analytics and a large number of objects), media and content-intensive applications. It will replace the S2A9900. DDN already has an SFA10000-E system aimed at highly virtualized environments.
DDN said Italian research center Cineca in June acquired an SFA10000-X from IBM. DDN Marketing VP Jeff Denworth offers the deal as proof that the relationship between DDN and IBM remains solid. IBM recently issued an end-of-life notice to customers for its DCS9900 — based on DDN’s S2A9900 — and suggested the DCS3700 that IBM sells from DDN competitor NetApp Engenio as a replacement.
The Engenio platform has competed with DDN for years, and is now in the hands of NetApp – another IBM partner. Denworth said IBM and DDN still have OEM deals for two other systems – including the S2A 6620 that IBM sells as a backend to its SONAS — and said IBM may have plans for the SFA10000-X.
“IBM discontinued one system among the portfolio we sell through them, and that system is four-year-old technology,” he said.
So why didn’t IBM replace the DCS9900 with the SFA10000-X? “All I can say is the SFA10000-X has a certain customer profile,” Denworth said. “I can’t make any statements about IBM’s intentions for that product.”
DDN executives call DDN the world’s largest privately held storage vendor, and claim they are doing well enough that the loss of any single partner wouldn’t break the company. DDN claims 83% revenue growth from 2007 through 2010 and is on a pace for more than $200 million in revenue this year.
Yet despite a flurry of storage system vendor acquisitions last year and others looking to go public, DDN remains independent and private. DDN EVP of strategy and technology Jean-Luc Chatelain said an IPO will only happen if the terms are enticing enough.
“We’re privately held, and we like it that way,“ he said. “An IPO is not an end for us, it’s a means. If we can use an IPO as a tool for additional currency for growth, we’ll look at that.”
DDN is growing its executive team. Chatelain joined from Hewlett-Packard in February. This month DDN hired former HP executive Erwan Menard as COO, Adaptec veteran Christopher O’Meara as CFO, and Quantum veteran William Cox as VP of worldwide channel sales.
On the technology front, DDN is using enterprise multi-level cell (eMLC) SSDs for the first time with the SFA10000-X. It is also embracing the Big Data label that storage vendors have been throwing around since EMC acquired scale-out NAS vendor Isilon late last year.
“DDN has been doing Big Data since 1998, everybody else is just catching up,” Chatelain said. “I don’t like the term, but everybody’s using it now. Our customers do Big Data for a living.”
Competitive pressures often cause companies to lose focus when adopting product marketing strategies. These pressures come from executives and boards, and can be intense. They can also cause a vendor to pay attention to the wrong things, instead of putting the attention on the customer.
Vendor strategies must start with a few basics: What is the best way to position a product, and what product characteristics are necessary to meet future needs? Positioning a product is foremost about fitting customer needs. Describing how it fits those needs can be done in many ways, and typically there are multiple approaches taken in addition to data sheets and product specifications. These include:
• A short description of how the product can be used to meet the customer’s needs.
• A longer document that has details of usage in a specific environment.
• A white paper that explains the product in context of the value it can bring.
Positioning statements usually include how a product fares against the competition. One sign of misguided focus is when the competitive positioning leads with the negatives of a competing product. By starting with competitors’ negatives instead of laying out its own product’s advantages, a vendor risks wasting the limited time a customer will spend on the material. For us at Evaluator Group, when we put together our Evaluation Guides for customers, a vendor that starts with the negatives is a big red flag.
Delivering a product that meets future needs is another area where a company can get its focus skewed. Common focus miscues include:
• Lacking an intimate understanding of customer operational characteristics and their business processes.
• Lacking good judgment about how likely customers are to adopt a new technology within a specific timeframe.
• Using general surveys to predict future customer needs.
• Watching what competitors are doing and trying to follow their lead.
These mistakes lead vendors to look in the rear-view mirror. Instead of looking out the windshield when making plans, they look back to see what has already happened.
Keeping the pressures in perspective and maintaining focus on how to position and deliver products can be tough for some companies. Those that do it well are more successful and from our perspective have a better handle on the competitive environment. Companies that have allowed their focus to shift make big mistakes and become less competitive.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).