In information technology, a principle from Newton’s laws of motion often applies: a body at rest tends to stay at rest. In IT, this inertia means changes become increasingly difficult to make because change is often resisted. Resistance to change means missed opportunities to integrate new technologies, improve processes and become more effective.
Management can rationalize the reasons for resistance quite effectively, even when the rationalizations amount to excuses. That does not mean these reasons are correct; it just puts in perspective why seemingly incomprehensible choices, ones that defy logic when IT management is considered in its entirety, get made.
I’ve heard these reasons for not making a strategic change:
• Avoidance of risk. Making a change to introduce a new technology introduces risk of some type. The risk is really about potential failure and the implications of that failure on the organization and the people making the decision.
• Inability to schedule the time to implement a new technology or process. “We don’t have time to do this” is a common explanation for not doing something. The conversation turns to limited staffing budgets and not enough people to take on the extra work. The advantages that might be gained from the implementation are typically dismissed out of hand.
• Limited budget to invest in the technology. The blame for this is usually placed on “executive management” or “the business” making choices that would limit IT’s ability to invest in new technology.
• IT decision makers want to wait until other organizations have proven the technology works. There is usually some justification here, because we all know of past products and technologies that did not last. Organizations want assurance that they are not investing in a transient technology; in IT, the expected lifetime for something new is perceived to be 10 years.
• Complexity introduced into IT over time increases risk and makes change more difficult. Accumulated complexity raises the effort required for each subsequent change, which compounds resistance to change in the future.
The case against resisting new technologies or new procedures in IT is easy to make. Many new technologies can bring value to organizations that properly implement them. These include tiered storage systems with solid-state drive (SSD) technology, storage virtualization, scale-out NAS storage, data reduction technologies, IT as a Service, and ‘big data’ analytics. Not moving forward with technologies and processes that will have staying power means missing economic advantages. A lack of advancement in IT can reflect on IT leadership as well.
Changes will have to be made eventually, and they may be more costly the longer they are put off. I recently heard about an organization looking to replace a data center because it was more than eight years old. The justification was that the efficiency gain of a new data center was worth the financial investment. That might be true statistically, but it does not seem to be an intelligent overall investment. Evolving through the introduction of new technologies, improvements in procedures, and education of IT personnel has to be a better answer than a complete discard and restart.
But if the barriers, the arguments given to avoid introducing technology or change, are so overwhelming that they inhibit greater efficiencies, inaction may remain the path of least resistance. The force acting on the body at rest, against the inertia of IT, may not be enough to start motion forward. Education on the technologies and their economic advantages needs to supply the net force that moves IT forward.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
EMC executives say the storage vendor’s VFCache strategy is to work with server vendors, not to take their business.
During Monday’s VFCache launch, EMC president of information infrastructure products Pat Gelsinger said EMC’s move into server-side flash does not mean it has designs on becoming a server company. It only wants to sell the flash that goes into servers.
“From our side, this is truly cooperation [with server vendors],” Gelsinger said. “We’re not competing with them. There is no coopetition. This is just another card that goes into the server. We’re not in the server business. We’re extending the storage array on the server side and bringing the I/O stack into the server. We’re not going into the server.”
Gelsinger said the VFCache PCIe card is certified to run on servers from Cisco, Dell, Hewlett-Packard and IBM. There are no reseller or OEM deals with the server vendors for VFCache, although Gelsinger said there may be in the future.
Except for EMC’s close ally Cisco, the other three server vendors also sell storage. It will be interesting to see how they react to VFCache. But this isn’t the first time EMC has extended its technology into the server without actually selling servers. As the parent company of VMware, EMC is already a major player in server technology.
So even if EMC doesn’t want to sell servers, it wants a front-row seat to view the server world from.
“The biggest vulnerability EMC has in competing with the IBMs, HPs and Dells of the world is those other guys have access to the entire stack because they sell the servers and everything in between,” said Arun Taneja, founder of the Taneja Group analyst firm. “VMware gave EMC leverage to the server side and put the rest of the industry on notice – if you want to compete you have to buy stuff from EMC.”
David Flynn, CEO of EMC’s largest server-side flash competitor, Fusion-io, maintains that EMC is trying to extend its vendor lock-in with VFCache. He wonders why EMC doesn’t sell only its management software and let customers pick their own PCIe cards to place in the server.
While it plays nice with all the top server vendors with VFCache, EMC has made it clear that Micron is its favorite PCIe flash partner. Gelsinger – who asked for a moment of silence Monday for Micron CEO Steve Appleton, who was killed in a plane crash last week – emphasized that Micron is EMC’s preferred partner for VFCache although he acknowledged LSI is also a partner in a multi-vendor arrangement.
“Micron has extraordinary I/O performance,” Gelsinger said. “This is the best technology in the industry for PCIe flash.”
Data migration can be a nightmare for any company, so imagine what an IT manager feels like when his cloud storage vendor tells him, “Hey, we are planning to move about one terabyte of your data from one cloud provider to another and, we promise, you won’t experience any downtime.”
True, that is supposed to be one of the key attributes of the cloud: the storage services provider or cloud provider takes on all the work and responsibility associated with data migration, and the user isn’t supposed to notice a hiccup. That’s what the IT manager at a California energy company experienced last year when the company’s storage services vendor, Nasuni Corp., moved 1 TB of its primary storage from cloud provider Rackspace to Amazon S3. The project took about six weeks and was completed in January this year.
“Initially, I was concerned,” said the manager, who asked not to be identified because his company does not allow him to talk to the media. “The data we had in Rackspace was our working data, so it was our only copy. I was concerned about how it would work. I thought for sure I would feel a glitch here and there, but I did not.”
He said Nasuni made a full copy of the data, then replicated the changes to keep the Amazon copy current so the data could be switched later. Nasuni basically set up a system in Rackspace, and sent data copies and version history from Rackspace’s cloud to Amazon. The customer’s network was not used during the process. The energy company now has 9 TB from two data centers on Amazon S3 — 6 TB of primary production data and 3 TB of historical production backup data.
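The pattern described here (seed a full copy, replay incremental changes until the target is current, then cut over) can be sketched roughly as below. This is an illustrative sketch only, not Nasuni’s actual implementation; object stores are modeled as plain dicts and all names are hypothetical.

```python
# Illustrative sketch of a cloud-to-cloud migration: seed a full copy,
# then replay deltas until the target catches up. Not Nasuni's code.

def seed(source: dict, target: dict) -> None:
    """Bulk-copy every object from source to target."""
    target.update(source)

def sync_changes(source: dict, target: dict) -> int:
    """Copy objects that are new or changed since the last pass."""
    changed = {k: v for k, v in source.items() if target.get(k) != v}
    target.update(changed)
    # Remove objects that were deleted at the source in the meantime.
    for k in list(target):
        if k not in source:
            del target[k]
    return len(changed)

def migrate(source: dict, target: dict) -> None:
    seed(source, target)
    # In production the source stays live, so deltas are replicated
    # until the backlog is empty; only then do reads/writes switch over.
    while sync_changes(source, target) > 0:
        pass

rackspace = {"a.txt": b"v1", "b.txt": b"v1"}
amazon = {}
migrate(rackspace, amazon)
assert amazon == rackspace
```

Because the copy runs provider-to-provider, the customer’s own network stays out of the data path, which matches the experience the IT manager described.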
Rob Mason, Nasuni founder and president, said the energy company’s data migration was part of Nasuni’s larger project to concentrate all of its customers’ data on either Amazon S3 or Microsoft Azure, because its stress testing of 16 cloud providers showed those two could meet Nasuni’s SLA guarantees of 100% availability, protection and reliability. Previously, Nasuni had 85% of its customers’ data residing in Amazon, with the rest spread across about six other cloud providers. Rackspace held 10% of Nasuni’s customer data.
“We couldn’t offer our SLAs on Rackspace,” Mason said. “All our customers on our new service are either on Amazon or Azure now. For customers who wish to move from our older gateway product to the new service, which includes SLAs, if they are not already on Amazon or Azure, we will move them to one of those two providers as part of the upgrade.”
For the energy company, the goal is to have all of its data — about 15 TB — eventually residing in the cloud. “It was a constant struggle for more disk space,” the IT manager said. “And, my God, the RAID failures. It’s not supposed to fail but it did.”
Optimizing the data center is a major initiative for most IT operations. Optimization includes using resources more effectively, adding more efficient systems with greater capabilities and consolidating systems using virtualization and advanced technologies.
The goals for optimization are reducing cost and increasing the operational efficiency. Capital cost savings come from getting more effective use of what has been purchased and operational cost savings come from reducing administration and physical resources such as space, power, and cooling. Optimized operations make IT staffs more capable of addressing the demands for business expansion or consolidation.
Along with server virtualization, storage efficiency is a major focus area for data center optimization (DCO) initiatives because of the opportunity for major savings. This Evaluator Group article provides an IT perspective on measuring efficiency. Storage efficiency can be accomplished in the following ways:
• Making greater use of storage capacity through data reduction technologies (compression and deduplication) and allocation of capacity as needed (thin provisioning).
• Supporting more physical capacity for a storage controller by enabling greater performance from the controller.
• Increasing performance and responsiveness of a storage system with storage tiering and intelligent caching using solid-state technology.
• Improving data protection with advanced snapshot and replication technologies and data reduction prior to transferring data.
• Scaling of capacity and performance in equal proportion (scale out) to support greater consolidation and growth.
• Providing greater automation to minimize administrative requirements.
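To give a rough feel for the capacity arithmetic behind the first bullet, the sketch below multiplies out hypothetical data reduction ratios; the figures are examples for illustration, not measured or vendor-quoted numbers.

```python
# Rough illustration of how data reduction stretches usable capacity.
# All ratios below are hypothetical examples, not vendor figures.

raw_tb = 100                # purchased usable capacity, in TB
dedupe_ratio = 4.0          # e.g. 4:1, plausible on backup-style data
compression_ratio = 1.5     # e.g. 1.5:1 on data after deduplication

# Effective logical capacity seen by applications:
effective_tb = raw_tb * dedupe_ratio * compression_ratio
print(f"{raw_tb} TB raw -> {effective_tb:.0f} TB effective capacity")
assert effective_tb == 600
```

Real-world ratios vary widely by workload, which is one reason evaluating products against actual requirements is a major part of executing a DCO plan.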
DCO requires a strong overall strategy. Storage has a regular cadence of technology transition and product replacement, and DCO requires adding products and upgrading systems already in place. Evaluating the best product to meet requirements is a major part of the execution of the plan. There are many complex factors to consider and the decisions are not straightforward.
As DCO initiatives continue, storage efficiency will remain a competitive battleground for vendors and an opportunity for customers.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Quantum CEO Jon Gacek teased what he called the “cloud offering” several times during the backup vendor’s earnings call this week but didn’t go deep into details beyond “our vmPro technology, along with our deduplication technology, is the basis of a cloud-based data protection offering that we will be introducing in the coming months.” In an interview after the call, he let on that the DXi would provide the backup, and there will likely be a service provider partner.
“We’ll probably launch with a partner first and go from there,” Gacek said.
Last October, Quantum revealed plans to let SMB customers replicate data to the cloud from a new Windows-based NAS product. But that’s apparently not the same as what Gacek talked about this week. The SMB replication uses Datastor Shield software, which is different from the DXi software.
LSI CEO Abhi Talwalkar said during his company’s earnings call that the WarpDrive PCI Express (PCIe) card will be used in the EMC product.
“We’re expanding and increasing our focus in storage and server application acceleration, bringing the performance advantage of the Flash to enterprise servers, storage and networking applications,” Talwalkar said. “We are pleased to be participating in the EMC Lightning program.”
The LSI WarpDrive holds 300 GB of single-level cell (SLC) flash, and Talwalkar said LSI is close to releasing a second-generation WarpDrive that includes SandForce storage processors and supports multi-level cell (MLC) and eMLC flash. LSI and Micron have frequently been mentioned as likely partners for the PCIe flash component ever since EMC previewed Project Lightning in May. Industry sources say it is likely that EMC will use two PCIe flash sources, with Micron’s P320h PCIe card as the other.
EMC demonstrated Project Lightning at VMworld and highlighted the technology last October at Oracle Open World. It uses EMC’s FAST tiering software and PCIe flash to improve application response time and throughput by servicing reads in flash while passing writes through to the storage array.
EMC will include PCIe flash and a SAN host bus adapter in an appliance called a VFCache Driver. The PCIe flash will be used to access read data stored in cache while writes will be passed through the HBA to storage.
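The read-caching, write-through behavior described above can be sketched minimally as follows. This is an illustrative model only (the class and names are made up, and it is not EMC’s implementation): reads are served from local flash when possible, while every write passes straight through to the backing array, so the array remains the authoritative copy.

```python
# Minimal sketch of a write-through read cache: reads hit local "flash"
# when possible; writes always pass through to the backing array.
# Hypothetical names; not EMC's VFCache implementation.

class WriteThroughReadCache:
    def __init__(self, array: dict):
        self.array = array          # stands in for the SAN array
        self.cache = {}             # stands in for the PCIe flash card
        self.hits = self.misses = 0

    def read(self, block: int) -> bytes:
        if block in self.cache:
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.array[block]
        self.cache[block] = data    # populate the cache on a miss
        return data

    def write(self, block: int, data: bytes) -> None:
        self.array[block] = data    # write goes through to the array...
        self.cache[block] = data    # ...and the cached copy stays current

array = {0: b"cold"}
c = WriteThroughReadCache(array)
c.read(0); c.read(0)
assert (c.misses, c.hits) == (1, 1)
c.write(1, b"new")
assert array[1] == b"new" and c.read(1) == b"new"
```

The design choice to pass writes through is what keeps the array the single source of truth, so losing the server-side cache loses performance but never data.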
EMC CEO Joe Tucci has changed his mind about retiring at the end of the year.
During EMC’s earnings conference call today, Tucci said he has agreed with a request from EMC’s board to stay on as chairman and CEO into 2013. He said when he does step down, his replacement will come from EMC’s senior management team but the vendor is not yet ready to name the successor.
“After much soul-searching, I have agreed to extend my role as chairman and CEO into 2013,” Tucci said. “I’ve started to increase the responsibilities of my senior team. When the time is right, my successor will be named.”
Tucci last September told the Wall Street Journal that he would relinquish his CEO title by the end of 2012 and remain on as chairman for two years.
The top candidates to replace Tucci are Pat Gelsinger, president of EMC’s information infrastructure product group; CFO Dave Goulden; and Howard Elias, president of EMC’s cloud services group.
I was asked to do a disaster recovery review for a small non-profit corporation recently. While larger organizations regularly bring in somebody to review their preparedness for disasters, small businesses rarely bring in an outsider. This company had fewer than 20 employees, all at a single location.
As always, the first step was to interview the key people in the company. The purpose of these interviews is to learn the current situation and to understand what the staff believes about the DR plan. With a company of this size, it did not take long to understand the situation. The staffers generally believed they could handle a disaster and had no immediate concerns. However, the current situation did not give me the same confidence.
There were two servers used for the major applications, an accounting system and a CRM system. These servers were also used for general file sharing. Each of the two applications had customized reports added. The individual laptops and desktops each had their own office software installed as well as many unshared files and copies of the shared files.
A tape backup was run every night for the servers, and one of the staff took the tape home and rotated through a week’s worth of backups. The tapes were never checked. The service provider who would restore tapes was a part-time administrator who ran a business providing services for other organizations.
The administrator, known inside this company as “the guy,” would come over on demand when there was an issue. The DR plan was to take the tapes to the part-time administrator’s office and restore the data on servers there. This had never been done, not even as a test.
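One reason untested backups fail silently is that nothing ever compares what was written with what can be read back. A minimal verification pass might look like the sketch below; the paths are placeholders, and a real tape environment would use the backup software’s own restore and verify commands rather than this hypothetical helper.

```python
# Hypothetical sketch of a restore-verification pass: hash each source
# file and compare against a hash of the restored copy. Placeholder
# paths; real tape restores run through the backup software itself.

import hashlib
import tempfile
from pathlib import Path

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Return relative paths whose restored copy is missing or differs."""
    bad = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source_dir)
        restored = restored_dir / rel
        if not restored.is_file() or file_digest(src) != file_digest(restored):
            bad.append(str(rel))
    return bad

# Demo with throwaway directories standing in for live data and a restore.
with tempfile.TemporaryDirectory() as s, tempfile.TemporaryDirectory() as r:
    (Path(s) / "ok.txt").write_bytes(b"same")
    (Path(r) / "ok.txt").write_bytes(b"same")
    (Path(s) / "bad.txt").write_bytes(b"original")
    (Path(r) / "bad.txt").write_bytes(b"corrupt")
    mismatches = verify_restore(Path(s), Path(r))
    assert mismatches == ["bad.txt"]
```

Even a simple routine check like this would have surfaced a bad tape long before a disaster did.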
The company’s DR plan did not address the possibility of a regional disaster where the personnel were not available or operations were impacted by lack of power or a network failure. The feeling was that the operations could tolerate being unavailable for a week, and any longer impact was highly unlikely and had greater consequences that would overshadow being out of operation. The possibility of losing key personnel was not included in this review but was part of an overall staffing plan.
The shortcomings were obvious, but the real issue was the lack of understanding of their limitations and the practices required. There was an unwarranted belief that there would be no issue restoring data from tape and that any server could immediately assume the role of the application servers and did not need to be exercised regularly. This obviously meant that the company needed education around the topic of DR and best practices, and that the local service provider chosen may not have the skill or desire to do what was really best for the customer.
I wrote a report and made recommendations of what should be done. The flexibility to address the problems was more limited with the small business than with companies that I would typically deal with, so I needed to consider the expenses and training.
Small businesses need a disaster recovery plan and a set of practices to implement it. They also need education about how to develop a plan, what to look for and some criteria for choosing a services provider (“the guy”). It will be interesting to follow up and see what changes are made.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Nexenta scored a $21 million funding round this week, and the open-source ZFS-based software vendor will use the money to expand globally and market its new virtual desktop infrastructure (VDI) product.
Nexenta’s NexentaStor software runs on commodity servers, turning them into multiprotocol storage systems. Nexenta CEO Evan Powell said Nexenta software was sold in $300 million of its partners’ hardware deals last year. The startup has more than 250 resellers. The largest is Dell, which uses Nexenta software in the Compellent zNAS product.
Powell said 50% of Nexenta’s sales are already international, yet the vendor has only one person working outside the U.S., in Beijing. He plans to add staff in China, open offices in Japan and the Netherlands, and probably expand to other countries.
On the product front, the vendor is preparing to launch NexentaVDI, a virtual appliance that integrates with VMware View. NexentaVDI lets customers quickly provision storage for virtual desktops, and helps optimize performance by allowing thresholds for IOPS per desktop.
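A per-desktop IOPS threshold of the kind described can be pictured as a simple rate limiter. The sketch below is purely illustrative (names and numbers are made up, and it is not NexentaVDI’s actual mechanism): each desktop gets a per-second I/O budget, and requests beyond it are deferred.

```python
# Illustrative per-desktop IOPS ceiling using one-second windows.
# Hypothetical QoS sketch; not NexentaVDI's actual mechanism.

class IopsLimiter:
    def __init__(self, max_iops: int):
        self.max_iops = max_iops
        self.window_start = 0.0
        self.used = 0

    def allow(self, now: float) -> bool:
        """Permit an I/O only if the desktop is under its budget."""
        if now - self.window_start >= 1.0:  # a new one-second window
            self.window_start = now
            self.used = 0
        if self.used < self.max_iops:
            self.used += 1
            return True
        return False                        # over budget: defer the I/O

desktop = IopsLimiter(max_iops=2)
results = [desktop.allow(now=0.0) for _ in range(3)]
assert results == [True, True, False]       # third I/O exceeds the cap
assert desktop.allow(now=1.5)               # budget resets next window
```

Capping IOPS per desktop keeps one busy virtual desktop from starving its neighbors, which is the core of the performance problem VDI storage has to solve.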
Nexenta previewed the VDI software during VMworld Europe in Copenhagen last October. NexentaVDI is in beta, and Powell said he expects to launch it around April.
Powell said another change coming this year is that he expects to see Nexenta software running on more solid-state drive (SSD) storage systems. NexentaStor has been optimized to run on SSDs, but the hardware will continue to come from partners.
“As a software company, we can remove the pernicious vendor lock-in on storage,” Powell said. “Storage is one of the last bastions of lock-in business models. Customers want to know how much they’re going to pay for storage in the future, and there’s a pent-up demand to get back at storage vendors who have exploited their customers for 10 or 20 years. We publish our prices and we don’t lock you in [to hardware]. But users like to buy arrays, they want to buy a box, plug it in, see the lights blink, and they have storage. So we reach out to vendors who sell arrays.”
Nexenta could lose its biggest array partner, however. Dell has made it clear that it is integrating clustered NAS technology it acquired from Exanet into Compellent SAN arrays to make them multiprotocol systems. After that, will Dell need Nexenta?
Powell is hoping that Dell will continue offering zNAS as an option for Compellent. He said one prospective customer is looking at a multi-petabyte deployment including zNAS. “I believe there’s room for both proprietary scale-out NAS with Exanet and zNAS with NexentaStor,” Powell said.
We’ll have to wait to see if Dell agrees.
WhipTail, the all-flash storage array vendor tucked away in Whippany, N.J., closed a Series B funding round and revealed a high-profile customer this week.
WhipTail did not disclose the amount of its funding, but industry sources say it was about $9.5 million. That’s not in the same ballpark as the $35 million and $40 million funding rounds its rival Violin Memory secured last year, but WhipTail CEO Dan Crain said his company is close to profitable, has close to 100 employees and is picking up about 20 customers per quarter.
“We are well-capitalized,” Crain said.
WhipTail bills its XLR8r as a cost-effective enterprise all-flash array, using multi-level cell (MLC) memory drives. The vendor goes after customers with a virtual desktop infrastructure (VDI), but Crain said it serves many types of industries.
AMD’s System Optimization Engineering Department said it replaced 480 15,000 rpm Fibre Channel drives with WhipTail’s solid-state storage arrays for a 50-times improvement in latency and a 40% performance increase.
AMD did not say how much flash capacity it bought from WhipTail, but Crain said its average deal is in the 25 TB to 30 TB range.
WhipTail isn’t the only all-flash array vendor out there. Nimbus Data, SolidFire, Texas Memory Systems and Violin have all-SSD systems, Pure Storage is in beta, and the large storage vendors will likely follow. Unlike a lot of the all-flash vendors, though, Crain said, “We don’t compete on price. We solve a myriad of problems around performance.
“The field is still narrow for credible SSD manufacturers. The storage industry inherited NAND, and there is a lot of science and engineering that has to go into making NAND work in the enterprise,” he said. “We understand this stuff. We treat NAND and flash memory like flash, we don’t treat it like a hard disk.”