Data Center Apparatus


October 6, 2017  12:23 PM

Hungry data center vendors may be good for buyers

Robert Gates
data center

Data center hardware spending remains soft, but that could empower enterprise buyers who see the benefits of better performance and control of their own gear.

The overall server market remains stagnant, with little help from enterprise buyers. Revenue was up 6.3% in the second quarter from a year ago, propped up by cloud service providers and Intel’s new Skylake processors, according to the Worldwide Quarterly Server Tracker from analyst firm IDC. Gartner’s server revenue numbers (2.8% year-over-year growth) also show no boost from enterprise purchases; the firm instead cites data center infrastructure build-outs in China and hyperscale data center purchases of original design manufacturer gear.

Synergy Research Group draws an even more sober picture of data center hardware spending: revenue from traditional non-cloud data center hardware and software — servers, operating systems, storage, networking and virtualization software — has sunk 18% during the past two years.
With no end in sight to slumping enterprise server demand, pressure continues to build on server vendors, and that spells an upside for enterprise IT buyers, said John Dinsdale, chief analyst and research director at Synergy.

“Vendors will be facing increased competition for a somewhat smaller pool of dollars, which could result in more aggressive pricing and also sales and support teams that are working really hard to keep clients happy,” he said.

Consider Tyreworld, a tire wholesaler in Dortmund, Germany, which runs its website in the cloud but keeps its IT infrastructure in its own data center. To keep up with growth in its dropshipping business, this summer the company considered several options to speed up its Microsoft SQL Server database, including a hardware upgrade or a move to one of two cloud computing data centers in Germany, said Manuel Hanke, IT manager at Tyreworld.

He chose the former: DataCore MaxParallel for Microsoft SQL Server on top of new Huawei servers. A move to the cloud with comparable hardware would have cost 800 euros per month, while the servers and software were a one-time charge of 7,500 euros, he said.
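Taken at face value, those two figures make the break-even point easy to estimate. Here is a minimal back-of-the-envelope sketch in Python, using only the numbers Hanke cited and ignoring power, space, staff time and eventual hardware refresh:

    # Break-even estimate from the figures quoted above (all other costs ignored).
    one_time_hardware = 7500   # euros: Huawei servers plus DataCore MaxParallel
    cloud_per_month = 800      # euros: comparable cloud capacity, per month

    breakeven_months = one_time_hardware / cloud_per_month
    print(f"On-premises gear pays for itself after ~{breakeven_months:.1f} months")
    # ~9.4 months; every month beyond that favors owning the hardware.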

“Hard drives in the cloud are expensive because we need so much space,” Hanke said. “That would be too much in the cloud.”



Some specific enterprise workloads still spark interest in data center hardware spending. On-premises data centers remain important for data sovereignty, and for workloads with strict service level agreements such as block storage. New servers often go toward analytics workloads — not the traditional next-morning inventory reports, but modern high-IOPS, high-frequency, low-latency workloads such as real-time analytics that benefit from multi-core compute. In-memory databases also remain largely on premises because of concerns about available bandwidth, latency and network speed.

Many companies need a foothold in both worlds — cloud for test and development, but in-house infrastructure to run production workloads. Server vendors are keen to deliver from both ends, as seen with Hewlett Packard Enterprise’s recent acquisition of Cloud Technology Partners.

And some companies, like Tyreworld, don’t want to deal with an external data center and are happy to own their IT infrastructure.

“Even if you have a dynamic business, as we do, an in-house solution works very well,” Hanke said.

September 15, 2017  10:06 AM

IT buyers will make up their own mind, even if GE went with Dell

Robert Gates
data center, Dell EMC, GE, general electric

GE’s software will run on Dell Technologies hardware from now on – but most enterprises don’t care much about big-name endorsements.

A Dell-GE deal last week made Dell the primary IT infrastructure supplier to GE for hardware and services, spanning servers, storage, backup, client products, peripherals and professional services. The multi-year deal is one of the largest non-government contracts in the history of Dell Technologies, Dell or EMC, according to Dell, which did not disclose financial terms of the deal or specific products involved.

The eye-popping scope of the Dell-GE deal is good news for investors and the company’s bottom line, but it likely won’t sway many IT buyers. When CIOs, IT directors and other IT buyers look around to see what other companies buy, they tend to focus on their own industry, said Stuart Miniman, an analyst at Wikibon, Inc.

That’s especially true here, because a selling point for GE’s Predix, its Internet of Things platform, is that users don’t need to think about the hardware underneath.

“If I look at those commercials that GE puts out about hiring software developers, the last thing I should be worrying about is what servers or storage is sitting under that,” Miniman said.

Dell gets credit that a cutting-edge platform such as Predix runs on its hardware. Several years ago, experts predicted that many service providers and some companies building new platforms, such as GE with Predix, would run on original design manufacturer or whitebox hardware.

“The reality is that there has not been a wholesale move to that,” he said.

GE’s search for a single infrastructure provider that broadly supports the internal operations of GE and GE Digital meant few vendor options, said John Spooner, an analyst at Technology Business Research in Hampton, N.H., who tracks both companies.

“If they only want one vendor, there aren’t many companies that offer everything from a laptop PC to a high-end storage array and complex server,” he said. “It is really only Dell that does that at this time.”

Hewlett Packard Enterprise (HPE) and HP Inc., Lenovo, and Cisco could have been in the mix as multi-line vendors for everything from high-end servers to laptops, Spooner said.

“That is what any CIO or IT department would do,” he said.

The Dell-GE deal also creates synergy between the two suppliers. For example, when GE talks to a potential Predix customer who asks for a complete package, GE might recommend Dell and Dell Technologies, although ultimately the customer decides on the hardware provider. However, GE isn’t limited to Dell and its sister companies, including VMware, Pivotal, RSA and Virtustream; it also counts Microsoft as a partner and uses Amazon Web Services.

With the breadth of technologies and services illustrated in the Dell-GE deal, Dell looks more like the IBM of old, and also Hewlett-Packard after it bought Electronic Data Systems in 2008, said Charlie Rice, vice president of engineering at Gametime United, Inc., a ticket broker in San Francisco.
And that’s not necessarily a good thing.

“How did that work out for HP?” he said.

Meanwhile, GE’s decision to outsource so much of its IT functions is ironic: GE now promotes itself as a high-tech company in TV ads, but handing over its IT keys suggests it has nothing innovative to add, he said.

“Didn’t Kodak do this same thing?” he asked. “So sure, outsource your future — what could possibly go wrong?”

Robert Gates covers data centers, data center strategies, server technologies, converged and hyper-converged infrastructure and open source operating systems for SearchDataCenter. Follow him on Twitter @RBGatesTT or email him at rgates@techtarget.com.


May 10, 2017  7:58 PM

Unoriginal pricing model finds a place in the data center

Robert Gates
data center, Data Center Hardware

Flexible hardware pricing?  What took so long?

Perhaps a few IT pros wondered this when they first heard about Cloud Flex, the pay-per-use pricing introduced as a new buying model for hyper-converged infrastructure at Dell EMC’s annual customer event this week.

It is a billing model they see elsewhere — in software as a service and infrastructure as a service, say — where they pay for what they use. But it’s uncommon when it comes to data center hardware, where spending six figures or more on a server is still the norm and is often the only option (even with financing).

Dell EMC executives said this week that as many as 25 storage customers have already started with Cloud Flex’s cousin, Flex on Demand, which is available for all of the company’s storage products. Company executives have also outlined some specifics about two customers, without naming them. A service provider in the mid-Atlantic committed to 60% of its shipped storage systems at a set cost per gigabyte; it takes that cost and then marks it up as other services are included.

When a customer requests more storage, the provider taps into the unused 40% the same day and knows the exact cost. It avoids getting a quote, waiting for gear to be shipped and having to configure and provision it. The service provider’s CFO and CEO said the company plans to go all in on Flex on Demand for VMAX, Unity and Data Domain.
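Dell EMC hasn’t published the contract terms, but the committed-plus-buffer shape of the model is easy to sketch. The capacity and per-gigabyte rate below are invented purely for illustration; only the 60/40 split comes from the example above:

    # Hypothetical committed-plus-buffer billing, loosely modeled on the
    # Flex on Demand example above. Rates and capacities are made up.
    SHIPPED_GB = 500_000        # capacity physically installed on site (assumed)
    COMMITTED_FRACTION = 0.60   # the 60% the provider pays for regardless of use
    RATE_PER_GB = 0.03          # set cost per gigabyte per month (assumed)

    def monthly_bill(used_gb):
        """Bill the committed 60% as a floor; usage above it is metered, capped at what's installed."""
        committed_gb = SHIPPED_GB * COMMITTED_FRACTION
        billable_gb = min(max(used_gb, committed_gb), SHIPPED_GB)
        return billable_gb * RATE_PER_GB

    print(monthly_bill(250_000))  # below the commitment: still billed for 300,000 GB
    print(monthly_bill(400_000))  # bursting into the unused 40%: billed for 400,000 GB

The appeal is that the buffer capacity is already racked and powered, so tapping into the unused 40% is a billing event rather than a procurement cycle.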

Analyst Zeus Kerravala pointed out that most IT pros will buy the gear they need to get the job done, as long as they have the budget to do it. Maybe these new pay-as-you-go pricing models will make it easier to convince the bean counters that the money really is available to buy the gear IT pros need in the data center.

But the cost of HCI is definitely an issue for some users. Many IT pros avoid buying HCI appliances, such as VxRail and Nutanix, because of the cost, said Luis Quiroa, a support engineer at Sisteco S.A. in Bodega, Guatemala.

Some have steered toward VMware vSAN because of the lower cost, Quiroa said. Without a significant upfront cost, the new Cloud Flex billing model could put HCI appliances back into consideration for companies that want to use HCI.

Users have told me that Cloud Flex seems to make sense. And of course it does: it is a model they have seen for years in other areas of IT that was slow to come to hardware.

It is also a model that will arrive in the data center later this year with Microsoft Azure Stack, which will use similar pricing.

“It will have a consumption-oriented economic model for all of the services, just like Azure public cloud economics,” said Chad Sakac, president of the converged platform and solutions division of Dell EMC. “The economics will mirror the public cloud and be an extension of the public cloud on to your premises.”

In addition to the same pricing model, it will have the same application deployment experience in Azure public cloud and Azure Stack. In many data centers, Azure Stack will be built on HCI. Now we are starting to connect the dots and the picture is emerging.

It will be interesting to see if one of the other large IT vendors without a public cloud – Hewlett Packard Enterprise – introduces something similar sometime soon. We’ll see next month when HPE hosts its own customer event.


April 20, 2017  9:40 AM

Flyover country: Making data center market growth great again

Robert Gates
data center, Data center budget considerations, Data center colocation, Data center facilities, Data Center locations, Data Center Operators, Data centers

Your data center provider is more likely than ever to have a southern drawl or Midwestern lilt.

The Southeast, Southwest and Midwest are among the fastest-growing data center markets in the United States, according to year-end 2016 data from Synergy Research Group. This illustrates how data center market growth has spread far beyond its traditional strongholds of Washington, D.C., New York and Silicon Valley.

That lesson from Synergy’s latest report is déjà vu all over again, as former major league catcher Yogi Berra once said. There’s much more to the United States than just the metro areas along the two coasts, as we all learned after the November election.

In all, the top ten U.S. metro areas accounted for 74% of retail and wholesale colocation revenue in 2016, according to Synergy. Dallas is right behind Washington, D.C. as the highest-growth metro region for data centers. Chicago, Atlanta and Austin also all outpaced the national average in 2016. The one market that cooled off was Silicon Valley.

“There has been a dilution of new growth because there are other options,” said Sean Iraca, vice president for service enablement at Digital Realty Trust, the largest provider in five of the top 10 markets in the country.


A lack of data center market growth in some medium and small metro areas hasn’t been for lack of demand, said Bryan Loewen, executive managing director of real estate firm Newmark Grubb Knight Frank’s data center practice. Instead, the major suppliers of the money needed for large data center build-outs are more reluctant to back a project in smaller markets.

Demand still exceeds supply in four of the top five markets — Washington, D.C., Chicago, Silicon Valley and Dallas, but not New York — so those spots are all safe bets for investors.

That is yet another reminder: decisions about technology investments often are made not by technologists, but by the companies that hold purse strings big enough to fund a data center build.

Metro areas such as Silicon Valley and Washington, D.C., saw data center construction five years ago because enterprises wanted private access to physical cross connects to achieve low-latency, low-bandwidth cloud computing, Iraca said. In recent years, all of the hyperscale and cloud computing companies have introduced and expanded private access and added availability, which has shifted some of that demand to other markets.

Although hyperscale companies still represented the largest source of demand for colos in 2016, everyday organizations shouldn’t fear being squeezed out of colo space. More mature data center markets mean less pricing volatility, and colocation buyers are better educated and have a better understanding of what they are consuming than ever, Loewen said.

“Consumers are not allowing data center operators to get outsized returns,” he said.


March 27, 2017  4:02 PM

You did what? Blunders, boo-boos and bloopers from data center outages

Robert Gates
AWS, CIO, data center, IBM

Data center outages at Delta Air Lines and Amazon Web Services stole the headlines in recent months, but there are plenty of other outages at everyday enterprises that fly under the radar.

IT pros dished the dirt last week on the show floor at IBM Interconnect, anonymously sharing tales about their data center outages at the hybrid cloud booth. The stories illustrate the variety of problems behind data center downtime, and offer a reality check that the next outage could be caused by just about anything.

One CIO, two weeks into a new position, claimed he was hired to implement a “transformational agenda” — but first he endured a one-week outage of a core, externally facing customer system. “I spent months delaying my agenda to focus on sustainability,” wrote the unnamed CIO.

An insurance company in Connecticut performed a data migration from its original system to a new platform, then shut down the old system, claimed another contributor. But when they attempted to bring up the new system, the data was corrupt.

In a networking tale of woe, an F5 refresh took out an entire website when a parameter meant to direct traffic to the least-loaded server instead sent the traffic to a test server. You can probably guess what happened next.

Another debacle cited the failure of an unspecified storage component, which degraded performance and ultimately triggered the disaster recovery plan. But there was one problem: “We had no way to fail back — not good,” wrote the IT pro.

Nature was blamed for one data center takedown — a squirrel chewed into a main power feed during maintenance to the data center’s battery backup. That caused a blackout — albeit a short one — with the data center going down for about five seconds until the generators kicked in. No word on whether any data — or the squirrel — was lost.

One IT pro lamented how a load test was conducted on production storage during working hours. It was a virtualized environment and nothing should have happened, but the ports became saturated, the network couldn’t handle the load and there was downtime.

Timing can be everything, and that was certainly the case when the hard drive died in a network staging server at one company — just before a new product was to be launched, according to the anonymous writer.

Backup for data center cooling and power systems is especially important, as shown by one story in which an IT pro claimed there was no UPS or generator backup for the cooling towers on the roof of the data center. When the power went out, the CPUs overheated with no working cooling system.

Don’t blame me

Notice a common theme? None of the authors accepts guilt in these stories of data center downtime. In fact, nobody is blamed in most cases. So much for the blameless post-mortem, even when it is anonymous. A majority of data center outages are caused by human error, which leaves us wondering exactly what the painful truth behind these outages was.

Now that you’ve read some tales from the data center trenches, what’s your best story about an outage and downtime?


March 27, 2017  9:03 AM

And the next sexy technology is … Blockchain?

Ed Scannell
data center

When I first noticed the line, there were maybe a few dozen people in it. The number quickly swelled to 100 and then 200 or more, as the line snaked its way down the aisles of the vendor exhibition at IBM’s Interconnect 2017 conference in Las Vegas.

It’s not that unusual to see show attendees lined up at a vendor’s booth waiting for, well, almost anything. This line, however, carried a certain sense of anticipation and energy, with people engaged in animated conversations and craning their necks to see if the long line was moving forward.

Who or what could they be waiting for? Was it a well-known computer industry or sports celebrity making an appearance, or perhaps a software giveaway?

Turns out it had nothing to do with anything like that. These people were waiting for a copy of a book with the decidedly unsexy title: Blockchain Revolution: How the Technology Behind Bitcoin Is Changing Money, Business, and the World.

Really? A book chronicling the rise of blockchain — or as some refer to it, the blockchain — draws this large a crowd? I felt a little like the clueless reporter in Bob Dylan’s Ballad Of A Thin Man with the chorus: Something is happening here and you don’t know what it is, do you Mr. Jones?

Well alright, maybe I’m not that clueless — I did spend the previous two days at the show listening to a series of executives from IBM and end-user companies sing the praises of the technology. Some even suggested that blockchain is the breakthrough technology the Internet has been waiting for.

There certainly is a lot of hype building around this technology. That’s nothing new in this industry, going back to the glitzy, multi-million dollar marketing campaigns for desktop products like Apple’s original Macintosh, or Microsoft’s Windows 95. For the most part these and other heavily hyped products lived up to those promises and/or went on to influence other breakthrough products.

But this one is different. The success of blockchain — and we are still waiting to see how successful it will become — figures to come more through its quiet adoption by large (and some smaller) enterprises. True Believers say it offers rock-solid security in the cloud through what is essentially a distributed database that records transactions in blocks in a way that makes tampering damn near impossible.
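For readers who want to see what “damn near impossible to tamper with” means mechanically, here is a stripped-down sketch of the chaining idea. It shows only the core data structure — real blockchains add distribution, consensus and signatures on top — and the records are invented for illustration:

    # Each block carries the hash of the previous block, so altering any past
    # record breaks every link after it and the change is immediately visible.
    import hashlib
    import json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    chain = [{"data": "genesis", "prev_hash": "0" * 64}]
    for record in ["alice pays bob 5", "bob pays carol 2"]:
        chain.append({"data": record, "prev_hash": block_hash(chain[-1])})

    def verify(chain):
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    print(verify(chain))                     # True
    chain[1]["data"] = "alice pays bob 500"  # tamper with history
    print(verify(chain))                     # False: the chain no longer checks out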

IBM is certainly one of those True Believers. Some company executives believe it is one of the company’s two or three most strategically important technologies. Big Blue is betting a good chunk of its future on blockchain, along with its Watson cognitive technology and an aggressive adoption of open source across its software portfolio.

The latest testimony to that is the raft of announcements the company made at Interconnect involving all three technologies, along with a featured presentation by Leanne Kemp, CEO of Everledger, about how her company uses IBM’s blockchain to prevent fraud in the diamond industry.

IBM is not alone in not just talking the talk but walking the walk about blockchain. Microsoft has stated its commitment to the technology and is working on its Project Bletchley, an open blockchain-as-a-service offering. JP Morgan now has two blockchain offerings aimed at the enterprise called Juno and Quorum. Accenture has introduced an early version of an “editable blockchain” allowing for a blockchain to be edited under what the company calls “extraordinary circumstances” to fix human errors or accommodate certain legal and regulatory requirements. And the Linux Foundation has its Hyperledger Fabric.

And in one last example of blockchain’s growing popularity, not just among fully matured IT geeks in enterprise shops but among aspiring geeks, Wiley this spring will release Blockchain For Dummies.

So if I am having difficulty grasping the finer points of blockchain this year as explained to me by technical industry experts, maybe you’ll find me in a similar line at a trade show looking to pick up a copy.


March 15, 2017  9:49 AM

One quantum leap for IBM, one small step for IT

Ed Scannell
data center

After cooking for more than three decades in its research laboratories, Big Blue has served up an appetizer of its quantum computing efforts, to give the IT world its first taste of what possibilities the emerging technology might offer.

The first morsel is a set of APIs to help developers build interfaces between IBM’s cloud-hosted five-qubit quantum computer and the company’s installed base of classical systems. Included with the new APIs is an improved quantum simulator, accessible through the IBM cloud, that can run algorithms on model circuits of up to 20 qubits.

IBM promises to follow up with a full-blown SDK in the first half of this year that lets programmers build simple quantum applications for both business and scientific use.

Despite the sci-fi aura around quantum computing, it would be a mistake to view the technology as the successor to IBM’s current mainframe and Watson technologies any time soon. Rather than a one-for-one replacement, it is intended more as a complementary technology that solves complex problems beyond the abilities of today’s traditional systems.

“There could be a universal fault-tolerant quantum computer able to do everything a classic computer can do, but that’s a long way off,” said Dave Turek, vice president of high performance computing and cognitive systems at IBM. “For now, what we are trying to do is solve problems that are intractable for conventional computers.”

One practical example is what Turek referred to as the traveling salesman problem. Conventional computers can easily calculate an optimal route for a salesman if it involves only two, three or four cities. But for 20 or more cities, the number of possible routes grows explosively and becomes too difficult for today’s systems.

“To solve that you have to assess a number of routes that can only be counted by counting the number of atoms in the universe. That is how big that number of routes gets,” Turek said.
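Turek’s “atoms in the universe” line is hyperbole, but the growth really is brutal. A quick sketch of the standard route count for a symmetric round trip — fix the starting city, and a tour and its reverse count as one — shows how fast brute force becomes hopeless:

    # Number of distinct round-trip routes through n cities: (n - 1)! / 2
    from math import factorial

    def tour_count(n_cities):
        return factorial(n_cities - 1) // 2

    for n in (4, 10, 20, 30):
        print(f"{n:>2} cities: {tour_count(n):,} possible tours")
    #  4 cities: 3
    # 10 cities: 181,440
    # 20 cities: 60,822,550,204,416,000   (~6 x 10^16)
    # 30 cities: ~4.4 x 10^30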

Another example of what quantum computing can do is to help discover more efficient and creative ways to produce greater quantities of ammonia, a key ingredient in fertilizer. With the world’s population expected to grow by roughly two billion people by 2050 and farmers having to feed many more people with the same amount of land at their disposal, such advances will be essential.

“Conventional computing models can’t get within a reasonable distance of what is going on at the molecular level to create models to solve such problems,” Turek said.

Other business problems that can be addressed with quantum computing, Turek said, include finding optimal routes for logistics and supply chains, coming up with better ways to model financial data and identifying risk factors in making investments, making cloud computing more secure through quantum physics and improving the capabilities of machine learning when handling large data sets.

Since last year, IBM and a handful of industrial partners — all members of the IBM Research Frontiers Institute, including Samsung, Honda, Canon and Hitachi Metals — have researched potential quantum applications and evaluated their viability for business use.

Full-blown quantum computing systems won’t be operational for several years, but opening the technology up to corporate developers will get IT shops thinking about what it can do, Turek said.

“I think you will see some very interesting things in the next two years as developers begin solving real commercial problems,” he said.

Specifications for the new Quantum APIs are available on GitHub at https://github.com/IBM/qiskit-api-py.

Ed Scannell is a senior executive editor with TechTarget. Contact him at escannell@techtarget.com.


September 8, 2016  12:23 PM

Dell acquisition plan leaves users hungry for details

Ed Scannell
data center, EMC, VMware

IT professionals hoping for a taste of what the combined Dell-EMC-VMware will serve up in new products and strategies got a bowl of steam instead.

In formally announcing the completion of their $67 billion deal, executives from Dell and EMC spent most of their presentation on Sept. 7 reciting the resume of the combined companies, reminding us of how big and bad they plan to be in the IT world:

• The world’s largest privately held technology company ($74 billion in revenues);
• Holding the number one, two or three position in several major product categories including PCs, servers, storage and virtualization; and
• A corporate structure that supposedly allows them to innovate and pivot quickly like a startup, but with pockets deep enough to heavily invest in research and development for the long term.

“We are going to be the trusted provider of essential infrastructure for the next industrial revolution, so organizations can build the next generation of transformational IT,” said Michael Dell, chairman and CEO of Dell Technologies.

If nothing else, you have to admire Mr. Dell’s confidence and ambitions. On paper, the new company at least appears to have a fighting chance of accomplishing this objective. With archrivals IBM and HPE either selling, spinning off or merging huge pieces of themselves and creating much smaller competitors, Dell Technologies could indeed end up being the biggest and baddest boy on the IT block.

But what looks formidable on paper — as we have seen in this industry time and again — often ends up not being worth the paper it’s written on. For instance, Hewlett-Packard execs believed they would dominate the world of desktop PCs and Intel-based servers after buying Compaq Computer Corp. in 2001, only to squander whatever advantages the latter had when dozens of key Compaq executives left and a number of key products were dropped just a year or two after the deal.

“They have enough resources to compete with just about anyone,” said one long-time IT professional with investments in both Dell server and EMC storage products. “But they haven’t specifically laid out how they [Dell-EMC-VMware] will work together to make, say, cloud-based environments work hand-in-glove with on-premises environments.”

Such a lack of clarity, he added, “reminds me of a certain presidential candidate with huge ambitions and few details about how he gets there.”

It’s not just the lack of specifics about how the combined companies will work together that makes some skeptical. It is also Michael Dell’s bold claim that the new company can “innovate like a startup.” Can a newly formed $74 billion elephant not only keep pace with real jackrabbit startups, but also invest enough to match the R&D dollars IBM, Microsoft and Google typically spend annually?

Dell certainly has a history as a fast follower in the hardware business over the past 30 years, but it has never been a company that felt comfortable making a living out on the razor’s edge.

Michael Dell’s answer to growing this now mammoth business while still delivering more innovative products faster seems to revolve around Dell’s decision to go private a couple of years ago.

“The single best way to get bigger, but also move faster, is to detach yourself from the 90-day reporting cycles that are common among larger companies,” he said. “I think going private has kicked the company into a new gear. We have had 14 quarters in a row of gaining [market] share in our client business. Dell Technologies can act fast and not be governed by short-term concerns.”

Going private may indeed have helped spur consistent growth in Dell’s client business — a business that is declining not just for Dell but for all of its major competitors — but he failed to mention how it has produced any significant technology innovation over the past couple of years.

As announced earlier this year, the new company is now called Dell Technologies, with Michael Dell serving as chairman and CEO. The company is split into two groups: Client Solutions, headed by Dell president and vice chairman Jeff Clarke, and an infrastructure group led by David Goulden, the former head of EMC’s Information Infrastructure organization. Both organizations will be supported by a Dell EMC Services unit.

The rest of the old EMC Federation — namely VMware, Virtustream, Pivotal, Boomi, RSA and SecureWorks — will continue to function independently, and those companies are free to pursue their own strategic agendas and develop their own ecosystems, “which is our commitment to remaining open and offering customer choice,” said Michael Dell. “But we have also strategically aligned our technologies to deliver integrated solutions like hybrid cloud, and security and seamless infrastructure technology from the edge to the core to the cloud.”

Again, all of that looks good on paper — but can this melding of two giant IT suppliers work beneficially for users where so many similar unions have failed? Maybe at the next press conference Dell can offer users at least an appetizer, instead of a bowl of steam, as to how this will all work.


April 29, 2016  4:12 PM

Users, experts speak out about DCIM tools

Austin Allen
data center, DCIM

Companies that offer DCIM tools position them as essential, promising a holistic view of data center performance. The DCIM market has gone from volatile to fairly stagnant, though a buyout between two of the major vendors could jump-start demand.

There are several problems with data center infrastructure management (DCIM) tools at the moment.

DCIM tools can be fairly complex, and IT pros may initially be overwhelmed by the amount of information the tools provide. Going all in with DCIM may even require organizational changes, so slowly adding tools is probably a better bet.

These three comments highlight the broad points of view around the industry about DCIM.

DCIM tools help solve problems like one that craigslist engineer Jeremy Zawodny posted on Quora, an open question-and-answer website, about data center failure rates.


There’s more potential in DCIM than power and cooling measurements or even asset controls. According to data center facilities expert Robert McFarlane, DCIM tools will fall away from the forefront, but that might not be a bad thing.

“DCIM will become less of the big industry buzz word and settle down into the background,” McFarlane said. He doesn’t think that means DCIM will be less important, but rather, that IT pros will take a close look at DCIM when they want to track a specific metric in the deployed infrastructure. Some in the industry even see DCIM being essential to preventative data center maintenance.

Potential users who invest heavily in DCIM tools today expect a broad, integrated platform that isn’t always the reality. Commenter ‘NoteShark’ detailed this disconnect between expectations and reality in response to Robert Gates’ story “Buyout could give stagnant DCIM tools market a boost” from February 2016 (linked above).


For some, configuration drift seems to occur because the ops and facility teams have only a normative description on which to base their designs — from racks to system architectures. When the tool doesn’t have input from everywhere in the stack, from facility to app, DCIM tools don’t live up to their full potential. And while configuration drift can happen at every level, ‘NoteShark’ goes on to say that a DevOps-type IT environment, where there is more communication and a better flow of information between dev and ops teams, would benefit most from thorough asset and portfolio management alongside current DCIM tools’ abilities in facility and hardware tracking.

Where do you stand on DCIM tools’ usefulness and their future?


January 26, 2016  10:12 AM

Colocation and cloud providers experience outage woes

Austin Allen
AWS, Cloud infrastructure, Data center colocation, Verizon

Cloud infrastructure offerings increased in resiliency in 2015, assuaging the fears of many businesses looking to switch some applications or transition production IT entirely to the cloud. Enterprises want to save money while retaining the same performance, which cloud providers aim to deliver. Granted, 2015 wasn’t a perfect year.

While evaluating cloud providers’ reliability is difficult since there are few independent data sources, it is not impossible. SearchCloudComputing created a general assessment of cloud infrastructure performance in 2015 by combining a few sources of data, including a CloudHarmony snapshot of cloud provider performance over a 30-day period and Nasuni’s reports on the cloud providers that it uses.

In February 2015, Google’s infrastructure-as-a-service offering, Google Compute Engine (GCE), experienced a global outage for over two hours. The outage was at its peak for 40 minutes, during which outbound traffic from GCE experienced a 70% loss of flows.

Months later, Amazon Web Services (AWS) experienced outages over a weekend in September that affected content delivery giant Netflix and throttled service for other AWS users in the US-East-1 region while recovery efforts took place. Compared to previous years, when AWS experienced some major outages, 2015’s cloud problems were less severe — more of a slowdown than a full stop. However, the list of AWS services affected was longer than the list of services unaffected.

Is Colo the Way to Go?

Even though offerings from cloud providers are improving, some companies have found that the cloud just couldn’t handle their business needs. Since 2011, Groupon has been moving away from the cloud and toward a colocation provider. Cost drove the online deals company toward running its own data center IT, with its enterprise needs covered in nearly every area, from databases and storage to hosting virtual machines.

However, colocation providers aren’t free of problems. A study of the costs of data center outages from Emerson Network Power and the Ponemon Institute found that UPS system failure accounted for a fourth of all unplanned outages, while cybercrime rose from 2% of outages in 2010 to 22% in 2016.

Verizon’s recent data center outage that took airline company JetBlue offline for three hours and grounded flights highlights the importance of failover plans and redundant power. Verizon, which runs its own data centers for its telecom business, is a surprising sufferer in this outage scenario, according to some observers.

Companies that run their own data centers aren’t free from the same problems that plague cloud and colocation data centers, from stale diesel fuel to poor disaster recovery planning in advance of an attack, error or natural disaster. Data center IT staff must consider how much oversight they have over potential problem areas, and how much control they want — or can have — over an outage and how it is resolved. Visibility into the outage and its aftermath also will vary from provider to provider.

