Data Center Apparatus

September 8, 2016  12:23 PM

Dell acquisition plan leaves users hungry for details

Ed Scannell Profile: Ed Scannell
data center, EMC, VMware

IT professionals hoping for a taste of what the combined Dell-EMC-VMware will serve up in new products and strategies got a bowl of steam instead.

In formally announcing the completion of their $67 billion deal, executives from Dell and EMC spent most of their presentation on Sept. 7 reciting the resume of the combined companies, reminding us of how big and bad they plan to be in the IT world:

• The world’s largest privately held technology company ($74 billion in revenues);
• Holding the number one, two or three position in several major product categories including PCs, servers, storage and virtualization; and
• A corporate structure that supposedly allows them to innovate and pivot quickly like a startup, but with pockets deep enough to heavily invest in research and development for the long term.

“We are going to be the trusted provider of essential infrastructure for the next industrial revolution, so organizations can build the next generation of transformational IT,” said Michael Dell, chairman and CEO of Dell Technologies.

If nothing else, you have to admire Mr. Dell’s confidence and ambitions. On paper, the new company at least appears to have a fighting chance of accomplishing this objective. With archrivals IBM and HPE either selling, spinning off, or merging huge pieces of themselves and creating much smaller competitors, Dell Technologies could indeed end up being the biggest and baddest boy on the IT block.

But what looks formidable on paper — as we have seen in this industry time and again — ends up not being worth the paper it’s written on. For instance, Hewlett Packard execs believed they would dominate the world of desktop PCs and Intel-based servers after buying Compaq Computer Corp. in 2001, only to squander whatever advantages the latter had when dozens of key Compaq executives left and a number of key products were dropped just a year or two after the deal.

“They have enough resources to compete with just about anyone,” said one long-time IT professional with investments in both Dell server and EMC storage products. “But they haven’t specifically laid out how they [Dell-EMC-VMware] will work together to make, say, cloud-based environments work hand-in-glove with on-premises environments.”
Such a lack of clarity, he added, “reminds me of a certain presidential candidate with huge ambitions and few details about how he gets there.”

It’s not just the lack of specifics about how the combined companies will work together that makes some skeptical. It is also Michael Dell’s bold claim that the new company can “innovate like a startup.” But can a newly formed $74 billion elephant keep pace not just with real jackrabbit startups, but also invest enough to match the R&D dollars IBM, Microsoft and Google typically spend each year?

Dell certainly has a history of being a fast follower in the hardware business over the past 30 years, but it has never been a company that felt comfortable making a living out on the razor’s edge.

Michael Dell’s answer to growing this now mammoth business while still delivering more innovative products faster seems to revolve around Dell’s decision to go private a couple of years ago.

“The single best way to get bigger, but also move faster, is to detach yourself from the 90-day reporting cycles that are common among larger companies,” he said. “I think going private has kicked the company into a new gear. We have had 14 quarters in a row of gaining [market] share in our client business. Dell Technologies can act fast and not be governed by short-term concerns.”

Going private may indeed have helped spur consistent growth in Dell’s client business – a business that is declining not just for Dell but for all of its major competitors – but he failed to mention how it has produced any significant technology innovation in the past couple of years.

As announced earlier this year, the new company is called Dell Technologies, with Michael Dell serving as chairman and CEO. The company is split into two groups: Client Solutions, headed by Dell president and vice chairman Jeff Clarke, and an infrastructure group led by David Goulden, the former head of EMC’s Information Infrastructure organization. Both organizations will be supported by a Dell EMC Service unit.

The rest of the old EMC Federation — namely VMware, Virtustream, Pivotal, Boomi, RSA and SecureWorks — will continue to function independently, free to pursue their own strategic agendas and develop their own ecosystems, “which is our commitment to remaining open and offering customer choice,” said Michael Dell. “But we have also strategically aligned our technologies to deliver integrated solutions like hybrid cloud, security and seamless infrastructure technology from the edge to the core to the cloud.”

Again, all that looks good on paper — but can this melding of two giant IT suppliers work beneficially for users where so many similar unions have failed? Maybe at the next press conference Dell can offer users at least an appetizer, instead of a bowl of steam, as to how this will all work.

April 29, 2016  4:12 PM

Users, experts speak out about DCIM tools

Austin Allen Profile: Austin Allen
data center, DCIM

Companies that offer DCIM tools position them as essential, promising a holistic view of data center performance. The DCIM market went from volatile to fairly stagnant, though a buyout between two of the major vendors could jump-start demand.

There are several problems with data center infrastructure management (DCIM) tools at the moment.

DCIM tools can be fairly complex, and IT pros may initially be overwhelmed by the amount of information the tools provide. Going all in with DCIM may even require organizational changes, so slowly adding tools is probably a better bet.

These three comments highlight the broad points of view around the industry about DCIM.

DCIM tools help solve problems like the one craigslist engineer Jeremy Zawodny posted on Quora, an open question-and-answer website:

There’s more potential in DCIM than power and cooling measurements or even asset controls. According to data center facilities expert Robert McFarlane, DCIM tools will fall away from the forefront, but that might not be a bad thing.

“DCIM will become less of the big industry buzz word and settle down into the background,” McFarlane said. He doesn’t think that means DCIM will be less important, but rather, that IT pros will take a close look at DCIM when they want to track a specific metric in the deployed infrastructure. Some in the industry even see DCIM being essential to preventative data center maintenance.

Potential users who invest heavily in DCIM tools today expect a broad, integrated platform that isn’t always reality. Commenter ‘NoteShark’ detailed this disconnect between expectations and reality in response to Robert Gates’ story “Buyout could give stagnant DCIM tools market a boost” from February 2016 (linked above).

For some, configuration drift seems to occur because the ops and facility teams have only a normative description on which to base their designs — from racks to system architectures. When the tool doesn’t have input from everywhere in the stack, from facility to app, DCIM tools don’t live up to their fullest potential. And while configuration drift can happen at every level, ‘NoteShark’ goes on to say that a DevOps-type IT environment, where there is more communication and a better flow of information between dev and ops teams, would benefit most from thorough asset and portfolio management alongside current DCIM tools’ abilities in facility and hardware tracking.
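The drift ‘NoteShark’ describes boils down to a diff between the normative description and what is actually discovered on the floor. As a rough illustration — the rack and asset names below are invented, not drawn from any particular DCIM product — a minimal drift check might look like this:

```python
# Hypothetical sketch: flag configuration drift by diffing a normative
# (intended) rack layout against what a discovery scan actually finds.
intended = {
    "rack-a1": {"srv-01", "srv-02", "sw-edge-1"},
    "rack-a2": {"srv-03", "srv-04"},
}

discovered = {
    "rack-a1": {"srv-01", "sw-edge-1"},         # srv-02 moved without a ticket
    "rack-a2": {"srv-03", "srv-04", "srv-99"},  # undocumented addition
}

def find_drift(intended, discovered):
    """Return per-rack sets of missing and unexpected assets."""
    drift = {}
    for rack in intended.keys() | discovered.keys():
        want = intended.get(rack, set())
        have = discovered.get(rack, set())
        missing, extra = want - have, have - want
        if missing or extra:
            drift[rack] = {"missing": missing, "unexpected": extra}
    return drift

for rack, delta in sorted(find_drift(intended, discovered).items()):
    print(rack, delta)
```

A real DCIM deployment would pull the “discovered” side from sensors and agents across the stack, which is exactly the facility-to-app visibility the commenter argues is missing.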

Where do you stand on DCIM tools’ usefulness and their future?

January 26, 2016  10:12 AM

Colocation and cloud providers experience outage woes

Austin Allen Profile: Austin Allen
AWS, Cloud infrastructure, Data center colocation, Verizon

Cloud infrastructure offerings increased in resiliency in 2015, assuaging the fears of many businesses looking to switch some applications or transition production IT entirely to the cloud. Enterprises want to save money while retaining the same performance, which cloud providers aim to deliver. Granted, 2015 wasn’t a perfect year.

While evaluating cloud providers’ reliability is difficult since there are few independent data sources, it is not impossible. SearchCloudComputing created a general assessment of cloud infrastructure performance in 2015 by combining a few sources of data, including a CloudHarmony snapshot of cloud provider performance over a 30-day period and Nasuni’s reports on the cloud providers that it uses.

In February 2015, Google’s infrastructure-as-a-service offering, Google Compute Engine (GCE), experienced a global outage lasting more than two hours. At its peak, which lasted 40 minutes, outbound traffic from GCE saw a 70% loss of flows.

Months later, Amazon Web Services (AWS) experienced outages over a weekend in September that affected content delivery giant Netflix and throttled service for other AWS users in the U.S.-East-1 region while recovery efforts took place. Compared with previous years, when AWS suffered some major outages, 2015’s cloud problems were notably less severe, more of a slowdown than a full stop. Even so, the list of AWS services affected was longer than the list of services unaffected.

Is Colo the Way to Go?

Even though offerings from cloud providers are improving, some companies found that the cloud just couldn’t handle their business needs. Since 2011, Groupon has been moving away from the cloud and to a colocation provider. Cost drove the online deals company towards running its own data center IT, with its enterprise needs covered in nearly every area, from databases and storage to hosting virtual machines.

However, colocation providers aren’t free of problems. A study of the costs of data center outages from Emerson Network Power and the Ponemon Institute found that UPS system failure accounted for a quarter of all unplanned outages, while cybercrime rose from 2% of outages in 2010 to 22% in 2016.

Verizon’s recent data center outage that took airline company JetBlue offline for three hours and grounded flights highlights the importance of failover plans and redundant power. Verizon, which runs its own data centers for its telecom business, is a surprising sufferer in this outage scenario, according to some observers.

Companies that run owned data centers aren’t free from the same problems that plague cloud and colocation data centers, from stale diesel fuel to poor disaster recovery planning in advance of an attack, error or natural disaster. Data center IT staff must consider how much oversight they have over potential problem areas, and how much control they want — or can have — over the outage and how it is resolved. Visibility into the outage and its aftermath also will vary from provider to provider.
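None of these providers publishes its failover logic, but the principle the JetBlue incident underlines — always have a tested path to a second site — can be sketched in a few lines. The endpoint names and the probe are invented for illustration:

```python
# Hypothetical sketch of an application-level failover check: prefer a
# primary endpoint, fall back to a secondary when the primary is unhealthy.
# The health probe is injected so the logic is testable without a network.

def pick_endpoint(endpoints, is_healthy):
    """Return the first healthy endpoint in priority order, or None."""
    for ep in endpoints:
        if is_healthy(ep):
            return ep
    return None

# Simulated probe: the primary site is down, as in the JetBlue outage.
down = {"primary.example.com"}
choice = pick_endpoint(
    ["primary.example.com", "secondary.example.com"],
    lambda ep: ep not in down,
)
print(choice)  # secondary.example.com
```

The hard part in practice is not this selection logic but keeping the secondary site’s data and capacity ready to absorb the traffic — which is where many failover plans quietly fail.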

December 8, 2015  1:37 PM

Is your dream geek gift on the list?

Meredith Courtemanche Profile: Meredith Courtemanche
data center

Each year, SearchDataCenter ushers in the holiday season with a geek gift guide by Beth Pariseau, who enjoys a brief break from breaking stories about AWS public cloud to tell you about what to find on Amazon’s other major property.

This year, SearchDataCenter’s writers and editors decided to get in on the fun, and share what tech gift they’d like to unwrap:

Austin Allen, assistant site editor: “A Pebble Time Round. It is the first smartwatch that actually passes as a watch because it’s so thin and light.”

Geekiest gift he’s ever gotten? “A Motorola Xoom.” You might remember the Super Bowl commercial for it.

Michelle Boisvert, executive site editor: “The Garmin Forerunner 920XT watch. As a triathlete and a Type A personality (they go hand in hand), I like to track everything during my training and races. What was my swim pace, my transition time, my cadence on the bike? Currently, I have two different watches I use: an old school Timex wristwatch for swimming and a Garmin FR60 for running. This works for training — when I have time to swap watches — but not in races. The Garmin Forerunner 920XT is a single watch that tracks swim distance and speed (in the pool or in open water!), pace, power output, heart rate and cadence on the bike (with optional bike mount) and all the bells and whistles of data wanted during a run. So, if anyone happens to have an extra $450 lying around, you know where to find me.”

Geekiest gift she ever received? “Probably a Tanita scale that measures weight and body fat percentage. And no, I did not want this. No one wants to see a scale around the holidays!”

Meredith Courtemanche, senior site editor: “I would love a new iPod. My iPod Touch is over five years old now, and likes to repeat a song two, sometimes three times before moving on to the next one. If anyone wants to come over and trance out to Bing Crosby’s White Christmas on repeat, hit me up.”

Geekiest gift she’s ever gotten? “Does a VTech from childhood count? It looked like a laptop anyway and since this was before toys could go online, my data is safe in an attic somewhere.”

Stephen J. Bigelow, senior technology editor: “I could go for a nice low-profile Bluetooth headset for the gym so that I can play music from my smartphone and still be able to work the machines without those silly, wired earbuds falling out or yanking my smartphone to the ground.”

What about you, IT reader? What do you want for the holidays?

July 7, 2015  3:40 PM

The best of SearchDataCenter in June

Meredith Courtemanche Profile: Meredith Courtemanche

The summer weather didn’t slow down anyone in the cool, dark halls of the data center. Catch up on the big news and expert advice from the past month that other data center pros found valuable.

The big picture:

Ten data center trends you need to know

These trends, shared at the Gartner IT operations conference, will shape the face of the data center sector for the coming years.

Top story on jobs:

Will hyper-converged systems take your job?

The buzz at Red Hat Summit included talk of converged and hyper-converged infrastructures. Many attendees were keen to learn if and how these systems would change their daily work.

Opinions to stir up conversation:

Great ideas that never took off

Do you run a direct-current data center with high-voltage racks built into a glacier? No? Neither do most of your peers. But just because a concept missed mainstream adoption or faded from use does not mean that we can’t learn something from it for tomorrow’s data centers.

Most helpful tip:

Data is nothing without data management

Big data means data sprawl and more work for data centers. This tip outlines ways to corral enterprise data and store it without exhausting your hardware and staff resources.

In the news:

HP turns to open source for UX revamp

HP told attendees at its HP Discover conference that the impetus for its Grommet user interface came from a decision to look like one company across its various enterprise tools and applications.

Bonus link:
The June issue of Modern Infrastructure

This e-zine covers everything from microservices to mega convergence in data center storage. Check out expert stories on bare metal, desktop security and big data as well.

June 15, 2015  9:12 AM

A great time to be a geek

Stephen Bigelow Profile: Stephen Bigelow
Big Data, Internet of Things, IT Strategy

The problem with getting older is that I sometimes find myself set in my ways — gravitating toward things I already knew (or was at least interested in). I confess that I sometimes feel a little overwhelmed by the many abstract concepts emerging across the industry, like big data and the Internet of Things, to name just a few. After all, I’m a hardware guy, and finding ways to monetize or justify business value in 26 billion connected devices, or to securely deliver streaming content to a multitude of remote device users, is tougher to wrap my brain around than the newest Intel command set. There are moments when I’d rather just move to Nebraska and raise alpacas.

But watching this morning’s keynote address by Gartner’s Chris Howard on “Scenarios for the future of IT” at Gartner IT Operations Strategies & Solutions Summit in Orlando, Fla., reminded me of something that I’d long-forgotten: IT has never been about servers and networks and stacks and all of the engineering stuff; IT is about solving business problems and enabling the business.

Back in those ancient days before the Internet (yes, I was there), IT supported the business by storing and serving up files and even supporting the groundbreaking notion of collaboration. Later, networks and user bases expanded, and businesses needed IT to solve new problems, allowing businesses to support remote users and market the business differently on that thing called the world-wide web.

As we fast-forward to today, Howard’s hour-long keynote focused on the challenges of the digital business. This included the importance of context: providing access to data that isn’t tied to devices, where devices have the intelligence to determine where you are and what you need. He also talked about the need for analytics that extend to the edge of the environment (not just the data center) to decide what data is important and how it should be used.

And while Howard cited numerous examples of these issues — where many of the working elements are already in place — there was NO mention of the underlying systems, networks, software, or other elements needed to make all of these business activities possible. It was then that I realized there shouldn’t be.

It’s not that the underlying parts aren’t important. It’s just that the underlying parts aren’t the point. Thinking back, it really never mattered what server or disk group served up files back in the day. The only goal was that IT needed to deploy, configure and maintain that capability. While today’s business demands and pace have changed dramatically, the basic role of IT remains essentially unchanged: to enable, protect and support those competitive business capabilities in a reliable, cost-effective manner. The underlying “stuff” is there, and IT professionals have the savvy to make it all work.

So the real challenge for today’s IT pros is to embrace these many new ideas and find the way to map those complex business needs to the underlying infrastructure, which must inevitably evolve and grow to meet ever-greater bandwidth, storage, and computing demands.

Who knows what the next few days in Orlando might bring? Maybe this old dog might actually learn a new trick or two?

June 12, 2015  9:13 AM

The 19 variables that most affect Google data centers’ PUE

Meredith Courtemanche Profile: Meredith Courtemanche
data center

Google used machine learning to parse the multitudinous data inputs on its data center operations, as a way to bust through a plateau in energy efficiency evidenced by its measured power usage effectiveness (PUE).

In a white paper describing the effort to push PUE below 1.12, Google data center engineer Jim Gao wrote that the machine learning approach does what humans cannot: model all the possible operating configurations and predict the best one for energy use in a given setting.
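For context, PUE is simply the ratio of total facility power to IT equipment power, so a PUE of 1.12 means the facility burns only 12% more power than the IT load itself. A quick back-of-the-envelope check — the kW figures below are illustrative, not Google’s:

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# The closer to 1.0, the less power goes to cooling, power delivery, etc.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# Hypothetical facility: 5,000 kW of IT load plus 600 kW of
# cooling and distribution overhead.
print(round(pue(5600, 5000), 2))  # 1.12
```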

The 19 factors that interrelate to affect energy usage are as follows, according to Google’s program:

  1. Total server IT load (kW)
  2. Total campus core network room IT load (kW)
  3. Total number of process water pumps (PWPs) running
  4. Mean PWP variable frequency drive (VFD) speed: Percent
  5. Total number of condenser water pumps (CWP) running
  6. Mean CWP VFD speed: Percent
  7. Total number of cooling towers running
  8. Mean cooling tower leaving water temperature set point
  9. Total number of chillers running
  10. Total number of dry coolers running
  11. Total number of chilled water injection pumps running
  12. Mean chilled water injection pump set point temperature
  13. Mean heat exchanger approach temperature
  14. Outside air wet bulb temperature
  15. Outside air dry bulb temperature
  16. Outside air enthalpy (kJ/kg)
  17. Outside air relative humidity: Percent
  18. Outdoor wind speed
  19. Outdoor wind direction

Gao states: “A typical large-scale [data center] generates millions of data points across thousands of sensors every day, yet this data is rarely used for applications other than monitoring purposes.” Machine learning can understand nonlinear changes in efficiency better than traditional engineering formulas.
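Gao’s actual model is a neural network trained on the 19 inputs above. As a much-simplified sketch of the same idea — synthetic data and a linear model standing in for Google’s real sensor feeds and network — one could fit a PUE predictor from a couple of operating variables:

```python
import random

# Toy version of the paper's idea: learn to predict PUE from operating
# variables. Two synthetic inputs (IT load, outside wet-bulb temperature)
# stand in for Google's 19; a linear model trained by batch gradient
# descent stands in for the neural network.
random.seed(0)

def sample():
    it_load = random.uniform(2000, 6000)   # kW
    wet_bulb = random.uniform(5, 25)       # degrees C
    # Invented "ground truth": PUE rises with wet-bulb temperature
    # (cooling towers work harder) and falls slightly as IT load rises
    # (fixed overhead is amortized over more useful work).
    target_pue = 1.05 + 0.004 * wet_bulb - 0.000004 * it_load
    return (it_load / 6000, wet_bulb / 25), target_pue  # normalized inputs

data = [sample() for _ in range(500)]

w, b, lr = [0.0, 0.0], 1.0, 0.3
for _ in range(3000):
    gw, gb, n = [0.0, 0.0], 0.0, len(data)
    for (x1, x2), y in data:
        err = w[0] * x1 + w[1] * x2 + b - y
        gw[0] += err * x1
        gw[1] += err * x2
        gb += err
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

# The fitted weights recover the planted trend: wet-bulb temperature
# pushes PUE up, IT load pushes it slightly down.
print(w, b)
```

The real system differs in every particular — 19 correlated inputs, nonlinear interactions, a neural network rather than a line — but the workflow is the same: feed historical sensor data in, learn a model of PUE, then search the model for operating set points that minimize it.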

Read the paper here.

May 29, 2015  9:19 AM

SearchDataCenter’s can’t-miss articles of May

Meredith Courtemanche Profile: Meredith Courtemanche

Time’s tight and alerts are always rolling in, so you’re bound to miss some great articles during the month. If you only have time to peruse a few good reads, here are the stories that other data center pros recommend.

Most controversial topic:

Cloud threatens traditional IT jobs, forces change

The tension between outsourcing to the cloud and keeping workloads in the owned on-premises data center is increasing, and especially pressuring traditional job roles within the IT department.

Must-watch technology trend:

Flash storage here, there — everywhere?

Flash storage offers some impressive improvements over disk-based technologies, and the integration options range from server-side flash to full arrays and everything in between.

Look inside your peers’ data centers:

Align the data center with corporate goals

Enterprises aren’t in the business of running data centers — they sell goods and provide services. But data centers are crucial to operations. Here’s how a coffee company and a law-school centric organization work better thanks to their data centers.

Best-read interview:

A check-in with Facebook’s Chef chief

Facebook’s Phil Dibowitz talks about the company’s DevOps migration and how they work with Chef and other open-source tools.

Best planning tips:

What to update and upgrade this year

Parts of your data center are due — perhaps long over-due — for improvements. Smart investments will pay off with higher performance, energy savings and more reliable service.

Bonus link:

Modern Infrastructure’s May issue

The May issue of Modern Infrastructure tackles containers, cloud and colocation options, the changes in Ethernet technology and more. Ever heard of ChatOps?

July 3, 2014  9:23 AM

Long weekend in the data center

Meredith Courtemanche Profile: Meredith Courtemanche

The IT team never really gets a break — it’s the first week of July and everyone else is taking a long weekend, but you’re on call, minding the beeps and flashes of some uncaring, disinterested servers. While the storage array quietly dedupes its backups, take a little downtime with data center comics, viral videos and other fun links.

This post was inspired by a conversation with Kip and Gary cartoonist Diane Alber. You can check out her comic and see what kind of trouble could be brewing in your data center if you did clock off for a week at the beach.

More fun:
An oldie but goodie, no one makes data center backup media come to life like John Cleese:

Yeah, you’ve been there. The Website is Down pits the sys admin against the world, or at least against the sales team in the conference room. Commiserate with the series on YouTube (warning: language, though we’ve chosen a safe-for-work episode to start you off):

Let’s hope you haven’t been here — this .gif of a server rack falling off the loading dock is strangely mesmerizing:

Is there one person on your IT team looking a little frazzled and muttering to himself about five nines? Try writing it all down, a la Sh*t My Cloud Evangelist Says (again, language):

What are your favorites? DevOps reaction .gifs? Network admin rants? Lego men running a colo?

April 25, 2014  9:15 AM

Converted-mine data center tour amidst rocks and racks

Meredith Courtemanche Profile: Meredith Courtemanche

Data center colocation providers have gotten creative with where they place facilities to save energy or increase security, and one cloud provider has found its home underground.

Lightedge Solutions, a cloud infrastructure and colocation provider in the U.S. Midwest, opened a facility in SubTropolis Technology Center, a converted limestone mine in Kansas City, Mo. The underground data center build eschewed precast walls and typical construction, saving 3-6 months on the new build compared to an above-ground data center, according to president and COO Jeffrey Springborn.

“Looking back, everything has gone really smoothly for a first project in a retired mine,” Springborn said.

Inside Lightedge Solutions' underground data center.

Figure 1. The limestone walls act as external insulation, absorb heat from the electronic equipment and provide natural security for equipment hosting corporate and sensitive data. “It’s a hardened facility that’s ready to go in a cookie-cutter fashion,” Springborn said. Pictured: Kansas City Chiefs owner Clark Hunt, whose family owns SubTropolis Technology Center, speaking with Missouri Gov. Jay Nixon at Lightedge’s grand opening in April 2014.

Lightedge colocation and cloud hosting infrastructure

Figure 2. The hardened environment of the underground mine appeals to high-security industries, Springborn said, such as government and medical IT. But cloud infrastructure is so hot that the mine’s users will also include a mix of local enterprises that want to migrate off-premises to the cloud or to colocate their own equipment. The cloud hosting infrastructure that Lightedge uses in the Kansas City facility matches the infrastructure in its other facilities. Because of its mix of enterprise customers, Lightedge’s facility provides private cloud hosting without shared equipment.

Lightedge's connectivity-inspired entrance

Figure 3. Lightedge’s cloud hosting infrastructure comprises Cisco and EMC hardware with a VMware cloud layer. It uses high-speed 10G network connections between data centers and software-defined networking to ease network management, symbolized in the lightscaping at the colocation facility’s entrance.

The chiller and generators at Lightedge in SubTropolis

Figure 4. Because Lightedge was the first data center built into the former limestone mine, the company had to plan the portal in and out of the mine for its above-ground generators’ and chiller’s pipes. Pipe location and design must support future expansion of the data center, while accommodating the mine structure and easements.

Lightedge colocation center site plan

Figure 5. Without requiring a typical above-ground building, Lightedge can deploy new 10,000-square-foot quadrants in four to five months. Building above ground, Springborn said, Lightedge would have put in the shell infrastructure for 50,000 to 100,000 square feet, paying for and maintaining the structure before it was useful to the business. This site plan shows the grid-like configuration of Lightedge’s data center.
