Data Center Apparatus

June 15, 2015  9:12 AM

A great time to be a geek

Stephen Bigelow
Big Data, Internet of Things, IT Strategy

The problem with getting older is that I sometimes find myself set in my ways — gravitating toward things I already knew, or was at least interested in. I confess that I sometimes feel a little overwhelmed by the many abstract concepts emerging across the industry, like big data and the Internet of Things, to name just a few. After all, I’m a hardware guy, and finding ways to monetize or justify business value in 26 billion connected devices, or to securely deliver streaming content to a multitude of remote device users, is tougher to wrap my brain around than the newest Intel instruction set. There are moments when I’d rather just move to Nebraska and raise alpacas.

But watching this morning’s keynote address by Gartner’s Chris Howard on “Scenarios for the future of IT” at the Gartner IT Operations Strategies & Solutions Summit in Orlando, Fla., reminded me of something that I’d long forgotten: IT has never been about servers and networks and stacks and all of the engineering stuff; IT is about solving business problems and enabling the business.

Back in those ancient days before the Internet (yes, I was there), IT supported the business by storing and serving up files and even supporting the groundbreaking notion of collaboration. Later, networks and user bases expanded, and businesses needed IT to solve new problems: supporting remote users and marketing the business differently on that thing called the World Wide Web.

As we fast-forward to today, Howard’s hour-long keynote focused on the challenges of the digital business. He stressed the importance of context: providing access to data that isn’t tied to a particular device, where devices have the intelligence to determine where you are and what you need. He also talked about the need for analytics that extend to the edge of the environment, not just the data center, to decide what data is important and how it should be used.

And while Howard cited numerous examples of these issues — where many of the working elements are already in place — there was NO mention of the underlying systems, networks, software, or other elements needed to make all of these business activities possible. It was then that I realized there shouldn’t be.

It’s not that the underlying parts aren’t important. It’s just that the underlying parts aren’t the point. Thinking back, it really never mattered which server or disk group served up files back in the day. The only goal was that IT needed to deploy, configure and maintain that capability. While today’s business demands and pace have changed dramatically, the basic role of IT remains essentially unchanged: to enable, protect and support those competitive business capabilities in a reliable, cost-effective manner. The underlying “stuff” is there, and IT professionals have the savvy to make it all work.

So the real challenge for today’s IT pros is to embrace these many new ideas and find ways to map those complex business needs to the underlying infrastructure, which must inevitably evolve and grow to meet ever-greater bandwidth, storage and computing demands.

Who knows what the next few days in Orlando might bring? Maybe this old dog will actually learn a new trick or two.

June 12, 2015  9:13 AM

The 19 variables that most affect Google data centers’ PUE

Meredith Courtemanche
data center

Google used machine learning to parse the multitudinous data inputs from its data center operations, as a way to break through a plateau in energy efficiency, as measured by power usage effectiveness (PUE).

In a white paper describing the effort to push PUE below 1.12, Google data center engineer Jim Gao wrote that the machine learning approach does what humans cannot: model all the possible operating configurations and predict the best one for energy use in a given setting.

The 19 factors that interrelate to affect energy usage are as follows, according to Google’s program:

  1. Total server IT load (kW)
  2. Total campus core network room IT load (kW)
  3. Total number of process water pumps (PWPs) running
  4. Mean PWP variable frequency drive (VFD) speed (percent)
  5. Total number of condenser water pumps (CWP) running
  6. Mean CWP VFD speed (percent)
  7. Total number of cooling towers running
  8. Mean cooling tower leaving water temperature set point
  9. Total number of chillers running
  10. Total number of dry coolers running
  11. Total number of chilled water injection pumps running
  12. Mean chilled water injection pump set point temperature
  13. Mean heat exchanger approach temperature
  14. Outside air wet bulb temperature
  15. Outside air dry bulb temperature
  16. Outside air enthalpy (kJ/kg)
  17. Outside air relative humidity (percent)
  18. Outdoor wind speed
  19. Outdoor wind direction

Gao states: “A typical large-scale [data center] generates millions of data points across thousands of sensors every day, yet this data is rarely used for applications other than monitoring purposes.” Machine learning can understand nonlinear changes in efficiency better than traditional engineering formulas.
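Google’s actual model is a neural network trained on historical sensor data, and its code isn’t public; but the shape of the approach — feed the 19 operating and weather variables in, predict PUE out, then search configurations for the lowest prediction — can be sketched with a much simpler stand-in. The sketch below fits a plain least-squares model on synthetic data; every number and feature value is illustrative, not Google’s:

```python
import numpy as np

# Illustrative stand-in for Google's PUE model: 19 inputs in, predicted PUE out.
# We generate synthetic operating data, fit ordinary least squares, then scan
# candidate configurations for the lowest predicted PUE.
rng = np.random.default_rng(0)
n_samples, n_features = 500, 19  # the 19 variables listed above

X = rng.uniform(0.0, 1.0, size=(n_samples, n_features))
true_w = rng.uniform(-0.02, 0.02, size=n_features)     # made-up sensitivities
pue = 1.12 + X @ true_w + rng.normal(0.0, 0.001, size=n_samples)

# Fit: append a bias column and solve least squares.
Xb = np.hstack([X, np.ones((n_samples, 1))])
w, *_ = np.linalg.lstsq(Xb, pue, rcond=None)

def predict_pue(features):
    """Predict PUE for one operating configuration (19 values)."""
    return float(np.append(features, 1.0) @ w)

# Mimic the paper's use case: among candidate configurations, pick the one
# the model predicts is most efficient.
candidates = rng.uniform(0.0, 1.0, size=(100, n_features))
best = min(candidates, key=predict_pue)
print(round(predict_pue(best), 3))
```

A linear fit can’t capture the nonlinear interactions Gao describes — that’s exactly why the paper uses a neural network — but the workflow of modeling and then searching configurations is the same.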

Read the paper here.

May 29, 2015  9:19 AM

SearchDataCenter’s can’t-miss articles of May

Meredith Courtemanche

Time’s tight and alerts are always rolling in, so you’re bound to miss some great articles during the month. If you only have time to peruse a few good reads, here are the stories that other data center pros recommend.

Most controversial topic:

Cloud threatens traditional IT jobs, forces change

The tension between outsourcing to the cloud and keeping workloads in the owned on-premises data center is increasing, and especially pressuring traditional job roles within the IT department.

Must-watch technology trend:

Flash storage here, there — everywhere?

Flash storage offers some impressive improvements over disk-based technologies, and the integration options range from server-side flash to full arrays and everything in between.

Look inside your peers’ data centers:

Align the data center with corporate goals

Enterprises aren’t in the business of running data centers — they sell goods and provide services. But data centers are crucial to operations. Here’s how a coffee company and a law school-centric organization work better thanks to their data centers.

Best-read interview:

A check-in with Facebook’s Chef chief

Facebook’s Phil Dibowitz talks about the company’s DevOps migration and how it works with Chef and other open-source tools.

Best planning tips:

What to update and upgrade this year

Parts of your data center are due — perhaps long overdue — for improvements. Smart investments will pay off with higher performance, energy savings and more reliable service.

Bonus link:

Modern Infrastructure’s May issue

The May issue of Modern Infrastructure tackles containers, cloud and colocation options, the changes in Ethernet technology and more. Ever heard of ChatOps?

July 3, 2014  9:23 AM

Long weekend in the data center

Meredith Courtemanche

The IT team never really gets a break — it’s the first week of July and everyone else is taking a long weekend, but you’re on call, minding the beeps and flashes of some uncaring, disinterested servers. While the storage array quietly dedupes its backups, take a little downtime with data center comics, viral videos and other fun links.

This post was inspired by a conversation with Kip and Gary cartoonist Diane Alber. Check out her comic to see what kind of trouble could be brewing in your data center if you clocked off for a week at the beach.

More fun:
An oldie but goodie, no one makes data center backup media come to life like John Cleese:

Yeah, you’ve been there. The Website is Down pits the sys admin against the world, or at least against the sales team in the conference room. Commiserate with the series on YouTube (warning: language, though we’ve chosen a safe-for-work episode to start you off):

Let’s hope you haven’t been here: this .gif of a server rack falling off the loading dock is strangely mesmerizing:

Is there one person on your IT team looking a little frazzled and muttering to himself about five nines? Try writing it all down, a la Sh*t My Cloud Evangelist Says (again, language):

What are your favorites? DevOps reaction .gifs? Network admin rants? Lego men running a colo?

April 25, 2014  9:15 AM

Converted-mine data center tour amidst rocks and racks

Meredith Courtemanche

Data center colocation providers have gotten creative with where they place facilities to save energy or increase security, and one cloud provider has found its home underground.

Lightedge Solutions, a cloud infrastructure and colocation provider in the U.S. Midwest, opened a facility in SubTropolis Technology Center, a converted limestone mine in Kansas City, Mo. The underground data center build eschewed precast walls and typical construction, saving 3-6 months on the new build compared to an above-ground data center, according to president and COO Jeffrey Springborn.

“Looking back, everything has gone really smoothly for a first project in a retired mine,” Springborn said.

Inside Lightedge Solutions' underground data center.

Figure 1. The limestone walls act as external insulation, absorb heat from the electronic equipment and provide natural security for equipment hosting corporate and sensitive data. “It’s a hardened facility that’s ready to go in a cookie-cutter fashion,” Springborn said. Pictured: Kansas City Chiefs owner Clark Hunt, whose family owns SubTropolis Technology Center, speaking with Missouri Gov. Jay Nixon at Lightedge’s grand opening in April 2014.

Lightedge colocation and cloud hosting infrastructure

Figure 2. The hardened environment of the underground mine appeals to high-security industries, Springborn said, such as government and medical IT. But cloud infrastructure is so hot that the mine’s users will also include a mix of local enterprises that want to migrate off-premises to cloud or to colocate their own equipment. The cloud hosting infrastructure that Lightedge uses in the Kansas City facility matches the infrastructure in its other facilities. Because of its mix of enterprise customers, Lightedge’s facility provides private cloud hosting without shared equipment.

Lightedge's connectivity-inspired entrance

Figure 3. Lightedge’s cloud hosting infrastructure comprises Cisco and EMC hardware with a VMware cloud layer. It uses high-speed 10G network connections between data centers and software-defined networking to ease network management, symbolized in the lightscaping at the colocation facility’s entrance.

The chiller and generators at Lightedge in SubTropolis

Figure 4. Because Lightedge was the first data center built into the former limestone mine, the company had to plan the portal in and out of the mine for the pipes serving its above-ground generators and chiller. Pipe location and design must support future expansion of the data center, while accommodating the mine structure and easements.

Lightedge colocation center site plan

Figure 5. Without requiring a typical above-ground building, Lightedge can deploy new 10,000-square-foot quadrants in four to five months. Building above ground, Springborn said, Lightedge would have had to put in shell infrastructure for 50,000 to 100,000 square feet, paying for and maintaining the structure before it was useful to the business. This site plan shows the grid-like configuration of Lightedge’s data center.

December 3, 2013  10:46 AM

IBM roadmap coming to a fork in the road between software and hardware

Ed Scannell
data center

Over its 100-year history, IBM has reinvented itself numerous times to remain competitive in hot, emerging markets or, more rarely, just to survive and remain whole.

The last reinvention addressed the second reason: about 20 years ago, IBM grew too fat and happy and missed a couple of important technology trends below the mainframe platform. The company put up an unprecedented three consecutive years of losses and seriously considered breaking itself up into a loose federation of 13 business units, dubbed the Baby Blues. But then Lou Gerstner came in and took things in exactly the opposite direction, making the units work together more cooperatively.

Today, IBM doesn’t need an immediate reinvention — but maybe it should start thinking about it. For several quarters in 2013, sales of IBM’s mid-range servers, its Power Series, plummeted. Despite IBM’s steady delivery of new hardware technology and its decision to replace Unix with Linux as the primary operating system for Power servers — a wise choice — a growing number of data centers are choosing Intel-based servers instead.

Compounding IBM’s woes, its X Series of Intel-based servers, which competes with HP and Dell, suffers from shrinking margins — so much so that earlier this year, IBM was in talks with Lenovo to sell off the line.

Most observers don’t see IBM exiting the hardware business any time soon — mainframe sales have actually picked up significantly this year and its Power Series appears to be producing enough profits for now. But others already see Big Blue transitioning to more of a software-and-services company, one keenly focused on cloud opportunities. One that will, oh yeah, also sell some hardware.

“I think you will see IBM move from a hardware-and-services company to a services-plus-software company just because of the way enterprises are consuming software. [IBM] are setting themselves up now for what they see coming over the next number of years,” said Matt Casey, an analyst with Technology Business Research in Hampton, N.H. “If this shift results in growth over the short term, great. But I see this as the beginning of a larger journey.”

This shift in IBM’s roadmap, whether intentional or not, is already taking place financially. At the end of fiscal 2012, IBM generated $25.4 billion in software revenues, with hardware producing only $17.6 billion. IBM Global Services easily holds up its end of the software-plus-services focus, raking in $40.2 billion last year. And the gap is growing wider still in 2013.

But can IBM’s software business grow fast enough to offset its declining hardware business? Unlikely, given the modest recent growth of IBM’s bread-and-butter server-based applications, and the fact that its cloud-based products portfolio, while growing, stands at only about $2.2 billion right now, with a goal of reaching $7 billion by 2015.

And are we looking at a smaller IBM over the next few years? The company has always proved adept at growing the top line, even when it sells off billions of dollars’ worth of business, such as the still-profitable $11 billion PC business it sold to Lenovo back in 2004.

“IBM is a company run by the CFO, so becoming a smaller company would be a difficult case to make with stakeholders,” Casey said. “But if they decide the Intel (server) business and even the Power business no longer fits its core mission, they will sell it.”

Casey adds that IBM would take the proceeds from selling such units and buy or develop whatever it needs to maintain its size and ability to fulfill its core mission as a full-service provider of information processing technologies for large enterprises.

It is difficult to predict if current server hardware trends will force another reinvention of the world’s second-largest IT company, but it will be interesting to see what new directions IBM may take to stay on track.

September 5, 2013  7:26 AM

The trouble with server efficiency and performance

Erin Watkins

The tradeoff between the high cost of better performance and forward-thinking energy efficiency is something IT shops must ponder when deciding which servers to buy. As chips shrink and become more efficient, manufacturing costs to chipmakers go up, and those costs are passed on to buyers. So how can server chip design keep improving without incurring the cost of building whole new manufacturing facilities?

Researchers in China may help existing chip fabrication lines become more efficient by improving the flow of electrons through semi-floating gates. All well and good, but both ARM and Intel already boast fast, energy-efficient server chips, and semi-floating gates are not a new concept.


Image by Flickr user Xeusy. Creative Commons license 2.0.

Let’s start with the Chinese team. In a paper published in Science Magazine, the researchers describe a hybrid of the floating gates found in flash memory and the traditional logic gates used in most chips, which apply fixed rules to determine which bits pass through.

The researchers’ method essentially modifies how the transistor’s gate is isolated (floating gates are completely isolated from electron sources and drains, whereas other gates are connected), and the team says it can achieve better speed than current chips at an energy-efficient operating voltage of less than 2 volts.

Looking at ARM and Intel chip efficiency improvements, many come down to the fabrication process rather than the design of the chip itself. Take a recent ARM-related development from SuVolta, for example. It essentially swaps out existing transistors for what it calls “Deeply Depleted Channel” transistors, which use dynamic body bias to reduce current leakage while the transistors are switched off. That isn’t a change to the chip’s logic design, just a substitution of transistor technology to consume less power.

Intel, on the other hand, has shifted manufacturing to a 22-nanometer process, which shrinks transistors and keeps them cooler during operation, helping to conserve energy. Instead of the planar, side-by-side approach used by many chipmakers, Intel has started building its transistors in three dimensions. This method improves performance and shortens the path electricity travels on the chip, but producing 3D transistors requires up-front manufacturing investments and may not provide enough return on investment to make it worth pursuing.

That’s the trouble with both SuVolta’s and Intel’s approaches. They rely on cost-prohibitive manufacturing processes to make smaller, more efficient chips. Every time a new transistor design is developed, a whole new set of fabrication equipment must be built. Plus, according to some analysts, Moore’s law is coming to an end, so smaller chips may not be a long-term solution. Chipmakers can only use manufacturing to boost efficiency for so long before they can’t shrink any further.

The current return-on-investment issues with server chip manufacturing, combined with the looming limits of fabrication technology, are why the semi-floating gates proposed by the Chinese research team may be worth exploring: they don’t depend on shrinking parts. For now, however, changes to chip fabrication seem to be making positive strides toward energy efficiency.

What do you think? Do prohibitive fabrication costs and a looming end to Moore’s law mean server chip makers need to find alternative methods for efficiency?

April 26, 2013  11:17 AM

IT surfaces from the back room to take the helm

Tom Walat

By Beth Pariseau, Senior News Writer

SAN FRANCISCO — The overarching theme here at ChefConf 2013 this week has been a sense of empowerment for IT folk who are used to shoveling coal in the boiler room of an IT infrastructure.

As cloud computing and configuration management tools like Chef and Puppet mature, IT will captain the ship, predicted conference attendees and presenters.

An especially resonant quote from a book called The Phoenix Project was recited from the keynote stage on Thursday morning to drive home this point: “Any COO who doesn’t intimately understand the IT systems that actually run the business is just an empty suit.”


It’s easy to dismiss all this talk as self-serving posturing, but in an increasingly Web-based world, it happens to be true. Consumers now interact with businesses through mobile and Web interfaces much more often than on a sales floor or over the phone. And when was the last time a new startup opened up a storefront without a Web presence? It simply isn’t done these days.

No, now the keys to the castle belong to your company’s computer guy. Let’s hope you’ve been nice to him.

It’s also easy to get bogged down in the intricacies of cookbooks and recipes, or the finer points of Erlang vs. Ruby, Chef vs. Puppet, DevOps vs. traditional operations, at a conference like this. The bottom line, though, is that it’s all a means to the end of making the current world economy work.

Heady stuff. Let’s hope IT wields its newfound power only for good.

April 5, 2013  11:19 AM

Go thin or go home — that’s the graphene way

Erin Watkins

Some analysts estimate that Moore’s Law — the observation that manufacturers can double the number of transistors on a chip every 18 months — will end in 10 years. At a certain point, the electrons in silicon circuits become unstable and can no longer be used to process information reliably. While the end of the road may be coming for silicon-based transistors, a contender to the computing throne is rising.
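Taken at face value, that doubling cadence compounds quickly; a quick back-of-the-envelope sketch (starting count and horizon are illustrative, not from the post):

```python
# Back-of-the-envelope arithmetic for an 18-month doubling cadence.
# A chip starting at 1 billion transistors would, on that schedule, pass
# 100 billion within the 10-year horizon some analysts give Moore's law.
def transistors_after(start, months, doubling_period=18):
    """Projected transistor count after `months`, doubling every `doubling_period` months."""
    return start * 2 ** (months / doubling_period)

print(f"{transistors_after(1e9, 120):.3e}")  # 10 years = 120 months out
```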

The 2010 Nobel Prize in physics went to Andre Geim and Konstantin Novoselov for their research on graphene. The graphite-derived material is well suited to data center applications — not to mention consumer electronics, high-speed Internet networking and medical equipment — because it is not only highly conductive but also handles electron movement and heat better than silicon.

But a recent article brings up an excellent point: not too long ago, carbon nanotubes enjoyed the same hype, yet until recently they were too pricey to expand much beyond R&D labs.

Graphene has been tapped as a possible material for flexible touchscreen displays in consumer devices, and its single-electron properties make it an excellent candidate for the transistors of the future.

Envision, if you will, the size of a current enterprise data center. Now imagine if each processor could hold 3 billion transistors and take up only the width of a pencil tip. Add in the fact that a processor based on graphene transistors would run cooler than current silicon-based processors, and you’ve got a change in cooling infrastructure as well.

If graphene keeps rolling at its current development pace, we’ll be in the future before you know it.

March 12, 2013  8:12 AM

Boston’s ARM servers aim to kick x86 AaaS

Erin Watkins

There’s been a lot of bluster about ARM servers and how they’ll upend the traditional x86 market when 64-bit ARM offerings come to market, but the fact remains that software designed for x86 servers does not readily translate to the ARM architecture. To ease this transition, server maker Boston is offering developers a platform to rework x86 code without having to buy an ARM server, through its cloud-based ARM as a Service.

ARM as a Service (AaaS)

Photo by alexreinhart on Flickr. Creative Commons license 2.0.

Boston’s product — reminiscent of OpenStack’s initiative encouraging the use of OpenStack on ARM servers — aims to coax curious but wary developers into an ARM sandbox by providing cloud servers that come with a pre-installed flavor of Linux, along with development platforms and porting tools.

News of Boston’s AaaS (no, really) comes hot on the heels of news that China’s Baidu has decided to go with Boston rival Marvell’s ARM servers to run its behemoth of a search engine.

As intriguing as Baidu’s ARM evolution and Boston’s cloud service are, will they generate enough buzz for mainstream IT ARM server adoption? Only time will tell.
