Data Center Issues - Please Help!

Power management
Remote management
Thermal controls
Hi everyone, my name is Maricel. I work for an IT infrastructure consulting company, and I am responsible for putting together educational seminars for our clients, who are IT and facilities professionals. I want to make sure our upcoming seminar focuses on the issues IT professionals face in data centers today. Some of our clients mentioned that they would like more detailed information about how much it costs to build a data center, and whether it makes sense for them to buy or build. What are your thoughts on that? What other issues would you like us to cover? Thank you for your time, and I appreciate your help! :)

Answer Wiki


Doesn't this depend on your definition of a 'data center'? Is it 10, 100, 1,000, or 10,000 servers with several terabytes of disk space and multiple T1 connections? Is a minimum of two sites in geographically disparate regions needed for fallback protection? The very first question to answer is, "What is your definition of a data center?"

Discuss This Question: 3  Replies

  • BlueKnight
    My thinking is much like that of the first response. I don't know that you can directly answer how much it would cost to build a data center, since there are so many variables:
    1) How large the company is -- the number of servers.
    2) Location -- San Francisco costs much more than anywhere in North Carolina, for example.
    3) Communications requirements -- how many lines and of what type, plus the cable plant for the local network(s).
    4) Does the company's size, or its line of business, warrant an alternate data center in another geographic area of the country?
    5) Will the data center's power consumption require the company to build its own substation?
    6) What is the company's tolerance for downtime due to power outages? This goes to UPS (or generator) selection and sizing.
    These are but a few things to consider. You should also cover backup/recovery practices and testing of the backups to ensure they'll work if needed for restoration. Business continuity and disaster recovery must be addressed: will the company use a hot site or switch to the alternate data center? And what about personnel? In the event of a disaster, how will they contact the employees needed to carry out the emergency plan? Where will they meet if the data center is destroyed or unreachable? Oh yeah -- where did we put all those backups? Are they off site (I hope)? You've got to figure out how to retrieve them, where to have them delivered, and how long that will take. "Data center" is the tip of an iceberg... hopefully these comments help get your creative juices going. Jim
  • aymanh
    I am an infrastructure consultant, and one of the areas I would like to see addressed is how to make data centers geographically redundant while retaining the logical perception of being one facility. With recent natural disasters and the fear of terrorism, which can literally knock out the networks and power of large geographic regions, it is important to convey the ease and the difficulty of building data centers in a distributed fashion while still presenting IT operations as one logically centralized location.
  • Ve3ofa
    Introduction

    Computer systems have specific requirements. A small network can be installed with minimal thought; however, as the network grows, so does the complexity of the requirements. If you are going to install a large network (or eventually grow into one), you must consider several factors. Building it ad hoc is not a viable path to a lasting, cost-effective network. In this paper I will address several issues that are common to the installation and cost projection of data centers. This should give the reader a firm understanding of the obstacles that must be planned for.

    Each system has certain physical characteristics: it occupies a certain volume, has a specific footprint, and has a certain weight. It also requires a specific amount of electricity and gives off a specific amount of heat. There are other issues as well, such as network cabling and access to the machine for repair and upgrades. For a smoothly running, cost-effective data center, you have to consider all of these. You should also consider how many people are required to upgrade and maintain the network: as the network grows, spending a few minutes per machine per day can explode into a full day's work just doing routine maintenance. Keeping the data center cost effective also means structuring it so that routine tasks take the minimum time.

    Physical Requirements

    Each component in your network has a footprint (the area of its base), a height, and a weight. If you mount components in shelving units, the shelving units themselves typically have a slightly larger footprint, however minimal. Access to the front and back is also required for maintenance; to work comfortably, an aisle two to three times the depth of the machines is typically required between rows of systems. When you are lifting a 100+ pound (45 kg) server onto a shelf, you do not want to be cramped and drop it.

    Rack-mount height is measured in 'U'; one 'U' is 1.75 inches (about 4.5 cm). Rack-mountable systems are sized in U: 1U, 2U, and so on. A typical rack is about 40-45U tall and 19 inches (about 48.25 cm) wide. A system that is not directly rack mountable, such as a tower, can be placed on a shelf; with typical tower systems you can fit about 4 units per shelf and 4 shelves, or 16 towers per rack.

    While all that density may seem like a good thing, you need to consider weight. A typical rack-mount system weighs about 22 pounds (10 kg). With 1U systems, that means 800-900 pounds (about 400 kg) in roughly one square meter. It would ruin your day to have an 800-pound shelving unit crash through to the floor below because the floor could not hold that much weight, and the rack can weigh even more if you add an uninterruptible power supply (UPS). If there is a raised floor, you also need to make sure it can support the weight and the tiles won't break; should one break, the rack can tip over and fall into other racks, causing a lot of damage and downtime. Not very cost effective. Additionally, those same 1U systems can be 30 inches deep, so you need adequate space in front and behind to access each system and replace or upgrade hardware when the time comes. Given this, each rack needs about 12-15 square feet (a bit over one square meter) of floor space. Blade systems can increase density many times over, but they are more expensive; it is a trade-off of space versus cost.

    There are other space issues to address. It is a good idea to have a small work area with tools, chairs, and a closet with spare parts.
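The floor-loading concern above is easy to sanity-check with arithmetic. A minimal sketch using the 22 lb per 1U figure quoted in the text; the rack-frame weight and 40-server count are illustrative assumptions, not figures from any vendor:

```python
# Back-of-the-envelope rack floor loading, using the ~22 lb per 1U system
# figure quoted above. The 150 lb empty-frame weight is an assumption.
LB_PER_KG = 0.45359237  # pounds -> kilograms

def rack_load_lb(units, lb_per_unit=22.0, rack_frame_lb=150.0):
    """Total floor load of a populated rack, including the empty frame."""
    return units * lb_per_unit + rack_frame_lb

full = rack_load_lb(40)  # forty 1U servers in one rack
print(f"{full:.0f} lb ({full * LB_PER_KG:.0f} kg) on roughly one square meter")
```

Run against a raised-floor tile's rated point load before committing to a layout; the point of the sketch is only that a full rack concentrates close to half a tonne on a very small footprint.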
    Should something go wrong, this gives someone a way to diagnose and repair problems on the spot; not having to go elsewhere saves time. If someone must be in the data center for extended periods, a walkman (or iPod!) is a good idea: the headphones keep ears warm and drown out the fan noise (a loaded data center can sound like a 747 taking off).

    Power Requirements

    Most of the items that go into a data center require electricity: monitors, KVMs, networking appliances, systems, lights, and so on. Exactly how much, and how it is distributed, needs to be addressed, as does backup power such as a battery backup system. It is often worth making sure that at least one light is on the backup circuits, so that if the power goes out you can still see; the warm glow of the light on the front of an F5 load balancer is typically not enough :) .

    Electrical power is typically measured in watts (voltage times amperage). In North America the supply is typically 110 volts and in Europe typically 220 volts, but a given system will draw roughly the same wattage in both places. Systems draw a wide range of power; 100-200 watts is a common rough estimate. Older, slower machines that have been stripped down, and blade computers, can require less. Power requirements increase with load, CPU clock speed, memory, disk, network, and other peripherals, and future systems are likely to use more power overall than today's (even as vendors try to lower power requirements), so planning for the future is important.

    The best way to measure the power requirements of a specific system is a plug-through meter: connect it between the computer and the power receptacle, and it will tell you the actual draw. Test under a variety of conditions, not only when the device is idle. Relying on the power supply rating is not advised; that is the maximum the supply can draw, and the system only draws what it needs at the moment, which may never come anywhere near the maximum.

    In principle, a 20 amp, 120 VAC circuit can deliver about 1,700 W RMS (average) before blowing. This does not mean you can run sixteen 100 W nodes on that circuit. During bootup, disk activity, and the like, systems draw more than their average power and can trip the breaker with just 10 systems (if they all boot at the same time, for example). Additionally, according to the Electrical FAQ, a low power factor can distort peak currents and cause problems. Allowing a 50% reserve is not unreasonable; amortized over 10 years it is trivial compared to the cost of inadequate capacity and the subsequent headaches and lost productivity.

    When the wiring is done, each phase of a multi-phase circuit should have its own neutral line; this helps lower line distortion. When there are large numbers of switching power supplies (the kind that virtually all computers use) on a single circuit, especially on longer runs, line distortion increases. The run from the power panel to the receptacle should be as short as possible, and the data center should have its own power panel. The Harmonics Q&A FAQ explains how many switching power supplies on a single line can distort line voltage, generate spurious and injurious system noise, reduce a system's natural ability to withstand surges, and more. It is unwise to assume that your building's power (even when nominally adequate in capacity) can run a large data center; doing so may shorten system life and plague you with power-related problems, increasing the cost of ownership. You may also want a harmonic-mitigating transformer for the data center.
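The 50% reserve rule above can be turned into a quick circuit-sizing check. A minimal sketch; the 1,700 W circuit capacity comes from the text, while the 200 W peak (boot-time) draw per node is an illustrative assumption:

```python
def max_nodes_per_circuit(circuit_watts=1700.0, node_peak_watts=200.0, reserve=0.5):
    """Nodes one circuit can carry while holding back the given reserve fraction.

    Budget against peak (boot-time) draw rather than idle draw, since
    simultaneous startup is what trips breakers.
    """
    usable = circuit_watts / (1.0 + reserve)  # capacity left after the reserve
    return int(usable // node_peak_watts)

print(max_nodes_per_circuit())  # -> 5 nodes per 20 A circuit at these assumptions
```

With no reserve the same circuit budgets eight such nodes, which shows how much headroom the 50% rule actually buys.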
    A good UPS system also helps clean the power. UPS systems keep your systems up for short outages; they are not designed for prolonged ones (a generator is better suited for that). One large UPS may be cheaper than many small ones, and is also more manageable. The additional expense is roughly $100 per node, which is not that expensive, especially amortized over time.

    Keeping It All Cool

    Maintaining a reasonable temperature in your data center is important to the life of the systems. Computers take power in and convert that electricity to heat, and the amount of heat generated corresponds to the amount of power consumed. A single loaded rack can draw from 1.5 kW to 10 kW depending on configuration. Human bodies, lights, network equipment, KVM switches, and so on also add heat to the room, and the sun can have an effect through the roof (dark roofs absorb more heat) and walls. Any excess heat that is not removed causes the temperature to rise; should the air conditioning (AC) be insufficient, the temperature climbs gradually, and if it gets too hot a fire can start.

    Large AC units are typically sized in 'tons', where one ton of capacity removes the heat that would melt one ton of ice at 0°C into water at 0°C every 24 hours; this works out to about 3,500 W. It is always better to have surplus capacity and to keep the room below 60°F (about 16°C). For each 10°F above 70°F, the expected life of a computer is reduced by one year, and the cost of dealing with hardware failures rises as they occur more often. Any AC/power system should also have a "thermal kill" feature that cuts power to the room if the temperature rises above a certain threshold; without it, should the temperature rise beyond the AC unit's ability to cool, the result can be a fire.
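Using the roughly 3,500 W-per-ton conversion above, required AC capacity can be estimated straight from rack power. A sketch; the 30% overhead for people, lights, and network gear, and the example of four 5 kW racks, are assumptions for illustration:

```python
WATTS_PER_TON = 3500.0  # heat removed by one 'ton' of AC capacity, per the text

def cooling_tons(kw_per_rack, racks, overhead=0.30):
    """Tons of AC needed to remove the heat from the given racks.

    `overhead` adds a margin for non-rack heat sources (assumption).
    """
    total_watts = kw_per_rack * 1000.0 * racks * (1.0 + overhead)
    return total_watts / WATTS_PER_TON

print(f"{cooling_tons(5.0, 4):.1f} tons")  # four racks at 5 kW each -> 7.4 tons
```

Sizing to the next whole unit above this figure leaves the surplus capacity the text recommends.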
    Additionally, when practical, install an alarm system to warn responsible personnel when the temperature exceeds a lower threshold. Ideally, if the temperature rises above 90°F (about 32°C), power should be shut off. It is not uncommon to have multiple AC units, with secondary units turning on only when the temperature rises above a certain point, reducing the single point of failure.

    Where you duct the air also matters. The exhaust side of the systems (typically the rear) will always be hotter than other spots in the data center. While you should not worry too much about slight temperature differences in the room, you do not want one corner being really hot while the rest of the room is really cold; plan so that the cool air is distributed evenly. You also want the return to pull out the hot air, not the cold. Heat rises, so if the supply ducts are in the floor, the returns can be positioned above the racks; the efficiency of the unit depends on this. If you ignore it, it is still possible to cool the room, but at higher expense. You have a few options for ducting air into the data center. The floor is an ideal choice, as it is out of the way and you can blow cold air directly into the rack, though this may not be practical in every situation. If you use ceiling-mounted ducts, be careful that condensation does not collect on the ductwork and drip into the racks. Another option is a portable unit; these are primarily used as backups, but may sometimes be the only practical way. If the unit sits in the room (as with portable units), its waste heat must also be removed from the room.

    Infrastructure Costs

    Networking is a critical infrastructure requirement for most data centers.
    Even if no outside access is allowed, the machines in the center must still be able to talk to each other. Allow enough room for cables, patch panels, and other network components (not to mention switches, hubs, and the like).

    There are two types of costs in building a data center: the initial cost and the recurring cost. The initial cost includes the physical space, any remodeling that has to be done, equipment purchases, wiring, and so on. The recurring costs include manpower, electricity, rent, and similar charges. The initial cost is somewhat high, but amortized over time it is not that bad; you do, however, have to pay up front. Looking at cost per node: if you spend $60,000 remodeling a space to hold 300 nodes, the cost is $200 per node; if the space is designed to operate for 10 years, that is only $20 per node per year. Adding rent and interest may push it to $35 per node per year. There is also the cost of installing the systems (both physically into the space and installing the software that goes on them). Depending on the proficiency of the group doing the install, and exactly what needs to be done, the physical install can take as little as 1-2 days or as long as weeks (including pulling all the network wire). If your group is not skilled at punching down network wire, consider hiring a company that specializes in it.

    For recurring costs, we have to factor in all the components. Where I live, electricity retails for about 8 cents per kWh (kilowatt-hour). Using 1 W of power 24 hours a day for a whole year costs about 70 cents, and the air conditioning needed to remove that 1 W of heat costs about another 30 cents at the same rate. So each watt I use costs about $1.00 per year. If I have 200 systems each drawing about 100 W, they will cost approximately $20,000 per year to operate.
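The electricity arithmetic above is easy to parameterize for a different utility rate or fleet size. A sketch reproducing the worked numbers (8 cents/kWh, with cooling adding roughly 30 cents for every 70 cents of power cost, both from the text):

```python
HOURS_PER_YEAR = 24 * 365        # 8,760 hours
RATE_PER_KWH = 0.08              # dollars per kWh, the retail rate quoted above
COOLING_FACTOR = 0.30 / 0.70     # cooling cost per dollar of power cost

def annual_power_cost(watts):
    """Yearly cost in dollars to run *and cool* a constant load of `watts`."""
    power_cost = watts / 1000.0 * HOURS_PER_YEAR * RATE_PER_KWH
    return power_cost * (1.0 + COOLING_FACTOR)

print(f"${annual_power_cost(1):.2f} per watt-year")          # about $1.00
print(f"${annual_power_cost(200 * 100):,.0f} for the fleet")  # about $20,000
```

Swapping in a local rate and measured per-node draw turns this into a first-cut operating budget.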
    Adding the $35 per node per year in renovation costs, amortized over 10 years, brings the total to $27,000 per year just to operate, not including the cost of the equipment or recurring labor.

    Labor is harder to calculate, because the skill of the people involved and other factors (equipment reliability, software issues, etc.) come into play, but you can still average the cost. If maintenance takes 5 minutes per machine per day on average, and you have 200 machines, it will take 16.67 hours per day to maintain the network. That means you need at least two people just for routine maintenance; new installations, upgrades, and the like will require more. Based on what these people earn per year, you can easily calculate the recurring cost.

    To keep labor costs low, adhere to a few simple practices. While it does not take much skill to install a data center once the physical infrastructure is in place (power, cooling, racks, etc.), a little thought and care now can save countless hours in the future. You do not want to have to shut down a whole rack, or fight with wires tangled in knots, to perform a fairly trivial task. When installing wires, do it neatly; use a cable management system and/or cable ties. Keep Ethernet wires away from sources of interference such as power cables and fluorescent lights; this reduces interference and allows higher transfer rates. Use patch panels to connect switches and servers. Small investments now can save much more in the future.

    Automating tasks is another area that can lower the cost of ownership and yield a higher return on investment (ROI). If a specific set of programs gets installed on every server, writing a script to automate the task can greatly reduce the time spent installing. Likewise, a process for rolling upgrades out to all systems saves a lot of time; there are many free and commercial tools for different platforms that solve this problem.
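The staffing estimate above follows from simple arithmetic. A sketch, assuming an 8-hour workday (the workday length is my assumption; the 5 minutes per machine and 200-machine fleet come from the text):

```python
def daily_maintenance_hours(machines, minutes_per_machine=5.0):
    """Hours per day of routine maintenance across the whole fleet."""
    return machines * minutes_per_machine / 60.0

hours = daily_maintenance_hours(200)
print(f"{hours:.2f} hours/day -> {hours / 8.0:.2f} full-time people")
# 16.67 hours/day -> 2.08 full-time people, i.e. at least two staff
```

Re-running with a realistic per-machine figure for your own shop shows quickly whether automation pays for itself in headcount.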
