TIA-942 was published in 2005 and has recently been updated to TIA-942-A with a variety of changes. The most notable changes include the removal of category 3 and 5e for horizontal cabling (they are now allowed for voice backbone only) and the incorporation of addenda 942-1 and 942-2, making category 6A the minimum recommended category of copper cabling in a data center. On the fiber side of the equation, OM4 is now the minimum recommended grade of fiber. The MTP/MPO connector is the only approved connector for multi-strand fiber, and LC is the preferred connector for two-strand.
There is also a new section on energy efficiency, which recommends proper pathway planning to eliminate some of the issues that arise from neglect. Top of Rack switching is not mentioned, as it is a point-to-point connection rather than a structured connection, but the standard does include language that any point-to-point cable MUST be removed when it is no longer in use. This may be tricky in some data centers over time, but it is a requirement.
Much of the Uptime Institute information in the informative annex has either been stricken or updated due to changes since its original inclusion. It is important to note that there is NO such thing as a TIA Tier level anything, since by definition, anything contained in the informative annex is not part of the standard. The only mention of Tiers in the main body of the standard is the suggestion that, for high-tier facilities, the primary and secondary cables take disparate pathways.
The 568 series is now referenced as a link in this standard as opposed to dual language in both documents. The same is true for 569, 606, 607, and 758.
A new area called an intermediate distribution area is also included for larger data centers and colocation facilities. This optional area sits between the main distribution area and the horizontal distribution areas.
For a copy of the standard and its contents, you can visit www.tiaonline.com
Here is a link to an article that I received from datacenters.com as a reprint from Information Age. I am including it here so that if you have unregistered hardware on the Cisco UCS platform, you can check to be sure you get a replacement, since it is a potential fire hazard in the data center.
2011 presented many CIOs with challenges surrounding what to upgrade, how to upgrade, or even whether to upgrade to newer technologies while balancing business needs and budgets as part of the equation. If electronics sales are a key indicator, a lot of data centers chose to do nothing. Those that did change were faced with vendor wars over their space. Add to that a mix of marketing information in which it is difficult to sort product push from fact, cloud-speak, and everything else heaped on a plate, and the mix becomes more frustration than function.
For those lucky enough to start from scratch, the sphere of influence from architect, M&E, vendor, and internal preference is broad, to say the least. For those tasked with brownfield upgrades, there is often a sticking point based on capacity, space, and funding.
So what does the outlook for 2012 bring? In all reality, probably the same mess. We know that bandwidth is increasing, in large part due to virtualization and consolidation projects. We know that open systems are facing serious challenges from vendors that are trying to close their architectures to seal out other vendors' equipment. We know that staffing is a challenge as we realize that certifications are not nearly as effective as hands-on experience. And if we don't take care of our employees, there are always other jobs out there.
We know that security will continue to be a challenge. Social media is creeping into networks, like it or not.
Energy costs are increasing and carbon taxes are becoming a reality. Smarter solutions are developing and in cooling alone there is a plethora of options.
So what is the one mantra to keep in mind in the coming year? In my humble opinion, it is that there is NO one answer that suits all. Every business and every data center is different, with its own challenges. The smartest CIOs will realize this and evaluate their own situation against all the "information" out there. I put information in quotes because there is some good and certainly some horrible information available.
IT has turned into politics in a way. Companies talk about what other systems can't do more than they talk about what theirs can do. My advice: turn to your peers. See what has worked in their facilities and what hasn't. If you are lucky, you will find those that have some history with solutions, who can share problems and successes and provide you with a list of "gotchas" that are easier to decipher than captchas. With this real information, you will be better able to build your own road map for 2012. Also seek out vendors that understand your entire ecosystem, not just those trying to sell a "savings" solution that might, in fact, cost extra in the end. If someone doesn't understand the entire data center, how in the world can they provide you benefit or value?
That said, I sincerely wish everyone a safe and happy holiday season.
IT is under increasing pressure to align with, or in some cases drive, business needs. If you look at a single sector, say finance, you realize that no two finance organizations are alike. And even within a single finance organization, the business needs of the various business units are extremely different! So why is it that we read marketing literature that purports to solve all of our problems in a single shrink-wrapped package?
The truth is that in a data center there are several spheres of influence over each application decision. Virtualization is forcing applications to play well on virtual machines, but not all home-grown applications perform well when virtualized. Some will be stellar.
Not all applications require a separate network, while some actually perform better if they are not sharing resources. And let's not forget that some application installation companies and some application vendors push certain hardware for their installations because it is easier to troubleshoot on a known platform.
I have seen only a few data centers that have applications running on a single vendor platform across the entire data center floor. As equipment is refreshed or upgraded, you are back to supporting a variety of platforms based on the performance and price concerns at the time of refresh or purchase.
Options are continuing to grow (although I would argue that not everyone takes advantage of them). But the paradigm is shifting from simply price, performance, and business need to now include factors such as power consumption, energy ratings, field upgradeability, performance/power ratios, and a variety of other considerations for each purchase.
So the next time a vendor shows up spewing marketing-material speak, be prepared to challenge the "one size fits all" approach with your own set of increasingly demanding decision matrices that fit YOUR needs.
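One way to put such a decision matrix on paper is a simple weighted-scoring exercise. The sketch below is purely illustrative: the criteria, weights, vendor names, and scores are all assumptions invented for the example, not real product data. Substitute the factors that matter in your own facility.

```python
# Hypothetical weighted decision matrix for comparing vendor options.
# All weights and scores below are made-up illustration values, not
# real product data -- replace them with criteria that fit YOUR needs.

weights = {
    "price": 0.25,
    "performance": 0.25,
    "power_consumption": 0.20,   # lower draw scores higher
    "field_upgradeability": 0.15,
    "perf_per_watt": 0.15,
}

# Scores on a 1-10 scale per option (assumed numbers for the sketch).
options = {
    "vendor_a": {"price": 6, "performance": 9, "power_consumption": 5,
                 "field_upgradeability": 4, "perf_per_watt": 7},
    "vendor_b": {"price": 8, "performance": 7, "power_consumption": 8,
                 "field_upgradeability": 7, "perf_per_watt": 8},
}

def weighted_score(scores, weights):
    """Sum of score * weight across all criteria."""
    return sum(scores[c] * w for c, w in weights.items())

for name, scores in options.items():
    print(f"{name}: {weighted_score(scores, weights):.2f}")
```

The point is not the specific numbers but the exercise itself: once your criteria and weights are written down, a vendor's "one size fits all" pitch has to survive your arithmetic, not their brochure.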
I have read at least three whitepapers from what I thought to be reputable companies that contain incorrect information. That raises the question: what exactly is a whitepaper? The internet is full of some useful information and also some total crap. I have written literally hundreds of articles and whitepapers in my career and go out of my way to assure that the information is accurate at the time of writing. Of course, none of us can predict change, but at least mine are accurate when they are written.
I think it is a huge problem in this industry that freelance writers, companies, and supposed experts put out information that is false and/or largely inaccurate. Granted not all do, but to the ones that do, shame on you!
The information superhighway should be viewed as a source of information: correct information! Of the three papers that I am referring to this week, two were intentionally incorrect because they cherry-picked through the standards to position their products. Either that, or they don't know what is actually in the standards! Hopefully those reading these papers will view them as product positioning papers and NOT fact.
But that does beg the question: where does one go for accurate information? The standards are a great start. I will admit, though, that they are not the most entertaining reading (the meetings where we write them are not always entertaining either!). Cross-check facts between vendors. In some cases, you can use social media to get to the right answer, although I have also read some hilariously inaccurate answers in some of the LinkedIn forums as well.
It seems to be the theme these days as companies continue acquisitions, hold on to them for a while, then ditch the ones that don't fit. The sad part for consumers is that it will end up limiting some of our options moving forward. A wise boss once told me that you can't be everything to everyone. What you can be is damn good at your core business. With Wall Street pressure to grow big companies, there is pressure to grow through acquisition. Recently, Cisco, HP, and others have been shedding under-performing product lines and moving back to core practices and products. Google has now purchased Motorola, while HP is shedding its tablet and PC business.
In the end, the consumer suffers through lack of choices, having to find new products for business needs, and reworking budgets for new technology to replace products that are end of life or no longer useful.
Standards are there for a reason. They are a least common denominator that allows all things to work and play well with others in the same standards arena. For years, some companies have come out with "compatible" cable, electronics, etc. It is important to note that there is a HUGE difference between compatible and compliant for the end user. Compatible typically means that the products will work, but there is something that keeps them from being fully compliant. In open systems, this can mean a wasted investment, as compatible products will be the first thing any vendor points to if the system doesn't work.
There is no industry-accepted test for "compatible." Compatible is the vendor telling you their product should work. The variance between compatible and fully compliant could certainly lead to issues down the road.
It is fine for a company to exceed the standards, and in fact, in some cases there are advantages there. But any product that you plan to have in your data center should be compliant with the standards supporting what you expect to have in the environment.
Data centers today are building to incredible density based on a guesstimate of what will be needed in the future. Some of this may be a huge waste as new liquid-cooled processors begin to hit the market. The cooling capacity built into an extremely high density data center may not even be needed in a couple of short years. Companies such as IBM have already started deploying the technology. Rear door heat exchangers from companies like Coolcentric (Vette) and others can add cooling directly to the cabinets that require it.
The answer: don't overbuild. There is no reason to oversize air cooling "just in case" when other solutions are either readily available or soon to be available that will allow you to build high density areas, rather than high density data centers covering the entire floor space. Rear door heat exchangers can be added where the capacity is needed.
In short, you need to pay attention to new technologies when planning for the future.
As more and more electronics manufacturers merge, purchase, and work to be everything to everybody, they are turning a deaf ear to end users who just aren't buying it. Stock prices should be a glaring indicator. In order to buy into these models, one would need to do a rather wholesale replacement of equipment. The truth is, companies just don't do that anymore! Right or wrong, we try to stretch every dollar out of our investments, in particular when times are tough.
With the recent reorganization at HP, at least one company seems to be paying attention to the resounding NO being echoed around the globe. All customer-facing departments within HP now report directly to the CEO, Leo Apotheker. Cisco has also had some issues while trying to end-of-life very popular switches and encourage customers to move to Nexus, and some of its divisions are in trouble.
It's time everyone went back to playing well with others in our data centers. Standards exist for a reason. How about using them to help the end user community instead of closing down systems and trying to limit our options?
Vendor loyalty is great if the vendor is really providing you with a benefit. Vendor loyalty that is blind and misplaced is costly and stupid. Yes, I typed it: STUPID! When a vendor is a business partner, takes care of your needs by saving you REAL money (not imaginary money), and actually has your best interests at heart, then that is a true business partner.
Blindly being an "X" shop and not even considering other alternatives is a horrible mistake. This is particularly true today with top of rack switching being pushed on consumers. There are certain instances where this makes sense, but in most data centers it leads to massive over-purchase of ports, increased power, increased cooling, and increased maintenance costs year over year.
Since the data center is a true ecosystem, let's look at a real world example. One end user bought into several layers of switching in a data center and top of rack switching for gigabit transmissions. Gigabit is fully supported over a 100 m channel. That means that any channel within that 100 m could utilize a switch port if the cabling system (the highways) were in place. Servers could be placed where it makes the most sense for power and cooling. The modules selected for the transmission were far more expensive, require more power at the server NIC, and, being modules, carry only a 90-day warranty as opposed to the 20-year warranty that would accompany a structured cabling system.
The standards are clear as to structured cabling systems. Basically they say, use them! Data centers that are in a mess today are in such a state in large part due to point to point connections. So back to the case study.
The top of rack switching was supposed to save $275,000 on a structured cabling system. (They forgot that the suggested cords and modules replace a part of that, so it is really a bogus number to start with.) But since those cables come out of the networking budget, the cost is hidden in another pocket. When we counted up the top of rack ports that couldn't be used across the 336 cabinets in the data center, they would have purchased the equivalent of 504 switches that were not needed. These switches draw power even in idle mode. The maintenance costs are a recurring expense as well. In all, to save $275,000, the cost of the additional unneeded and unusable ports was over $2,000,000.
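For anyone who wants to run the same back-of-the-envelope check on their own facility, here is a minimal sketch of the arithmetic. The cabinet count, unused-switch count, and claimed savings come from the case study above; the per-switch cost, idle power draw, electricity rate, and maintenance percentage are assumed placeholder figures, since the case study does not publish them.

```python
# Back-of-the-envelope check on the top of rack case study.
# Known from the case study: 336 cabinets, unused ports equivalent
# to 504 switches, claimed cabling savings of $275,000.
# Per-switch cost, idle draw, $/kWh, and maintenance rate are
# ASSUMED placeholders -- substitute your own quotes and rates.

claimed_savings = 275_000          # from the case study
cabinets = 336                     # from the case study
unneeded_switch_equiv = 504        # from the case study

assumed_cost_per_switch = 4_000    # hypothetical capital cost per switch
assumed_idle_watts = 150           # hypothetical idle draw per switch
assumed_kwh_rate = 0.10            # hypothetical electricity rate, $/kWh
assumed_maint_pct = 0.15           # hypothetical annual maintenance rate

# One-time capital tied up in ports that can never be used.
capital_waste = unneeded_switch_equiv * assumed_cost_per_switch

# Idle switches still draw power around the clock (8760 h/year).
annual_idle_kwh = unneeded_switch_equiv * assumed_idle_watts * 8760 / 1000
annual_power_cost = annual_idle_kwh * assumed_kwh_rate

# Maintenance contracts recur every year on top of the capital.
annual_maintenance = capital_waste * assumed_maint_pct

print(f"Capital tied up in unusable ports: ${capital_waste:,.0f}")
print(f"Annual idle power cost:            ${annual_power_cost:,.0f}")
print(f"Annual maintenance:                ${annual_maintenance:,.0f}")
print(f"Claimed cabling savings:           ${claimed_savings:,.0f}")
```

Even with these conservative assumed figures, the capital alone lands in the $2,000,000 range the case study cites, before a single year of power or maintenance, against a one-time $275,000 "savings." You don't need anything fancier than this to expose a funny number.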
Now you don't have to be smarter than a 5th grader to figure out that there is some funny figuring going on here. Jethro Bodine of The Beverly Hillbillies could even do those "goes intas." It's high time to really start treating the data center as an ecosystem. Stop listening to self-serving vendors that are into selling more ports than you need or can use.
When I discussed this with the end user, the switch vendor said, "No problem, you can always patch cabinet to cabinet." This same vendor currently uses a bunch of messy cabling pictures to try to sell top of rack. How do they think the cables got to be a mess to begin with? Lots of point-to-point cables and long patch cords.
My advice: grow some common sense. If your switches work at the top of racks, they will also work in a zone that opens those switches up to a greater number of servers and cabinets. That is where the savings are. A cable costs NOTHING once it is installed.
Anyone who has to deal with compliance is not going to like long patch cords all over the place. Or they will quit before it has to be audited, leaving the mess for someone else. In short, think things through before you believe a funny number. Make sure to evaluate a technology thoroughly. One piece of marketing literature does NOT work in every data center. It doesn't matter who the author is. You have to evaluate it for yourself and your environment!