IBM plans to work with Nokia Siemens Networks to make our smartphones more responsive.
What does this mean for the data center? Well, we’ve learned about placing small data center gateways closer to where the information needs to be, which is very similar to what IBM’s partnership spells out.
According to IBM’s press release, the idea is to break mobile data into smaller chunks that can be stored and processed closer to the source, reducing latency.
“This enables a huge amount of rich data to be processed in real time that would be prohibitively complex and costly to deliver on a traditional centralized cloud,” said Phil Buckellew, vice president of IBM Mobile Enterprise, in the release.
Will we see more unused industrial spaces turned into data centers or will there be tiny, containerized data centers popping up on street corners? It will be interesting to see what shape these improvements take.
Linux jobs are in high demand for another year, and there are just not enough seasoned Linux admins to fill the spots available, according to The Linux Jobs Report from Dice.com and The Linux Foundation.
That’s good news for experienced IT staff looking for a change. If you have expertise in Linux, then the only trouble you’ll have is narrowing down which positions you find interesting.
According to the report, these job seekers are in a position to settle for nothing but the best. The report cites money, work/life balance and flexible hours at the top of the “wants” list. Luckily for job seekers, companies are putting their money where their jobs are, offering higher salaries than many other tech positions. Plus, those already high salaries increased by an average of 9% from 2012, says the report.
Now seems like an appropriate time to mention that Red Hat Enterprise Linux 6.4 just came out with several new features, just in time for Linux pros who wanted to put more on their resume.
Anyone else want to go back to school right now?
We often speak cavalierly about building new data centers and all that goes into them. But unless you’ve actually done it — and I haven’t — it’s easy to take all the effort and talent involved for granted. Just remember, without the data center architects, your smartphone would be decidedly less so.
Once you have the building itself, there are power busways, networking cables and cooling infrastructure to install, not to mention workstations and any creature comforts you want to include for employees. On top of that, you have the cabinets and racks, servers and storage hardware to roll in, all of which is delicate and expensive. Oh, and you have a time limit: Yesterday.
This video from LeaseWeb really brings home just how much goes into setting up the data centers we rely on for computing. It doesn’t mention whether they had to build the facility itself or just needed to move in the equipment and infrastructure.
So even if you’re busy buying chocolates or ties for your Valentine’s Day sweetheart — or picking your zombie flick for Feb. 15’s Singles Awareness Day — take a minute to thank an engineer in your life, and maybe give her a hug.
Chip manufacturing giant Qualcomm has listed job openings for ARMv8 engineers, which makes it the latest vendor to show interest in building 64-bit processors with ARM’s latest design. Though Qualcomm is by no means the first to enter the 64-bit ARM server fray — Calxeda, Marvell, Applied Micro and others have already gotten their feet wet in the ARM market — it’s an interesting development considering the company’s history has been rooted in smartphone and tablet processors, not servers. Since Qualcomm’s project is still in its infancy, many details are still unknown, such as how many cores per processor and what form the chassis will take.
The use of Advanced RISC Machine (ARM) processor chips is prevalent in mobile devices, but until recently ARM’s designs were limited to a 32-bit architecture, which is not enough to handle enterprise-sized workloads. Now that ARMv8 is in the hands of chip designers, the energy efficiency of this architecture — just 5 W per chip in HP’s Calxeda-based Redstone Server — may start turning more heads away from Intel’s x86.
However, Intel isn’t taking the ARM threat lightly. Ahead of the low-powered army of ARM chips on the horizon for 2013 and 2014, Intel has made a trio of processors in its S1200 line available for purchase. Their power draw ranges from 6.1 to 8.1 W, still a bit more than ARM chips, but power isn’t the only deciding factor for most IT shops.
Intel’s x86 architecture is well-established — more than three decades old — and for ARM to really take off, it needs server hardware and software companies to get on board. So far, the groups making the biggest splash are Red Hat’s Fedora ARM project, which has been vocal in raising awareness of the low-powered chip’s usefulness in markets such as hyperscale computing and web development, and Microsoft, which hopes to deliver 64-bit ARM support for Windows — both desktop and server.
Is there room in the market for both low-powered options? We’ll get back to you in a year or two.
It’s hard work discovering the secrets of the universe, and now the Einstein@Home distributed computing project has broken the petaflop barrier – the compute power of approximately 61 million iPad 2 processors – in its quest to find pulsars and gravitational waves.
“But,” you might say, “supercomputers can run circles around that!” True, but if you look at the list of the top 500 supercomputers, only the top 23 run in the petaflop range. With such a large volume of collected scientific data to sift through, a petaflop of extra computing resources is nothing to laugh at. And overall, distributed computing contributes much more than that.
At the time of publication, the distributed computing projects based on the Berkeley Open Infrastructure for Network Computing (BOINC) software system showed a daily average of 7.2 petaflops across more than 720,000 computers. For perspective, the combined power of BOINC-based projects would rank fifth on the list of top supercomputers.
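As a quick back-of-the-envelope check on those reported figures, the average BOINC host works out to roughly 10 GFLOPS, a plausible number for a volunteer’s desktop of the era:

```python
PFLOPS = 1e15  # one petaflop, in floating-point operations per second

boinc_total_flops = 7.2 * PFLOPS  # daily average across BOINC-based projects
boinc_hosts = 720_000             # computers contributing at publication time

per_host_gflops = boinc_total_flops / boinc_hosts / 1e9
print(f"average contribution: {per_host_gflops:.0f} GFLOPS per host")  # → 10 GFLOPS
```

That average smooths over a huge spread, of course: a GPU-equipped gaming rig contributes orders of magnitude more than an idle office machine.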
Distributed and grid computing networks like Einstein@Home, SETI@Home and the World Community Grid provide much-needed, cheap computing power for workload-intensive research projects, and cut down on the costs of building supercomputers. The grid model can also provide scalability and redundancy that a data center would struggle to achieve.
So which will it be: Finding alien intelligence with SETI, proving Einstein’s theories or helping to predict the climate?
Did you enjoy yesterday’s spine-tingler? Then you’ll enjoy today’s terrifying tale of data center despair.
“Total shutdown: A cautionary tale”
Sander van Vugt
Don managed a small data center for a school. He was so confident in the data storage on his servers that he always urged students to store their data on the network drives alone. “Data stored there is backed up by the IT department, so you’ll always have access to it!” he told wary students.
It was a few days before senior theses were due, the last Friday of October, when the IT department’s phone started ringing. Mark, the help desk guy, answered it.
“No access to the data volume,” he called over to Don.
Mark hung up the phone, and it immediately rang again.
“One more,” said Mark.
“Tell them we’re working on it,” Don replied.
After half an hour the volume still wasn’t available. With a growing sense of dread, Don called in Alex, the external specialist, but Alex couldn’t find a way to make the volume accessible either. They decided to reboot the server.
When they walked out of the office toward the data center, they met a group of very nervous senior students.
“No comment,” Don said nervously as he pushed past.
Once in the data center, Alex and Don found the server and rebooted it. While booting, it spat out at least 50 different error messages before it crashed completely and hung.
“What do we do now?” said Don, starting to sweat. The specialist tried to reassure him. “Don’t worry, I’ve got it all under control.”
Beyond the sealed, secured doors of the data center, the small crowd had become a shambling, moaning mass of sleep-deprived students.
“You’ve got them under control too, right?” Don asked.
Very late that evening, Don and Alex cautiously ventured out of the data center to take the train home. The crowd, not to be deterred, shuffled behind them and finally caught up with the two technicians at the train station.
Alex’s train arrived before Don’s, so they separated. Just as Don’s train pulled in, the students shuffled into view. He boarded, hoping the doors would close quickly, and called out to the dead-eyed students: “Wait until tomorrow morning, when the manual recovery is finished.”
Don and Alex had tried everything, including restoring from the backups that the IT department was supposed to make every day. Unfortunately, no one had ever verified the tapes, so the backups were empty. The manual restoration had not been successful.
Alone in the train car the next morning, Don wondered what he would tell the students. Once the train pulled into the station, he knew it was too late. The shambling horde of students came forward as the train doors opened.
When Don didn’t show up, Alex called him several times but all he got was voicemail. Don, it seems, hadn’t been seen since he got off his train that morning. The only thing left of Don was a broken backup tape at the railway station and his school ID badge. He was never seen again.
Grab a flashlight, head to your nearest lights-out data center, and hunker down for some creepy data center horror stories courtesy of some data center experts. Are these true tales? Stay tuned for tomorrow’s part two.
Data center fright night, part one
“Night of the Living Data Center”
All was quiet and dark – well, it was a lights-out data center, after all. The lone admin had been called in; something had gone wrong in the facility. He went to push the door and – nothing. Then he remembered he had to use his two-factor secure key, fingerprint and retina recognition to get inside.
Creeping past the reception desk, he heard a low moan that sent shivers down his spine, but it was just the murmurings of the underpaid security guard catching up on his sleep after a 12-hour shift at the local Walmart. Moving silently along, the admin heard “zip, click, zip, click, zip, click.” Glancing behind him, he saw the lights slowly clicking off in his wake. In his befuddled state, it took him some time to realize these were the motion-sensitive lights switching on ahead of him and off behind him along the corridor.
At the final interior door, he waited, listening in case something that shouldn’t be there was beyond it. Mind you, since the data center had been built to LEED standards, the amount of insulation in the walls would have deadened the sounds of the demons of hell escaping from the Solaris box in the far corner. Screwing up his courage, he opened the door and tiptoed in. There was an enormous “CRACK!”
No, sorry, there was an enormous CRAC – the computer room air conditioning unit had been there for 20 years, so it shouldn’t have surprised him. Another low moan, far different from the security guard’s sleeping sounds, went past him in a gust of warm, fetid air. He made a mental note to check the raised floor for holes where cooling might be leaking and to get the drains fixed.
He made it to the console and logged in. Strange images appeared before his eyes, ones he had never seen before. He immediately switched to CLI mode, instantly hiding the easier, more functional and useful graphical user interface. His fingers flew over the keyboard. Eventually, he calmed down and started typing. The strange incantations he typed brought up streams of glowing data: log files, audit trails, details of pizza delivery companies.
Eventually, tracing the root cause of the issue made him take a sharp intake of breath. “How could this happen?” he wondered. Well, it was a waste of time asking anyone else, since they were all in bed at their homes. With a rising sense of panic, he gathered up the tools of his trade – the old Bell telephone modem he might need where he was going, the book on troubleshooting systems and the 1-million-candlepower flashlight. For good luck, he also took a silver thumb drive. Moving away from the console, he muttered some charms under his breath: “agility, flexibility, SaaSability” and “I wish we had outsourced this years ago.”
In the corner was a massive black monolith – the admin had always assumed it was a throwback to the film “2001: A Space Odyssey” and something facilities looked after – nothing for a tech guru like him to worry about. However, the console had provided directions that said this was the source of the trouble. Reading the book, the admin identified the special way to use the runes on the monolith to access its secret bits. Finally, he folded down a screen and pushed a switch. The green and black screen flickered, showing a talismanic figure, which then resolved into a single word that sent the admin screaming back into the night.
In dripping, blood-red letters, it said “ABEND.”
IT must be ready for fundamental shifts in business and technological paradigms.
Stephen J. Bigelow, Senior Technology Editor
IT professionals are used to change – the reality of change is as old as IT itself. Everyone swaps servers during technology refresh cycles, and new versions of operating systems or key business applications can have IT staff working overtime to get users updated.
Still, change is always a challenge for IT. And while the logistical and technical demands of change will always stress budgets, tax patience and fray nerves, the IT department soldiers on to solve problems, fight fires and support the business.
But what happens when a technology fundamentally changes the way a business or industry operates?
Just consider an emerging technology like 3D printing, in which solid objects are created by depositing material in successive layers according to a digital model. The printed object can serve as the basis for molds and other core manufacturing processes – or, using materials appropriate for the finished product, a manufacturer can actually fabricate finished goods on demand.
The very notion that a company can produce products on-demand flies in the face of traditional business paradigms.
Consider the manufacturing process itself. Traditional manufacturing is based on economies of scale: the mass production of identical items. That model drives further practices – logistics, warehousing, sales – an entire business infrastructure that relies on IT resources and support. When small numbers of products can be inexpensively fabricated on demand from an easily manipulated digital model, the business is profoundly affected, and so are the services and support that IT must provide.
There are implications for IT. Designs would proliferate as the number of models multiplies from designers and customers, requiring data storage and security. The concepts of enterprise resource planning (ERP) would change dramatically because the flow of materials into and out of the business would be radically different. Warehousing for work-in-progress and finished goods would be virtually eliminated. Goods could also be built on-site or at remote locations, greatly reducing transportation demands. Our very definition of manufacturing could change. Just imagine an automotive shop that can fabricate certain parts right on the shop floor, or a military unit that can produce key spare parts in the field.
Of course, such fabrication technology is far from perfect today. But it’s an example of the way new technologies and their refinement can redefine concepts and practices that are, in the case of mass production, centuries old. It’s a wake-up call: IT professionals must look ahead at the changing needs of business and position staff and systems to handle the types of changes that may appear on the horizon – or risk being discarded as obsolete.
Notoriously secretive — at least when it pertains to the innards of its data centers — Google is now offering a peek at the pipes and people that deliver its products — Google Search, Gmail, YouTube and Google Maps — to your computing device.
“Where the Internet lives” features stunning photographs by Connie Zhou of several Google data centers to show the general public the server racks, cooling infrastructure and personnel at the tech giant’s facilities that had been under wraps until now.
You can also take a virtual tour of the Lenoir, N.C., data center through the Street View function in Google Maps. The company’s sense of humor shines through during the tour; if you examine each frame carefully, you’ll spot a few interesting items.
It’s an intriguing behind-the-scenes look at the facilities — rumored to number about three dozen in total — required to handle the bulk of the Internet-based workloads that millions of people rely on every day.
Cut costs, improve efficiency. Such is the mantra of many a data center manager. While tech giants like Google and Facebook strive to create better, more energy-efficient data centers, a small team of researchers from Cornell University and Microsoft has gone back in time about 120 years and come up with a way to eliminate another threat to efficiency: cables.
Mathematician Arthur Cayley published a paper in 1889 called On the Theory of Groups – mathematical groups, that is – that was full of graphs and equations.
In 2012, those graphs and equations were used to design a wireless data center network running on a 60 GHz wireless band.
According to the paper’s abstract, the benefits — besides eliminating the cost of networking cables and switches — would include higher bandwidth, better fault tolerance and lower latency. Adopting a spoke-and-wheel rack layout for servers would facilitate communication within each rack, while specially built Y-switches would direct traffic between racks.
This would mean a complete change in server form factor. The basic parts would remain the same – hard drive or solid-state drive, CPU and RAM – with the networking cards replaced by Y-switches. The paper goes on to cover changes to data center routing protocols, MAC-layer arbitration and design schematics for the customized Y-switch. The full document is available on Cornell’s website.
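To get a feel for why a Cayley graph makes an appealing network topology, here’s a toy sketch — not the paper’s actual construction, just the general idea. Nodes stand in for racks, and each rack links wirelessly to a fixed set of neighbors generated by adding a few offsets modulo the rack count. The rack count (20) and offsets (±1, ±4) are illustrative choices; even with only four links per rack, every rack can reach every other in a handful of hops, and multiple link-disjoint routes exist between any pair.

```python
from collections import deque

def cayley_graph(n, generators):
    """Cayley graph of the cyclic group Z_n: node i links to (i ± g) mod n
    for each generator g, so every node has the same small degree."""
    gens = {g % n for g in generators} | {-g % n for g in generators}
    return {i: sorted((i + g) % n for g in gens) for i in range(n)}

def diameter(graph):
    """Longest shortest-path distance between any pair of nodes (BFS from each)."""
    worst = 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

# 20 "racks", each wirelessly linked to 4 neighbors (offsets ±1 and ±4)
racks = cayley_graph(20, [1, 4])
print(diameter(racks))  # any rack reaches any other in at most 4 hops
```

The symmetry is the point: because every node sees an identical neighborhood, routing rules are the same everywhere and no single link failure partitions the network.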
Once servers are cylindrical, it’ll be exciting to see how the buildings surrounding them change to suit. What do you think? More data centers in old missile silos?