The tradeoff between the high cost of better performance and forward-looking energy efficiency is something IT shops must weigh when deciding which servers to buy. As chips shrink and grow more efficient through new manufacturing processes, chipmakers' costs go up, and those costs are passed on to buyers. So how can server chip design keep improving without incurring the cost of building whole new manufacturing facilities?
Researchers in China may help existing chip infrastructures become more efficient by improving the flow of electrons through semi-floating gates. All well and good, but both ARM and Intel already boast fast, energy-efficient server chips, and semi-floating gates are not a new concept.
Let’s start with the Chinese team. In a paper published in the journal Science, the researchers describe a way to combine the floating gates found in flash memory with the traditional logic gates (the circuits that apply Boolean rules to decide which bits pass through) used in most chips.
The researchers’ method essentially modifies how the transistor’s gate is isolated: floating gates are completely cut off from electron sources and drains, whereas other gates are connected. The team says the design can run faster than current chips at an energy-efficient operating voltage of less than 2 volts.
Looking at ARM and Intel chip efficiency improvements, it seems many of these come down to the fabrication process rather than the design of the chip itself. Take a recent ARM development by SuVolta, for example. The company basically swapped out existing transistors for what it calls “Deeply Depleted Channel” transistors, which use dynamic body bias to reduce current leakage while the transistors are switched off. This isn’t really a change to chip design, just a swap in technology to consume less power.
Intel, on the other hand, is shifting to a 22-nanometer fabrication process. This approach shrinks transistors and keeps them cooler during operation, thus helping to conserve energy. And instead of the flat, side-by-side layout used by many chipmakers, Intel has started stacking its transistors. Stacking improves performance and shortens the path electricity travels across the chip, but producing 3D transistors requires up-front manufacturing investment, and it may not provide enough return on investment to make it worth pursuing.
That’s the trouble with both SuVolta’s and Intel’s approaches. They rely on cost-prohibitive manufacturing processes to make smaller and more efficient chips. Every time a new transistor design is developed, a whole new set of equipment must be built. Plus, according to some analysts, Moore’s Law is coming to an end, so smaller chips may not be a long-term solution. Chipmakers can only use manufacturing to boost efficiency for so long before they can’t shrink any further.
The current issues with return on investment for server chip manufacturing combined with the future limits of manufacturing technology are why the semi-floating gates proposed by the Chinese research team may be worth exploring — they’re not based on shrinking parts. For now, however, changes to chip fabrication seem to be making positive strides toward energy efficiency.
What do you think? Do prohibitive fabrication costs and a looming end to Moore’s law mean server chip makers need to find alternative methods for efficiency?
SAN FRANCISCO — The overarching theme here at ChefConf 2013 this week has been a sense of empowerment for IT folk who are used to shoveling coal in the boiler room of an IT infrastructure.
As cloud computing and configuration management tools like Chef and Puppet mature, IT will captain the ship, predicted conference attendees and presenters.
An especially resonant quote from a book called The Phoenix Project was recited from the keynote stage on Thursday morning to drive home this point: “Any COO who doesn’t intimately understand the IT systems that actually run the business is just an empty suit.”
It’s easy to dismiss all this talk as self-serving posturing, but in an increasingly Web-based world, it happens to be true. Consumers now interact with businesses through mobile and Web interfaces much more often than on a sales floor or over the phone. And when was the last time a new startup opened up a storefront without a Web presence? It simply isn’t done these days.
No, now the keys to the castle belong to your company’s computer guy. Let’s hope you’ve been nice to him.
It’s also easy, at a conference like this, to get bogged down in the intricacies of cookbooks and recipes, the finer points of Erlang vs. Ruby, Chef vs. Puppet, DevOps vs. traditional operations. The bottom line, though, is that it’s all a means to the end of making the current world economy work.
Heady stuff. Let’s hope IT wields its newfound power only for good.
Some analysts estimate that Moore’s Law — chipmakers’ ability to double the number of transistors on a chip every 18 months — will end within 10 years. At a certain scale, electrons in silicon circuits begin to behave unpredictably and can no longer be used to process information reliably. While the end of the road may be coming for silicon-based transistors, there may be a rising contender to the computing throne.
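Taking the article’s own figures at face value (a doubling every 18 months and roughly 10 years of runway left), a quick back-of-the-envelope sketch shows how much headroom that leaves before silicon hits the wall:

```python
# Back-of-the-envelope sketch using the figures quoted above:
# transistor counts double every 18 months, with ~10 years remaining.
years_remaining = 10
doubling_period_years = 1.5  # 18 months

doublings = years_remaining / doubling_period_years
growth_factor = 2 ** doublings

print(f"Doublings left: {doublings:.1f}")          # → Doublings left: 6.7
print(f"Transistor multiplier: ~{growth_factor:.0f}x")  # → ~102x
```

In other words, even a pessimistic reading leaves roughly two more orders of magnitude of density growth before silicon runs out of road.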
The 2010 Nobel Prize in physics went to two men — Andre Geim and Konstantin Novoselov — for their research on graphene. The material, a one-atom-thick sheet of carbon derived from graphite, is well suited to data center applications — not to mention consumer electronics, high-speed Internet networking and medical equipment — because it is not only highly conductive but also handles electron movement and heat better than silicon.
Graphene has been tapped as a possible material for flexible touchscreen displays in consumer devices, and its single-electron properties make it an excellent candidate for the transistors of the future.
Envision, if you will, the size of a current enterprise data center. Now imagine if each processor could hold 3 billion transistors and take up only the width of a pencil tip. Add in the fact that a processor based on graphene transistors would run cooler than current silicon-based processors, and you’ve got a change in cooling infrastructure as well.
If graphene keeps rolling at its current development pace, we’ll be in the future before you know it.
There’s been a lot of bluster about ARM servers and how they’ll upend the traditional x86 market when 64-bit ARM offerings come to market. The fact remains, however, that software designed for x86 servers does not readily translate to the ARM architecture. To ease the transition, server maker Boston is offering developers a platform to rework x86 code without having to buy an ARM server: its cloud-based ARM as a Service.
Boston’s product — reminiscent of OpenStack’s TryStack.org, which encourages the use of OpenStack on ARM servers — aims to coax curious but wary developers into an ARM sandbox by providing cloud servers preinstalled with a flavor of Linux, along with development platforms and porting tools.
News of Boston’s AaaS (no, really) comes hot on the heels of news that China’s Baidu has decided to go with Boston rival Marvell’s ARM servers to run its behemoth of a search engine.
As intriguing as Baidu’s ARM evolution and Boston’s cloud service are, will they generate enough buzz for mainstream IT ARM server adoption? Only time will tell.
IBM plans to work with Nokia Siemens Networks to make our smartphones more responsive.
What does this mean for the data center? Well, we’ve learned about placing small data center gateways closer to where the information needs to be, which is very similar to what IBM’s partnership spells out.
According to IBM’s press release, the idea is to break mobile data into smaller chunks that can be stored and processed closer to the source, reducing latency.
“This enables a huge amount of rich data to be processed in real time that would be prohibitively complex and costly to deliver on a traditional centralized cloud,” said Phil Buckellew, vice president of IBM Mobile Enterprise, in the release.
Will we see more unused industrial spaces turned into data centers or will there be tiny, containerized data centers popping up on street corners? It will be interesting to see what shape these improvements take.
Demand for Linux talent is high for another year, and there just aren’t enough seasoned Linux admins to fill the available positions, according to The Linux Jobs Report from Dice.com and The Linux Foundation.
That’s good news for experienced IT staff looking for a change. If you have expertise in Linux, then the only trouble you’ll have is narrowing down which positions you find interesting.
According to the report, these job seekers are in a position to settle for nothing but the best, citing money, work/life balance and flexible hours at the top of the “wants” list. Luckily for job seekers, companies are putting their money where their jobs are, offering higher salaries than many other tech positions. Plus, those already-high salaries rose by an average of 9% from 2012, the report says.
Now seems like an appropriate time to mention that Red Hat Enterprise Linux 6.4 has come out with several new features, right in time for Linux pros looking to add more to their resumes.
Anyone else want to go back to school right now?
We often speak cavalierly about building new data centers and all that goes into them. But unless you’ve actually done it — and I haven’t — it’s easy to take all the effort and talent involved for granted. Just remember, without the data center architects, your smartphone would be decidedly less so.
Once you have the building itself, there are power busways, networking cables and cooling infrastructure to install, not to mention workstations and any creature comforts you want to include for employees. On top of that, you have the cabinets and racks, servers and storage hardware to roll in, all of which is delicate and expensive. Oh, and you have a time limit: Yesterday.
This video from LeaseWeb really brings home just how much goes into setting up the data centers we rely on for computing. It doesn’t mention whether they had to build the facility itself or just needed to move in the equipment and infrastructure.
So even if you’re busy buying chocolates or ties for your Valentine’s Day sweetheart — or picking your zombie flick for Feb. 15’s Singles Awareness Day — take a minute to thank an engineer in your life, and maybe give her a hug.
Chip manufacturing giant Qualcomm has listed job openings for ARMv8 engineers, which makes it the latest vendor to show interest in building 64-bit processors with ARM’s latest design. Though Qualcomm is by no means the first to enter the 64-bit ARM server fray — Calxeda, Marvell, Applied Micro and others have already gotten their feet wet in the ARM market — it’s an interesting development considering the company’s history has been rooted in smartphone and tablet processors, not servers. Since Qualcomm’s project is still in its infancy, many details are still unknown, such as how many cores per processor and what form the chassis will take.
The use of Advanced RISC Machine (ARM) processor chips is prevalent in mobile devices, but until recently ARM’s designs were limited to a 32-bit architecture, which isn’t enough to handle enterprise-sized workloads. Now that ARMv8 is in the hands of chip designers, the energy efficiency of this architecture — just 5 W per chip in HP’s Calxeda-based Redstone server — may start turning more heads away from Intel’s x86.
However, Intel isn’t taking the ARM threat lightly. Ahead of the low-powered army of ARM chips on the horizon for 2013 and 2014, Intel has put a trio of processors in its Atom S1200 line up for sale. Their power draw ranges from 6.1 W to 8.1 W, still a bit more than ARM chips, but power isn’t the only deciding factor for most IT shops.
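To put those wattages in perspective, here is a rough sketch of the annual electricity cost per chip. The 24/7 duty cycle and the $0.10/kWh rate are assumptions for illustration, not figures from the vendors:

```python
# Hedged sketch: annual energy cost per chip at the wattages quoted above,
# assuming round-the-clock operation and a hypothetical $0.10/kWh rate.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.10  # assumption, not a vendor figure

def annual_cost(watts):
    """Dollars per year to power one chip continuously."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

for name, watts in [("ARM (Calxeda, per chip)", 5.0),
                    ("Intel S1200 (low end)", 6.1),
                    ("Intel S1200 (high end)", 8.1)]:
    print(f"{name}: {watts} W -> ${annual_cost(watts):.2f}/year")
```

Per chip the gap is only a few dollars a year; it’s at the scale of thousands of chips per data center that the difference starts to matter.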
Intel’s x86 architecture is well-established — 20 years, give or take a few — and for ARM to really take off, it needs server hardware and software companies to get on board. So far the groups making the biggest splash are Red Hat’s Fedora ARM project, which has been vocal in raising awareness of the low-powered chip’s usefulness in markets such as hyperscale computing and web development, and Microsoft, which hopes to deliver 64-bit ARM support for Windows — both desktop and server.
Is there room in the market for both low-powered options? We’ll get back to you in a year or two.
It’s hard work discovering the secrets of the universe, and now the Einstein@Home distributed computing project has broken the petaflop barrier (roughly the compute power of 61 million iPad 2 processors) in its quest to find pulsars and gravitational waves.
“But,” you might say, “supercomputers can run circles around that!” True, but look at the list of the top 500 supercomputers: only the top 23 run in the petaflop range. With such a large volume of collected scientific data to sift through, a petaflop of extra computing resources is nothing to laugh at. And overall, distributed computing contributes much more than that.
At the time of publication, the distributed computing projects based on the Berkeley Open Infrastructure for Network Computing (BOINC) software system showed a daily average of 7.2 petaflops across more than 720,000 computers. For perspective, the combined power of BOINC-based projects would rank fifth on the list of top supercomputers.
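Those BOINC figures imply a surprisingly modest per-machine average, which a quick sketch makes concrete (this is a mean; real contributions vary widely from machine to machine):

```python
# Rough average implied by the BOINC figures quoted above:
# 7.2 petaflops across more than 720,000 computers.
total_petaflops = 7.2
machines = 720_000

per_machine_gflops = total_petaflops * 1e15 / machines / 1e9
print(f"Average per machine: ~{per_machine_gflops:.0f} gigaflops")
# → Average per machine: ~10 gigaflops
```

About 10 gigaflops per volunteer machine — ordinary desktop territory — yet in aggregate enough to rank among the top supercomputers.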
Distributed and grid computing networks like Einstein@Home, SETI@Home and the World Community Grid provide much-needed, cheap computing power for workload-intensive research projects, and cut down on the costs of building supercomputers. The grid model can also provide scalability and redundancy that a data center would struggle to achieve.
So which will it be: Finding alien intelligence with SETI, proving Einstein’s theories or helping to predict the climate?
Did you enjoy yesterday’s spine-tingler? Then you’ll enjoy today’s terrifying tale of data center despair.
“Total shutdown: A cautionary tale”
Sander van Vugt
Don managed a small data center for a school. He was so confident in his servers’ data storage that he always urged students to keep their data on the network drives alone. “Data that is stored there will be backed up by the IT department, so you’ll always have access to it!” he told wary students.
It was a few days before senior theses were due, the last Friday of October, when the IT department’s phone started ringing. Mark, the help desk guy, answered it.
“No access to the data volume,” he called over to Don.
Mark hung up the phone, and it immediately rang again.
“One more,” said Mark.
“Tell them we’re working on it,” Don replied.
After half an hour the volume still wasn’t available. With a growing sense of dread, Don called in Alex, an external specialist, but even Alex couldn’t find a way to make the volume accessible again. They decided to reboot the server.
When they walked out of the office toward the data center, they met a group of very nervous senior students.
“No comment,” Don said nervously as he pushed past.
Once in the data center, Alex and Don found the server and rebooted it. While booting, it spat out at least 50 different error messages before it crashed completely and hung.
“What do we do now?” said Don, starting to sweat. The specialist tried to reassure him: “Don’t worry, I’ve got it all under control.”
Beyond the sealed, secured doors of the data center, the small crowd had become a shambling, moaning mass of sleep-deprived students.
“You’ve got them under control too, right?” Don asked.
Very late that evening, Don and Alex cautiously ventured from the data center to catch the train home. The crowd, not to be deterred, shuffled along behind and finally caught up with the two technicians at the train station.
Alex’s train arrived before Don’s, so they separated at the railway station. Just as Don’s train arrived, the students shuffled into view. He boarded, hoping the doors would close quickly, and called out to the dead-eyed students: “Wait until tomorrow morning, when the manual recovery is finished.”
Don and Alex had tried everything, including pulling from the backups the IT department was supposed to make every day. Unfortunately, nobody had ever verified the tapes, so the backups were empty. The manual restoration had not been successful.
Alone in the train car the next morning, Don wondered what he would tell the students. Once the train pulled into the station, he knew it was too late. The shambling horde of students came forward as the train doors opened.
When Don didn’t show up, Alex called him several times but all he got was voicemail. Don, it seems, hadn’t been seen since he got off his train that morning. The only thing left of Don was a broken backup tape at the railway station and his school ID badge. He was never seen again.
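Fiction aside, the tale’s lesson is real: a backup you haven’t verified is not a backup. A minimal sketch of post-backup verification in Python — the function names and layout here are illustrative, not from any real backup product:

```python
import hashlib
from pathlib import Path

def sha256(path):
    """Checksum a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Return the source files whose backup copy is missing or differs."""
    problems = []
    for src_file in Path(source_dir).rglob("*"):
        if src_file.is_file():
            backup_file = Path(backup_dir) / src_file.relative_to(source_dir)
            if not backup_file.is_file() or sha256(src_file) != sha256(backup_file):
                problems.append(str(src_file))
    return problems
```

Run after every backup job; a non-empty result means the tape (or disk) is lying to you, and you find out today instead of the night theses are due.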