Framingham, Mass.-based IDC released its Worldwide Quarterly Server Tracker for the second quarter of 2008 (2Q08) today, showing that although the overall server market grew, x86-based systems experienced their slowest growth rate in 23 quarters.
As a whole, the worldwide server market grew 6.4% year over year to $13.9 billion in 2Q08, marking the ninth consecutive quarter of positive revenue growth and the highest Q2 server revenue since 2000.
Unit server shipments grew 11.1% year over year in 2Q08 driven by a hardware refresh cycle and infrastructure expansions, according to IDC.
Although volume systems revenue grew 2.1% in 2Q08, the segment underperformed the market for the first time since 4Q06, as server OEMs experienced strong pricing pressure in the marketplace.
Matt Eastwood, group vice president of Enterprise Platforms at IDC, said in a statement, “IDC saw strong growth in blades, Unix systems, and IBM System z demand across the marketplace. Diversity in market demand demonstrates customers do not believe a single standardized infrastructure is capable of meeting all their computing needs.”
x86 Server Market Dynamics
x86 server market growth slowed to 3% year over year in 2Q08 ($7 billion worldwide), the segment's slowest growth rate in 23 quarters. 2Q08 was also the first quarter since 4Q00 in which revenue growth for non-x86 systems outpaced that of x86-based systems.
Could it be that server physical sprawl is slowing? Now that virtualization is mainstream, have people slowed down on their server hardware acquisitions?
IDC blames the x86 server market slowdown not on virtualization, but on the pricing climate, saying average selling values declined 8.4% year over year in 2Q08.
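A quick sanity check shows how falling prices can eat up unit growth. Note this mixes IDC's market-wide unit figure with the x86 ASV figure, so it is purely illustrative:

```python
# Back-of-the-envelope check: revenue growth is roughly unit growth
# multiplied by average-selling-value (ASV) growth. Figures are the
# IDC numbers cited above; the unit figure is market-wide while the
# ASV decline is the x86 figure, so treat this as illustrative only.
unit_growth = 0.111   # server unit shipments up 11.1% year over year
asv_change = -0.084   # x86 average selling values down 8.4%

implied_revenue_growth = (1 + unit_growth) * (1 + asv_change) - 1
print(f"implied revenue growth: {implied_revenue_growth:.1%}")  # roughly 1.8%
```

An 8.4% price decline nearly wipes out double-digit unit growth, which is consistent with the low single-digit x86 revenue growth IDC reported.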
“The pricing challenges many OEMs experienced, particularly in the x86 server market, is a concern as it may foreshadow a slowdown in market demand as enterprise budgets face further scrutiny in the second half of 2008,” Eastwood stated.
“While all the major vendors exhibited strong unit growth, there was significant price competition throughout the quarter,” stated Jed Scaramella, senior research analyst for Datacenter Trends at IDC. “Low-end volume servers, such as 1- and 2-socket systems, are somewhat viewed as commodities and experienced the most pricing pressure. Additionally, the quarter was made noteworthy by the fact that several of the tier-one vendors began shipping their new systems targeting large-scale datacenters. Typically, these are stripped down servers that are designed to operate at maximum power efficiency. All components and features that are not essential, including server redundancy, are eliminated to reduce the capital expenditure of these datacenter customers.”
Nashua, N.H.-based Illuminata analyst Gordon Haff said speculation about virtualization causing server sales to decline has always swirled but never seems to materialize.
“I’ve been hearing the ‘won’t people buy fewer servers?’ question every time there were faster processors, more processor cores, etc., for as long as I’ve been an analyst. And the market just keeps on growing,” Haff said.
“What you are seeing here though – and which relates to virtualization to at least some degree – is the relative popularity of larger servers,” Haff said. “Virtualization really helps people more effectively utilize larger servers even when single apps don’t need all the horsepower. Even within the x86 server space we’ve seen more interest in 4-socket servers after that category was in decline for years.”
Blade Server Market Shows Strong Shipment and Revenue Growth
Although blade revenue growth decelerated slightly in 2Q08, year-over-year revenue growth of 40.8% was the third fastest of the past two years, IDC reported.
Overall, bladed servers, including x86, EPIC, and RISC blades, accounted for $1.2 billion in the second quarter, or 8.8% of quarterly server market revenue.
HP held the number 1 spot in the blade market with 53.3% market share and IBM held the number 2 position with 24.8% share. Dell and Sun also experienced blade revenue growth in 2Q08.
There are two ways of measuring application performance from the end user’s perspective: synthetic transactions and real end-user monitoring.
With synthetic transactions, an admin performs typical user functions while on a scripting screen; those actions are captured and run across multiple locations. “Companies with dispersed offices often use synthetic transactions to get a baseline of what a user is seeing for an ERP, HR or other corporate software application,” said David Langlais, director of product management at BMC Software. If an application takes longer than the determined baseline time to respond, IT managers would get an alert.
Synthetic transactions can also help IT managers track down a problem in the WAN. Is the problem inside or outside of the firewall? Run a synthetic transaction inside and outside the firewall to find out. “If my transactions are essentially the same from inside and outside the firewall, it’s not the outside network that’s impeding performance,” Langlais said.
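The pattern Langlais describes can be sketched in a few lines. This is a minimal illustration, not any vendor's actual tool; the baseline and tolerance values are assumptions, and real products script full multi-step user flows from many locations:

```python
# A minimal sketch of synthetic-transaction checking. The baseline and
# tolerance are assumed values; real tools capture whole scripted user
# flows and replay them from multiple locations.
import time
import urllib.request

BASELINE_SECONDS = 2.0  # baseline response time established earlier

def time_request(url: str) -> float:
    """Run one scripted transaction and return its elapsed time."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.monotonic() - start

def over_baseline(elapsed: float, baseline: float = BASELINE_SECONDS) -> bool:
    """Alert condition: the transaction took longer than the baseline."""
    return elapsed > baseline

def outside_network_suspect(inside_s: float, outside_s: float,
                            tolerance: float = 0.25) -> bool:
    """If inside- and outside-firewall timings are close, the external
    network is probably not what's impeding performance."""
    return (outside_s - inside_s) > tolerance
```

Running the same timed transaction from inside and outside the firewall and comparing the two numbers is exactly the isolation step Langlais describes.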
Real end-user monitoring tools use probes to track real user transactions in real time. “Companies that tend to face the public on the internet – Expedia, eTrade, Amazon – all use real user experience,” Langlais said.
So what model is the best fit for your organization?
Langlais outlined his recommendation in a recent email:
In general, I think corporate applications which are much more structured are well suited for synthetic. If there are many remote sites with many users (not so common in corporates) then RUM is a good complement. In general, RUM is well suited for mass public applications into e-commerce type applications where you are trying to find out how large groups from geographical areas are getting response. Here synthetic is a good complement to get the baseline response to critical paths through an application.
Synthetic is best for laying down baseline response times for the most common and most critical paths through an application. This is also very important when the application goes over either the Internet (as in a site like Expedia or Yahoo) or over a VPN over a corporate Intranet or public Internet. The implementation of synthetic there works best if synthetic transactions are used outside the firewall and inside the firewall in order to get the comparison of the effect of the Intranet or Internet.
RUM (or real user experience) is best at giving two main perspectives; the real responsiveness of an application (or portion thereof) for a group of users and the actual behavior of users of an application. This second part lets you know where customers thread through an application and where they have problems or abandon an application session.
RUM is limited in that you have no control over what most customers do (not so true for in-house applications with strict flows). In many public facing applications, users may go in and not follow all the way through (a good example is in Expedia when you don’t go all the way to book a flight that you have searched for).
Many proponents of RUM are satisfied with this limitation however and spend a lot of effort pulling out the information they want.
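The kind of information RUM proponents pull out of session data can be sketched as a simple funnel. The page names and flow here are made up, loosely modeled on the Expedia example above:

```python
# Hedged sketch: given per-session lists of pages visited (hypothetical
# data, not from any specific RUM product), count how many sessions
# reach each step of a critical path -- and therefore where users abandon.
from collections import Counter

CRITICAL_PATH = ["search", "select_flight", "traveler_info", "payment", "booked"]

def funnel(sessions):
    """Return how many sessions reached each step of the critical path."""
    reached = Counter()
    for pages in sessions:
        for step in CRITICAL_PATH:
            if step in pages:
                reached[step] += 1
            else:
                break  # session abandoned before this step
    return [reached[s] for s in CRITICAL_PATH]

sessions = [
    ["search", "select_flight", "traveler_info", "payment", "booked"],
    ["search", "select_flight"],  # abandoned after selecting a flight
    ["search"],                   # searched only, never booked
]
print(funnel(sessions))  # [3, 2, 1, 1, 1]
```

The drop from one count to the next shows where users thread out of the application, which is the second perspective Langlais attributes to RUM.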
There are also outside services that will run synthetic transactions for you. Give them a script of your transactions, and they replay them from all over the world. “A lot of the Internet-facing sites do that, as well as a lot of large corporations,” Langlais said. “The cons of outside services is that you have less granular control over what you are measuring and the costs can escalate very quickly. Basically, the more you measure and make use of the measurements, the more you want to measure.”
For more info, check out our recent application performance management tutorial.
This week I wrote a follow-up story on VMware Inc.’s virtualization performance benchmarking tool, VMmark, and found it is mainly used by vendors as a way to market their servers.
Server vendors run the VMmark test under a set of guidelines and submit results to VMware for posting. It is my suspicion that vendors play leapfrog with VMmark, looking at existing VMmark results and submitting their own only when theirs are as good or better.
For instance, IBM submitted a benchmark for its 16-core System x3850 M2 running VMware ESX v3.5, which trumped all other results published as of March 2008. IBM then issued a press release to brag about the results, but within a few months Dell submitted results for three PowerEdge systems sporting better virtual machine (VM) performance than IBM, and Hewlett-Packard (HP) beat them all with its ProLiant DL585 G5 server results published August 5.
HP also sent out an email to press this week boasting its top 32-core results, but didn’t mention one minor detail: it is the only vendor with results in the 32-core category so far. Sure, HP is number one. It is the only one.
System Administrator Bob Plankers sums this game up nicely in his blog with a post called “Why VMmark Sucks.” Here is what Plankers had to say:
“Having a standard benchmark to measure virtual machine performance is useful. Customers will swoon over hardware vendors’ published results. Virtualization companies will complain that the benchmark is unfair. Then they’ll all get silent, start rigging the tests, scrape and cheat and skew the numbers so that their machines look the greatest, their hypervisor is the fastest. Along the way it’ll stop being about sheer performance and become performance per dollar. Then CapEx vs. OpEx. Watt per tile. Heat per VM. Who knows, except everybody will be the best at something, according to their own marketing department.”
In addition, the benchmark is a real pain to set up and run, and the ‘free’ VMmark software requires other expensive software to work. According to VMware’s website, VMmark requires licenses for the following software packages:
- Microsoft Windows Server 2003 Release 2 Enterprise Edition (32-bit)—three 32-bit copies per tile (two for virtual machines and one for that tile’s client system), and one 64-bit copy per tile (for the Java server virtual machine)
- Microsoft Exchange Server 2003 Enterprise Edition
- SPECjbb2005 Benchmark
- SPECweb2005 Benchmark
Plankers said he won’t be wasting any time or money running VMmark. “Instead, I’ll be in meetings explaining to folks why we are maxed out at 30 VMs per server when the vendor says they’ll run 50. Or why we chose VMware over Xen, when Xen claims 100 on the same hardware. I’ll have to remember the line from the FAQ that says that ‘VMmark is neither a capacity planning tool nor a sizing tool.’
Which begs the question: if it isn’t for use in sizing or capacity planning, exactly what is it good for?”
VMware says the benchmark is good for users who are making hardware purchasing decisions.
“The intention [of VMmark] is that customers can look at the results and make decisions based on what they see. It isn’t just about the fastest server; it’s about making system comparisons: between blades and rackmounts, or a two-core and a four-core system. Someone can see how much more performance they get from upgrading to four-core processors, for instance,” said Jennifer Anderson, the senior director of research and development at VMware.
This makes sense, but as Plankers said, users should beware of benchmark manipulation by vendors and know that the results do not reflect the same workloads that users will run in their own data center environments.
Microsoft’s announcement of licensing changes on 41 of its server software products was a welcome change for users (and analysts) alike, although as reporter Bridget Botelho wrote, the virtualization licensing policy comes with a catch.
As it turns out, the change may or may not have an effect on your company’s disaster recovery testing. This morning Richard Jones from Burton Group wrote about the licensing change’s effect on disaster recovery. The answer? It depends.
Let’s first take a look at what the licensing situation was before the Microsoft announcement this week. From Jones:
Prior to the revisions, a user could not transfer an application license to another physical server more often than once every 90 days. Legally, this didn’t allow for disaster recovery testing with only one license for your application instance, nor for any type of disaster event that would result in failing a service over to a recovery site for less than 90 days. You would need to either stay at your failover site for 90 days minimum, or you would need to purchase additional licenses for your recovery sites, even though they would not be in use except during a disaster event or testing.
So then the announcement changes all that, right? Now you can be fearless in your disaster recovery testing, knowing that Microsoft isn’t going to punish you for transferring licenses from one data center to another, right? Well, not quite.
Microsoft’s press release on the licensing changes doesn’t go into sufficient detail, but further documentation exists: a Microsoft Word document on the so-called Application Server License Mobility. According to the document, the change “allows you to freely move both licenses and running instances within a server farm from one server to another.” It then defines a “server farm”:
A server farm consists of up to two data centers each physically located in the following areas:
- In a time zone that is within four hours of the local time zone of the other (Coordinated Universal Time [UTC] and not DST), and/or
- Within the European Union (EU) and/or European Free Trade Association (EFTA)
Each data center may be part of only one server farm. You may reassign a data center from one server farm to another, but not on a short-term basis (that is, not within 90 days of the last assignment).
Why does this definition, and in particular the bullet point about four time zones, matter? Jones writes:
For those who have disaster recovery centers within four time zones of your production data center, you now only need one license per covered Microsoft server application instance. This can translate into savings in your business continuity plan. However, those of you who have off-shored – or are looking to off-shore – your disaster recovery solution, you may not be able to reap the benefits of this licensing revision. You will want to take this into account when calculating your potential savings by off-shoring – it may change your plans.
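The four-time-zone rule reads like something you can test mechanically. Here is a rough sketch, using standard-time UTC offsets (the license terms exclude DST) and a deliberately abbreviated EU/EFTA set; the real membership list is much longer:

```python
# Sketch of the "server farm" eligibility test described above. UTC
# offsets are standard-time hours (DST excluded, per the license terms).
# EU_EFTA is a simplified stand-in set, not the full membership list.
EU_EFTA = {"DE", "FR", "NL", "NO", "CH", "IE"}  # abbreviated example set

def same_server_farm(offset_a: float, offset_b: float,
                     country_a: str = "", country_b: str = "") -> bool:
    """True if two data centers can form one 'server farm' under the rule:
    within four hours of each other, and/or both within the EU/EFTA."""
    within_four_hours = abs(offset_a - offset_b) <= 4
    both_in_eu_efta = country_a in EU_EFTA and country_b in EU_EFTA
    return within_four_hours or both_in_eu_efta

# US East Coast (-5) and US West Coast (-8): 3 hours apart -> eligible
print(same_server_farm(-5, -8))   # True
# US East Coast (-5) and India (+5.5): 10.5 hours apart -> not eligible
print(same_server_farm(-5, 5.5))  # False
```

The second case is exactly the off-shored disaster recovery scenario Jones warns about: the two sites cannot form one server farm, so the license mobility benefit does not apply.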
Last week, while doing some background research for our application performance management tutorial, I got into a discussion with Julie Craig, a senior analyst at Enterprise Management Associates, about the difference between application monitoring and managing. While some tools can do both (i.e., troubleshooting and user experience monitoring), selecting the right tool for the job often depends on who’s using the tool.
Application performance metrics provide information about things like response time, quality of experience and user experience, Craig said. Application management products typically go “deeper” into the technology underlying the application, so that they can do the following:
- Detect performance problems
- Detect availability problems
- Correlate information from multiple underlying technologies to indicate overall application health.
“When application problems occur, [application management tools] isolate potential root cause to specific infrastructure elements or types,” Craig said. “For example, a performance problem might be traceable to poor database performance. This could be due to multiple factors, including poorly written SQL calls within application code, failure or potential failure of servers or network connections, too many users or transactions creating bandwidth bottlenecks, or potentially hundreds of other factors (or a combination of several). Application management solutions help IT teams track down problem source and hopefully minimize the amount of time that cross-functional technology engineers have to spend on isolating root cause and fixing the problem.”
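The correlation step Craig describes can be reduced to a toy sketch. The tier names and thresholds below are illustrative assumptions, not from any product:

```python
# Toy sketch of correlating per-tier metrics into overall application
# health and a list of candidate root causes. Tier names and thresholds
# are made-up examples, not from any vendor's tool.
THRESHOLDS = {"web_response_ms": 500, "db_query_ms": 200, "net_latency_ms": 100}

def diagnose(metrics):
    """Compare each tier's metric to its threshold; return overall
    health plus the tiers that look like candidate root causes."""
    suspects = [name for name, value in metrics.items()
                if value > THRESHOLDS[name]]
    health = "degraded" if suspects else "healthy"
    return health, suspects

health, suspects = diagnose({"web_response_ms": 900,
                             "db_query_ms": 450,
                             "net_latency_ms": 40})
print(health, suspects)  # degraded ['web_response_ms', 'db_query_ms']
```

Real application management tools do far more (tracing individual transactions, correlating events across technologies), but the output is the same shape: an overall health state plus a shortlist of suspect infrastructure elements for the cross-functional team to investigate.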
Taking responsibility for app performance
According to David Langlais, the director of product management at BMC Software, senior IT staff care about whether end users call in to complain about an application performance issue. “For senior staff, seeing the inside of a JVM [Java virtual machine] is not that important.” These technologists care about technology’s results.
But still, someone has to get under the hood and take charge of the problem. With all of the different pieces of infrastructure involved in a multi-tier application, when an end user says, “There’s a problem,” every single person in an IT department could potentially get involved — from application programmers to network or database managers.
“The responsibility of performance management is distributed, but organizations don’t really like to work that way,” Jasmine Noel of Ptak Noel & Associates said. “What I’m seeing is the folks managing the application servers are assuming the broader role of Web applications manager, and they become the one throat to choke.”
Application performance management tools list
Craig emailed me the following list of vendors in the application performance space. The first list includes examples of tools that help senior IT managers get an end-user view of application performance. The second set lists application management tools for server managers looking for a deeper analysis. Many tools overlap both categories:
Application monitoring tools
- CA (Wily)
- Quest (Foglight)
- Oracle (Empirix)
- Citrix Edgesight
- BMC Proactivenet
- Coradiant (TrueSight)
- HP Mercury (now HP Business Technology Optimization Software)
- ASG Application Management
- This group also includes hosted services like Application Performance, Gomez, and Keynote.
Application availability, troubleshooting, root-cause analysis:
- Compuware Application Delivery Management
- Quest (Foglight)
- CA (Wily)
- BMC Proactivenet
- HP Mercury (now HP Business Technology Optimization Software)
- ASG Application Management
- IBM (Tivoli)
- eG Innovations
In your IT department, who’s in charge of application performance? Do you know of other application performance tools we should add to the above list? Leave your feedback in the comments.
When server vendors introduce new blade servers these days, they often mention virtualization in the same breath, often touting the number of virtual machines (VMs) their hardware can support. But those numbers are hardly the result of scientific method.
For instance, San Diego, Calif.-based Verari Systems recently announced that its VMware ESX 3.5-certified VB1257 for BladeRack 2 XL supports up to 16 VMs, twice as many as competitive offerings. After speaking with Verari, I asked the competition — Sun Microsystems, Hewlett-Packard and Dell — how many VMs their blades can hypothetically support, and was given some big numbers.
But are these server vendors asking the right question? According to Anne Skamarock, a research director at Focus Consulting, the answer is no. Although vendors boast about the number of VMs their hardware supports, “it really is a silly way to look at it,” she said.
“The number of VMs supported depends on the workload. For CPU-intensive workloads, memory will also be a significant factor in performance,” Skamarock said. “I have spoken with customers who are running 30 VMs per 8-core system and expect to increase that to 50 VMs per system.”
Skamarock said Virtual Desktop Infrastructure adds another twist. “The rule of thumb is six to eight virtual desktops per core, but again, memory will be a big issue here depending on the OS.”
According to preliminary data from SearchDataCenter.com’s 2008 Purchasing Intentions Survey, 61% of respondents run fewer than 10 VMs per server, while 33% run 10 to 25, and a mere 5% run more than 25 VMs on a server.
Vendors make big VM support claims
According to VMware Inc.’s website, server consolidation ratios commonly exceed 10 virtual machines per physical processor; so presumably, a blade server with two CPUs, like Verari’s VB1257, should be able to support at least 20 VMs.
Within HP’s ProLiant blade server line, the ProLiant BL460c/465c and BL680c/BL685c would be a good choice for a virtual server platform, primarily because they offer a large memory footprint, which means more than 16 VMs per blade in both cases, plus more network expansion and storage performance, HP spokesman Eric Krueger said.
“Keep in mind of course the number of VMs always varies – the number could be higher or lower depending on the needs of the application/VM – but based on the rule of thumb … the BL460c can support up to 16 VMs and the BL680c up to 32,” Krueger said.
Sun Microsystems Inc. claims its Sun Blade servers pack two and three times that many VMs. The Sun Blade X6250, which has up to eight cores with Intel Xeon processors, 64 GB of RAM, 110 Gbps I/O and 800 GB of internal storage, supports 36 VMs; the Sun Blade X6450, with two or four dual-core or quad-core Intel Xeon processors and up to 96 GB of memory, can support up to 42 VMs; and the Sun Blade X8450, with 16 cores per module and 128 GB of memory, supports up to 48 VMs, according to Sun.
Dell was hesitant to name a number of VMs its PowerEdge blade servers can support, because the number depends on a number of factors, like workload, memory and I/O. “Dell has blades that support up to 66 loaded VMs. This is based on VMware’s VMmark benchmark test,” a spokesperson said. “This is an area where we are doing quite a bit of work, so stay tuned.”
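The rules of thumb the vendors quote can be reduced to a back-of-the-envelope sketch. The per-core and per-VM memory figures below are my assumptions, and as everyone quoted above cautions, real density depends entirely on workload:

```python
# Back-of-the-envelope VM sizing from the rules of thumb quoted above.
# vms_per_core and mem_per_vm_gb are assumed defaults, not vendor specs;
# this is a sketch, not a capacity-planning tool.
def estimate_vms(cores: int, mem_gb: int,
                 vms_per_core: float = 2.0, mem_per_vm_gb: float = 2.0) -> int:
    """Supported VMs = the tighter of the CPU and memory limits."""
    cpu_bound = int(cores * vms_per_core)
    mem_bound = int(mem_gb // mem_per_vm_gb)
    return min(cpu_bound, mem_bound)

# An 8-core, 64 GB blade like the Sun Blade X6250 described above:
print(estimate_vms(cores=8, mem_gb=64))  # 16 -- CPU-bound under these assumptions
```

Change the assumptions (heavier VMs, more cores) and the answer swings wildly, which is Skamarock's point: the headline VM count is a silly way to compare hardware.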
So I’m wondering: Are VM support numbers a consideration when buying server hardware, or is it too subjective? Let us know what you think.
When AMD invited me to listen to a webcast regarding the IDF forum, I expected to hear some news: maybe its 45nm processor Shanghai would be coming out earlier than expected to make up for Barcelona’s delay? That would be big and would help AMD make up some ground in the processor product release wars against Intel.
Instead, the conference served as a means for AMD to plant the seeds of skepticism in the minds of IDF attendees before the conference. AMD executives spent the hour-long call marketing their existing processing and graphics technologies and bashing Intel.
Randy Allen, the senior vice president and general manager of AMD’s computing solutions group, cited benchmarks showing AMD’s Opteron processors in a good light, like the SPECweb2005 benchmark showing the Opteron Model 2356 and Model 8356 hold the top x86 Web performance records for two- and four-processor servers.
Allen also touted AMD’s virtualization assist technology, AMD-V with Nested Page Tables, which received high praise from a VMware engineer recently, and he noted that quad-core Opteron is used in a total of seven of the top-performing systems on the most recent Top 500 Supercomputers list, including IBM’s No. 1 RoadRunner.
“We have our swagger back,” Allen said.
He failed to note, however, that AMD’s Opteron chips were used in only 56 systems (11.2%) on the list, which is down from 78 systems six months ago. Intel processors were used in 74.8% of the world’s supercomputers (about 374 systems), up 4% from six months ago.
When a reporter raised this issue during the press conference, Allen said that having Opteron in the top-performing computer and in systems high on the list is more notable than the slip in the total number of systems on the Top 500.
In addition to hyping AMD products, Allen also spent plenty of time directly attacking Intel, saying the company has an easy time innovating because it simply mimics AMD’s work.
“Intel adopted our power efficiency technology, our multicore technology and you will see them copying the Direct Connect Architecture and HyperTransport technologies we developed five years ago. … Imitation is the sincerest form of flattery, but it is also annoying.”
Surely, IDF conference-goers will hear similar hype about Intel products, and negative remarks about AMD, from Intel executives next week.
The only mention of AMD’s next processor, the 45-nanometer chip code-named Shanghai, was that it is scheduled for release later this year and will be delivered on time. Shanghai will consume 20% less power at idle than Barcelona and will have 6 MB of L3 cache (compared with Barcelona’s 2 MB).
All in all, the press conference simply restated old AMD news.
Thanks for the recap, AMD. That is an hour of my life I’ll never get back. And if you just finished reading this blog, hopefully it’s only a few minutes of your time that you can never get back.
San Diego-based Verari Systems announced a Trade-In Program through its Verari Financial Services (VFS) group that offers competitive values for old IT equipment.
The VFS Trade-In Program is a great way to recycle aging hardware and can be used to reduce the purchase price or monthly lease payment for new Verari equipment. The Trade-In Program applies to all IT assets including PCs, servers, networking and telecommunication equipment.
Other companies offer equipment recycling programs, including Dell, which recycles unwanted Dell equipment for free.
Ed Lucente, the manager of financial services and partners for Verari, said in an email that the VFS Trade-In Program is usually tied to a new Verari project, purchase or lease, but can be offered with or without that connection.
“VFS will provide trade-in services even if Verari computers/storage are not included since customers appreciate the incredible convenience, trade-in credits, and resulting cash infusion to their business,” Lucente said. “Our Trade-In Program can stand on its own entirely based on its value-add.”
The Trade-In Program complies with state-to-state e-waste recycling regulations, and offers customers a certification as proof that VFS is compliant with local regulatory laws for the proper, responsible removal and disposal of equipment, Lucente said.
What Verari does with the equipment depends on its value in the market at the time of removal. Lucente said, “1. We can refurbish and sell as used to another customer. 2. We can disassemble and sell parts in the current market. 3. We can do all the above domestically or internationally.”
VFS developed the program to advance adoption of Connexxus desktop consolidation, which centralizes compute and storage capacity in the data center. Using Connexxus eliminates the need for many existing workstations, servers and desktops, which can lead to the removal of hundreds or thousands of pieces of equipment.
VFS manages the pick-up, transportation and recycling of used IT assets based on the customer’s schedule and local labor union provisions. Data-wiping services for hard drives can be provided in compliance with regulatory standards. Logistics and costs related to responsible equipment disposition are included in the lease with one monthly payment.
An article by Steve Denegri over at Serial Storage Wire, “The Data Center’s Green Direction is a Dead End,” asks some interesting questions about the “green” movement in IT, and has prompted blogger Robin Harris to ask more questions about the severity of the issue.
Denegri makes a few comparisons between the storage industry and the auto industry that are distracting and misleading. In fact, if the storage industry fails to make greater moves toward energy efficiency, it could face dire consequences. Denegri writes:
To get a glimpse of the future of storage, the automotive industry saw a major transition to energy-efficient products beginning in the late 1970s. Since that time, the automotive industry has seen its supplier-to-OEM ratio shrink by a factor of five. If your company is one of the many suppliers to OEMs in the storage industry, then you should recognize that this “green” trend, over the long term, bodes poorly for your company’s existence, and consequently, your personal livelihood. The cold, hard truth is that an ample supply of energy is necessary to grow any business over the long-term, and the storage industry is shying away from the harsh reality that a sufficient amount of energy is, unfortunately, not available to keep the industry growing.
While consolidation may happen to cut costs (we see it all the time in many industries), I don’t know if this is necessarily a result of the “green trend” as much as it is a result of the general economic growth model that exists in which companies are expected to be more and more profitable in subsequent years.
More from Denegri:

In a study by The Uptime Institute called The Invisible Crisis in the Data Center: The Economic Meltdown of Moore’s Law, a report which was published at roughly the same time as the aforementioned EPA study, the authors cite that the three-year cost of powering a server exceeds the purchase cost of the server beginning next year. Imagine buying a new car faced with the dilemma that the gas required over the first three years of ownership will exceed the cost of the vehicle. Now consider how the storage industry would respond to the problem: furnish the consumer with frequent refreshes of new models of vehicles that get more miles to the gallon. Chalk up yet another example of the storage industry furnishing its customers with products that they do not really want. The customer wants cheaper gas, not the financial burden of a new car every few years.
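The Uptime comparison can be sanity-checked with rough numbers. The wattage, electricity rate and cooling overhead below are my assumptions, not figures from the study:

```python
# Rough arithmetic behind the "three-year power cost vs. purchase price"
# claim. All inputs are illustrative assumptions, not from the study.
watts = 500            # assumed average draw of a typical 2008 server
rate = 0.10            # assumed dollars per kWh
cooling_factor = 2.0   # assume each watt of IT load needs roughly a watt of cooling
years = 3

kwh = watts * 24 * 365 * years / 1000
cost = kwh * rate * cooling_factor
print(f"3-year power + cooling cost: ${cost:,.0f}")  # $2,628
```

Against a low-end volume server in that price range, the claim that energy costs rival the purchase price is at least plausible.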
Sure, some customers want to be able to continue to use their old servers, but many are seeing the benefit of new technologies that provide more computing power at less energy cost. To say that the storage industry will be providing customers with products they don’t really want isn’t necessarily a legitimate statement. Customers can’t get cheaper energy, that’s not how the energy market works today – you pay the price that energy deregulation has helped create. Faced with the impending increase in energy costs cited in the Uptime study and the EPA report, companies are going to be looking for better technologies. Hybrid cars are the automotive industry’s response to higher prices at the pump; and, despite Denegri’s assertion that customers don’t really want a new car every few years, they seem to be selling pretty well (especially compared to the passé Hummer and SUV sales). As we know, these technologies don’t immediately advance to the most optimized version in one step. Instead, new versions come out and companies can choose to upgrade when they do, or hang on and wait until the next better version arrives on the scene.
Denegri continues:

However, data center computing has achieved its favorable reputation and widespread adoption thanks to its performance, not its energy efficiency. Said another way, the Indianapolis 500 isn’t won by the driver who can make the most laps on a single tank of gas. If, going forward, data center computing is hindered by the need to expand performance not unabated but rather at a predetermined rate of consumed electricity, then the industry simply can’t expand much further, it’s that simple.
While this seems true on its face, performance can increase at lower energy consumption levels. One example is the use of solid-state drives (SSDs) and Flash memory. Fusion-io’s ioMemory technology uses less than 1% of the power required by a traditional SAN. HP has integrated the technology into its c-Class server blades. And EMC and Sun have SSD products incorporated into their new servers.
In fact, what it seems that Denegri is actually advocating is a concerted effort by the IT industry to work to increase power supplies in order to decrease the cost of power in the United States. This is a tall order, as the EPA estimated in its report that the U.S. would have to build 10 additional power plants if data center energy consumption were to continue unabated.
Denegri again:

Instead of elevating the rhetoric on the essential need to expand the capacity of the power grid, the storage industry is incomprehensibly embracing the energy efficiency paradigm, deploying marketing strategies that resemble those of the oil and gas industry. The websites of storage companies these days make mention of carbon footprints, green initiatives, and environmental stewardship, clearly having no idea that they are using buzz words that highlight the industry’s dire state. A recent press release from one OEM actually boasted of its efforts to generate electricity at its headquarters from burning its employees’ garbage! With this as the most suitable example, the world is deploying utterly ridiculous new strategies to generate electricity, none of which have any scale to them. Unfortunately, the storage industry is buying into this nonsense.
Necessity is the mother of invention. When energy was plentiful and cheap, people thought very little about their use of power; it was deemed a necessity rather than a luxury. But as energy costs rise, people respond by conserving. With the unintended consequences of energy production for the environment clearly visible, it does not seem to me to be “nonsense” to buy into the energy efficiency paradigm. If there are more efficient means of accomplishing the same tasks, why not embrace them?
Supply-side economists argue that increased demand spurs increased supply, and thus growth in all sectors. However, energy industry deregulation appears to have created a situation in which demand has risen while supply has not, resulting in higher costs for energy consumers and higher profits for energy producers. So perhaps re-regulation of energy companies is what IT companies should be lobbying for? I can’t argue that cheaper energy wouldn’t be welcome (I know I was happy when the price of gas at my local station dropped from $4.43 to $4.05). But I’m not sure that storage companies have the wrong idea in embracing efficiency. In the long run, more efficient products will help contain those costs. And however energy is produced, there will be an environmental impact that efficiency can reduce. The way I see it, the more companies that jump on the efficiency bandwagon without significant detriment to performance, the better for everyone.
IDG World Expo, which organized the LinuxWorld and Next Generation Data Center Conference & Expo (NGDC) Aug. 4-7, 2008, at the Moscone Center in San Francisco, announced the “successful completion of the show” and claimed that, combined, the shows attracted nearly 10,000 participants.
An official audit of the actual attendance won’t be available until October 2008, but I doubt there were that many people there, and I’m not the only one. The buzz at the show this year was that attendance was far lower than in previous years. I also attended VMworld 2007 at the Moscone Center last September, so I know what 10,000 people looks like at Moscone. This crowd was much thinner.
Not that drawing fewer than 10,000 attendees is a bad thing; some say it signals true success for Linux: the buzz about the operating system has fizzled because it is now mainstream. That may be, but I think our toilet of an economy might also be at play in the potentially lower attendance.
As for the program, there were 200 combined educational sessions, tutorials and hands-on labs across 17 tracks, including applications, mobile Linux, virtualization and advanced facilities management in the data center. We covered a handful of them, which can be found on our LinuxWorld/NGDC roundup site.
The themes throughout the show were mobile Linux, power consumption and green technologies, and virtualization. Keynote presentations from executives at Merrill Lynch, McKesson, Cisco Systems Inc., IBM, Citrix Systems Inc. and Lucasfilm Ltd. explored many of these themes.
On the exhibit show floor were companies including Astaro Corp., Barracuda Networks, Copan Systems, Opengear, Canonical, Access, Oracle Corp., DataSynapse, Cisco, Fujitsu, Intel, Talend, Brocade, Ubucon, Bivio Networks, VMware, SugarCRM, Rackable Systems, Wind River and Dice.
The Golden Penguin Bowl pitted three geeks from Novell and SUSE against three nerds from Ubuntu, who battled over who could answer the most obscure trivia about sci-fi, high tech, Linux and all things geek.
Next year’s LinuxWorld Conference & Expo and Next Generation Data Center Conference & Expo are scheduled for Aug. 10-13, 2009, at the Moscone Center.