LAS VEGAS – The zEnterprise (z196) may be doing well with its core audience, according to Gartner research VP Mike Chuba, but reaction to the platform at the Gartner Data Center Conference was shaky, and reaction to the BladeCenter Extension flat-out negative.
Chuba mentioned that IBM projects a strong Q4 for z196 sales, but an instant poll taken at a Gartner Data Center Conference session showed that almost half of the audience, 47%, doesn't plan to install a z196 by the end of 2011. Only 19% said they would install one by mid-2011. The results seemed to catch even Chuba off guard; he mentioned that IBM would certainly be disheartened by the news.
The tepid response to the z196 may partly have to do with the high costs associated with mainframe software. Another live poll showed that 26% of respondents felt that the single largest inhibitor to the growth of mainframe usage in their organizations was IBM's high software costs. Chuba mentioned that for the first time in a while, third-party software was not the main inhibitor. Quipped Chuba, "You've realized you're already paying a lot to IBM for their software."
Reaction to the zEnterprise BladeCenter Extension (zBX), an optional add-on to the mainframe, was even more worrisome, despite benefits such as the blades' ability to participate in cabinet power management when housed in the mainframe cabinet. Practically the entire audience polled at the mainframe session, a whopping 98%, said they're not likely to install a zBX before the end of 2011. One reason may be that Windows support for zBX isn't currently in the cards. More importantly, IBM may need to reevaluate its marketing strategy for zBX: throwing a bunch of features against the wall and hoping something sticks may be throwing end users off.
“They need to re-announce the zBX,” said Chuba bluntly.
Phil Robert, director of service management at Canada-based Scotiabank, is a living, breathing case study of the zBX's "miscommunication" factor. In fact, he didn't know much about it, and mentioned that he would have to take a look at the announcement to re-familiarize himself with blade technology on zEnterprise.
In regard to z196 itself, Robert said that his company just recently upgraded to IBM’s z10 system. “We have no plans to even look at it,” said Robert of the z196.
In a roundtable discussion on the future of the mainframe, a group of IT pros found that a little more mainframe lovin’ will go a long way in ushering in the next generation of IT.
Last week, CA Technologies hosted the virtual seminar "From Here to Eternity: The Mainframe as a Mainstay of the Enterprise." Moderator and former IBM mainframe programmer Michael Krieger called the mainframe the "red-haired stepchild of IT," a public perception that is totally unwarranted according to one expert, given that the platform is a dime a dozen among Fortune 500 companies.
“This is, in fact, the platform that’s powering the global economy,” said Dayton Semerjian, general manager, Mainframe, CA Technologies. The two most important IT developments as of late have “been the mainframe and Apple Computer. Everyone knows about Apple and no one knows about the mainframe. It’s important to remind everyone just how important the mainframe is.”
If the global economy is on to something, then why is the general public out of the loop on the efficiencies of the mainframe? For one, the platform isn’t tabloid fodder.
“It doesn’t run into trouble,” said Semerjian in a follow-up call. “So that doesn’t make it interesting to write about. No one can hack into it. There’s no bad news about a mainframe.”
In addition, and most importantly, notes Semerjian, the mainframe doesn't get much attention because it manages just a small piece of the data center puzzle – albeit an important one.
To top it off, a CA Technologies survey of 200 senior-level mainframe execs showed that while 76% of respondents will maintain or increase their investment in mainframe software over the next 12-18 months, 61% of respondents don't believe the IT industry does enough to promote mainframe career opportunities to recent graduates. In other words, this mainframe momentum will be a moot point if no one in the next IT generation knows about the platform and develops for it.
“Folks are beginning to move into retirement. You’ve got to attract people who want to build a career on this,” said Semerjian.
One industry exec also agreed that it's time to promote, promote, promote the mainframe, and to change recent grads' perception of it – or lack thereof.
“We have to define what the mainframe is to the potential workforce,” said Trevor Eddolls, CEO, iTech-Ed Ltd. “There aren’t many green screens left…the mainframe’s role is changing – it can now scale across different types of hardware.”
So much for the pundits who predicted the demise of the mainframe decades ago – it's slowly turning into "the mainframe that wouldn't die."
In survey results announced recently by BMC Software, 84% of participants expected steady-to-growing MIPS (millions of instructions per second) usage on the mainframe. The reasons for this growth? Availability, security, a centralized data management system and transaction speed, according to participants. Elsewhere in survey results, 60% indicated that the mainframe will take on new workloads over the next year. No surprise, though, on the top concerns of IT managers in weighing mainframe benefits – 65% see reducing costs as a top priority for their data centers.
The results clearly show that the mainframe is the go-to platform for heavy transaction processing in the business space. The launch of zEnterprise, with its cross-platform management embracing both Unix and Linux, may have also helped the popularity of the mainframe heading into 2011. The survey compiled data from over 1,700 mainframe users, with about half of the companies surveyed reporting revenues of over $1 billion. You can also take a look at a full report of the findings.
According to data from The Aberdeen Group, IT organizations running newer mainframe hardware are less intimidated by complexity than shops running older versions of big iron.
Russ Klein, VP and technology research director at Aberdeen, said calling the mainframe simple might be an overstatement, but mainframe management is starting to look more like management processes in distributed computing systems.
“If you’re maintaining a farm of Dell servers, you’re encountering a lot of the same challenges with issues like virtualization and redundancy,” Klein said. “And managing a mainframe may even be simpler because you’re not looking at the network running across multiple networked PC servers.”
Aberdeen asked mainframe users whether they considered the platform complex, and shops running newer hardware were more likely to rate the mainframe as "not complex." Around a quarter of respondents running the IBM zSeries or the IBM z9 (launched February 2006) rated them not complex, versus 45% of z10 owners (the z10 launched February 2008). No word yet from zEnterprise shops (zEnterprise launched in July 2010). Check out the report here.
“It’s not like the WOPR from War Games or the Xerox whole-building computer anymore,” Klein said. “I don’t think thirty years of mainframe experience helps at all with the maintenance and upkeep of current mainframe technology. It’s not the same set of skills it was ten years ago.”
Mainframe programmer and SearchDataCenter.com expert Robert Crawford disagrees.
"Actually, 30 years of experience with mainframes is a great asset. The complicated things mainframes do now started off as relatively primitive features years ago," Crawford wrote in an email. "To know how things evolve leads to greater understanding for how the systems fit together. IBM is developing mainframe wizards and GUIs. They have made it possible to keep an application programmer from ever seeing a green screen again. But, full adoption is years away and, push comes to shove, we may see a problem some day that will require someone to log onto TSO and edit a parameter file."
The study was funded by financially struggling Novell. The company has a significant mainframe Linux business, but has been hovering on the brink of acquisition for the last year. “I’m not going to comment on whether Novell is a stable organization,” Klein said. “But in terms of Novell’s role going forward, they see themselves as the company that is going to bring the enterprise applications to the mainframe.”
Don’t text while driving. Oh, and now you should probably avoid the temptation to monitor your mainframe while driving.
We previously looked at William Data Systems’ neat Zen z/OS network management suite that allowed for mainframe monitoring from the iPad, and asked a company rep whether there would be future support for other platforms. That wish has been granted with the Zen z/OS network management suite being ported to the iPhone, amid growing use of the smartphone in the enterprise.
According to William Data Systems, the product is being demoed in Boston this week at the IBM System z Technical University and should be available soon. The monitoring experience on the iPhone will be similar to that on the iPad, and the device's even smaller size and ease of monitoring were key factors in the rollout of the software.
“Its portability means that a z/OS support person can receive system network alerts on an iPhone in their pocket, receive simplified diagnostic graphics and run a reduced range of diagnostic routines until they have access to an iPad or PC,” mentioned Graham Storey, Vice President Marketing, William Data Systems.
Below is an image of the monitoring suite in action on the iPhone:
In the United Kingdom, consumers are increasingly comparing auto insurance quotes online, and insurance companies' mainframes haven't always been up to the task, according to application performance vendor Macro 4.
Each time a consumer applies for a quote from a comparison website, the site will send out a mass of automated requests in the form of XML data streams to dozens of insurance company websites.
Any insurance site that is unable to send back a quote within a relatively short period of time – sometimes as little as a few minutes – is presented as “unable to quote” by the comparison sites, according to Philip Mann, Principal Consultant at Macro 4.
Mann said the insurers' quotation engines are often part of older mainframe systems, embedded in processes that were designed to be used by real people, such as sales and customer service staff. The insurers' IT teams are often forced to carve out the quotation processing elements of their systems as standalone functions and repackage them so they can respond to automated requests from comparison sites – or invest in more mainframe capacity.
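To make the repackaging concrete, here is a minimal sketch of a quotation routine carved out as a standalone function that answers an automated XML request directly, instead of serving a human agent. All element names (`quote-request`, `driver-age`, `vehicle-value`) and the toy rating logic are invented for illustration; real comparison-site schemas and insurers' rating engines are far more involved.

```python
# Hypothetical sketch: a quotation engine exposed as a standalone function
# that parses an automated XML quote request and returns an XML response.
import xml.etree.ElementTree as ET

def quote_from_xml(request_xml: str) -> str:
    """Parse an automated quote request and return an XML quote response."""
    req = ET.fromstring(request_xml)
    age = int(req.findtext("driver-age"))
    value = float(req.findtext("vehicle-value"))

    # Toy rating logic standing in for the legacy quotation engine.
    premium = value * 0.05
    if age < 25:
        premium *= 1.5  # surcharge for younger drivers

    resp = ET.Element("quote-response")
    ET.SubElement(resp, "premium").text = f"{premium:.2f}"
    return ET.tostring(resp, encoding="unicode")

request = ("<quote-request><driver-age>30</driver-age>"
           "<vehicle-value>8000</vehicle-value></quote-request>")
print(quote_from_xml(request))
```

A function like this can be called by a thin web front end for comparison-site traffic, while the same rating logic continues to serve the existing agent-facing screens.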
We spoke to Mann about the problem.
If large UK insurers are running into these problems with their mainframes at the online sales portal, has this crippled their business or customer base?
Mann: In the UK, the general insurance market has become very competitive and price-sensitive, with ever-decreasing brand and company loyalty. Price is often the key factor in people’s buying decisions, and the comparison websites are having a major business impact on many providers — from the bigger insurers to smaller ‘niche’ players.
It is the bigger, older and more established companies that are most likely to be heavily dependent on mainframe-based systems. And it is these insurers who are at risk of having problems with processing automated requests from the comparison sites.
Why are the companies experiencing such a headache with the insurance quote requests? Big companies such as banks rely on mainframes precisely because they are so effective at crunching huge numbers.
Mann: Mainframe hardware and operating systems are indeed a powerful, reliable platform for high volume number crunching. The problem is not the mainframe platform, but the fact that many insurance companies are making use of legacy mainframe application code which was originally designed to provide quotations to real people — such as sales and customer service staff – to pass on to customers.
These older applications have had to be adapted to respond to the automated requests coming in from comparison sites, but are struggling to handle the greater workloads which they were never originally intended to accommodate.
Part of the challenge is that this new approach to 'selling' insurance means the insurance companies have to perform far more quotations for every policy sold. In the past, a company might have performed three to five quotations for every new auto insurance policy sold. Now, through the comparison websites, this has risen to 30-50 automated requests – around a tenfold increase. This puts pressure on the overall system's ability to handle transaction loads and rapidly exposes any performance problems in the legacy quotation application code itself.
How are insurance companies rectifying the situation? Are they looking at options such as the new zEnterprise from IBM or now steering away from mainframes for the future?
Mann: Most companies are not planning to move away from mainframes to rectify the situation; there is simply not the time or inclination to consider and implement such a drastic solution. While the mainframe programs may be old, most companies are generally using the latest versions of IBM's mainframe hardware (zEnterprise), which is as up-to-date and technologically advanced as any other hardware in the marketplace.
The normal response to a problem like this would be, reluctantly, to buy more processing power in the form of new mainframes or mainframe upgrades: to 'throw hardware at the problem.' While this might help in the short term, it is not guaranteed to, because it does not really get to the heart of the problem. And of course hardware upgrades are highly expensive.
What's Macro 4's take on how to handle the performance management issue?
Mann: Most people when talking about the performance of computer systems are thinking of the computer hardware and operating systems and how they perform when running application processes. But it’s also very important to look at performance from the point of view of the applications themselves, and where and how they are utilizing computer resources. Companies like ours specialize in looking at things from this application point of view, using application performance measurement tools and methodologies.
With this approach, the resources required to run a transaction, such as generating insurance quotes, can be profiled, and any areas of poor performance or high resource utilization (within the application down to line of code level) can be highlighted for further investigation and tuning. This process has proven very productive in reducing overall processing requirements, delivering better response times and allowing much higher transaction levels to be handled without the need for expensive hardware upgrades. It is the sensible alternative to the more general approach of throwing more hardware at the problem.
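The general approach Mann describes – profiling where a transaction spends its resources, down to the code level, before buying hardware – can be sketched with standard profiling tools. This is an illustrative sketch only, not Macro 4's actual tooling; `generate_quote` and `rate_lookup` are hypothetical stand-ins for a legacy quotation transaction and an expensive routine inside it.

```python
# Sketch of application-level performance profiling: run a transaction
# under a profiler and report which routines consume the most time,
# so tuning effort goes to the hot spots instead of new hardware.
import cProfile
import io
import pstats

def rate_lookup(n):
    # Stand-in for an expensive legacy routine worth tuning.
    return sum(i * i for i in range(n))

def generate_quote():
    # Stand-in for one quotation transaction.
    return rate_lookup(100_000) % 997

profiler = cProfile.Profile()
profiler.enable()
generate_quote()
profiler.disable()

# Sort by cumulative time to surface the costliest call paths.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

In a real engagement the profile would point at specific lines of the quotation code; tuning those is the cheaper alternative to a capacity upgrade.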
Are you seeing this problem crop up in the U.S. market? Weigh in on this conversation in the comments or on Twitter @DataCenterTT.
Huge reinsurer Swiss Re is the first company to get its hands on the zEnterprise (or z196) mainframe from IBM – in fact, it ordered two.
Yesterday, Big Blue announced that it shipped the first two available mainframes to the reinsurer, which was founded in Switzerland and now operates in more than 20 countries. Swiss Re CIO Markus Schmid cited the "ability to integrate and manage workloads running on multiple servers as a single system" as the key benefit of zEnterprise and one of the reasons the company decided to go with the z196. The z196 was originally announced in July and became available on Sept. 10. The mainframe's processor is dubbed "the world's most powerful computer chip" by IBM, and the system features Linux support, 96 processors running at 5.2 GHz, and 60% more capacity than the System z10 while using the same amount of electricity.
In addition, CA Technologies announced same-day support for IBM’s new z/OS v.1.12 operating system and the zEnterprise. CA’s Mainframe Software Manager and Mainframe Chorus are designed to ease support for z/OS and its hardware platforms.
Is your company considering the zEnterprise as well? Have you already ordered one? Is the hefty price tag a deterrent, or is the ability to marry Unix and Linux workloads on z196 appealing? Let us know @datacenterTT on Twitter.
IBM's new, huge mainframe, which becomes available tomorrow, is cooling data center hotspots with water – something an IBM mainframe hasn't done in 15 years.
The zEnterprise or z196 can be shipped with the water cooling heat exchanger as an optional add-on, and IBM says that the system can reduce overall energy consumption by up to 12% and won’t change the data center’s footprint. In addition, according to IBM, the water cooling connects directly to existing chilled water systems and does not require an external water conditioning system. The mainframe is the first since the ES/9000 family to utilize water cooling. Big Blue launched its Rear Door heat exchangers for distributed servers back in June 2005.
Below is the rear of the z196 (image courtesy of IBM) showing the water cooling unit:
The brawny mainframe that you’ve been waiting for is ready for launch.
IBM announced today that initial shipments of the zEnterprise mainframe with the z196 processor, dubbed by Big Blue as “the world’s most powerful computer chip,” will commence on Sept. 10. As previously reported, the mainframe boasts 96 cores running at speeds of up to 5.2 GHz. The mainframe offers 60% more capacity than the z10, while still using the same amount of electricity, and the processor chip contains 1.4 billion transistors. According to IBM, the mainframe can handle 17,000 times more instructions than the company’s first system. Clearly, this is not your father’s mainframe. Only time will tell, though, if customers find the new z196’s capabilities to manage across multiple distributed server platforms relevant.
Interesting article from Vivek Wadhwa posted over at TechCrunch. It details the paradox of droves of unemployed engineers ready to pounce on a job while companies say they're having a difficult time finding the right talent. Wadhwa notes that the reason for this conundrum is the one companies don't want you to hear: if you're old, you're not getting the job over young, cheap labor.
Wadhwa says that the young can learn new technology faster, can be paid less and don't carry any baggage – either of the personal kind, with families that take away from long hours, or the technological kind. Young people are a "clean slate" that can be molded and changed.
Holding to the fact that IT is an “up or out” profession, Wadhwa ends with some advice for IT professionals – either move up the ladder into management positions, switch to sales or take your skills elsewhere and run a start-up. And if you’re going to stay in programming, says Wadhwa, make damn sure to keep skills as current as possible and realize that the odds are against you to make a boatload of cash down the line; companies can always find an entry-level worker to train on the cheap.
The article certainly makes a ton of sense from a corporate standpoint – companies are always looking for ways to cut costs, especially in a down economy, and there are plenty of young programmers who would jump at a chance to get their foot in the door. But it also shortchanges and cheapens the skills and leadership that seasoned IT engineers bring to the table.
What do you think? Is IT like modeling, with time and age working against you? Is there a method to the IT job market structure – a “game” you have to play to stay relevant? Leave a comment or reply on Twitter at @datacenterTT.