Mainframe software giant CA today announced a new version of IDMS, its mainframe database management system, that in particular takes advantage of the zIIP specialty processor on big iron to save users money.
The zIIP, which stands for z Integrated Information Processor, is a mainframe processor designed specifically to run database-like workloads; DB2 has been a popular workload to offload there. It joins other specialty processors such as the Integrated Facility for Linux (IFL) and the z Application Assist Processor (zAAP), which are geared toward Linux and Java apps, respectively.
One of the benefits of these specialty engines is that you don’t have to pay software licensing costs for the workloads that run there. That is probably the most direct savings you’re going to get.
But as I’ve written before, some of the real savings can come from getting workloads off the central processors, because that can free up enough capacity there that users can defer buying another mainframe. It’s that seven- or eight-figure capital cost of a new mainframe that can be the really big potential savings when moving workloads to these side engines.
CA also announced that its Datacom software, another database management system, is in beta release and will also be able to run on the zIIP.
In a move taken from auto dealerships and furniture companies, IBM is offering a way for mainframe users — or wannabe mainframe users — to get big iron now without having to pay for it until later.
Last week IBM announced the z10 Business Class, a smaller counterpart to its big honcho, the z10 Enterprise Class. The z10 BC starts at about $100,000. It’s far less than a seven- or eight-figure EC, but still, it’s not pocket change in this current economic climate. As part of the announcement, IBM is offering a financing deal. Order one now, and you don’t have to make a payment for 90 days.
Can you see the commercial now? Can you see the different color balloons floating around the mainframe retail store, luring unknowing customers into buying a mainframe? Actually, though, financing deals in the IT world are nothing new, said Mike Kahn of The Clipper Group.
“It’s an incentive to bring business in this year,” he said. “They’re ramping up product and getting it ready to ship. A lot of people would say they don’t have any budget left. The economy is down and they’ve blown their budget for this quarter. They might not have any money this quarter but they might have it next quarter.”
Kahn added that IT vendors often have end-of-the-year deals, where they might throw in extra memory at no cost to try to get product out the door by the end of the quarter. IBM’s move is just another end-of-year deal.
Specialty engine prices cut in half
The other big financial incentive that IBM announced at the same time was that the specialty engines for the z10 BC would be half price — so about $45,000 instead of almost $100,000.
The specialty engines include the zIIP, zAAP and IFL, which are geared toward running database, Java and Linux applications, respectively. Those less fond of IBM and its mainframe have called them stripped-down z/OS engines, and to an extent, that’s true. Either way, the specialty engines have offered a way to get traditionally non-mainframe applications — Linux and Java especially — running on big iron in a consolidated fashion on top of z/VM, big iron’s virtualization operating system.
During the announcement, IBM officials said they are gearing the z10 BC to be a big consolidation play, where users take a bunch of their x86 Linux servers and stuff them onto the mainframe. Reducing the cost of the specialty engines could help make that work.
“They believe this is a real opportunity for them and customers to rethink what they’re doing with these more open applications,” Kahn said.
Kahn added that if you’re running a z9 Business Class with specialty engines, and you decide to upgrade to the z10 Business Class, IBM will give you the upgraded specialty engine at no charge.
“That’s like if somebody says they’ll take old used tires off my car and give me better tires,” he said. “I could use a few of those deals in my life.”
Andi Baritchi, who is listed as a senior security consultant for a Fortune 100 company, recently wrote that cloud computing is the completion of a loop back to the mainframe.
How so? It takes the computing power back out of the hands of consumers and puts it back into the hands of the internal IT employees and third-party vendors. Take a look:
Since history does tend to repeat itself cyclically, I see this whole cloud computing movement as nothing more than a reincarnation of the classic mainframe client-server model. People want painless access to their data and applications from wherever they are, from whatever electronic gizmo they happen to be using. Sometimes we’re on a computer. A laptop. A desktop. A smartphone. A smartTV. A smartfridge. A smartcar…?
There has been a lot of talk about cloud computing being like utility computing, grid computing, or other recent distributed computing models. Baritchi is saying here that it goes back even further, to the days of dumb terminals and mainframes.
According to Baritchi, the vendors out there that are doing the best job of it are Apple with the iPhone, and Google with Android. Either way, he’s excited about the reversion:
As a side note, this is a very exciting time for me being a security and privacy guy. This feverish movement back to client server, er, to the cloud, will see many security lapses. It’s gonna be a fun ride.
On the hardware side, mainframe revenue for the fiscal third quarter far outperformed IBM’s other server platforms, presumably because Big Blue just came out with the System z10 earlier this year.
System z revenue was up 25%. Meanwhile, the “converged System p” platform, which is the System i and p combined, rose 7%, although that increase is somewhat misleading because it compares to only the System p platform from last year. System x revenue plummeted 18%.
IBM added that there was “double-digit growth in all geographies” for mainframe server revenue. Total delivery of mainframe computing power, measured in millions of instructions per second (MIPS), increased 49%.
IBM will hold a System z-related announcement at 10 a.m. Eastern next Tuesday, Oct. 21. You can register for the System z webcast here.
The event will include Karl Freund, VP of System z marketing, and Mike Augustine, the System z product manager, as the featured speakers. Others involved in a live Q&A at the event will be:
- John Eells, z/OS Technical Marketing
- Allen Marin, System Storage Market Manager
- Nancy Scala, Linux and z/VM Product Manager
- Kurt Johnson, System z Software Market Manager
- Greg Hutchison, IBM System z, Certified IT Specialist and Advanced Technical Support
Here’s a summary of the event:
There is new System z technology on the horizon that could change the way your organization thinks about mainframes. Technology that delivers the granular scalability, flexibility, and resiliency you need — at the lower capacity entry point you want. Its advanced technology fights old myths and perceptions – it’s not just for banks and insurance companies.
For any organization that wants to ramp up innovation, boost efficiencies and lower costs – pretty much any enterprise, any size, any location, z Can Do IT.
The Webcast description repeats “z Can Do IT” enough times that I’m wondering whether Rob Schneider might make a guest appearance.
Anyway, my guess is that the announcement will be for the “Business Class” version of the System z10 — a smaller version of the big boy. But I don’t have any definitive knowledge on it.
The Mainframe blog on Typepad had a blurb about this but for some reason has taken it down.
SearchDataCenter.com mainframe columnist Robert Crawford wrote the following column, “Futures of the Past,” on the big ideas in IT over the years and how they affected mainframe operations.
Through the years I’ve seen many things touted as “the future of IT.” Some worked out, some didn’t. Here’s a partial list of what I’ve seen come and go. If you don’t see one of your favorites, please post a comment.
1. Structured Programming
The idea of structured programming grew out of frustration with spaghetti code that no one, not even the guy who wrote it thirty minutes ago, could understand. Structured programming imposed rules including neat IF/THEN/ELSE statements, DO loops, no procedures longer than 30 lines and no, I mean absolutely no, GOTOs.
I would claim that structured programming is so deeply embedded in everything we do and the new languages we use that it no longer needs a special name. Mark this one a success.
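The discipline is easy to illustrate. Here’s a hypothetical Python sketch (the record layout is made up) that follows the structured rules: one entry point, one exit, a single loop with an IF/ELSE, and no jumps to chase.

```python
def total_valid(records):
    """Sum the 'amount' field of records flagged as valid.

    Structured style: one loop, one IF/ELSE, one RETURN --
    the control flow reads straight down the page.
    """
    total = 0
    for rec in records:
        if rec.get("valid"):
            total += rec["amount"]
        else:
            continue  # explicitly skip invalid records

    return total

print(total_valid([{"valid": True, "amount": 5},
                   {"valid": False, "amount": 9},
                   {"valid": True, "amount": 7}]))  # 12
```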
2. Fourth Generation Languages (4GL)
The idea was to create languages so simple your end users could write their own programs. Some 4GLs went so far as to be non-procedural, meaning that one didn’t write logic so much as “describe the problem.”

Unfortunately, 4GLs performed poorly because they were interpreted and had to make too many assumptions. Besides, the end users had better things to do than write their own programs. The end came with the arrival of spreadsheets and GUI report generators.
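The “describe the problem” idea didn’t die with the 4GLs; modern languages absorbed bits of it. A toy Python comparison (the sales figures are invented) of spelling out the logic versus describing the result you want:

```python
sales = [120, 45, 300, 80, 210]

# Procedural style: spell out the steps one by one.
big = []
for s in sales:
    if s > 100:
        big.append(s)

# Declarative style: describe the result, not the steps.
big_decl = [s for s in sales if s > 100]

assert big == big_decl == [120, 300, 210]
```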
3. Personal Computers (PC)
In the mid-1980s the much-heralded PC was going to bring processing to the people. Not only were they easier to use, they wrested power from the IT glass house and put it on everyone’s desk. By everyone I mean mainly executives who used their PCs for e-mail and the occasional game of Snake. Before long, every Fortune 500 company was going to be running the whole corporation from a bank of PCs.
Twenty years later we can say the deaths of the mainframe and UNIX were greatly exaggerated. However, I don’t think any of us are ready to relinquish our PCs with their graphical interfaces and powerful tools.
4. Relational Database Management Systems (RDBMS)
IBM started it in the ’80s by announcing DB2 on the mainframe. Some of us shook our heads over the extra direct access storage device (DASD) space and processing that relational databases required, but these costs were supposed to be offset by ease of use and easier programming and retrieval. Relational databases are now ubiquitous on every conceivable platform and have all but routed their hierarchical and network brethren. Definitely a winner here.
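For readers who came up on hierarchical or network databases, a minimal sketch of the relational, set-oriented style using Python’s built-in sqlite3 module (the table and figures are invented):

```python
import sqlite3

# In-memory database; schema and data are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
con.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [("Ann", "IT", 70000), ("Bob", "HR", 55000),
                 ("Cal", "IT", 65000)])

# Set-oriented retrieval: no pointer chasing through a hierarchy
# or network, just a declarative query over the whole table.
rows = con.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('HR', 55000.0), ('IT', 67500.0)]
```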
5. Client Server
The PC revolution made processing cheap and brought on the idea of distributed computing involving loosely coupled machines asking for information from each other. If done intelligently, data and software could be shared across the enterprise in manageable pieces. In addition, if the network was quick enough the distance between the computers wouldn’t make a difference. IBM also embraced this notion and built powerful distributed capabilities into CICS.
This is another one that’s so deeply embedded in IT that it no longer has a name. The dream is not fully realized as islands of computing and machines that can’t talk to each other, but the hope is still alive in web services (see below).
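The core idea, one loosely coupled machine asking another for information over a network, fits in a few lines. A minimal sketch with Python’s standard socket module; the one-word request protocol and the balance figure are invented:

```python
import socket
import threading

def serve_once(host="127.0.0.1"):
    """Tiny server: answer one client request, then exit."""
    srv = socket.socket()
    srv.bind((host, 0))           # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        request = conn.recv(1024).decode()
        # The server owns the data; clients just ask for it.
        reply = {"BALANCE": "1042.17"}.get(request, "UNKNOWN")
        conn.sendall(reply.encode())
        conn.close()
        srv.close()

    threading.Thread(target=handler).start()
    return host, port

# Client side: connect, ask, print the answer.
host, port = serve_once()
cli = socket.socket()
cli.connect((host, port))
cli.sendall(b"BALANCE")
print(cli.recv(1024).decode())  # 1042.17
cli.close()
```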
6. Object-Oriented Programming (OOP)
OOP changed the way we think about programming. Older programs had main procedures that called subroutines, passing static structures and working linearly from top to bottom. Now we had classes, attributes and methods and, best of all, reusability.
OOP is definitely the dominant programming model of today. It still has a few snags (for instance, just because the CalculatePi class has a square root method doesn’t mean you should include it in the general ledger system) but has more than delivered on its promises.
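The vocabulary maps directly to code. A hypothetical Python sketch showing a class with attributes and methods, plus reusability through inheritance (the account domain is invented):

```python
class Account:
    """Base class: one reusable home for shared behavior."""

    def __init__(self, owner, balance=0.0):
        self.owner = owner        # attributes
        self.balance = balance

    def deposit(self, amount):    # method
        self.balance += amount
        return self.balance

class SavingsAccount(Account):
    """Reusability: inherit deposit(), add only what's new."""

    def add_interest(self, rate):
        self.balance *= (1 + rate)
        return round(self.balance, 2)

acct = SavingsAccount("Ann", 100.0)
acct.deposit(50)                  # inherited, not rewritten
print(acct.add_interest(0.02))    # 153.0
```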
7. Graphical User Interface (GUI)
A great leap forward for usability, but sometimes you have to wonder whether it’s worthwhile to click through the pull-down menu for “copy” when it’s easier to hit Ctrl-C.
8. Java
Not only was Java a purer implementation of an object-oriented language, it was designed from the get-go to run on any platform supporting a Java virtual machine (JVM). It quickly became a standard and a handy way to poke Microsoft in the eye. By the turn of the 21st century our colleges were churning out Java programmers by the thousands.
No question Java is the language to use if you can. It still has a few problems with portability (“write once, debug everywhere”) and performance, but its universality makes it hard to resist.
9. The Internet
The Internet was not only going to revolutionize IT, it was going to change the way the world worked. Shopping, socializing, business were all going to be done over the Internet. Start-ups popped up like mushrooms amid dire predictions that any brick and mortar company bereft of a web presence was going to be out of business by next year.
It’s easy to be smug after the tech bubble burst earlier this century, but I am still a big fan of the Internet. Enormous amounts of information are at my fingertips and nearly anything can be bought online. Few of us would dare think of a world without it. The key is having reasonable expectations and remembering we still need the human touch.
10. Web Services
Web services are the latest twist on the client-server concept from the ’80s. Now the idea is to communicate through Simple Object Access Protocol (SOAP) messages written in Extensible Markup Language (XML). If done properly, the disparate machines needn’t worry about each other’s implementation or platform.
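A SOAP message is just structured XML. Here’s a minimal sketch that builds one with Python’s standard xml.etree.ElementTree; the service namespace, method name, and parameter are all invented for illustration:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_request(method, params, ns="http://example.com/payroll"):
    """Wrap a method call and its parameters in a SOAP envelope."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

# The receiving machine only needs to parse XML; it never sees
# what language or platform produced the message.
msg = soap_request("GetBalance", {"employeeId": 1042})
print(msg)
```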
This is the current future of IT and we can’t know how successful it’s going to be while we’re in the heat of battle. My notion is this will be another great idea that will become second nature to the next generation of programmers.
John Garing, the CIO and director of strategic planning for the Defense Information Systems Agency, reveals in an interview that the Department of Defense is increasing its mainframe spending by 10% a year.
The Q&A by The Wall Street Journal touches on a host of issues, including how the DoD has cut its data center spending 25% by consolidating facilities and only paying for the processing power it needs. Here’s what Garing had to say about the department’s mainframe use:
First of all, mainframe security is very good. Second, they are very fast and powerful machines. Third, they can do things that it would take racks and racks of servers to do. So while there aren’t very many new applications being written for the mainframe, we’re still finding new ways to use them.
An example is our payrolls, which already ran on a mainframe. We [wrote software to let people access their payroll information] so we just did it for the mainframe. We’re buying new mainframes [when the old ones wear out]. There’s no plan to go wholesale to client server like there was years ago.
The rest of the interview, though unrelated to mainframes, is still interesting.
When you think of mainframe application modernization, you tend to think of two things: SOA (Web services) or migration. That is, put a pretty face on it or get off. Well, Tim Pacileo, the principal consultant at Compass America Inc., said it doesn’t have to be that way. Compass is an IT and business operations consultancy based in Naperville, Ill., just outside of Chicago. Pacileo sat down with me to talk about application modernization on the mainframe. There’s no doubt that a lot of the talk is about Web services, but that’s not the only option.
Compass just came out with five tips to build your application modernization case. Talk about tip 1 — justifying the cost — and how it relates to mainframes.
On the mainframe, when a particular application runs out of capacity or fails to meet the needs of the business, rather than touch that application and go through several revisions and updates, most organizations will just throw people at it. At the other end, we’re seeing a lot of core legacy applications with a lack of flexibility preventing the environment from doing what it needs to do. We’ll look at various options, such as replacing part of the core application or migrating off the platform.
Is it always about getting rid of the application or migrating?
There are some options. One company I worked with a few years ago, their goal was to not get off the mainframe for another 10 years. There was no compelling business driver to get off the mainframe. What we looked at there was enhancing that application with some Web-based solutions so people could get in with a browser and look at their data.
Are migration and browser-like functionality the only options?
No. You can change the nature of the mainframe. With another customer, we are moving them off COBOL CICS, basically turning the mainframe into a large database server and enhancing it with DB2 applications.
We’ll see some customers come to us and say that they want to stay on the mainframe. It’s a proven environment. I can have hundreds of applications in a mainframe environment. We’re talking about virtualization now, but the mainframe has been virtualized for decades. And we do see organizations running multiple Linux applications on the mainframe. We’re seeing some growth in that area.
Others say that they want to get off the mainframe because they don’t want to be the last person standing. Some of those core applications are too expensive to modernize or migrate, and so they’re looking to move to some off-the-shelf applications.
What are some of those core applications?
Normally they’re homegrown applications. Health care is one. Government has a lot of homegrown applications, and they don’t have the functionality they need. The legislature keeps making changes, and it’s a Herculean effort to keep up with it.
Here it is: “Virtual Systems of Tomorrow Could Take Cues from Today’s Mainframes.” No way, you think? Seeing as the virtual systems of today took cues from yesterday’s mainframes, it makes sense that the trickle-down effect will continue.
This story, from Virtualization Review, basically outlines what most mainframers already know: that the mainframe had virtualization first, and that it continues to innovate server virtualization.
Those are the macro points, and they are obvious. But the story does do a good job delving into the micro points as well – the dirty details. Mainly with the help of Gordon Haff, a great analyst at Illuminata, the story’s author goes over several features of mainframe virtualization today that are either becoming, or will become, the x86 virtualization of tomorrow. Here’s a list:
- Embedded hypervisors such as VMware’s ESXi and Citrix’s XenExpress are similar to Start Interpretive Execution (SIE), a specialized virtualization instruction that IBM “first enabled on its System/370 mainframes back in the early ’80s.”
- The move in x86 server hardware toward larger, more highly virtualized systems with more and more processor cores is much like a large mainframe, which can host hundreds of virtual server images.
- Virtualized Ethernet. Haff: “[L]arge numbers of Linux guests [running on z/VM] don’t need to communicate with each other over a standard network interface. Oh, they think that’s what they are doing. However…the traffic never enters any physical networking hardware.” It’s where x86 virtualization from VMware and XenServer are going, he added.
Linn County, Iowa residents will likely get about a one-month reprieve on paying their property taxes, thanks to the mainframe.
But this isn’t a good story.
The flood earlier this year that devastated many parts of the state also damaged a mainframe that the county runs many of its tax applications on. Normally, property tax bills are mailed in August and due by the end of September. Now the county treasurer, Mike Stevenson, “anticipates” that they’ll be mailed in September, and residents will then have 30 days to pay them.
Perhaps the county should think about deploying a parallel sysplex, or at least devising a more robust disaster recovery plan?