IT business management software company BMC Software last week announced a new product for IBM IMS that aims to cut database downtime and improve the availability of IMS applications. The product, Fast Path Online Restructure/EP, can restructure IMS Fast Path databases in minutes and allows those databases to remain online during the restructuring process, reducing outages for IMS Fast Path applications.
Also last week, ASG Software Solutions rolled out ASG-Records Manager, an electronic records management system that lets IT organizations manage all records (such as invoices), regardless of environment, in one consolidated view. The tool captures records currently in use and creates an all-encompassing record that doesn’t need to be migrated to a new database. It is particularly useful for mainframe shops because it manages pre-existing records, so no new mainframe systems have to be deployed to lessen the risk of potential audits. The product is available now.
A water main break flooded a data center in Dallas, Texas this week, soaking the county’s mainframe equipment and data center.
According to Government Technology News, water poured Monday night into the Dallas County Records Building’s basement levels, which house the power supply for a fifth-floor mainframe that stores critical data, such as the marriage licenses and jail records of Dallas-Fort Worth Metroplex residents. Officials were forced to shut down the building and the data center.
According to local media, a 2008 audit recommended that the county remedy its lack of a failover site for disaster recovery, but no action was taken on the study after a few attempts at building such a capability fell through. Fortunately, the county does regularly back up its data.
Last year CA updated scores of mainframe software products, and upgraded its Mainframe 2.0 initiative, which is meant to help CA users not hate having to deal with CA software so much. According to end users at the time, it was a step in the right direction, but there were still plenty of steps to go. Now CA has added another step.
The whole idea behind Mainframe 2.0 is automating mainframe software tasks that CA users used to do manually. Updates last year included a Web-like front-end look to entice new mainframers and reduce the headaches for those who have dealt with clunky mainframe software for decades.
Last year, Mainframe 2.0 focused on helping users download and install the many products most CA shops have. This year, its main product, Mainframe Software Manager, is focused on deploying that software to mainframe LPARs. This is a crucial step, as I doubt there is a mainframe out there that isn’t running LPARs.
Last year, CA put together a study group that included some mainframe experts and some novices. Normally, installing a certain set of 10 CA products would take the experts six hours and the novices about 21 hours. According to CA, Mainframe Software Manager cut that down to 51 minutes and 75 minutes, respectively.
CA hasn’t yet put a stopwatch to the new LPAR deployment feature, and its effort is far from over. Dayton Semerjian, corporate senior VP and GM of CA’s mainframe business unit, said that next year the focus will be on configuration, which is one of the most time-consuming of the tasks, if not the most.
That echoes something Richard Resnick, the information services manager of systems and operations at University Community Hospital in Tampa, Fla., told me last year.
“Then you have to customize it with user parameters and so forth,” Resnick said. “What I’ve seen from Mainframe 2.0, it handles the first part of that. It’s not handling all the customization yet. When it does that, then it will be complete.”
IBM Corp. has spent $30 million to renovate a facility in Poughkeepsie, N.Y. for the manufacture of its System z mainframes, according to reports.
According to a story in the Poughkeepsie Journal, the “updated building has recently been put to use making the first new, but as-yet-unannounced, models for testing,” which presumably means the z11 mainframes due out later this year. The z11s are expected to move from 65nm to 45nm processors running around 5 GHz with simultaneous multithreading, resulting in a 20-25% performance improvement.
The renovated 56,000-square-foot facility doesn’t mean more local jobs, just continued work, according to the story. Though a manufacturing facility, the building was rebuilt to run like a data center so that IBM could fully test the new mainframes.
The lack of new jobs means the renovation isn’t exactly drawing a round of applause, at least from some readers of the story. One commenter quipped:
This article is about a big empty room (with nice airflow and plumbing) that will be used to provide a function they already do in another building somewhere on site. No added jobs, no innovation. This is not news, it is a boring planned upgrade. Yes, they spent money because they wanted an ROI somewhere. It is unlikely that the return they are looking for is the joy and happiness of the community. I would hold off on the parade in their honor until you see how solid their “commitment to the area” really is.
TurboHercules, a mainframe emulator software company based in France, has joined the list of companies who have filed antitrust complaints against IBM for its mainframe practices.
The company filed a formal complaint against IBM yesterday with the European Commission’s Directorate General for Competition in Brussels. The complaint accuses IBM of preventing customers from using Hercules, the mainframe emulator software, to run customers’ applications on non-IBM mainframe computers.
TurboHercules joins a group of companies who have filed legal claims about IBM’s mainframe practices, a list which includes the former PSI (which IBM bought), T3 Technologies, and Neon Enterprise Software. All of them make the same basic complaint, that IBM shouldn’t be able to tie its z/OS mainframe operating system so tightly to its mainframe hardware. IBM, meanwhile, contends that it has a right to protect its own intellectual property.
Roger Bowler, co-founder of TurboHercules, wrote in a recent blog post that he doesn’t consider himself or his company to be an enemy of IBM’s. He just wanted IBM to license its operating system to TurboHercules’ customers, who would pay whatever fees were necessary to do so. IBM’s response was that TurboHercules was violating IBM’s intellectual property, but when Bowler asked exactly what property was violated and how, there was no response.
“As the founder of the Hercules project, I can state with confidence that our emulator is in no way an enemy of IBM,” Bowler wrote. “In fact, the Hercules project is made up of some of the biggest mainframe fans on the planet. We are people who have spent our entire careers learning the ins and outs of this architecture, and we want nothing more than to see it thrive far into the future. Mainframes are now so deeply embedded in the infrastructure of modern society that they are too important to be left in the hands of a single company (IBM).”
“The outcome that we at TurboHercules hope for is a return to the competitive market for mainframe technologies that existed in the ‘80s and ‘90s, where IBM licensed its operating systems to customers of the Plug Compatible Mainframe (PCM) manufacturers such as Hitachi and Fujitsu/Amdahl,” Bowler continued.
According to Chris Reynolds, lead trial counsel for Neon Enterprise Software, IBM and Neon will either agree on a trial schedule, or file competing schedule proposals to U.S. Senior District Judge James Nowlin by March 28th.
Neon filed a lawsuit in U.S. District Court in Austin, Texas in December 2009, claiming unfair competition and intimidation of prospective clients by IBM.
The lawsuit stems from Neon’s zPrime software, which allows users to offload workloads from the mainframe’s central processors to specialty mainframe processors. This in itself isn’t unusual, but what zPrime does is allow users to offload more work to these specialty processors than IBM intended.
IBM started warning customers that zPrime could cause mainframers to violate software agreements they had with IBM. In its lawsuit, Neon claims that IBM’s unlawful actions could cost potential Neon customers more than $1 billion in software licensing fees.
Neon is eager to get its day in court with IBM, Reynolds said. “Our goal is to get the case to a final decision and trial in Austin by early next year. IBM doesn’t want the case to come to trial until 2012.”
Reynolds said IBM is stalling because of its deep pockets. “IBM is better positioned to stand an ongoing war,” Reynolds said. “My concern with such an extended schedule is that as long as the litigation is pending, IBM can tell customers to wait and see how the litigation pans out. It wants to postpone that day of reckoning.”
At the mainframe user group Share’s conference in Seattle this week, Geoff Smith, a senior software engineer at IBM, said mainframers can start test-driving a new documentation format called the IBM Information Center.
IBM’s mainframe documentation currently exists in the Book Manager Library. Smith said modernizing the documentation format to the Information Center will help users more easily find the information they’re looking for.
“Everything is on the internet,” Smith said. “People want to be able to use Google and find their information and that is the biggest benefit. You can make things more interactive and Web-friendly.”
Smith said he doesn’t yet have a deadline for discontinuing Book Manager for System z, but other related product areas, such as DB2 and CICS, are already discontinuing it.
“Right now different products are in different pillars of documentation; DB2 is in one Info Center, CICS is in another. We’re working on developing some cross-product documents,” Smith said. “One of the problems is that it’s funded across divisions – the way IBM is arranged now, DB2 is in the software group. But everybody understands the value of bringing this together.”
Mainframe users can check out the new Information Center format at the new zFavorites for System z website.
Yesterday I had a chance to talk to Jim Porell, a distinguished engineer at IBM who also occasionally writes for The Mainframe Blog. We talked about two patents IBM recently received that, while they may not look mainframe-related at first glance, could be applied in some way to the mainframe.
The first is called a “method and apparatus for managing multi-stream input/output requests in a network file server.” According to Porell, the patent originated with IBM’s System p, Power-based Unix systems in mind. The basic idea is the ability to access data more quickly when multiple users are trying to get to it at the same time. With Power, that capability can be beneficial for high-performance computing and video streaming, for example. In the System z environment, it could be good for, say, an insurance company processing claims that require repeated access to the same kind of form.
“If there are a lot of people looking at a similar data stream, you can take something off the physical media and put it into a memory buffer so users can have quick access to it,” Porell said. “Why do multiple reads of the same item? Multiple people are going after the same stuff.”
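The scheme Porell describes amounts to a shared read cache: the first request pulls the item off physical media, and later requests for the same item are served from a memory buffer. Here is a minimal, hypothetical sketch in Python (the claim-form name and the read function are made up for illustration, not drawn from the patent):

```python
import threading

class SharedReadCache:
    """Serve repeated reads of the same item from memory instead of media."""

    def __init__(self, read_from_media):
        self._read_from_media = read_from_media  # the slow physical read
        self._cache = {}
        self._lock = threading.Lock()

    def read(self, key):
        with self._lock:
            if key in self._cache:            # cache hit: no physical I/O
                return self._cache[key]
        data = self._read_from_media(key)     # one physical read...
        with self._lock:
            self._cache[key] = data           # ...shared by later readers
        return data

# Many users requesting the same claim form trigger only one media read.
reads = []
def slow_read(name):
    reads.append(name)
    return f"contents of {name}"

cache = SharedReadCache(slow_read)
for _ in range(100):
    cache.read("claim-form-1040")
print(len(reads))  # the media was touched once, not 100 times -> 1
```

The same principle shows up at many layers, from operating-system page caches to application-level object caches.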
The second patent Porell talked about was being able to dynamically assign levels of access to employees, either increasing access levels or decreasing them depending on the situation. The patent is titled “Determination of access rights to information technology resources.”
Interestingly enough, the patent was designed to help local emergency response teams and law enforcement officials deal with emergency situations through administrative access levels. But it can also be used in the IT world.
“The problem is that if you give everyone in the world superuser authority, stuff is going to happen because someone will use it in an illicit way,” Porell said. “This is a way for lesser-skilled people to get access to things when they have an urgent need but then also to remove that access when their task is completed.”
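A rough sketch of that idea, granting elevated access for an urgent task and then removing it, might look like the following (the roles, user names and time-to-live mechanism here are my own illustrative assumptions, not details from the patent):

```python
import time

class AccessManager:
    """Grant temporary elevated access; revoke it on completion or expiry."""

    def __init__(self):
        self._grants = {}  # user -> (level, expires_at)

    def elevate(self, user, level, ttl_seconds):
        """Raise a user's access level for a limited time."""
        self._grants[user] = (level, time.monotonic() + ttl_seconds)

    def revoke(self, user):
        """Drop a user back to normal access, e.g. when the task is done."""
        self._grants.pop(user, None)

    def access_level(self, user):
        grant = self._grants.get(user)
        if grant is None:
            return "normal"
        level, expires_at = grant
        if time.monotonic() >= expires_at:  # grant lapses on its own
            self.revoke(user)
            return "normal"
        return level

mgr = AccessManager()
mgr.elevate("operator1", "superuser", ttl_seconds=3600)  # urgent need
print(mgr.access_level("operator1"))  # -> superuser
mgr.revoke("operator1")               # task completed
print(mgr.access_level("operator1"))  # -> normal
```

The time-based expiry is a safety net: even if nobody remembers to revoke the grant, the elevated access does not linger indefinitely.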
Of course, any usable technology that derives from these patents could still be a ways off. Porell couldn’t set a specific timeline for the two above, but said it could be as quickly as a year and as long as three years, with a lot of it probably implemented in stages.
IBM is bringing deduplication to its System z mainframes via a hardware and software appliance that it says can help compress certain tape backup data by 25 times.
The product, called the IBM System Storage TS7680 ProtecTIER Deduplication Gateway for System z, includes a virtual tape library, data deduplication technology and disk-based storage target options. It is available now, runs on z/OS 1.9 and higher, and will cost about $300,000. Some other details:
- FICON attach to System z host
- Use of any IBM disk for back-end storage
- Up to one petabyte of storage capacity per device
- A single virtual tape library image with up to 256 virtual tape drives
- Up to 1 million virtual tape volumes
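For context, the core idea behind deduplication is to store each unique chunk of data once and replace repeats with references, which is why repetitive backup streams compress so dramatically. Here is a toy fixed-size-chunk sketch in Python; real products such as ProtecTIER use far more sophisticated variable-length chunking and indexing:

```python
import hashlib

def dedupe(data, chunk_size=64):
    """Split data into chunks and store each unique chunk only once."""
    store = {}   # chunk hash -> chunk bytes (stored once)
    recipe = []  # ordered list of hashes needed to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicate chunks cost nothing new
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Rebuild the original stream from the stored chunks."""
    return b"".join(store[d] for d in recipe)

# A backup stream full of repetition dedupes extremely well.
backup = b"A" * 6400 + b"B" * 6400
store, recipe = dedupe(backup)
assert rehydrate(store, recipe) == backup
print(len(recipe), "chunks referenced,", len(store), "stored")
# -> 200 chunks referenced, 2 stored
```

In this contrived case 12,800 bytes of backup data reduce to two stored chunks plus a recipe, which is the same effect, at toy scale, as the large compression ratios vendors quote for repetitive backup workloads.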
Part of the dedupe technology is called HyperFactor, which IBM got through its 2008 acquisition of Diligent Technologies, a storage company out of Framingham, Mass. that specialized in deduplication. As early as May 2008, just a month after being acquired, Diligent officials were talking about how IBM’s roadmap for the company included eventually integrating mainframe support for its ProtecTIER dedupe technology.
Deduplication isn’t new to the mainframe world. Data Domain, now owned by EMC, offers it as a joint package with Luminex, and has been doing it for about four years now. Another company that offers it is Bus-Tech.
The new predictive failure and analysis features of z/OS 1.12, due out in September, continue to intrigue IBM mainframe end users.
Last week IBM previewed z/OS 1.12. Just last night I heard from Robert Rosen with his take on the new version. Rosen is a past president of the IBM computer user group Share, currently serves as the CIO of the National Institute of Arthritis and Musculoskeletal and Skin Diseases at the National Institutes of Health, U.S. Department of Health and Human Services, and is also a member of SearchDataCenter.com’s data center advisory board. Here is what he had to say:
The parts that appeal to me relate to the predictive failure and analysis pieces. Anything that improves reliability or enables me to avoid outages is always important.
Also like the cryptography improvement.
Like to see some more integration with open environments. They’ve made great improvements but the more the merrier.
Predictive Failure Analysis was first introduced in z/OS 1.11. In z/OS 1.12, due out in the fall, PFA will be able to monitor the rate of systems management facilities (SMF) record generation. If the rate is abnormally high, PFA will send a warning message that could help prevent an outage, and it will take normal activity spikes into account to avoid false alarms. Run Time Diagnostics, another feature, will help mainframers pinpoint problems that are affecting performance while the machine is live.
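The rate-monitoring idea, flagging a metric that runs abnormally far above its recent norm while tolerating ordinary spikes, can be sketched with a simple rolling baseline. The window size and sigma threshold below are arbitrary illustrations; PFA's actual modeling is considerably more elaborate:

```python
from collections import deque
from statistics import mean, stdev

class RateMonitor:
    """Warn when a rate is abnormally high relative to recent history."""

    def __init__(self, window=20, sigmas=3.0):
        self._history = deque(maxlen=window)
        self._sigmas = sigmas

    def observe(self, rate):
        abnormal = False
        if len(self._history) >= 5:  # need a baseline before judging
            mu = mean(self._history)
            sigma = stdev(self._history)
            # A multi-sigma threshold lets ordinary spikes through
            # but flags genuinely unusual surges.
            abnormal = sigma > 0 and rate > mu + self._sigmas * sigma
        self._history.append(rate)
        return abnormal

monitor = RateMonitor()
for r in [100, 105, 98, 102, 101, 99, 103, 100]:
    monitor.observe(r)           # normal SMF record rates: no warning
print(monitor.observe(500))      # a runaway rate trips the warning -> True
```

Updating the baseline with every observation means the monitor adapts to gradual workload growth instead of treating it as a failure signal.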
In terms of cryptography, z/OS 1.12 is planned to support elliptic curve cryptography, a form of cryptography that has been endorsed by the National Security Agency, which if you don’t know already, likes to keep secrets.
Thanks to Rosen for chiming in.