Posted by: Leah Rosin
BCD Clover, BI solutions, cloud computing, COMMON, IBM System i user groups, ILE, Linux, Power processor, Query/400, RPG, Smart i, SSD, Talend, Valence, virtual tape, Vision Solutions, WDSC, Web tools
Day two at COMMON 2009 is under my belt, and the level of news and information was again a bit more than I could take in.
COMMON educational session tidbits
I attended a session at 8 AM (without coffee) on “Taking Advantage of Capacity on Demand” for Power Systems. The session was led by Mark W. Olson, an IBM Power Systems Worldwide Product Manager out of Rochester, Minn. I didn’t know what I was getting into — maybe I should have read the abstract:
This session digs into how IBM’s Capacity on Demand offerings really work for the Model 570 and 595 processors and memory starting with how they are ordered all the way through how they are paid for. Topics include temporary and permanent activations of processors and memory, contractual requirements, pre-pay or post-pay, trial capacity, how to enable, and more.
On the bright side, if you want to know whether to get the daily or minute-based Capacity on Demand offering from IBM, just ask me; I’m now a fount of knowledge. The session was likely useful for those considering paying for more capacity on their 570 or 595 Power Systems, but it didn’t answer what I consider the first question: do I really need more processing power, or are there other performance tweaks I can make first? Again, no fault of Mark’s, just my own lack of reading comprehension.
I asked Olson how cloud computing offerings might compete with the limited capacity of System i on POWER6. He shared the story of a trial arrangement in which all of the cores on the customer’s Power System are enabled, and the customer pays a flat fee for an agreed-upon base utilization percentage plus a premium for periods of higher use. Olson said the company is working toward a “pool” solution in which a customer could draw on available processor power regardless of which partition or machine it is dedicated to. The question I asked in my head: why would IBM do that when it can make more money charging for capacity? In answer to my own question: competition. In a world where it’s all too easy to spin up an Amazon EC2 instance running SQL Server, CIOs may start feeling economic pressure to get off an expensive licensing scheme.
Next I wandered into Paul Tuohy’s award-winning “Considerations for Successful ILE Implementations” session. I was half there to meet Tuohy in the flesh, and half there to learn a little more about ILE. First, Tuohy is a great speaker and I learned so much that I can’t possibly repeat it all here. Here are some highlights:
- Although ILE looks more complicated, it’s easier to maintain.
- Legacy RPG code was great… 15 years ago. Now it’s an “unmaintainable load of crud.”
- Programmers do things out of habit. ILE is an unlearning process.
- DB2 now does things for you that you used to have to code: this is a good thing, so take advantage of it. Resist pressure to move to SQL Server.
- When writing ILE, accept that you’re not going to get it 100% right. The good news: ILE is easy to change.
- When embarking on an ILE project, make sure you have the right tools. WebSphere Development Studio Client (WDSC) is essential. If you’re not using it, it’s the equivalent of doing your word processing operations in Notepad…
- You DEFINITELY NEED a change management system. Really. Don’t try to do it on your own, or go without. It will cost you more in the long run.
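For readers who haven’t touched ILE, the core idea behind those highlights is that source is compiled into reusable *MODULE objects and then bound into programs and service programs, rather than compiled as one monolithic program. A rough sketch of that build workflow in CL follows — the library and object names (MYLIB, ORDERMOD, ORDERSRV, MAINMOD, ORDERAPP) are placeholders I made up for illustration, not anything from Tuohy’s session:

```
/* Compile an RPG source member into a *MODULE object */
CRTRPGMOD MODULE(MYLIB/ORDERMOD) SRCFILE(MYLIB/QRPGLESRC) SRCMBR(ORDERMOD)

/* Bind the module into a service program that exports its
   procedures so other programs can call them             */
CRTSRVPGM SRVPGM(MYLIB/ORDERSRV) MODULE(MYLIB/ORDERMOD) EXPORT(*ALL)

/* Bind a main-line module, plus the service program, into
   a runnable program                                     */
CRTPGM PGM(MYLIB/ORDERAPP) MODULE(MYLIB/MAINMOD) BNDSRVPGM(MYLIB/ORDERSRV)
```

This separation is why Tuohy can say ILE is “easy to change”: a procedure in the service program can be fixed and rebound without recompiling every program that calls it.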
IBM rolls out new blades, servers, and virtualization technology for System i
IBM held a press conference with Ian Jarman, Power Systems Software Manager, in which he went over much of Jeff Howard’s opening session presentation from yesterday, filling in the details for us journalists. Some highlights:
- Blades are growing.
- Expanded virtual tape support allows backup to tape devices connected to the BladeCenter S.
- Solid-state disk technology: SSDs bridge the gap between memory and traditional hard disk drives in a “hybrid” deployment on Power Systems, and IBM i has built-in virtual storage support that exploits SSD by automating the placement of objects on SSD drives.
- PowerVM has been shipped with 65% of POWER6 systems. This is 55% more than with POWER5.
- Linux on POWER is growing (see product demo of PowerVM Lx86 tool).
- DB2 Web Query enhancements are coming, with a BI extension to Query/400 that will dramatically improve Web-based queries.
- XML support is coming in DB2.
RPG to Web products
I managed to get another product demo from the exhibit hall, this time from Richard Milone of CNX, who demonstrated the Valence Web application framework.
[kml_flashembed movie="http://www.youtube.com/v/PIVj4b3JGas" width="425" height="350" wmode="transparent" /]
Business Intelligence for i and Query/400 report enhancement products
I would be remiss if I didn’t at least mention a couple of the new product announcements released today.
- Key Information Systems, in partnership with Systech Solutions and Talend, announced the general availability of their Smart i appliance, which was the foundation of the Fashion Institute of Design & Merchandising’s IBM/COMMON Innovation Award-winning effort. Smart i is a stand-alone BI appliance in a rack-mount or blade server IBM chassis that brings business analytics to small and medium-sized businesses.
- BCD released version 1.6 of Clover, BCD’s real-time IBM i Web reporting and querying tool. The new version of the tool imports query definitions from IBM Query/400 and outputs them as real-time Web reports, graphs and spreadsheets.
- inForm Decisions announced the release of their iDocs SmartRouter software module that can direct or redirect output, burst, sort and group reports and forms based on spool file attributes or spool file content.
- Vision Solutions announced the release of iTERA HA 6.0 and MIMIX HA 6.0, which couple advanced autonomics with full-featured high availability enhancements. New features of iTERA HA include library auto-discovery, environment configuration, journal creation, data and object synchronization, and automated mirroring, auditing and monitoring. Additionally, Auto Action Messages let users define and automatically trigger corrective action routines when specific messages are generated, and enhanced audits provide deeper examination of objects.