The iSociety, an online social group for IBM System i users and followers, will be holding a fireside chat later today.
The group, which the user group COMMON started more than two years ago, has held several of these so-called “fireside” chats, which are essentially scheduled online chat sessions. Past chats have covered i5/OS, MySQL and PHP, and System i Developer.
This time around the chat will be on “Modern CL Programming” and will be led by Kevin Forsythe of DMC Consulting. The email I received said that it will be a “detailed look at CL Programming that incorporates recent enhancements to the language,” including “subroutines, pointers, Loops, structured programming options, and more.”
The online event starts at 1 p.m. Eastern and is free, but you have to be a member (which is also free). Go to the iSociety chat page to learn more.
Thanks to the automatic recommendation program at Amazon, those who buy the “Starter Kit for the IBM iSeries and AS/400” are offered the chance to buy another recommended (and presumably related) tome: “Reducing Stress-related Behaviours In Persons With Dementia.”
Paul Lamere, a Sun Microsystems engineer, pointed out this iSeries-dementia connection first on his blog, so a hat tip goes to him.
At first it might seem strange that Amazon would do this. But if you look closely, you’ll notice that the name of one of the authors of the iSeries book is the same as the illustrator of the dementia book. Sorry, but Amazon is not saying System i folks have dementia. But it’s still funny anyway.
Aldon, which makes configuration management software for System i users, has announced awards in two categories: one for a developer who averted a potential catastrophe, and another for best project turnaround.
The award in the first category, for most frightening development nightmare, went to Brad Abernathy, a senior developer for Sunbelt Rentals. In a previous position, Abernathy was a developer for a major manufacturer of bedding and towels, and a colleague was working on a shipping application project. That developer introduced a bug by creating a logical file directly on the production machine. The bug turned orders for two or three towel bundles into orders for two or three million. With no quality check in place, the systems nearly broke down, and the company had to pay a lot of workers a lot of overtime to get it fixed.
The next category was for most remarkable project turnaround. That award went to Manoj Dhamu, a senior programming analyst with DST Health Solutions. He was working on a business application that handled member enrollment, claim processing, and claim billing for managed care organizations. The team then discovered a glitch that required them to manually run other custom programs every time they rebuilt a file. Using the Aldon software, the company was able to bring that important external code into the development cycle.
The bloggers over at iDevelop recommend updating your DB2 lingo so all the Oracle and SQL Server folks don’t think you’re from the 1970s.
Why? Well, not just because using the old lingo makes you seem old, but also because it makes DB2 seem old, outdated, and not as powerful as iDevelop believes it to be.
Oracle and other database users are often convinced that what we have is little more than a flat file system with some half-baked database mechanism cobbled on top. If you think about it, that’s hardly surprising. We constantly talk in old-technology terms like files and records, so we shouldn’t be surprised if others think those are the foundation of the system.
Of course nothing could be further from the truth. What we have is a fully relational database system which, when called upon, can cleverly disguise itself as a flat file system! Those of us who use the platform understand what a terrific advantage this is, but it’s understandable that others would view it with suspicion.
So here is the translation dictionary, with the traditional term followed by the “updated” DB2 term:
- A library should now be called a schema or collection
- A file should be called a table
- A record should be called a row
- A field should be called a column
- A logical file should be called a view when talking about how the program views the data
- A logical file should be called an index when talking about performance
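The translation dictionary above is just a lookup with one context-sensitive entry, which can be sketched as a small table. This is a hypothetical Python illustration of the mapping, not anything from iDevelop:

```python
# Hypothetical lookup table mapping traditional IBM i database terms
# to their modern SQL equivalents. A (term, context) tuple key handles
# the two meanings of "logical file".
TERMINOLOGY = {
    "library": "schema (or collection)",
    "file": "table",
    "record": "row",
    "field": "column",
    ("logical file", "program view"): "view",
    ("logical file", "performance"): "index",
}

def modernize(term, context=None):
    """Return the modern SQL name for a traditional IBM i term.

    Unknown terms are returned unchanged, since not every word
    needs translating.
    """
    key = (term, context) if context else term
    return TERMINOLOGY.get(key, term)

print(modernize("record"))                        # row
print(modernize("logical file", "performance"))   # index
```

The context argument matters only for “logical file,” which translates to view or index depending on whether you are talking about how a program sees the data or about performance.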
About a month ago, we reported that Frank Soltis was leaving IBM at the end of the year. Soltis is considered the father/grandfather of the System i platform, having been with it basically since it started 40 years ago. First it was the System/38 and System/36, then it was the AS/400, then the iSeries, then System i, and now it’s all part of Power Systems at IBM.
Soltis is indeed retiring from IBM, mainly because the merger of the i and p lines eliminated jobs, like his, that focused solely on the System i. But he’s not going away. He will continue teaching at the University of Minnesota, and he said he wants to get more involved with user groups like COMMON.
In the interview, Soltis does express his displeasure with how System i revenue has been reported since the merger (it makes the platform look bad). But overall he said he’s pleased with how his 45 years with Big Blue have gone, and he’s looking forward to staying involved.
David Vasta at the System i Blogger blog wonders what all this talk about the new IBM Smart Cube is:
The Smart Cube from IBM has been getting lots of talk around the web, just not much from IBM? I want to see some real details and when I go to IBM’s web site and search for “Smart Cube”, I get nothing. Once again IBM DOES NOT KNOW HOW TO MARKET ANYTHING THEY SELL!
As it turns out, IBM published an announcement letter that said it wasn’t an announcement letter. As Timothy Prickett Morgan recaps, the non-announcement letter announced a new system called the Smart Cube Power System 520. Sounds intriguing, doesn’t it?
Here’s a description of the Smart Cube in the announcement letter, according to Morgan:
The IBM Smart Cube is a powerful and integrated server (server family) designed to run the business applications (finance and accounting, ERP, CRM, IP telephony, and others) that a small to medium-sized business needs, with virtually no IT complexity. Smart Cubes remain connected to IBM’s Smart Market that offers remotely delivered services, including help desk and solution support, monitoring, backup and recovery, security, and business collaboration.
IBM Smart Cubes come with the preloaded IBM Smart Business Software Pack that includes what you need to run business applications and workloads.
- Application servers and Java support
- Database servers
- Web servers
- File and print servers
- Directory servers
- Network and application security
- Built-in backup and recovery
- Intel server with storage, memory, and more
Wow, sounds interesting. Maybe some readers would like to know more. Maybe they, like Vasta, would like to get their hands on one to test it out. Well, IBM retracted the announcement letter (which it apparently never wanted to announce in the first place), and now the company is trying to roll the system out very quietly, mostly in India but also with a few select customers in the United States. Some more Smart Cube details:
- Three Power-based configurations, with one, two, or four processing cores activated
- Can run IBM i, AIX or Linux using 4.2 GHz Power6 chips
- Includes a stack of systems and application software called the Smart Business Software Pack for i, which runs on the IBM i 6.1 OS
- The big launch won’t likely happen until the second half of next year
Last week IBM announced Rational Team Concert for i, a collection of collaborative development software geared toward the System i platform. Available electronically the day before Thanksgiving, the software comes with a free 60-day trial.
According to the announcement letter, the software “allows teams to simplify, automate, and govern application development on IBM i.” Some details:
- Includes source control, change management, build, process management and governance
- Integration with IBM Rational Developer for i
- Support for source control, change management and builds of RPG and COBOL
- Support for application development using RPG, Java and EGL
- Supports IBM i native Library file system and integrated file system (IFS)
- Build agent that runs on the IBM i operating system, running IBM i commands and calling programs
Hat tip to Alex Woodie for the heads-up.
A recently published tip on performance tuning the AS/400 was written in response to Search400.com reader feedback. Raymond Johnson answered a series of questions submitted by a reader, and since publication the reader has provided a few more questions, which Johnson has kindly answered.
The article is fantastic, to say the least. Our system has fault rates in the hundreds and page rates in the thousands. I assume this indicates that the system is thrashing and is spending more time moving data in and out of storage than it does processing it. Is that true? When the high page and faulting rates were brought up, it was mentioned that high fault rates are not a big contributor to poor performance, and so are not as much of a concern as they had been in the past. Is that true? It makes sense to me that when the system spends more time moving data around than processing it, response time will of course take a hit.
Is there anything you can share about the comment that high fault rates are not as big a concern now as they were in the past?
Ray responded with an explanation of thrashing and page faulting on the AS/400:
Because the answer to the questions is not really straightforward, I have tried to share a little insight about thrashing and page faulting.
High faulting rates can mean thrashing and poor performance. They can also mean that some new task has just started running and none of its code or data was in memory, so it had to be moved from disk to memory. They can also be “normal” for the particular system, time period, and workload.
Thrashing typically occurs on a system when batch and interactive work share the same memory pool. Interactive work typically processes a small amount of information, sends a response back to the user, and then waits for the user’s next request. The key point here is that the interactive job has completed a small task and is waiting on the user. Batch work, on the other hand, gets control of the CPU and processes a file that can be millions of records long. A typical batch process doesn’t relinquish control of the CPU until it is forced to by a parameter called “time slice end.”
What can happen is that a batch program pulls hundreds or thousands of records into memory, starts to process the data, and then hits time slice end. Next, several interactive jobs with higher priority all get to run. These interactive jobs essentially flush memory so the data that the batch job was using has been completely paged out of system memory by the work of the interactive jobs. When the batch job gets the CPU back, it starts loading memory all over again, only to be kicked out at time slice end by more interactive jobs that have a higher priority. Repeat this cycle and this is what I call thrashing. If a batch job shares memory with other batch jobs or similar work, the thrashing typically does not occur or occurs much less frequently.
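The cycle Ray describes can be sketched as a toy simulation. This is a hypothetical Python model of the mechanism only (pool sizes, page counts, and the jobs themselves are made up, and real IBM i memory management is far more sophisticated):

```python
# Toy model of the thrashing cycle: a batch job loads its working set
# into a shared pool, hits time slice end, and higher-priority
# interactive jobs flush the pool before the batch job finishes.

def run_batch_slice(pool, pages_needed=800):
    """One batch time slice: count page faults for pages not already in
    the pool, then leave the working set loaded."""
    faults = sum(1 for p in range(pages_needed) if ("batch", p) not in pool)
    pool.clear()
    pool.update(("batch", p) for p in range(pages_needed))
    return faults

def run_interactive_burst(pool, pages_needed=900):
    """Higher-priority interactive jobs run next and flush the pool,
    paging out everything the batch job had loaded."""
    pool.clear()
    pool.update(("inter", p) for p in range(pages_needed))

shared_pool = set()
total_faults = 0
for _ in range(5):                      # five time-slice cycles
    total_faults += run_batch_slice(shared_pool)
    run_interactive_burst(shared_pool)  # batch pages get paged out

# Every cycle the batch job re-faults its entire working set:
print(total_faults)                     # 4000 = 5 cycles x 800 pages
```

If the interactive burst is removed (i.e., the batch job gets its own pool), only the first slice faults and the total drops to 800, which mirrors Ray’s point that separating batch from interactive work largely eliminates the thrashing.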
I recommend that you look at the WRKSYSSTS screen when the system is busy and everyone is happy (i.e. no complaints about a slow system). Press F5 and F10 several times and take a few screen shots. This should be your baseline of good performance. Next, observe the WRKSYSSTS screen when many users are complaining of poor performance. Now you have some real information to work with. Hopefully what you see now will make sense. I think of the WRKSYSSTS screen as the system dashboard. With this information you can start to analyze system performance.
An additional metric that I didn’t really address was the ratio of DB page faults to DB pages and the ratio of non-DB page faults to non-DB pages. At first glance, I would say that if the number of “pages” is at least a factor of 10 larger than the number of “page faults,” this could be normal.
The age-old answer of “it depends” comes into play here. As the system performs more work, the value of the “pages” counter increases. This is a very good indicator of the amount of data being read from disk to supply transactions with the requested data. Big numbers in the Pages column are, by themselves, a good indication.
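Ray’s rough guideline (pages outnumbering page faults by at least a factor of 10 could be normal) can be written as a simple check. This is a hypothetical Python sketch; the function name and sample readings are made up:

```python
def faulting_looks_normal(pages, faults, factor=10.0):
    """Apply the rough rule of thumb: if the pages count is at least
    `factor` (10x by default) larger than the fault count, the faulting
    rate could be normal for the workload. A low ratio doesn't prove a
    problem; per the article, "it depends", so it only flags a closer look."""
    if faults == 0:
        return True  # no faults at all is certainly fine
    return pages / faults >= factor

# Example readings off a WRKSYSSTS-style display (made-up numbers):
print(faulting_looks_normal(pages=5200, faults=310))   # True  (ratio ~16.8)
print(faulting_looks_normal(pages=1200, faults=400))   # False (ratio 3.0)
```

As Ray notes below, a failing check is a prompt to compare against your own baseline, not a verdict: a ratio that is poor on one machine may be routine on another.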
Regarding the question: When the high page and faulting rates were brought up, it was mentioned that high fault rates are not a big contributor to poor performance, and so are no longer a concern as much as they had been in the past. Is that true?
Hopefully you now know the answer; however, I did want to emphasize one point: generally, high faulting rates (high being a relative number) are a big contributor to poor performance. The only reason high faulting rates are not as big a concern now as they were in the past is that fast machines with lots of brute force can hide horrible performance. New machines have faster disks, faster IOPs/IOAs, faster CPUs, and often more memory. Because of the reduced cost of hardware performance, it appears to me that system performance tuning has become a lost art. Both commercial software programs and technicians with performance knowledge can dramatically improve system performance in some situations with no additional hardware. However, both software and human resources are generally more expensive than more hardware.
Because every machine is truly unique, and every workload and number of users at any given time is also unique, only you can observe what constitutes good performance on your system.
The reader then asked a follow-up question regarding pool data:
When I enter the WRKSBSD command for a particular subsystem, I enter an option #5 for that subsystem to display its parameters. Then I enter option #2 for pool definitions. That screen lists the POOL ID, STORAGE SIZE, and ACTIVITY LEVEL.
Then I enter a WRKSHRPOOL command. That screen lists POOL as the left column, but also has a POOL ID column. I need to find out: what is the relationship of the POOL and POOL ID columns on the WRKSHRPOOL command to the POOL ID column on the WRKSBSD POOL DEFINITION screen? Does the POOL ID column on the WRKSBSD display refer to the POOL ID column on the WRKSHRPOOL command?
I believe what I need to figure out is:
- The size of each pool.
- Whether that size can be automatically changed or always stays the same.
- What subsystems use each pool. In other words, I need to see each pool and what subsystems [and therefore jobs], feed into that pool.
I would think that if I get the total of the DEFINED SIZE column on the WRKSHRPOOL command, it would equal the MAIN STORAGE SIZE amount. On my screen, it does not. In fact, there is a difference of 2433 M. Is that difference normal? Or does the difference represent memory we have physically installed, but not used for anything?
Thank you very much for all of your time in this matter. I realize that faster processors, memory, and disk hide performance issues. But if the performance issues were addressed, we would really see throughput increase without additional hardware expense.
Ray briefly explained how to understand pool numbers on the AS/400:
Pool numbers are one of the most confusing issues when dealing with memory on i. I have added a few notes to the questions and a couple of screen shots. This can get pretty deep pretty quickly for an email. See the two screen shots below. Looking at them together usually helps put the pieces together.
In the WRKSYSSTS screen shot, note the “Sys Pool” numbers 1-5. System pool numbers 3 and greater are assigned arbitrarily when the system IPLs, according to which subsystem starts first. Note on the second screen shot, of the WRKSBS screen, that you see the subsystem pools 1-10.
QINTER and QSPOOL come defined with the OS. Separating batch and interactive is a manual process.
Rule #1 of tuning – all subsystems should have System Pool #2 defined for the first subsystem pool since that is where the “task dispatcher” runs by definition (you can’t change it). Nothing gets done until the task dispatcher dispatches the work.
So you always want pool 1 and 2 to be running well. If they are not running well, no one is running well.
A different reader submitted this question regarding non-DB fault rates greater than 10:
I was interested in the section saying that non-DB faults should all be less than 10.0. Our system regularly sees a much higher figure. Following on from the explanation given later in the article, is the only fix for this to add more memory, or could there be a problem with the way an application has been coded?
Ray explained a quick fix and expanded on page faulting rates and what they mean:
The quickest way (not the only way) to fix this is to add memory. I believe that I discussed moving memory from other pools and changing the Max Active value later in the same article. Adjusting these numbers can often improve performance. Caveat: if the system value QPFRADJ is turned on, all of the changes you just made will be undone whenever the performance tuner deems it necessary.
However, we first need to back up and ask: are you experiencing performance issues? A page faulting rate above 10 may provide superb performance for your machine. It all depends on the CPU speed, the amount of memory, the speed of disk access, the workload, and often the network connections.
On my small (P05) system performance starts to slow down when my page faulting rates go above 10. This is a guideline that has worked well for me as a good starting point when analyzing system performance.
Businesses with 100 to 1,000 employees may want to take a look at a free security assessment tool that has been released by the Aberdeen Group and IBM. A sample report representing what you can expect to get after filling out the survey is provided. The report shows how your organization stacks up against similar organizations (over 30,000) that have also been evaluated. This information could be helpful in determining what to focus on in your security set-up. Of course, by filling out the survey you allow IBM and Aberdeen to contact you regarding your security infrastructure.
Let us know if you take the survey and find any surprises or what your opinion is about this tool.
SkyView Partners has announced a new release of its Policy Minder software for IBM i and i5/OS. It includes a new graphical user interface (GUI) to help users with security policy compliance.
“The power of Policy Minder has really been harnessed by the enhanced interface as it provides a new level of point-and-click usability, even for a green screen power user like me,” said Robin Tatum of MIS.
Suhas Narayan of International Rectifier also tested the software ahead of its ship date, which is scheduled for Monday. He called it “security with simplicity,” adding that “the GUI interface provides an easy way to manage single or multiple systems.”