Reducing the impact of increasing a field length in an AS400 file

Physical File
I am currently working on a project where a file has a field of length 3 (packed decimal). Under the project, we need to increase that length to 5 packed decimal. But this change is causing a lot of impact on the system, as a lot of programs and queries need to be changed/recompiled. Is there any other alternative that can be considered in this case? Just an FYI... the file does not have DDS and is created using an SQL script.

Software/Hardware used:
As400 PF


Discuss This Question: 4 Replies

  • TomLiotta

    DDS or SQL is not relevant. If the field needs to allow 5 digits, then programs and queries that use that field also have to handle 5 digits. That is, they need to be recompiled.

However, for any program (or query) that uses the file but does not use that field, the program (or query) should use a VIEW that does not include that field. Of course, the program (or query) still needs to be recompiled after the source is changed to reference the VIEW. That doesn't help your current need, but it should help in the future when you need to change lengths for a different field.
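For example (table and column names here are hypothetical, since the original file layout isn't shown), a view that exposes everything except the packed key might look like this on DB2 for i:

```sql
-- Hypothetical names: table MYLIB.ORDERS with a packed DECIMAL(3,0) key ORDNBR.
-- The view lists every column EXCEPT the one being resized, so programs
-- bound to the view are insulated from future changes to that column.
CREATE VIEW MYLIB.ORDERS_V1 AS
    SELECT ORDDAT, CUSTNO, ORDAMT
    FROM MYLIB.ORDERS;
```

Programs that never touch ORDNBR would then reference ORDERS_V1 instead of the table itself.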


  • sanras123
I agree. But my question is, is there any other alternative to increasing the field length? Maybe I was not clear in my concern. Let me clarify. The current field is 3 packed decimal and it's a key field of that file. Now that field is slowly approaching the value '999'. The moment 1000 comes, this structure won't work. So can we handle this situation in some other way that will reduce the impact on other programs/queries?
  • CharlieBrowne

If it is a key field, you either need to change it to 5P or add a new field that you can concatenate to it. That would be ugly. Without knowing all the details, I say bite the bullet and change it to 5P.

Do a CHGPF, then recompile all the LFs.
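Since the file was created with an SQL script rather than DDS, the SQL equivalent of CHGPF here is ALTER TABLE; a sketch, with placeholder table and column names:

```sql
-- Placeholder names. DB2 for i rebuilds the table and converts the existing
-- DECIMAL(3,0) values into the wider DECIMAL(5,0) column in place.
ALTER TABLE MYLIB.ORDERS
    ALTER COLUMN ORDNBR SET DATA TYPE DECIMAL(5, 0);
```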

    Check all DSPF & PRTF that use that field and make necessary adjustments.

Recompile all programs using the PF or LFs. If that field is used within the program, you will need to check whether any changes are necessary. (It may be as simple as just checking those programs where the compile fails.)

You should be able to get this all set up and tested ahead of time. Create a new lib and create a CLLE that performs all the above steps, outputting to your new library.

    You may need to manually check for SQL and queries that need to be modified.

  • TomLiotta

    You don't need to clarify. Your problem was well understood from the beginning. You either have the tools needed to help with the work, e.g., a 'Change Management' product sophisticated enough to do most of the work for you, or you don't. If you do, we don't know about it and can't discuss it until we do. And if you don't, then the work must be done manually.

    If the field is currently a 3-digit packed-decimal field, it's almost guaranteed that programs/queries are going to expect packed-decimal values. The field requires two bytes of memory to hold the values. There are other data types that can store more values in the same 2-byte space, but those values will cause unexpected problems (various likely program crashes) if non-packed data is stored in the same memory.

E.g., an unsigned integer (binary) field can be defined over a 2-byte space. The packed field can hold values ranging from -999 up to 999. A 2-byte unsigned integer can hold values from 0 up to 65535. As such, many more unique values can be stored in the same space.

    However, a program will expect the four low-order bits to represent the sign of the value. The other bits must all represent half-byte representations of decimal digits, i.e., they must all be hex digits from x'0' through x'9'. Binary values will cause hex values to include x'A' up through x'F', and those cannot be reliably processed by a program that is working only with decimal digits.
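One way to see that byte layout for yourself (again assuming hypothetical names) is to dump the column in hex:

```sql
-- HEX() shows the raw packed bytes: a DECIMAL(3,0) value of 999 occupies
-- two bytes, with the low-order half-byte holding the sign nibble (a
-- positive sign code such as X'F' or X'C'), e.g. X'999F'.
SELECT ORDNBR, HEX(ORDNBR) AS RAW_BYTES
FROM MYLIB.ORDERS;
```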

    Decimal-data errors will likely be thrown by many programs.

    You possibly could begin to use negative values in the field. That could double the number of unique values, but you're the only one who can know what effect that might have. It could cause sorting problems. And it could cause incorrect values to show up anywhere that the sign of the value isn't displayed or printed or allowed to be entered.

    You mentioned a "lot of ... queries". If those are Query/400 queries (WRKQRY), it's an additional problem. (It's also one more reason why they should have been converted to QM queries many years ago.) They will all need to be recreated if they're going to be used after the field definition is changed. (A QM query might adapt automatically and not need to be recreated.) Practically speaking, no one should be running Query/400 queries anymore.

    One way to "handle" the problem could be to use some 'Change Management' product that could automate much of the work. Of course, that means choosing, buying, installing, learning and configuring the product before getting it to do anything. Depending on how many objects are involved, that could take more effort than doing it all manually. The advantage would be that this type of project would be easier if (when) it ever happened again in the future.

You could also automate some of the work yourself. You could generate lists of program objects. Read the lists in a program and run DSPPGMREF to an *OUTFILE for each program. Then read the *OUTFILE and submit compiles for each program that includes the file. Note that *MODULEs may need to be created first, and then programs created from those if multiple *MODULEs were used.
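As a sketch of that step (library, file, and outfile names are placeholders), the DSPPGMREF *OUTFILE (format QADSPPGM) can be queried with SQL to list every program that references the changed file:

```sql
-- Run first: DSPPGMREF PGM(MYLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMREFS)
-- WHPNAM = program name, WHLIB = program library, WHFNAM = referenced object.
SELECT DISTINCT WHLIB, WHPNAM
FROM QTEMP.PGMREFS
WHERE WHFNAM = 'ORDERS';
```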

Any code that handles the new value may need to be examined in case work fields are used to manipulate the value. If the work fields were not defined LIKE() the changed field, then those work definitions will need manual adjustment. Any logical tests for hard-coded values, e.g., IF FLD > 999, will need to be located and modified also. Those ought to be changed to use a constant that might be coded in a COPY member or some other external object.

    There is no good way to do it unless the programs and queries were properly coded in the first place. Ideally, the programs would use SQL for database access and all fields that are used to manipulate the value are defined LIKE() the original field. In such a case, simple recompiles could be enough. In the very best cases, even recompiles wouldn't be needed.

    Your DSPFs and PRTFs should use field references to describe fields. That could allow simple recompiles in many cases. In cases where not enough space was allowed between fields on a display or print line, then some redesign will be needed. They will need to be recreated before programs can be successfully recompiled.

    There is no short-cut if the application wasn't created in ways that help. And for Query/400 queries, there is no short-cut at all.


