SETLL vs. LOOKUP

65 pts.
Tags:
*INZSR
iSeries applications
LOOKUP/XFOOT/MOVEA operations
SETLL
Fact-based opinions requested. Is it faster to perform a SETLL on a file to do an existence test, or to read the file (note: it has a very small number of records) into a program array during *INZSR and then do a LOOKUP for the existence test?

Answer Wiki

The better way is to use LOOKUP when the test is against a program array; if the data is in a physical/logical file, you can use CHAIN instead.

=================================================================

A SETLL will possibly be faster and use less overhead than a LOOKUP regardless of how many or few records are in the file. Unless…

You don’t give enough information to be sure in all cases. The biggest unknown is how many lookups will be performed. Depending on the size of the key, there is likely to be some number of lookups beyond which the array lookup becomes more efficient than SETLL, but it might have to be a large number. A SETLL is, after all, little more than a lookup against an index.
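The claim that a SETLL is little more than a lookup against an index can be modeled roughly. The sketch below (Python, purely illustrative; neither operation is literally this code) treats both an index probe and a lookup on a sorted array as a binary search, which is why the per-probe cost of each grows only logarithmically with the number of keys and the real difference comes down to fixed overhead per call.

```python
def exists(keys, target):
    """Existence test by binary search over a sorted key list.

    Rough model of one probe: SETLL against a keyed file's index
    and %LOOKUP against a sorted array both do O(log n) key
    comparisons per test.  The comparison count is returned so the
    cost can be inspected for various table sizes.
    """
    lo, hi, comps = 0, len(keys) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comps += 1
        if keys[mid] == target:
            return True, comps
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comps

keys = list(range(1, 5001))           # 5,000 keys, like the test table below
found, comps = exists(keys, 3333)     # a key that is present
missing, comps2 = exists(keys, 6000)  # a key that is not
```

Found or not, a probe over 5,000 keys needs at most 13 comparisons in this model, so table size alone rarely decides the winner; the constant cost of each SETLL call versus each in-memory lookup does.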

If the index that SETLL is using is small and it’s referenced often, especially if it might be referenced by other processes too, it will likely be memory resident anyway. (You could use SETOBJACC to force it to stay memory resident.) In that case it might be no more likely to be paged out than an array in your program would be.

Tom

================================================================

Being curious about where the breaks might be, I created some tests that you can use. They can be modified in a variety of ways for different results. First, here’s a REXX procedure to create and populate a table:<pre>
signal on error name FAILURE
signal on failure
signal on syntax

RC = 0
mytbl = 'MYLIB/TSETLL'
limit = 5000

address '*EXECSQL',
execsql 'drop table ' mytbl

address '*EXECSQL',
execsql,
'CREATE TABLE ' mytbl ' ( TKEY INT NOT NULL WITH DEFAULT, ',
' TVAL CHAR ( 25 ) NOT NULL WITH DEFAULT)'

sql_Stmt = 'INSERT INTO ' mytbl ' VALUES( ?, ? ) with NC'
address '*EXECSQL',
execsql 'PREPARE S1 FROM :sql_Stmt'

do i = 1 to limit
   V = 'Val' i
   address '*EXECSQL',
   execsql 'EXECUTE S1 USING :i, :V'
end

'commit'

address '*EXECSQL',
execsql 'ALTER TABLE ' mytbl ' ADD CONSTRAINT PRIKEY PRIMARY KEY (TKEY)'

exit

/* ------------------------------------------------------------- */
FAILURE:

trace off

parse source system start srcmbr srcfile srclib
say 'Unexpected error at line 'sigl' of REXX program ',
    srcmbr' in 'srclib'/'srcfile'. The exception ID is 'rc'.'

exit(right(rc,4))
/* ------------------------------------------------------------- */
SYNTAX:

trace off

parse source system start srcmbr srcfile srclib
say 'Syntax error at line 'sigl' of REXX program ',
    srcmbr' in 'srclib'/'srcfile'. The error code is 'rc'.',
    ' The description is :'
say errortext(rc)

exit(rc)</pre>
On a slower system, that will take a minute or two to run. The limit variable determines how many rows are inserted. Make it larger or smaller to test different lookup array sizes.

Note that the first SQL DROP TABLE function will fail the first time the REXX procedure runs. That’s because no table exists, so the DROP can’t run. For the first run, this statement should be commented out, or error handling should be added to ignore that specific error.

The ILE RPG test program runs a SETLL test then a similar %LOOKUP test over the same table data. The MAXLIM constant can be set to adjust the number of searches. The ARY constant sets the physical size of the %LOOKUP array; in the source I have it a little larger than the REXX limit value.

The PRIME constant is just a number used to pick various values to feed to SETLL or %LOOKUP — I wanted a distribution that would hit all areas of the index as well as the array, and also to give a few “not found” conditions. The %REM function will give values outside of the valid range as long as PRIME is large enough. And if a prime number is used, there’s less chance of hitting the same results too many times.
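To see what key sequence those constants actually produce, here is a small model (Python, outside the RPG, just for illustration) of kTst = %REM(i : PRIME) + 1 with i stepping by 7. One detail worth noting: with the constants exactly as listed, MAXLIM (5000) is smaller than PRIME (5051), so %REM(i:PRIME) is simply i and every probe lands on an existing key; the “not found” probes only appear once MAXLIM is raised past PRIME.

```python
MAXLIM = 5000      # loop limit from the RPG source
PRIME = 5051       # divisor from the RPG source
TABLE_KEYS = 5000  # rows inserted by the REXX procedure (keys 1..limit)

def probe_keys(maxlim, prime, step=7):
    """Key values the test loop feeds to SETLL / %LOOKUP:
    kTst = %REM(i : PRIME) + 1 for i = 1, 1+7, 1+14, ... <= maxlim."""
    return [(i % prime) + 1 for i in range(1, maxlim + 1, step)]

probes = probe_keys(MAXLIM, PRIME)
not_found = [k for k in probes if k > TABLE_KEYS]

# With MAXLIM < PRIME every probe stays in 1..5000 and is found;
# raising MAXLIM past PRIME is what produces misses.
wider = probe_keys(11000, PRIME)
wider_misses = [k for k in wider if k > TABLE_KEYS]
```

Printing `len(not_found)` and `len(wider_misses)` shows where the mix of found and not-found conditions starts to change as the constants are varied.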

Hardly perfect, but good enough to test a lot of combinations:<pre>
H Debug( *yes )
H dftactgrp( *NO )
H actgrp( *CALLER )
/SPACE 3
FTSETLL if e k disk usropn
f rename( TSETLL : TSETLLR )

RKO DProgStatus sds
d Pgmq## *PROC
d ErrorSts *STATUS
d PrvStatus 16 20s 0
d SrcLinNum 21 28
d Routine *ROUTINE
d NumParms *PARMS
d ExcpType 40 42
d ExcpNum 43 46
d MsgId 40 46
*
d PgmLib 81 90
d ExcpData 91 170
d ExcpId 171 174
d LastFile 175 184 Full file name
d FileErr 209 243
d ProcPgm 334 343
d ProcMod 344 353
/eject
d MAXLIM c 5000
d PRIME c 5051
d ARY c 5010

d lim s 10i 0 inz( MAXLIM )
d i s 10i 0
d n s 10i 0
d kTst s 10i 0
d fLkup s n

d pk s 10i 0 dim( ARY ) ascend

/free
// Begin SETLL…

open TSETLL ;

dsply ( %char(%timestamp()) + ' SETLL' ) ;

for i = 1 by 7 to lim ;
kTst = %rem(i:PRIME) + 1 ;
setll ( kTst ) TSETLLR ;
fLkup = %equal( TSETLL ) ;
endfor ;

dsply ( %char(%timestamp()) + ' SETLL' ) ;

close TSETLL ;

// Begin %LOOKUP…

open TSETLL ;

dsply ( %char(%timestamp()) + ' LOOKUP' ) ;

for i = 1 to ARY ;
read TSETLLR ;
if not %eof( TSETLL ) ;
pk(i) = tKey ;
else ;
n = i - 1 ;
leave ;
endif ;
endfor ;

for i = 1 by 7 to lim ;
kTst = %rem(i:PRIME) + 1 ;
fLkup = (%lookup( kTst:pk:1:n ) > 0 ) ;
endfor ;

dsply ( %char(%timestamp()) + ' LOOKUP' ) ;

close TSETLL ;

*inLR = *on ;
return ;

/end-free</pre>
Timestamps are displayed at the beginning and end of the two tests. It’s easy enough to see which method gives the best result under a set of values.
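Rather than eyeballing the DSPLY pairs, the two timestamps can be differenced programmatically. A small sketch (Python, illustrative; it assumes %char(%timestamp()) produced the default *ISO form yyyy-mm-dd-hh.mm.ss.uuuuuu, and the sample values below are made up):

```python
from datetime import datetime

ISO_FMT = "%Y-%m-%d-%H.%M.%S.%f"  # matches yyyy-mm-dd-hh.mm.ss.uuuuuu

def elapsed_seconds(start_ts, end_ts):
    """Seconds between two RPG %char(%timestamp()) strings."""
    start = datetime.strptime(start_ts, ISO_FMT)
    end = datetime.strptime(end_ts, ISO_FMT)
    return (end - start).total_seconds()

# Example with made-up values in the *ISO form the DSPLY lines show:
secs = elapsed_seconds("2011-06-01-10.00.00.000000",
                       "2011-06-01-10.00.01.500000")
```

Pasting each test's start and end timestamps through something like this makes it easy to tabulate runs when trying many combinations of the constants.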

A few things might be varied. The number of rows can be raised or lowered to adjust the number of potential search elements. The number of searches can be raised or lowered. The number of ‘found/not found’ conditions can be changed. By restructuring the CREATE TABLE and the INSERT, rows can be made more complex to see the effect of READing more data for the %LOOKUP test. If you want prime numbers, there are various <a href="http://doc.trolltech.com/2.3/primes.html">prime number tables</a> available.

With some work, a few variables could be externalized. That would allow automating it all in order to run multiple series of tests. Results could be graphed to see trend lines.

It only takes a few tests to see that the best (fastest) method depends on the actual situation. Either SETLL or %LOOKUP can be “fastest”.

Tom

Discuss This Question: 2 Replies

  • Andy Mansell
    This very much depends on the requirements. Will you need to access the data for every record on the file? Will the file ever grow beyond its current small number of records? With the speed of today's processors the overhead will be very low either way. A general rule of thumb, though: if you need to use the data, retrieve the data from the file. It may be quicker to load it into an array and read it from there, depending on how many reads you will be doing from the file and whether you might read the same record numerous times. If you never need the data itself, use SETLL to verify that the record exists; the saving will be small, but it will be easier for other programmers to see what you are doing and why.
  • Splat
    If you only need to check for existence, without actually retrieving the data, SETLL will win hands down (the file buffer doesn't get loaded with SETLL).