SQL Server 2005 Query Very Slow in the first run

Tags: SQL queries, SQL Server 2005
Hi, I have a SQL query that takes 8 seconds on the first run. The next run thereafter takes 0.5 seconds, and all subsequent runs take 0.5 seconds. Is the plan getting cached? How do I make this query run in 0.5 seconds on the first run itself? Please find the query below.
SELECT
    ISNULL(joblabor.IDNumber, -1) AS 'ProcessTransactionID', EmpNo AS 'EmployeeID',
    CASE (joblabor.lJob) WHEN '-1' THEN '0' ELSE joblabor.lJob END AS 'JobID',
    joblabor.ProcNo AS 'ActivityID', process.Process AS 'ActvityName', joblabor.shift AS 'Shift',
    CASE (ISNULL(suspend, 0)) WHEN 1 THEN 'true' ELSE 'false' END AS 'Suspended',
    joblabor.StartTime AS 'StartDateTime', joblabor.StartTime AS 'StartTime', joblabor.updated AS 'UpdatedDate',
    ProcQuant AS 'Quantity', ProcHours AS 'Hours', ISNULL(Remarks, '') AS 'Remark', ISNULL(JCNotes, '') AS 'Notes',
    ISNULL(JobFormSpecs.PartDes, '') AS 'FormDesc', ISNULL(Job.EstDes1, '') AS 'JobDesc',
    0 AS 'Standard', 0 AS 'MinimumStd', 0 AS 'MaximumStd',
    JobLabor.PartNo AS 'FormID', joblabor.CostDate AS 'CostDate', ISNULL(JobLabor.linenum, 1) AS 'ProcessTransactionIndex',
    CASE (joblabor.suspend) WHEN 1 THEN 1 ELSE 0 END AS 'ProcessType',
    (joblabor.timepct * 100) AS 'Percent', ISNULL(Job.FCustNo, '') AS 'CustomerID', ISNULL(Job.FCompany, '') AS 'CustomerName',
    joblabor.EndTime AS 'EndDateTime', process.ProcGroup AS 'ActivityGroup',
    CASE (LEN(LTRIM(SUBSTRING(Job.remark1, 1, 25)))) WHEN 0 THEN 'false' ELSE 'true' END AS 'HasJobNotes',
    CASE (ISNULL(ProcessRemarks.ContainsProcessRemarks, 0)) WHEN 0 THEN 'false' ELSE 'true' END AS 'HasProcessRemarks',
    CASE (ISNULL(ChangeOrder.ContainsChangeOrders, 0)) WHEN 0 THEN 'false' ELSE 'true' END AS 'HasAlteration',
    ISNULL(joblabor.costcode, '') AS 'CostStatus', ISNULL(Complete, '') AS 'CompletionCode',
    GRYield AS 'GrossYield', NetYield AS 'NetYield'
FROM BBJobCST joblabor (NOLOCK)
LEFT OUTER JOIN bbjthead Job (NOLOCK)
    ON joblabor.lJob = Job.lJob
INNER JOIN SSProces AS Process (NOLOCK)
    ON process.ProcNo = joblabor.ProcNo AND process.archive = 0
LEFT OUTER JOIN bbPthead JobFormSpecs (NOLOCK)
    ON LTRIM(RTRIM(joblabor.lJob)) = LTRIM(RTRIM(JobFormSpecs.lJob))
   AND LTRIM(RTRIM(JobLabor.PartNo)) = LTRIM(RTRIM(JobFormSpecs.PartNo))
LEFT OUTER JOIN (
    SELECT COUNT(bbchghdr.ljob) AS 'ContainsChangeOrders', bbchghdr.ljob
    FROM bbchghdr (NOLOCK)
    INNER JOIN bbchglin (NOLOCK)
        ON bbchghdr.ljob = bbchglin.ljob AND bbchghdr.changeno = bbchglin.changeno
    WHERE type = 'P' AND descript NOT LIKE ''
    GROUP BY bbchghdr.ljob
) ChangeOrder ON ChangeOrder.ljob = joblabor.ljob
LEFT JOIN (
    SELECT COUNT(bbjobcst.ljob) AS 'ContainsProcessRemarks', bbjobcst.ljob
    FROM bbjobcst (NOLOCK)
    WHERE (LEN(LTRIM(SUBSTRING(bbjobcst.jcnotes, 1, 25))) > 0 OR LEN(LTRIM(remarks)) > 0)
    GROUP BY bbjobcst.ljob
) ProcessRemarks ON ProcessRemarks.ljob = joblabor.ljob
WHERE joblabor.empno = '000002013'
  AND (ISNULL(joblabor.endtime, '') = '' OR suspend = 1)
  AND (joblabor.ProcHours = 0 OR suspend = 1)
  AND joblabor.ljob <> 0
ORDER BY joblabor.CostDate DESC, joblabor.starttime DESC, [linenum] ASC
Thanks in advance, John.

Answer Wiki


It isn’t the query plan that is getting cached – it is the actual data and index pages that are being cached. This is common behavior for any database.
If you ran the query, waited a while, and then ran it again (keeping the session active), you would see the query take longer again once the cached data/index pages had been flushed to make room for other data.
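If you want to verify this on a test server, here is a minimal sketch (it assumes the SQL Server 2005 dynamic management views and sysadmin rights; do not flush the buffer pool on a production box):

-- Cached plans matching this query text -- the plan normally survives between runs.
SELECT cp.usecounts, cp.cacheobjtype, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%BBJobCST%';

-- Flush clean data/index pages from the buffer pool (test servers only), then
-- re-run the query: it should be slow again, showing the speed-up comes from
-- cached pages rather than the cached plan.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;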

You can see whether you can reduce the 0.5-second repeat time, and/or whether you can reduce the intermediate result sets (which are usually what gets cached and speeds up the second through nth runs).

If the query is producing a large intermediate result set (perhaps a large join where most records are then discarded), you may be able to speed it up by changing parts of your query. Also, sometimes just adding the right index can solve issues like this.
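As a rough sketch of the indexing idea (the index name and the choice of key/included columns here are assumptions; they depend on your data and should be checked against the actual plan), an index that supports the WHERE clause on BBJobCST could let the first run read far fewer pages:

-- Hypothetical covering index for the filter on empno and the other predicates.
CREATE NONCLUSTERED INDEX IX_BBJobCST_EmpNo
    ON BBJobCST (EmpNo)
    INCLUDE (lJob, EndTime, Suspend, ProcHours, CostDate, StartTime, LineNum);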

Look at the execution plan and see if there are any full table scans, full index scans, cartesian products, or similar operators indicating inefficient joins/indexing.
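In Management Studio you can turn on "Include Actual Execution Plan" (Ctrl+M) before running the query, or capture the plan and I/O figures from T-SQL; a minimal sketch:

-- Option 1: per-table read counts and timings for an actual run
-- (physical reads / read-ahead reads indicate pages coming from disk, i.e. a cold cache).
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- ... run the query here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- Option 2: the plan as XML, without executing the query
-- (SET SHOWPLAN_XML must be alone in its batch, hence the GO separators).
SET SHOWPLAN_XML ON;
GO
-- ... run the query here ...
GO
SET SHOWPLAN_XML OFF;
GO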

Discuss This Question: 6 Replies

 
  • Jsql
    Hi Kccrosser, thanks for your valuable answer. I can see this kind of effect in any simple SQL query. Can you point me to a good source where I can find detailed information on how to read the execution plan and do performance tuning? Thanks, John
  • carlosdl
  • Kccrosser
    Some specific areas to look at in your query above are all the joins that include functions on the underlying columns - ltrim, rtrim, len, and substring. Any time you apply a function to a column in the where/join clause, you eliminate the ability of the query optimizer to use indexes on that column. Further, if the query optimizer doesn't have other good indexes to use, it may "invert" the query (as I call it), where it uses the primary key of the table to do a cartesian product join with another table and then applies the functions to the resulting giant intermediate result set. Avoid functions on columns in a where/join clause like the plague...

    Some of your joins are using "ltrim(rtrim(<column>))". Much better would be to enforce removal of leading/trailing blanks on data entry (with a trigger, perhaps), which would allow you to do direct compares of the data in the columns without the functions, and therefore allow use of indexes on those columns.

    Also note that "len(ltrim(remarks)) > 0" should be the same as "remarks is not null" if you eliminate leading/trailing blanks on entry. The "len(ltrim())" expression could cause a table scan, while "remarks is not null" *could* use an index on that column (it might not, depending on the actual distribution of data in the records).

    Finally, if there is no good way to avoid leading/trailing blanks in the data, you might consider creating materialized views of the data, where the view contains the blank-stripped data and has a column containing the field lengths where needed. Then your query can go against that view. Depending on the volume of updates to the table and view, this could dramatically speed up the query. (Note that a "materialized" view is a "real" table containing the massaged data from the underlying tables, and this table can be indexed like any other table - just creating a standard "view" will NOT generally gain any performance, as the view will be constructed at query time with no advantage over just running the query.)
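    A minimal sketch of the trigger idea above (the trigger name is made up, and it assumes lJob/PartNo are character columns that identify a bbPthead row):

    -- Hypothetical trigger that strips leading/trailing blanks as rows are inserted
    -- or updated, so the join no longer needs LTRIM/RTRIM and can use an index on (lJob, PartNo).
    CREATE TRIGGER trg_bbPthead_TrimKeys
    ON bbPthead
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE p
        SET    p.lJob   = LTRIM(RTRIM(p.lJob)),
               p.PartNo = LTRIM(RTRIM(p.PartNo))
        FROM   bbPthead p
        INNER JOIN inserted i
            ON i.lJob = p.lJob AND i.PartNo = p.PartNo;  -- assumes (lJob, PartNo) locates the row
    END;
    GO
    -- With clean data, the join in the main query can drop the functions
    -- (ON joblabor.lJob = JobFormSpecs.lJob AND JobLabor.PartNo = JobFormSpecs.PartNo),
    -- and "LEN(LTRIM(remarks)) > 0" can be written as "remarks IS NOT NULL AND remarks <> ''".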
  • Jsql
    Thank you very much, Carlosdl and Kccrosser. This will help me research further.
  • Findsarfaraz
    Also, you can use a stored procedure instead of an ad hoc query. In a stored procedure you can use temp tables, which will definitely enhance your query performance. Regards, Sarfaraz Ahmed
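    A minimal sketch of that approach (the procedure and temp-table names are hypothetical; whether it helps depends on how expensive the two derived tables are):

    -- Hypothetical procedure that materializes the two aggregated subqueries into
    -- temp tables first, then joins the main query against them.
    CREATE PROCEDURE dbo.usp_GetOpenJobLabor
        @EmpNo varchar(20)
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT bbchghdr.ljob, COUNT(bbchghdr.ljob) AS ContainsChangeOrders
        INTO   #ChangeOrder
        FROM   bbchghdr
        INNER JOIN bbchglin
            ON bbchghdr.ljob = bbchglin.ljob AND bbchghdr.changeno = bbchglin.changeno
        WHERE  type = 'P' AND descript NOT LIKE ''
        GROUP BY bbchghdr.ljob;

        SELECT bbjobcst.ljob, COUNT(bbjobcst.ljob) AS ContainsProcessRemarks
        INTO   #ProcessRemarks
        FROM   bbjobcst
        WHERE  LEN(LTRIM(SUBSTRING(bbjobcst.jcnotes, 1, 25))) > 0 OR LEN(LTRIM(remarks)) > 0
        GROUP BY bbjobcst.ljob;

        -- ... the main SELECT from the question, with the two derived tables
        -- replaced by joins to #ChangeOrder and #ProcessRemarks, filtered by @EmpNo ...
    END;
    GO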
  • Kccrosser
    Actually, if the performance is due to initial loading of the data rows into memory (which is what it sounds like), using a stored procedure won't make that much difference. The real key is to understand the query operation and make sure it is only loading those rows that it really needs.
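    For example (a sketch, assuming the two flags only need to report whether any qualifying row exists, which is how the CASE expressions use them), the grouped derived tables could be replaced with correlated EXISTS checks so only the jobs for this one employee are probed:

    -- In the SELECT list, instead of joining to the grouped subqueries:
    CASE WHEN EXISTS (SELECT 1 FROM bbjobcst pr
                      WHERE pr.ljob = joblabor.ljob
                        AND (LEN(LTRIM(SUBSTRING(pr.jcnotes, 1, 25))) > 0
                             OR LEN(LTRIM(pr.remarks)) > 0))
         THEN 'true' ELSE 'false' END AS 'HasProcessRemarks',
    CASE WHEN EXISTS (SELECT 1 FROM bbchghdr ch
                      INNER JOIN bbchglin cl
                          ON ch.ljob = cl.ljob AND ch.changeno = cl.changeno
                      WHERE ch.ljob = joblabor.ljob
                        AND type = 'P' AND descript NOT LIKE '')
         THEN 'true' ELSE 'false' END AS 'HasAlteration'
    -- ...and drop the two LEFT JOINs to ChangeOrder and ProcessRemarks.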
