SQL Server with Mr. Denny

Dec 5 2012   2:00PM GMT

Databases With Large Tables Should Use Auto Stats Async Update Feature

Denny Cherry


A pretty cool feature that was introduced in SQL Server 2008 R2 SP1 is the ability to change the auto stats algorithm from the default threshold of 20% of the table plus 500 rows to a sliding scale.  This feature is only available by turning on trace flag 2371, so it won't be turned on by default for you.
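If you want to try the flag out, it can be enabled globally for the running instance with DBCC TRACEON (a sketch; run it as a sysadmin and test in a non-production environment first):

```sql
-- Enable trace flag 2371 globally for the running instance.
-- The -1 makes the flag global rather than session-scoped.
DBCC TRACEON (2371, -1);

-- Verify the flag is on.
DBCC TRACESTATUS (2371);
```

To keep the flag enabled across restarts, add -T2371 as a startup parameter for the SQL Server service in SQL Server Configuration Manager.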

When turning on this trace flag because you've got a large database, the goal obviously is to use auto stats, so you'll need to have auto stats turned on.  In addition you'll want to turn on the "Auto Update Statistics Asynchronously" setting for the database or databases which hold the super large table.  The reason you'll want to turn on the auto stats async update feature is that if you don't, you may see queries time out when auto stats starts to kick in.
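Both settings can be changed with ALTER DATABASE (a sketch; the database name here is a placeholder, substitute your own):

```sql
-- Make sure auto update statistics is on (it's on by default).
ALTER DATABASE [YourBigDatabase] SET AUTO_UPDATE_STATISTICS ON;

-- Have auto stats updates run in the background instead of
-- blocking the query that triggered them.
ALTER DATABASE [YourBigDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```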

Auto stats, as we all know, updates the statistics when a certain number of rows in the table have changed.  When you've got a very large table that triggers auto stats to run, and the update stats command takes more than 30 seconds, the query that triggered the auto stats will time out.  That causes the transaction to roll back, which means the auto stats update will also roll back.  The next query will then fire the auto stats update again, and the process will repeat over and over.

The symptoms that you'll see on the SQL Server are queries timing out at random even though the execution plan looks totally normal.  You'll also see a massive amount of IO being generated on the disks hosting the database, because auto stats does a lot of querying of the table, and with auto stats running over and over you'll be thrashing the hard drives pretty quickly.

When you turn on the async auto stats setting on the database, the query that triggers an auto stats update doesn't have to wait for the update stats command to finish.  Instead the update statistics command runs in the background, letting the query continue to run as normal.  The query will run using the old statistics, which is probably OK in this case; they were fine two seconds earlier, so if they get used for a few more seconds it probably isn't all that big of a deal.
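You can verify which databases have these settings enabled by querying the sys.databases catalog view:

```sql
-- List each database's auto stats configuration.
SELECT name,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
ORDER BY name;
```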

Now I don’t recommend turning this setting on for every database on the server.  All of the smaller databases will update statistics just fine within the timeout period.

The table that I ran across in my case, where I had to turn this setting on, took over 2 minutes to manually run update statistics against, so using synchronous statistics updates via auto stats was basically useless.
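To get a feel for how long a synchronous update would take on your own large table, you can time a manual run (the table name here is hypothetical):

```sql
-- Time a manual statistics update on a large table.
SET STATISTICS TIME ON;

UPDATE STATISTICS dbo.YourBigTable;

SET STATISTICS TIME OFF;
```

If this takes anywhere near your query timeout, synchronous auto stats updates on that table are going to cause problems.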


3  Comments on this Post

  • MichaelSwart
    The auto statistics asynchronous setting is useful... I learned that lesson a little while ago but only with a lot of pain
    Thanks for sharing
  • MickOpalak
    I am interested in turning on the asynchronous statistics updates, but I need a metric to measure before and after the change to show a performance improvement.  What metric should I capture and how can I capture it?  Thanks.
  • Denny Cherry
Measuring and quantifying this becomes an interesting situation.  Realistically, unless there's an issue with stats updating during the day, users shouldn't see it.  When users do hit blocking, it'll show up pretty quickly.

In most cases, this is going to be one of those settings we want to change to prevent the risk of the issue rather than wait until there's an issue.  Because auto stats can fire at random times, if you were watching for cases where the issue was being raised, it'd probably be random when issues come up.
