SQL Server with Mr. Denny


July 12, 2011  2:00 PM

Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’.

Denny Cherry

So I hit the error in the title the other night while converting a heap to a clustered index on a production system.  After this error popped up, some other nasty stuff showed up in the error log, which I've included below.

Error: 9002, Severity: 17, State: 4.
The transaction log for database ‘%d’ is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 5.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 1.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 5.
During undoing of a logged operation in database %d, an error occurred at log record ID (87:9933:1). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

At this point crash recovery ran and the database started back up.

The first error is that the log filled up, followed by the nasty part.

The good part is that this is a pretty rare problem to run into.  It will only happen when you run an update operation against a heap, or add a clustered index to a heap, and run out of transaction log space.  The reason this can happen is that updates against a heap and clustered index operations are two-phase operations: first the old rows are deleted, then the new rows are inserted.  When the insert of the new rows is started, an internal flag is set which marks the insert as a "must complete" operation.  When an unexpected error happens between the delete operation and the subsequent insert operation, the database is taken offline as a precautionary measure to ensure consistency and prevent concurrent operations against the record that was being modified.
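As the 9002 message suggests, you can check why log space can't be reused with a quick query against sys.databases; a minimal sketch (the database name is hypothetical):

```sql
-- Check why transaction log space cannot be reused.
-- log_reuse_wait_desc shows values such as LOG_BACKUP,
-- ACTIVE_TRANSACTION, or NOTHING (log is reusable).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';  -- hypothetical database name
```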

The downside to this problem is that it can show its ugly face in SQL Server 2008 and SQL Server 2008 R2, and there is no fix available for it in either version.  The only ways to avoid the problem are to convert the table from a heap to a clustered index or to add more transaction log space.

I have been told that this has been fixed in SQL Server "Denali", as "Denali" handles heap updates differently than prior versions.  If you are on SQL Server 2005 you shouldn't see this problem.

In my case the problem was solved by letting the database come back online and then creating the clustered index.  My database was in the simple recovery model, so after crash recovery the log was marked for reuse automatically, meaning there was enough room in the transaction log for the clustered index creation to complete.

If you run into this problem, hopefully this helps explain what is happening and why.

Denny

July 11, 2011  2:00 PM

Transaction log growth best practices

Denny Cherry

There is a lot of talk online about transaction log growth and what you should do about it. There are two answers — a theoretical one and a practical one — and in this post I'll give you both.

The theoretical answer is that you want to grow your transaction logs in 8 gig chunks. This maximizes the amount of space that makes up each VLF (virtual log file) within the transaction log file, allowing SQL Server to minimize the number of VLFs that are created, which is a performance improvement. Unfortunately there is a problem with this, which leads us to the practical answer…

The practical answer is that there is a bug in SQL Server which causes the database engine to grow the transaction log by only 32k the first time you issue a growth which is a multiple of 4 gigs (which 8 gigs obviously is). Because of this bug, which has not yet been fixed in any released version of SQL Server, it is recommended to grow the transaction log by a slightly smaller number, such as 8000 megs at a time. The VLFs will be almost as large, keeping log performance where it is expected to be, without having SQL Server create a 32k log growth and a 32k VLF.

With this bug there is no error or message telling you that the log was grown by 32k instead of the requested amount. You can check the log growth information by looking at the DBCC LOGINFO output.
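You can verify the resulting VLF sizes with DBCC LOGINFO; a minimal sketch (the database name is hypothetical). Each row of the output is one VLF, and the FileSize column shows its size in bytes:

```sql
-- List the VLFs in the current database's transaction log.
-- One row per VLF; check FileSize to spot an unexpectedly tiny
-- (32k) VLF created by the growth bug.
USE YourDatabase;  -- hypothetical database name
DBCC LOGINFO;
```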

Now don’t forget that you don’t want to auto grow by is large size. If you do transactions may be killed by client side timeouts while waiting for the log to grow as the transaction log doesn’t support instant file initialization no matter what. You’ll therefor want to pre-grow the transaction log to the needed size ahead of time, with a smaller auto grow size that your storage can accept quickly. Personally i like to use a 1 gig or 1000 Meg log auto grow size for emergency auto grows. This gives me enough log space to work with, without having to wait for the storage to take to long to write the file.

Denny


July 7, 2011  2:00 PM

How HyperBac worked for me

Denny Cherry

I recently installed HyperBac on a client's SQL Server to reduce the amount of space needed to store their backups.  Specifically we were looking to reduce the amount of space the log backups took, but I was curious to see how much the full backups would be reduced by as well.

The full backups were ~53 Gigs in size, and the transaction logs averaged about 800 Megs every 12 minutes.  After installing HyperBac, we reduced the full backups down to ~14 Gigs, and the log backups down to about 150 Megs every 12 minutes.  Additionally the full backup went from 57 minutes down to 14 minutes greatly reducing the amount of time that the production disks have extra IO load on them.

Installation and configuration was very simple.  I just installed the software on the server (thankfully with no reboot required) and changed the backups to use .hbc as the file extension instead of .bak and .trn.
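The change amounts to nothing more than the file extension in the standard backup commands; a minimal sketch (the database name and paths are hypothetical):

```sql
-- HyperBac intercepts backup file I/O based on the extension,
-- so pointing the normal BACKUP commands at .hbc files is all
-- that is needed for the backups to come out compressed.
BACKUP DATABASE YourDatabase
TO DISK = N'D:\Backups\YourDatabase.hbc';

BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.hbc';
```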

Overall the space needed for one day's worth of backups went from 107 Gigs to 35 Gigs.  This allowed us to save a very large amount of space on the backup drive, removing the need to purchase more storage for the backup drive for a long time to come.  All in all, well worth the few-hundred-dollar price tag of the software.

Denny


July 5, 2011  2:00 PM

What does WITH (NOLOCK) actually mean

Denny Cherry

Contrary to popular belief, WITH (NOLOCK) isn't actually the SQL Server "Turbo" button.  While it may appear that way, there's actually some stuff going on behind the scenes that makes it look like one.  Understanding what is happening is critical to using WITH (NOLOCK) correctly.

When you use the NOLOCK query hint you are telling the storage engine that you want to access the data whether or not it is locked by another process.  This is why it can make the query appear to run faster: you are no longer waiting for other processes to complete their writes, you are simply reading whatever is in the buffer pool or on disk at the time you get to it.  This leads to a problem called dirty reads, meaning that you may not get the same values if you were to run the query again.  This isn't necessarily a bad thing, just something to be aware of.  Use of the NOLOCK hint may be just fine in your application, or it may be incredibly bad.

When other users are making changes (insert, updates and deletes all do this) to the data in the tables within the database, they take locks on the pages which they are modifying.  These locks tell the SQL Server that no one else can use these pages until the change has been completed.  Once these locks have been released your select statement can continue until it either finishes or comes to another page which is locked.

When using the NOLOCK hint you ignore the locks and read whatever data is available at the time.  This can cause problems when joining against foreign keys, as you can end up with mismatched data being returned if the foreign key value is updated mid-query (depending on the execution plan that your SELECT statement uses).
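For reference, the hint goes on each table in the query; a minimal sketch with hypothetical table and column names. Reading this way is equivalent to running the statement at the READ UNCOMMITTED isolation level:

```sql
-- Dirty read: rows are returned even if another session holds
-- exclusive locks on the pages, so in-flight (uncommitted) changes
-- may be seen, and the two tables may not agree with each other.
SELECT o.OrderID, c.CustomerName          -- hypothetical schema
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.Customers AS c WITH (NOLOCK)
    ON c.CustomerID = o.CustomerID;
```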

Many people believe that one reason NOLOCK works is because the execution plan changes when you use the hint.  This isn't true.  The NOLOCK hint isn't used by the query optimizer but by the data access engine; it doesn't change the way the data is found, it simply changes the way locks are honored as the data is read from the buffer cache and/or the disk.

Hopefully this will give a little insight into what is happening when you use the “SQL Turbo button”.

Denny


July 1, 2011  2:00 PM

SQL Server Consolidation at #SQLExcursions

Denny Cherry

The fifth and final session that I'll be presenting at SQL Excursions is on SQL Server consolidation.  There are lots of ways to consolidate SQL Server databases, and sometimes the wrong method of consolidation is selected.  Knowing all of the options when working on a SQL Server consolidation project is key to its success.  In this session we'll look at how to evaluate which consolidation option is best for different kinds of databases so that you can maximize your SQL Server infrastructure and minimize consolidation costs without impacting post-consolidation performance.

I hope that the sessions that Tom and I are presenting at SQL Excursions sound interesting and would help you and your employer succeed in your future projects.  Personally I feel that the costs to go to an event like this are minimal compared to the massive savings that a company could realize over time from the information which their DBA would be able to bring back to the office.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny


June 30, 2011  6:13 PM

Take a virtual motorcycle ride around Ketchikan, AK

Denny Cherry

A few weeks ago I went on SQL Cruise with Kris (my wife) and a bunch of other SQL geeks.  When the ship was stopped in Ketchikan, Alaska, while a bunch of people were going in the water (crazy, right?), I hopped on the back of a Harley and went riding through town as part of a motorcycle tour.  Well, Kris was smart enough to purchase a little high-def video camera that I was able to strap to the front forks of the motorcycle so I could record the ride, and here it is.  It's hosted on Facebook, as they allow for longer videos than YouTube, but it's in three parts because they only allow 20-minute-long videos.

All told the tour was about 3.5 hours long (the SD card ran out of space a little before the end), but after cutting out where we stopped to look at stuff, take pictures, etc., there is about 48 minutes of actual video of me riding around the city.  (The camera tipped down for about 10 minutes as well, so I cut that out too, as it wasn't much fun to look at the front fender.)  There are about 38 miles of paved road in the city, and we rode for about 45 miles, so we covered most of the town.

[kml_flashembed movie="http://www.facebook.com/v/10150290696208638" width="400" height="300" wmode="transparent" /]

[kml_flashembed movie="http://www.facebook.com/v/10150290738158638" width="400" height="300" wmode="transparent" /]

[kml_flashembed movie="http://www.facebook.com/v/10150290752693638" width="400" height="300" wmode="transparent" /]

Denny


June 30, 2011  2:00 PM

SQL Excursions and Index Internals #sqlexcursions

Denny Cherry

The fourth session that I’ll be giving at SQL Excursions will be on index internals.

In this session we’ll dig into the internal structures of indexes.  We will explore the differences between clustered and non-clustered indexes, what’s laid out within each page of the indexes and how the SQL Server uses the data within the indexes to find rows quickly.

Knowing how indexes actually work is key to understanding why they improve query performance.  Knowing that indexes help performance is good (and hopefully you know this), but knowing how an index actually works is even more important, so that you can build proper indexes and avoid creating extra indexes that aren't needed; having too many indexes will actually slow SQL Server down, not speed it up.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny


June 29, 2011  2:00 PM

High Availability and SQL Excursions

Denny Cherry

The third SQL Excursions session I'd like to talk about is on high availability.  There are lots of high availability options for SQL Server, especially once SQL Server "Denali" is counted among them.  In this session we'll look at all the available HA options so that you understand them and know when each should be used.  Not all high availability options are created equal, and using the wrong one can be just as bad as, if not worse than, not having one at all.  When you don't have a high availability option you know it; when you have the wrong one you have a very false sense of security.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny


June 28, 2011  2:00 PM

Let’s learn about Virtualization at SQL Excursions

Denny Cherry

The next session from SQL Excursions that I want to talk about is my session on virtualization.  During this session we will be talking about some best practices for running SQL Server in a virtual machine so that you can get the best performance possible out of your virtual SQL Servers.  We will also look at when it is best not to virtualize your SQL Server.  With VMworld happening just a few weeks before, there should be some great new VMware (and probably Hyper-V as well) announcements just before SQL Excursions, so you can be sure that the information we'll be presenting is very up to date, and we'll probably be talking about some futures for one or both platforms as well.

Being able to get SQL Server up and running within a virtual machine is key in today’s IT world.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny


June 27, 2011  7:39 PM

SQL Excursions Sessions

Denny Cherry

As SQL Excursions is getting closer, I wanted to talk a little about the sessions that I'll be presenting at the event, which I'll be doing over the course of this week.  The sessions that I'll be presenting are:

  • Storage
  • Virtualization
  • High Availability
  • Indexing Internals
  • Server Consolidation

All told I'll be presenting for about 8 hours over the two days, with Tom LaRock (blog | @sqlrockstar) presenting the other 8 hours.  Because this isn't a traditional conference, not all the sessions will be locked in at an hour long.  Some may be longer, some shorter, depending on how the day moves along.  The goal with these sessions is to get your questions answered, not to get the session presented in the time allotted.

The first session on the list above is Storage, so I’ll start with that session for today’s blog post.  During this session on storage I’ll be talking about some of the more advanced storage options which are available for storage array tuning as well as data replication and how these can be used to improve your storage performance not just for your SQL Servers, but overall for your entire environment as well as to reduce your backup windows.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny

