SQL Server with Mr. Denny

January 24, 2008  8:00 AM

Best way to get assistance on the message boards.

Denny Cherry Denny Cherry Profile: Denny Cherry

There are literally thousands of different message boards on the Internet where you can find technical information from fellow IT professionals.  When asking questions on these boards there are some basic guidelines which most people like to have followed.  They are sometimes informal, but they are fairly common across most of the boards.

  1. Don’t use topics like “Help!!!” or “It doesn’t work”.  Using a descriptive topic will make more people want to read your thread and help you out.
  2. Be polite when talking to the people who are trying to help you.  They aren’t getting paid to do this, so be nice to them.  It’s always nice to get a thank you from the poster.
  3. Check your spelling and grammar.  It doesn’t have to be perfect, but many people prefer proper spelling and grammar. (Mine isn’t the best, but I do try.)  If English isn’t your first language and people are giving you a hard time about your spelling or grammar, say so; many people will be a little more forgiving once they know that you are not a native English speaker.
  4. If the message board is in English and you are posting an error message which is in another language, please translate it to English.  More people will be able to read it that way.  You may not get the translation perfect, but it’s usually close enough to get the point across.
  5. Don’t post just the error number.  There are thousands of error messages and we need the entire error message.
  6. Don’t just say that it’s broken.  Be specific about what isn’t working.
  7. Let the readers know what you have done to try to fix the problem already.  That will save you time, as the people trying to help won’t have to ask you to try things twice.
  8. If you find the solution yourself, post it to the thread.  This will help the next person who has the same problem.  These forums are all about helping others, and this is a great way to give back to the community.
  9. Please remember that the people responding aren’t computers.  It may take time for someone to read your thread who knows the answer.


January 21, 2008  8:00 AM

Virtualizing SQL Server? When to and when not to.

Denny Cherry Denny Cherry Profile: Denny Cherry

Virtualizing servers is huge right now.  And thanks to VMware it is very easy to build virtual machines.  In most of IT, virtualizing servers is a good thing.  In the database world, that isn’t always the case.  As we know, SQL Server uses a lot of memory and CPU resources.  If you try to virtualize a high-load SQL Server you could end up with all the resources of the host being consumed by that single virtual machine.

Now don’t get me wrong.  If you have a small, low-load server with SQL Server on it, such as a Blackberry Enterprise Server, that may be an excellent server to virtualize.  Your production CRM database probably wouldn’t be a good server to virtualize, as it probably needs more CPU and memory resources than your virtual host would be able to afford to give it.  Now that said, if your VM servers are huge machines with hundreds of gigs of RAM and lots of CPUs (such as the new HP DL 580 and 585 models) and rack space is tight, then virtualizing SQL Servers might be a good plan, provided you can deal with any disk I/O issues.

In addition to the CPU and memory concerns there are the disk performance issues to look into.  Typically when setting up a virtual machine, the disk drives are also virtualized.  This means that to the virtual server host, the virtual machine’s disks are simply files stored on the file system.  This probably isn’t the best way to store SQL Server databases (or any system which uses the disk subsystem a lot).  If you can set up your disks to be mapped directly to physical SCSI arrays you can remove this potential problem.  Check your VM solution to see if this is supported (VMware does; I don’t know about Microsoft Virtual Server).  If you can provide information about this, feel free to post as such in the comments.


January 17, 2008  12:00 PM

MS KB Article with all the hot fixes listed

Denny Cherry Denny Cherry Profile: Denny Cherry

I found this great little MS KB article today.  It’s got all the SQL Server 2005 post-SP2 hot fixes, including bug numbers and descriptions of each bug.  The MS KB article ID is 937137.


January 14, 2008  8:00 AM

Log Shipping without SQL Server Enterprise Edition

Denny Cherry Denny Cherry Profile: Denny Cherry

Microsoft’s log shipping is pretty good, but it requires that you have SQL Server Enterprise Edition on both machines.  This makes the solution fairly expensive.  Because of this I’ve written a replacement which can be used on any edition of SQL Server, including SQL Server Express Edition (as long as you use a different scheduler).

I recommend keeping the drive letters the same on the two machines, but this isn’t required.

First, back up the database to be log shipped and restore it to the backup server using the WITH STANDBY option.  You are now prepped to start shipping the transaction log.
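The prep step looks something along these lines; the database name, share, and standby file path here just mirror the examples later in this post, so substitute your own.

-- On the primary server.
BACKUP DATABASE Northwind TO DISK='\\backupsql\BackupFolder\northwind.bak'

-- On the backup server.  The standby file holds the uncommitted
-- transactions so the database stays readable between restores.
RESTORE DATABASE Northwind FROM DISK='d:\RestoreFolder\northwind.bak'
    WITH STANDBY = 'F:\MSSQL\Backup\RMDBArchive.sby'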

Set up a job on the primary server which backs up the log every few minutes to a network share on the backup server.  I use code along these lines.

backup log Northwind to disk='\\backupsql\BackupFolder\northwind.log' with NOINIT, NOSKIP, NOFORMAT

Add a second step to the job which uses osql to start a job on the backup server.  Something like this.  (The on failure action should be to Quit with Success for this step.)

osql -S BackupSQL -E -Q "msdb.dbo.sp_start_job 'Restore Northwind Log'"

Set up a restore job on the backup server (making sure that the name matches the job name in the second step of the backup job).  The restore job will have four steps in it.

Step 1 (T/SQL):

/*This first part of the code ensures that no one is using the database that we are about to restore.  If we don't do this then the restore will fail.*/
declare @spid as varchar(10)
declare @CMD as varchar(1000)
declare cur CURSOR for select spid from master.dbo.sysprocesses where dbid =
        (select dbid from master.dbo.sysdatabases where name = 'Northwind')
open cur
fetch next from cur into @spid
while @@FETCH_STATUS = 0
begin
    set @CMD = 'kill ' + @spid
    exec (@CMD)
    fetch next from cur into @spid
end
close cur
deallocate cur

Step 2 (OS Command):

del d:\RestoreFolder\Northwind.2.log
REM /*This removed the last file we processed.*/

Step 3 (OS Command):

move d:\RestoreFolder\Northwind.log d:\RestoreFolder\Northwind.2.log
REM /*This moves the current file into place for processing.*/

Step 4 (T/SQL):

declare @i int
declare @j int
set @j = 1
set @i = 0
restore headeronly from disk='d:\RestoreFolder\Northwind.2.log' /*This tells us how many transaction log backups there are in the file that we need to restore.*/
set @i = @@ROWCOUNT
while @i+1 > (@j)  /*This loop runs us through the file restoring the logs in order.*/
begin
    restore log Northwind from disk='d:\RestoreFolder\Northwind.2.log'
        WITH FILE = @j,
            STANDBY = 'F:\MSSQL\Backup\RMDBArchive.sby' /*This keeps the database in standby mode ready for more restores.*/
    set @j = @j + 1
end

If your folder paths aren’t the same between the two servers you’ll need to add the MOVE parameter to the restore commands.
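If you do need MOVE, it goes on the initial database restore.  The logical file names below are examples only; get the real ones from RESTORE FILELISTONLY against your backup.

RESTORE DATABASE Northwind FROM DISK='d:\RestoreFolder\northwind.bak'
    WITH STANDBY = 'F:\MSSQL\Backup\RMDBArchive.sby',
        MOVE 'Northwind' TO 'e:\MSSQL\Data\northwind.mdf',
        MOVE 'Northwind_log' TO 'e:\MSSQL\Data\northwind_log.ldf'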

If you do begin using this, please post a note in the comments if you can.  I like to know who is using this log shipping code.

January 12, 2008  1:26 AM

SQL Server 2008 – Resource Governor Questions

Denny Cherry Denny Cherry Profile: Denny Cherry

The engineers from Microsoft PSS have posted some excellent information on their blog about SQL Server 2008’s Resource Governor.

It’s some pretty hardcore information, but when it comes to SQL 2008 it’s going to be useful information to have.


 Update 2008.01.14: Now with correct URL.  Sorry about that.

January 11, 2008  6:40 PM

New Article: Determining SQL Server database storage requirements

Denny Cherry Denny Cherry Profile: Denny Cherry

I’ve published a new tip over on SearchSQLServer.com entitled “Determining SQL Server database storage requirements”.


January 10, 2008  8:00 AM

SQL CLR: The What, When, Why and How.

Denny Cherry Denny Cherry Profile: Denny Cherry

There are two camps when it comes to SQL CLR.  The DBA camp, which says don’t use it, it’ll kill your SQL Server and the Developer camp which says that it will save you loads of time and that you should use it for everything.  These two different camps also represent the different messages which Microsoft is giving out as well.  I highly recommend that DBAs sit in on a Microsoft developer track session and that developers sit in on a DBA track session some time.  You’ll see a wide difference in the message about CLR in the sessions.

SQL CLR, while powerful, cannot save the world as we know it from T/SQL.  SQL CLR has its uses, and it has times when it is the wrong option.  It most definitely cannot do everything.  First of all, you’ll notice that I call it SQL CLR, not just CLR.  This is because only a subset of the .NET CLR is supported by SQL Server.  This subset is called the SQL CLR.  The official list of supported libraries can be found here.  If it isn’t on that list, it’s not officially supported by Microsoft.  What do I mean by supported?  Well, in basic terms, if you are using a .NET CLR library which isn’t on that list and you start having performance issues, Microsoft can require that you remove the CLR code from the SQL Server before troubleshooting the issue.  The official support policy from Microsoft can be found in MSKB article 922672.

Now with all that said what exactly can SQL CLR functions and procedures be used for?

SQL CLR code is extremely powerful, and when it comes to some kinds of work it can be much faster than T/SQL code.  Some examples are advanced math, string manipulation, string searching, pattern matching, etc.  Places where SQL CLR will be slower than native T/SQL are general data manipulation and data searching.  When writing SQL CLR functions and procedures, do remember that if the .NET code has to go back to the database for any reason, that data access will be slower than doing the data access natively within T/SQL.  If nothing else you have to account for the additional time to connect to the SQL Server, check credentials and move into the correct database, all of which takes time.  When updating records from .NET code you have to process the records one at a time.  While .NET is great at processing records row by row, SQL isn’t.  SQL will be very slow when it comes to actually processing the updates, as it is optimized for record set processing, not row-by-row processing.
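As a quick illustration of the set versus row-by-row point (the table and column names here are made up for the example), compare one set-based statement against what per-record processing from .NET amounts to:

-- Set based: one statement, one round trip, one optimized plan.
UPDATE Orders SET Status = 'Shipped' WHERE ShipDate <= GETDATE()

-- Row by row: what updating records one at a time from .NET turns into.
-- One round trip and one execution per row:
-- UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 1
-- UPDATE Orders SET Status = 'Shipped' WHERE OrderId = 2
-- ...and so on for every row.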

Now that I know what I can do with it, when should I?

When deciding to write a .NET SQL CLR function or procedure you first need to decide: is the SQL Server the right place to do this?  Often the front end, or the middle tier (if you have an N-tier application), may be a better place to do the work.  For example, formatting phone numbers via a .NET function should be done on the front end, not in the database.  Encrypting a credit card number should be done in the middleware or the SQL Server (if you have a middleware layer, do it there); you probably don’t want to install the encryption method and certificate that you use to encrypt data on the end user’s computer.  The general rule that I like to live by is to put as much .NET code outside of the SQL Server as possible, either in the middleware or in the front end.  Now, there are also times when putting the code within the SQL Server is the right choice.  Say that you are doing a data warehouse load and you need to parse the text of an NVARCHAR(MAX) field looking for specific key words or phrases, and if those key words or phrases exist you need to add rows somewhere else.  T/SQL may not be the best option for this.  A full text search will be inefficient, as you will have to search the entire table every time.  A regular T/SQL LIKE will be very slow, as indexes can’t be used and searching through large pieces of text takes SQL a lot of time.  But a SQL CLR function can do this with very little CPU time using what are called regular expressions.  Regular expressions are a basic .NET feature which allows you to quickly and easily search a block of text.  (Since I’m not a .NET programmer I’m not going to dive into using regular expressions, however here is an MSDN article on the topic.)
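Once a CLR function like that is deployed, calling it from T/SQL looks like any other function call.  Note that dbo.RegexMatch and the Documents table below are hypothetical names for the sake of the example, not something that ships with SQL Server.

-- Assumes a CLR scalar function dbo.RegexMatch(@Input, @Pattern) has been
-- deployed which returns 1 when the pattern is found in the text.
SELECT DocumentId
FROM dbo.Documents
WHERE dbo.RegexMatch(DocumentText, N'key word|key phrase') = 1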

Other excellent uses of SQL CLR would include procedures or functions which involve advanced mathematics functions which SQL Server does not include.  These could easily be put into a SQL CLR procedure or function and executed within the .NET environment, with the result then being used within the SQL Server.  While normally I would recommend putting this in the client tier or middle tier, if the function or procedure is needed for in-row processing of a query then the SQL Server may be the right place for the CLR code.

While the SQL CLR isn’t the end-all solution that some people were looking for, and want it so desperately to be, when used correctly it can be an extremely powerful tool.  But it must be used carefully, as when used incorrectly it can hamper the performance of your SQL Server.


January 7, 2008  8:00 AM

XML in the database. What’s the big deal any way?

Denny Cherry Denny Cherry Profile: Denny Cherry

So, we’ve had the ability to use XML within the SQL Server databases since SQL 2000 came out.  What’s the big deal you ask?  How can I use it to make my life easier you also ask?

When I first started looking into this whole XML thing I was thinking the same thing.  How is this blob of data going to make my life easier?  Well, when used correctly it most certainly can.  In SQL Server (up through SQL 2005 at least) we don’t have a way to pass more than one record into a stored procedure.  If you have an order entry system and the customer wants to order 3 items, you have to run three commands on the SQL Server, doing three inserts one at a time.  If you instead passed those three items from the UI to the database as an XML document, you could then process all three items in a single command, thereby improving performance.

declare @XML XML
declare @hDoc int
SET @XML = '<ROOT><Customer ID="12">
<Order ID="7498">
<Item ID="415" Qty="1" />
<Item ID="87547" Qty="2" />
<Item ID="4125" Qty="1" />
</Order></Customer></ROOT>'
exec sp_xml_preparedocument @hDoc OUTPUT, @XML
SELECT CustomerId, OrderId, ItemId, Qty
FROM OPENXML (@hDoc, '//Customer/Order/Item')
WITH (CustomerId INT '../../@ID',
OrderId INT '../@ID',
ItemId INT '@ID',
Qty INT '@Qty')
exec sp_xml_removedocument @hDoc

As you can see by the output, we have a table which we can process in bulk.  Instead of running a sample piece of code, if we were to build this into a stored procedure we could do something like this.

Here is our LineItems table.

CREATE TABLE LineItems
(OrderId INT,
ItemId INT,
Qty INT,
ItemPrice NUMERIC(6,2))
GO

And here is our procedure which adds the XML data to the LineItems table.  In a production system business logic would need to be added to ensure stock on hand, and to return estimated ship dates to the client.

CREATE PROCEDURE usp_AddLineItems
@Items XML
AS
declare @hDoc int
exec sp_xml_preparedocument @hDoc OUTPUT, @Items

INSERT INTO LineItems (OrderId, ItemId, Qty, ItemPrice)
SELECT OrderId, ItemId, Qty, ItemPrice
FROM OPENXML (@hDoc, '//Customer/Order/Item')
WITH (CustomerId INT '../../@ID',
OrderId INT '../@ID',
ItemId INT '@ID',
Qty INT '@Qty',
ItemPrice NUMERIC(6,2) '@ItemPrice')
exec sp_xml_removedocument @hDoc
GO

Since we don’t want to create a web-based order system for this little demo, we can just run the procedure here and verify the output.

declare @XML XML
SET @XML = '<ROOT><Customer ID="12">
<Order ID="7498">
<Item ID="415" Qty="1" ItemPrice="12.95"/>
<Item ID="87547" Qty="2" ItemPrice="16.25"/>
<Item ID="4125" Qty="1" ItemPrice="8.25"/>
</Order></Customer></ROOT>'
EXEC usp_AddLineItems @XML
GO
SELECT OrderId, ItemId, Qty, ItemPrice
FROM LineItems
GO

As we can see this is much more efficient than having to run three insert statements, as those would require at least three writes to the disk, plus however many reads are needed to do the business logic. By using the XML we can do it all with a single write to the disk, and a single set of reads for the business logic. In this example we have cut our disk writes from three to one. If we were processing 10 line items we would have reduced our disk activity by 90%. And since disk performance is usually the bottleneck of any database, reducing the disk I/O any way we can is a good thing.


January 3, 2008  8:00 AM

Using a foreach loop to process an XML document.

Denny Cherry Denny Cherry Profile: Denny Cherry

Recently I was working on a project where I needed to use a foreach loop within an SSIS project, but couldn’t for the life of me get it to properly process the XML document which I was giving it.  Well, with some major work and digging I was able to get it working correctly, but it took me forever to get all the little settings correct, so I figured that I’d throw the info up here for anyone else who is looking for it.

Some background on the process I’m working on.  Basically I’ve got a table with categorized data in it.  I need to export all the data from the table into one file per category (don’t ask, I didn’t design it, I’ve just got to automate it; and it actually makes sense in the grand scheme of things).  Well, I figured that the easiest way to do this was to use a foreach loop and give it an XML document with the list of categories to process.  (This was better than looping through and getting the next value from the database.)

 So needless to say, I get started on my little process.

The query which I’m using within an Execute SQL Task is below.  The Execute SQL Task puts the XML data into an SSIS variable called v_CategoryList with a data type of string.

SELECT WebsiteBlockCategoryId as id
FROM dbo.WebsiteBlockCategory cat with (nolock)
where WebsiteBlockCategoryId <> 0
FOR XML RAW ('cat')

The XML document looks like this.  (SSIS seems to be wrapping it within <ROOT></ROOT> tags for me, which is why I’m not doing it myself.)  My XML document is actually much longer than this, but you get the idea.  It’s a very basic XML document.

 <cat id="18" />
 <cat id="19" />
 <cat id="20" />
 <cat id="21" />
 <cat id="22" />
 <cat id="23" />
 <cat id="24" />
 <cat id="25" />
 <cat id="26" />
 <cat id="27" />
 <cat id="28" />

As you can see on the screenshot of the foreach loop properties, I’ve set the source to be my variable and the EnumerationType to Element Collection.  Since I know where the data is within the XML document I use DirectInput for both the Outer and Inner XPath strings.  For the Outer XPath string I’m using “//cat”.  Because I’m not putting in the ROOT level name, it doesn’t matter what gets put in there as long as there is a parent level.  For the Inner XPath string I’ve got the element name with an @ sign in front of it: “@id”.

Collection Screen

Over on the Variables page of the UI I’ve got my v_CategoryId variable mapped to Index 0 of my document.  If you need to output more than one element from your XML document, set your Inner XPath to “*”.  This “should” allow you to bring back all the elements and refer to them by index number starting with 0.  I haven’t actually tried this, as I’ve always only needed a single element, hence the “should”.

Variable Mappings Screen

And to think, that little thing took me a couple of days to get working right.  I can only imagine trying to do this in SQL 2000 and DTS.


December 31, 2007  8:00 AM

Have a Happy New Year

Denny Cherry Denny Cherry Profile: Denny Cherry

This will be the last blog entry for the year (this being New Year’s Eve and all).

I hope that everyone has a happy new year and that no one needs to have to work tonight.

Don’t forget to spend tonight with family and friends, not at your computer working.

