I’ve been asked to come back to the San Diego SQL Server Users Group on September 18, 2008. This time around I’ll be speaking about Federated Databases, and various techniques you can use to federate your systems.
I’m still finishing up the slide deck and demos. I’ll try to get them posted in advance; if I can’t, I’ll post them shortly after.
A new tip of mine has just been published on SearchSQLServer.com. This tip, “Get SQL Server log shipping functionality without Enterprise Edition”, is all about writing your own Log Shipping code instead of using Microsoft’s, allowing you to use Log Shipping on editions of SQL Server other than Enterprise Edition.
The error reporting in SQL Server Replication isn’t all that great; that’s a well-known issue. What I don’t think a whole lot of people do know is that there is a way to get a lot more information from replication about what’s going on, and what’s going wrong.
This is most easily done by running the replication job manually from within a command window on the distributor. This allows you to add switches or change values as needed and easily see the output, or redirect the output to a file for analysis, or for sending to Microsoft, your consultant, etc.
Replication is run by SQL Agent jobs, with job steps of some funky types. These step types simply mask what is happening in the background: SQL Server is shelling out and running a command-line app, passing it all the switches exactly as they appear within the job step.
All the command line apps which replication uses are in the “C:\Program Files\Microsoft SQL Server\90\COM” folder by default (for SQL 2000 replace the 90 with 80, for SQL 2008 replace the 90 with 100). In that folder you will find a few apps which are of interest. When you run the snapshot job snapshot.exe is called. When you run a distribution job DISTRIB.exe is called (I’ve got no idea why it’s uppercase). When the log reader is running logread.exe is run. When you are running merge replication replmerg.exe is run.
All of these can be run manually from a command prompt. For starters, open up the SQL Agent job and edit step two, the one which actually does the work. Copy all the text from the command box and paste it after the filename in the command prompt window, then press Enter. You will need to stop the SQL Agent job before you can actually run the command from the command prompt, as replication is designed so that only one copy of each command can run at a time.
Now the whole point of this was to get more log info because the replication is failing. This is done by adding the “-OutputVerboseLevel” switch to the command. This switch has between 3 and 5 logging levels depending on which command you are running: 0 (zero) is basically no logging, and as the number goes up more data is shown. distrib.exe, replmerg.exe and snapshot.exe take 0-2, while logread.exe takes 0-4.
You should only do this when replication is failing and you can’t figure out why, and all SQL is telling you is some cryptic error message.
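Putting the pieces together, a distribution agent run from the command prompt might look like the following. All of the server, database, and publication names below are placeholders — copy the real switch values straight out of your own job step — and the -Output file path is just an example:

```
cd "C:\Program Files\Microsoft SQL Server\90\COM"
DISTRIB.exe -Publisher YourPublisher -PublisherDB YourPubDB -Publication YourPublication -Subscriber YourSubscriber -SubscriberDB YourSubDB -Distributor YourDistributor -OutputVerboseLevel 2 -Output C:\Temp\distrib_verbose.log
```

Remember that -OutputVerboseLevel 2 is the maximum for distrib.exe; with logread.exe you can go all the way up to 4.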
Hopefully you’ll find this information useful.
The webcast which I did for Quest Software called Don’t Consolidate Yourself Into Performance Problems is now available for viewing after the fact.
So you’ve been tasked with setting up a quick and dirty reporting server. The goal is to restore the log files from the production server to the reporting server nightly.
The backups are simple: use a SQL maintenance plan to back up the logs, and then copy them to the remote machine. But how do you restore the logs to the reporting server nightly?
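Just so the whole picture is here, the backup half can also be scripted by hand instead of using a maintenance plan. This is only a sketch — the database name, folder, and file-naming scheme are all placeholders for whatever your environment uses:

```sql
/* Back up the log with a date-stamped file name (names and paths are examples) */
DECLARE @FileName nvarchar(4000)
SET @FileName = 'D:\LogBackups\YourDatabaseNameHere_'
    + CONVERT(nvarchar(8), GETDATE(), 112) + '.trn'
BACKUP LOG YourDatabaseNameHere TO DISK = @FileName
/* Then copy the file to the reporting server, for example with a robocopy job step */
```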
Well, I’ve got a two-step SQL Agent job which should help you out.
Step 1 kills all current sessions in the database, and step 2 does the actual restores.
The code for step 1 is:
declare @spid varchar(20)
declare cur CURSOR FOR
select spid from sys.sysprocesses
where dbid = db_id('Your Database Name Here') /*<---Put your database name here*/
and spid > 50
open cur
fetch next from cur into @spid
while @@FETCH_STATUS = 0
begin
exec ('kill ' + @spid)
fetch next from cur into @spid
end
close cur
deallocate cur
The code for step 2 is:
create table #Files
(FileName nvarchar(255), Depth int, IsFile bit)
insert into #Files
exec xp_dirtree 'D:\Path\To\Your\Log\Backups\', 1, 1
delete from #Files
where IsFile = 0
declare @FileName nvarchar(4000)
declare cur CURSOR FOR SELECT FileName from #Files order by FileName
open cur
fetch next from cur into @FileName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @FileName = 'D:\Path\To\Your\Log\Backups\' + @FileName
RESTORE LOG YourDatabaseNameHere FROM DISK=@FileName WITH STANDBY='D:\Path\To\Your\StandBy\File.standby'
IF @@ERROR = 0
exec xp_delete_file @FileName
fetch next from cur into @FileName
END
close cur
deallocate cur
drop table #Files
I hope this makes your process easier. Note that this code only works on SQL Server 2005 and up, as the system stored procedures which I use were not included until SQL Server 2005.
The Code Camp is the weekend before PDC, and right near where PDC will be held. So if you are coming to PDC this year, come a couple of days early and check out what is hoped to be the largest SoCal Code Camp ever.
I’ve got five sessions scheduled, all of which are tagged with “SQL Server”.
- Back To Basics: Getting Back To The Basics of SQL Server (Part 1)
- Back To Basics: Getting Back To The Basics of SQL Server (Part 2)
- Scaling that database bigger than ever (Part 1)
- Scaling that database bigger than ever (Part 2)
- Storage For the DBA
If you are planning on attending the Code Camp, be sure to mark the sessions you want to attend with the interest check box. That way the Code Camp staff knows how big of a room each session will need.
If you are interested in speaking at the Code Camp, feel free to enter more sessions. The more sessions we have, the more popular the event will be.
As the date gets closer additional sessions will be entered onto the site. Be sure to check back often.
I’ll do my best to get the slide decks and sample code posted before the Code Camp starts. Odds are I’ll get them posted on Saturday morning as I did last time. If you missed any of my sessions from prior Code Camps, let me know in the comments and I’ll see what I can do about adding those sessions to the new schedule. See you at the Code Camp.
Last night was my talk with the NJ SQL User Group. I would just like to thank them for the invite; I had a great time coming out and speaking with them. It was a very interactive night, with a lot of questions. Quest Software was nice enough to not just send me out there; one of the local sales reps also came, with a massive amount of giveaways for the members. He brought T-shirts, demo disks, and the new Quest tote bags, which were a huge hit.
The presentation went great, and there were a ton of questions.
I’ve published another tip on SearchSQLServer.com called “Tuning SQL Server performance via disk arrays and disk partitioning”. This is sort of part one of a two-part series of tips on tuning the database server at the hardware level. In this tip I’m focusing on getting the disks set up just right. I also show how to use diskpart.exe to see if the disks are correctly aligned.
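As a quick preview of the diskpart technique, checking a partition’s starting offset looks something like this from a command prompt (disk 1 here is just an example — select whichever disk holds your database files):

```
diskpart
DISKPART> select disk 1
DISKPART> list partition
```

The Offset column in the output is what you’re after: on older versions of Windows the default was 31.5 KB (63 sectors), which leaves the partition misaligned against the array’s stripes; an offset that’s an even multiple of 64 KB is what you want to see.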
A while back I was flipping through the Microsoft newsgroups and found a link to an excellent story that networkworld.com put together on how Microsoft is protecting itself from brain-dump sites and exam cheaters, and how they are protecting the investment that we all have made in getting Microsoft certifications.
After months of waiting, the time is finally here. I’ll be speaking at the New Jersey SQL Server Users Group next week.
The topic of the talk will be SQL Server Service Broker in the Real World. You can download the slide deck and sample code which I’ll be using at the presentation.
You can read up more about the New Jersey SQL Server Users Group at their homepage.
See you there.