As of Monday it’s official: I’m now one of the two moderators over at ServerFault.com. From what I understand this means I basically go through the list of posts that people have flagged as not being relevant to the site and remove them, as well as making sure that the user base is following the rules.
I think that it’s a great site, and I’m thrilled that Jeff trusts me enough to put me into this position on the site.
The site is still in public beta. If you’d like to join the beta you’ll need to use the password “alt.sysadmin.recovery” to get into the site.
Today was day 2 of EMC World 2009. There were some great sessions today. I’m focused on two tracks this year, VMware and the CLARiiON product, as we have just deployed both of these in our data center migration project.
Thanks to my wife Kris for reminding me that the SD card in my camera will also fit in my BlackBerry, I’ve gotten the photos uploaded to Flickr.
This morning is Day 1 of EMC World, so it’s a perfect time to review yesterday.
Day 0 is all about getting to the show, and getting checked in. And of course the party.
The food was pretty good, as was the beer. The band sounded ok, but the sound guy wasn’t all that great.
I’ll be posting photos probably when I get home, since I’m a dork and I forgot the cable for my camera.
Well, tomorrow begins my annual trek to EMC World. This year I’m headed back to Orlando, as EMC World is being held at the Orange County Convention Center. As I’ve done the last couple of years, I’ll post as often as I can during the conference, both here on my blog and on Twitter.
This year’s EMC World event should be a blast and very educational. They’ve got tons of sessions on VMware, and one that I’m really looking forward to on setting up Exchange under VMware using a CLARiiON for the storage. This is something that I was hoping to get done before EMC World, but when I saw that session on the schedule I decided to hold off on our Exchange migration until afterwards so that I could get some additional best practices first.
I’m also looking forward to the sessions that I’ve found about SQL Server on the CLARiiON. I haven’t found all that many of these, something that I’ll be sure to mention in my eval this year, as I would assume that the bulk of data stored on SANs is database data, and contrary to popular belief database servers are not file servers and should not be treated as such.
If you’ll be at EMC World, come on over and say hi. I’ll be on Twitter, so shoot me a message or a DM, or find me in the Web 2 lounge, the EMC returning attendees lounge, or somewhere in the exhibit hall.
Don’t forget to check back here for photos of the event. I can’t upload in real time as my phone doesn’t have a camera, so I have to wait until I get back to the hotel or the convention center to upload them.
Something that I think Microsoft should include with the SQL Service Broker is an adapter so that MSMQ messages (and messages from other messaging systems as well) will flow automatically into the SQL Service Broker. Since Microsoft hasn’t gotten around to writing one, I’m going to start.
It shouldn’t be all that hard. Set up a Windows service which reads from a predefined MSMQ queue, takes each message, and sends it to a SQL Service Broker queue.
Then set up a Windows application that allows you to set up the config file with the source you want to read from and the SQL Service Broker objects you want to send to.
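On the SQL Server side, the Service Broker plumbing the relay would talk to might look something like this rough sketch (every object name here is hypothetical; in the finished tool they’d come from the config file):

```sql
/* One-time setup: hypothetical Service Broker objects for the relayed messages */
CREATE MESSAGE TYPE MSMQRelayMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT MSMQRelayContract (MSMQRelayMessage SENT BY INITIATOR);
CREATE QUEUE MSMQRelayQueue;
CREATE SERVICE MSMQRelayService ON QUEUE MSMQRelayQueue (MSMQRelayContract);
GO

/* Roughly what the Windows service would run for each message
   it pulls off the MSMQ (the message body replaces @msg) */
DECLARE @handle UNIQUEIDENTIFIER;
DECLARE @msg XML;
SET @msg = N'<message>body read from the MSMQ goes here</message>';

BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE MSMQRelayService
    TO SERVICE 'MSMQRelayService'
    ON CONTRACT MSMQRelayContract
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @handle
    MESSAGE TYPE MSMQRelayMessage (@msg);
```

In practice the service would probably keep one conversation open and reuse it rather than begin a new dialog per message, but that’s a tuning decision for later.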
Since I have little to no experience reading from other queues, I’m putting a feeler out there for some assistance on this project. Since I don’t know C#, the project will be written in VB.NET using Visual Studio 2008 on the .NET Framework v3.5.
I’ll be starting with MSMQ, then adding other queuing systems as needed.
I’ve set up a project site on CodePlex. There’s not much up there at the moment, just a basic framework of the project. (Yes, I know I now have two unfinished projects running, but this one will hopefully have others working on it as well.)
The official answer is to delete the subscriber and recreate it, pushing a new snapshot to the subscriber.
The much quicker and easier method is as follows.
1. Stop the distribution agent on the machine that it’s currently running on.
2. Disable the SQL Agent job that runs the distribution agent.
3. Script out the SQL Agent job from the old server and create it on the new server.
4. Enable the job on the new server.
Done. You have just changed replication from being a push to a pull (or from being a pull to a push).
If you wanted to you could even setup your distribution agent on a third computer, but it is easier to keep track of everything if it’s running on the distributor or the subscriber.
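Steps 1, 2, and 4 can be done from T-SQL rather than clicking through SQL Server Agent (the job name below is made up for illustration; use whatever your distribution agent job is actually called):

```sql
/* On the old server: stop the distribution agent, then disable its job */
EXEC msdb.dbo.sp_stop_job   @job_name = N'MYSERVER-MyDB-MyPublication-SUB-1';
EXEC msdb.dbo.sp_update_job @job_name = N'MYSERVER-MyDB-MyPublication-SUB-1',
                            @enabled  = 0;

/* On the new server, after scripting the job out and creating it there */
EXEC msdb.dbo.sp_update_job @job_name = N'MYSERVER-MyDB-MyPublication-SUB-1',
                            @enabled  = 1;
```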
This is a “it depends” sort of question.
These are my recommendations, your mileage may vary.
Your distributor is on the same system as your publisher – Pull is probably for you
Your Subscribers are a very high transaction count – Push is probably for you
You need to manually copy the subscription over the network to the subscriber and load it up from the local drive – Pull is probably for you
Your distributor is on a separate server from the publisher – Push is probably for you
The distributor is on the same server as your subscriber – Either, as the agent will be running on the distributor either way
You have a slow network link – Either, slow networks aren’t overcome with either technique
If you have specifics you’d like to ask about, post your questions below, or in the ITKE forum.
All too often we end up with duplicate rows in a table. The best way to keep duplicate rows out of the database is to not let them in, but assume that they are there. This bit of sample code shows how to delete those duplicate rows quickly and easily in a single statement, no temp tables required (I use a temp table to put the data into for example purposes). This code is for SQL Server 2005 and up, as it uses some features which were introduced in SQL Server 2005; SQL Server 2000 would require a totally different technique.
CREATE TABLE #DuplicateRows (Col1 INT, Col2 INT, Col3 INT); /*Create a new table*/
INSERT INTO #DuplicateRows (Col1, Col2, Col3) /*Load up duplicate rows*/
SELECT 1, 2, 3 UNION ALL
SELECT 1, 2, 3 UNION ALL
SELECT 4, 5, 6 UNION ALL
SELECT 4, 5, 6;
SELECT * FROM #DuplicateRows; /*Check that the data is actually hosed*/
WITH Cleaning AS
(SELECT ROW_NUMBER() OVER(ORDER BY Col1, Col2, Col3) AS row,
        Col1, Col2, Col3
 FROM #DuplicateRows)
DELETE FROM Cleaning /*Delete the rows which are duplicates*/
WHERE row NOT IN (SELECT row FROM (SELECT Col1, Col2, Col3, MIN(row) AS row
                                   FROM Cleaning a
                                   GROUP BY Col1, Col2, Col3) b);
SELECT * FROM #DuplicateRows; /*Check the table to see that it is clean*/
DROP TABLE #DuplicateRows; /*Clean up the table*/
Hopefully you find this code useful.
Yes, for crying out loud yes.
Every server that can access the Internet or be accessed from the Internet, or that can be accessed from a computer that can access the Internet, should have anti-virus software on it. Preferably a corporate-wide solution like Trend Micro, McAfee, Norton, etc., so that the server reports back to a central server, making it easier to find out if a machine has a problem.
Next comes what should be scanned. I prefer to exclude the mdf, ndf, and ldf files. I don’t like to exclude the entire folder, as this creates a hiding place where a virus could stick infected files. If possible, have it exclude the mdf, ndf, and ldf files from the correct folders only. Even if a virus scanner wanted to scan the database files, it wouldn’t be able to, as the files are locked open by the SQL Server so that nothing else can access them. By not excluding the files, all you are doing is throwing alerts to the monitoring server that files couldn’t be scanned.
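To build that exclusion list, the actual file paths for every database can be pulled straight from the server; this assumes SQL Server 2005 or later, where the sys.master_files catalog view is available:

```sql
/* List every data and log file path so only the correct
   files in the correct folders get excluded from scanning */
SELECT DB_NAME(database_id) AS database_name,
       physical_name
FROM sys.master_files
WHERE RIGHT(physical_name, 4) IN ('.mdf', '.ndf', '.ldf');
```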
Odds are a full scan doesn’t need to be done against the server all that often, as the files on the hard drive of the server aren’t going to change all that often. Any virus that comes in from the network should be caught by the real-time engine that is running at the time. You will want to do a full scan every once in a while (every couple of weeks or so) in case something came in over the network, was saved, and was set up to launch at the next reboot, but wasn’t yet in the virus definition file.