So I recently upgraded my VMware vCenter server from 4.0 U1 to 4.0 U2 and ran across a little problem. After the upgrade I couldn’t attach any hosts to the vCenter server. It kept telling me that vCenter couldn’t talk to the host, even though it could.
After digging into the logs we saw that there was a problem with the SSL handshake. It turns out that another restart of the vCenter services was all that was needed to clear up the problem.
So if you are having problems attaching hosts to vCenter after your upgrade, just restart the vCenter services on the vCenter server and you should be fine.
Southern California’s second SQL Saturday is coming up soon, and it’s looking for more speakers and more attendees. If you’ve ever wanted to try your hand at presenting, this is a great chance. The event is September 18th, 2010, down in San Diego, CA.
EMC has a very nice, and very expensive, piece of software called Replication Manager. Replication Manager is basically a giant scheduler that helps you create storage clones and snapshots on a regular schedule. Pretty much everything it does can be handled via batch files if you have the time to get everything written the way you want. Replication Manager simply gives you an easy interface to set this stuff up in, and it’ll email you on failures, that sort of thing.
Now the product works very well (as well it should for what it costs), but I recently had to reinstall it. The reason for the reinstall was that the machine it was installed on was a Windows 2003 x86 machine, and the reason for keeping it on Windows 2003 was gone, so I decided to upgrade it to Windows 2008 x64. Because I was moving from x86 to x64, a full format was needed, taking an in-place upgrade out of the picture. I also wanted to increase the size of the C drive from 20 GB to 50 GB, so a format was going to be needed anyway.
So being a DBA, I backed up the data folder under c:\Program Files\EMC\rm\serverdb\, wiped the machine, and got the new OS up and running. I reinstalled Replication Manager and dropped the data folder back where I got it from. But this time the services failed to fire up, telling me that files were missing. So I put the old files back and started the services, and this time they started up. Apparently the folder c:\Program Files\EMC\rm\serverdb\log is also important, as those files appear to be the transaction log for the database, not just normal log files. (I opened one in Notepad and it was binary.) To safely back up everything, I’d recommend grabbing the entire serverdb folder, not just the data folder.
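Here’s a quick sketch of the backup I should have taken from the start: grab the whole serverdb tree, log folder included. The path is the default install location from my server; adjust it to match yours.

```python
import shutil
from pathlib import Path

def backup_rm_serverdb(src: str, dst: str) -> list[str]:
    """Copy the whole Replication Manager serverdb tree to a backup
    location, log folder included, since the log folder holds the
    database's transaction log and not just text log files.

    Returns the relative paths of the files copied, so you can eyeball
    the list before wiping the machine.
    """
    # copytree preserves the directory layout, so the folder can be
    # dropped straight back in place after the rebuild
    shutil.copytree(src, dst, dirs_exist_ok=True)
    dst_path = Path(dst)
    return sorted(str(p.relative_to(dst_path))
                  for p in dst_path.rglob("*") if p.is_file())

# Example (default install path on my server; yours may differ):
# backup_rm_serverdb(r"C:\Program Files\EMC\rm\serverdb",
#                    r"D:\backups\rm-serverdb")
```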
Now the next problem I ran across came when my job ran which mounts the clone to another server. Since there was already a LUN mounted at that drive letter, Replication Manager failed the job; it didn’t know what to do with a drive it wasn’t in control of. Even though the LUN had been mounted by the old RM server, the new RM server didn’t know about the old job, so it didn’t know it could safely remove the LUN from the server and present the new clone.
So I guess the big piece of this to remember is that if you have to rebuild your Replication Manager server, you’ll need to go to all the machines you have presented clones or snapshots to and manually remove them, so that Replication Manager can present those LUNs to the destinations correctly.
So you use LiteSpeed and expect that at some point you’ll be using object level recovery (OLR). You can do this without any changes; LiteSpeed will simply need to create what is called the OLR map first. This is simply a map of where OLR can find each object within the backup file, and it is created by adding the @OLRMap=1 parameter to the backup job. Currently there’s no way in the backup wizard to do this (something which will hopefully change soon).
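For reference, here’s roughly what that looks like in the T-SQL backup job. This is a sketch: @OLRMap=1 is the parameter discussed above, but the database name and file path are placeholders, so check the exact syntax against your LiteSpeed documentation.

```sql
-- LiteSpeed backup with the OLR map built during the backup, so object
-- level restores don't have to build it (slowly) after the fact.
EXEC master.dbo.xp_backup_database
    @database = N'MyBigDB',                 -- placeholder database name
    @filename = N'X:\Backups\MyBigDB.bak',  -- placeholder backup path
    @OLRMap   = 1;                          -- build the object map inline
```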
Now if your database is smaller, creating this map is no problem; it takes just a few seconds, or a few minutes. However, if your database is say 1 TB in size like mine is, then you’ll want the map to be created while the data is being backed up.
Now creating the OLR map is a pretty quick operation when done as part of the database backup, somewhere around 10-20 seconds if you believe Quest (I didn’t really see any change in database backup time). When done after the fact it can take a very long time; in my case, creating the map afterwards takes over 12 hours. The good news is that if you create it at object level restore time, you only need to do it the first time you restore an object from that file, as the object map is written back into the backup. The downside is that you have to be able to write to the backup file. I found this out the hard way: I had the volume with the backup mounted read-only on the server running the OLR process. Because of this, the OLR process had to be stopped, the volume remounted read/write, and the OLR process restarted.
So in short:
- Add the @OLRMap=1 parameter to your backups.
If you can’t, or you need to restore from an existing backup:
- Make sure the volumes are read/write.
- Or do a full restore, then use SSIS to move the data to the production server.
Over the weekend I went to SQL Saturday 28 in Baton Rouge, LA, where I gave two sessions. The first was “Getting SQL Service Broker Up and Running” and the second was “Deciding if Virtualization is a good choice for your SQL Server“. I hope everyone had a great time at the event; personally, I had a blast.
If you want to catch more about virtualization (and storage) options for your databases be sure to check out my pre-con at the PASS summit.
Thanks for coming to my sessions.
That’s right, Mr. LaRock himself (Blog | Twitter) is giving me the chance to present during the 24 Hours of PASS (aka 24 HoP) virtual conference this time around. I’ll be doing my “Storage for the DBA” session to give you a taste of what you’ll get during my pre-con at this year’s summit, and best of all, the virtual conference is totally and completely free.
In case you missed the dozen blog posts and Twitter messages about it over the last couple of weeks, yesterday was a dry run of the Storage and Virtualization session that I’ll be presenting at the SQL PASS summit in Seattle this November. We had a pretty good turnout, and I’d like to thank everyone who came out to watch me talk for 8 hours (it turned out to be closer to 6 hours, so that’s the first thing to fix). I got great feedback from everyone.
A big thank you to the sponsors, without whom the event couldn’t have happened. Yesterday’s sponsors were Microsoft, Emulex, Quest Software and Red Gate.
Several people asked for the slide deck, so here it is in its current form. The deck for the pre-con at PASS will be quite a bit different, as more material will be added, and some of this material will be moved around to make it flow better as I move from topic to topic.
Congrats to the drawing winners. For the folks who won the Windows 7 licenses, as soon as we can track the licenses down I’ll get them mailed out to you.
Thanks again to everyone for coming. Hopefully I’ll see most of you at PASS or Connections.
Latency on a fiber channel network isn’t normally something you worry about. But something you need to remember is that every meter of fiber optic cable your data travels through takes time, and every fiber channel switch you go through adds more latency.
When you are setting up something like synchronous replication between two storage arrays, this latency starts to become more important, as every millisecond spent waiting for the storage to respond is time your application isn’t responding to your clients’ requests.
So, the basic math is that every meter of fiber optic cable your data travels through takes 5 nanoseconds. So if you have your server connected to your storage array via a one meter cable, there will be 10 nanoseconds of delay: 5 nanoseconds for the data to get to the array, and 5 nanoseconds for the response to get back to the server from the array.
So using this math, for each 100 meters of fiber optic cable there is 1 microsecond of latency. For every kilometer of cable there is 10 microseconds. For every 100 kilometers of cable there is a 1 millisecond delay, and for every 1000 kilometers of cable there is 10 milliseconds of delay. So if you are replicating data from LA to New York that’s about 2778 miles, or 4470 kilometers which gives us a delay of about 44 milliseconds for each command which is being sent.
Now there is something else which needs to be taken into account when figuring out the storage latency: the fiber channel switches. If the two ports on a fiber switch are on the same ASIC, there is no measurable latency through the switch; however, if the two ports are on different ASICs, there is an additional latency of 2 microseconds in each direction. While that isn’t much, keep in mind that between LA and New York there are probably hundreds of switches, so those 2 microseconds really start to add up.
Because of these numbers, when using synchronous replication, about 30 miles is as far as you want to replicate data. Any farther than that and you’ll start to see latency problems with your application. These problems are amplified with something like SQL Server, as with SQL Server and other databases every nanosecond counts.
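The math above is easy to sketch in code; the 5 ns/meter and 2 µs per ASIC-crossing figures come straight from the numbers discussed.

```python
# Light in fiber optic cable covers ~1 meter per 5 nanoseconds, one way.
NS_PER_METER_ONE_WAY = 5

def round_trip_latency_ms(cable_meters: float, asic_crossings: int = 0) -> float:
    """Estimate round-trip fiber channel latency in milliseconds.

    asic_crossings counts switch hops where the two ports sit on
    different ASICs; each adds ~2 microseconds per direction.
    """
    cable_ns = cable_meters * NS_PER_METER_ONE_WAY * 2  # out and back
    switch_ns = asic_crossings * 2_000 * 2              # 2 us each way
    return (cable_ns + switch_ns) / 1_000_000

# LA to New York: roughly 4470 km of cable
print(round_trip_latency_ms(4_470_000))  # ~44.7 ms, before switch latency
```

Plug in your own cable runs and switch hops to see why 30 miles or so is the practical ceiling for synchronous replication.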
Hopefully this math will help you make more informed storage design decisions.
OK, so the odds of getting an Xbox and a Windows 7 license are basically zero, but you could win one of them. How can you win some of this fantastic stuff? Well, that’s the easy part: all you have to do is come and attend my free day of training on “Storage and Virtualization for the DBA“. Submit a survey, and you’ll get a ticket for the drawing.
Thanks to some great sponsors, the lunch plans have changed a little for this event. Instead of everyone heading out for lunch, a group of vendors has gotten together and sponsored lunch for everyone. The lunch sponsors, who I can’t thank enough, are Microsoft, Emulex, Quest Software and Red Gate.
So head on over to the Eventbrite page and get signed up. Seating is limited, as is the number of lunches, so be sure to register quickly.
Tomorrow I’ll be speaking at the Orange County SQL Server Users Group. I’ll be presenting two sessions at the meeting. One will be “Exploring the DAC and everyone’s favorite feature the DACPAC“, and the other will be “Reading the SQL Server Execution Plan“.
The meeting starts at 6:30pm, and I believe there will be pizza provided, and who doesn’t like pizza and a DACPAC discussion?
The user group meets at the New Horizons Computer Learning Center in Anaheim.
1900 S. State College Blvd.
Anaheim, CA 92806
It’s right behind Angel Stadium (or whatever it’s called this month); you can’t miss it.
I’ll have some SWAG with me, but not a whole lot (my supplies are starting to run low).
See you there.