Powering up a single server is easy: turn on the direct-attached storage, then turn on the server. Done. Powering a storage array down or back up, however, involves quite a bit more than that.
The proper procedure for powering down your array is:
- Power off the servers.
- Power down the storage processors.
- Once the storage processors have powered down completely you can power down the disk drive shelves.
- Power down the switches.
The proper procedure for powering up your array is:
- Power on the switches
- Power on the disk shelves
- Power on the storage processors
- Power on the hosts
Now the good news on the power up is that you can start the first three steps together. The switches will take 1-2 minutes to boot up and become operational, the disk shelves will come online within a few seconds, and the storage processors will take 8-10 minutes to come online (I'm talking about an EMC CX array here; different arrays will have different power-up times).
On the power-down process there aren't a whole lot of shortcuts you can take. That's usually a problem, because if you have to power down the array there is usually a major power problem going on, one which involves no power coming into the array.
Hopefully you never need to know this information, but if you ever do, it's useful information to have.
So there has been a lot of talk recently about plagiarism in our little corner of the Internet. If you haven't read up on it you can read up here, here, and here. Sadly this wasn't the first time that plagiarism has reared its head recently. The prior incidents can be found here, here, here and here (among others).
While I was up at the SQL 2008 R2 Tweet Up my friends Sean and Jen from MidnightDBA (@MidnightDBA) and Jorge Segarra (@SQLChicken) and I got together to talk a bit about plagiarism in general and this recent incident. The talk was recorded and they published it over the weekend as part of the DBAs @ Midnight series of videos. The video was broken down into 4 sections when they uploaded it (Part 1, Part 2, Part 3, Part 4).
Hopefully you’ll have as much fun watching the video as we had making it. During the videos we are pretty critical of some people within the community (if you’ve read the blog posts I link to above you can probably guess who).
Oh, and some of the language isn’t kid/boss safe. Consider yourself warned.
This week I’ve got a few speaking events to let everyone know about.
On Tuesday April 6th, at 6pm, I'll be speaking to the LA C# User Group. The meeting is held at the Manhattan Beach Library at 1320 Highland Ave in Manhattan Beach, CA 90266. The group asks for $7 (or $5 if you RSVP by 3pm).
On Wednesday April 7th, at 6pm, I'll be speaking to the SoCalDotNet User Group. The meeting is held at Partners Consulting at 7101 Village Dr, Buena Park, CA. The group asks for $7 (or $5 if you RSVP by 3pm).
At both user group meetings I'll be giving the same presentation, "Back To Basics; Getting Back To The Basics of SQL Server".
Also on Wednesday I'll be attending and answering questions at the SSWUG vConference. My sessions are from 9:30a-10:30a, 10:30a-11:30a and 12:00p-1:00p (all times are Pacific). If you haven't registered yet, don't forget to use my discount code DCSPVC10 to save $30.00.
Now on the 6th and 7th I'll be working at Starbucks in the afternoons so that I don't have to sit in traffic forever. On the 6th I'll be at the Starbucks at 233 Manhattan Beach Blvd, Manhattan Beach, CA. On the 7th I'll be at the Starbucks at 7711 Beach Blvd, Buena Park, CA. I'll probably get there about 2pm or so each day if anyone wants to come and hang out before the meeting. Keep an eye on my Twitter stream (@mrdenny) as I'll be posting where I'm at as I get there.
On Thursday I'll be up in Redmond, WA at a Tweet Up with Donald Farmer on the Microsoft Campus. Sadly it's a private event, so I can't invite everyone to show up and hang out.
Sometime on Friday (or on Thursday after the Tweet Up) my friends Jorge Segarra (@SQLChicken), Sean and Jen McCown (@MidnightDBA) and I will be recording a podcast. The topic of our DBAs@Midnight recording will be plagiarism, a topic there has been a lot of discussion about on Twitter recently. It should be a good time recording this. Once I know when Sean and Jen get it posted I'll be sure to send out the link.
Hopefully I don’t have to get a whole lot of my normal work done this week. If I do, I’m not sure when I’m actually going to be able to.
A couple of days ago a new article went live over on SearchSQLServer.com where I dig into setting up Master Data Services. This is the first in a series of two or three articles in which I go into MDS in some detail.
The first article “Getting started with Master Data Services in SQL Server 2008 R2” is the one which went up, with the others coming in the following months.
God, that sounds boring, doesn't it?
Anyway, I was installing SQL 2008 CU 6 for SP 1 the other day and I was greeted with the dreaded “some process is running which will cause your SQL Server to reboot after install” screen. The app which was running was fshost.exe which is part of the Full Text Search process.
So I naturally stopped the SQL Full Text Service, gave it a few minutes and the process was still running. As this was a production server I didn’t want to just kill the process (the only thing which uses Full Text on the server is the MS CRM database, and we don’t even use MS CRM yet).
So I moved forward and opted for the reboot. After the installer finished, however, there was no reboot prompt. Apparently when the SQL Service went down for patching, fshost.exe also went down, so no reboot was needed.
So if you are installing a CU and it tells you that you'll need to reboot because of fshost.exe, you probably won't actually need to reboot.
During my SQL Service Broker sessions I talk about using SQL Service Broker as a replacement for SQL Replication. But I've never really documented how to do this, other than bits and pieces here and there. So here I'm going to try to spell out how I put a system like this together for an auto finance company that I used to work for.
Why Service Broker
Before I get into the code, a little reasoning behind why we went with SQL Service Broker over SQL Replication is probably worth talking about.
The main reason is that we wanted to be able to do ETL on the data as it moved from the production OLTP database to the reporting database. We also wanted the ability to easily scale the feed from one to many reporting databases, with the reporting servers being in different sites.
Below you'll find some SQL code that we'll use to create some sample tables. In our OLTP database we have two tables, LoanApplication and Customer, and in our Reporting database we have a single table, LoanReporting. As data is inserted or updated in the LoanApplication and Customer tables, that data will be packaged up into an XML document and sent to the reporting database. In the case of my sample code here everything is on one server, but the databases could easily be moved to separate servers.
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Sample_OLTP')
    DROP DATABASE Sample_OLTP
GO
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'Sample_Reporting')
    DROP DATABASE Sample_Reporting
GO
CREATE DATABASE Sample_OLTP
CREATE DATABASE Sample_Reporting
GO
ALTER DATABASE Sample_OLTP SET NEW_BROKER
ALTER DATABASE Sample_Reporting SET NEW_BROKER
GO
ALTER DATABASE Sample_OLTP SET TRUSTWORTHY ON
ALTER DATABASE Sample_Reporting SET TRUSTWORTHY ON
GO
USE Sample_OLTP
GO
CREATE TABLE LoanApplication
   (ApplicationId INT IDENTITY(1,1),
    CreateTimestamp DATETIME,
    LoanAmount MONEY,
    SubmittedOn DATETIME,
    ApprovedOn DATETIME,
    LoanStatusId INT,
    PrimaryCustomerId INT,
    CoSignerCustomerId INT)
GO
CREATE TABLE Customer
   (CustomerId INT IDENTITY(1,1),
    FirstName VARCHAR(50),
    LastName VARCHAR(50),
    EmailAddress VARCHAR(255))
GO
USE Sample_Reporting
GO
CREATE TABLE LoanReporting
   (ApplicationId INT,
    CreateTimestamp DATETIME,
    LoanAmount MONEY,
    SubmittedOn DATETIME,
    ApprovedOn DATETIME,
    LoanStatusId INT,
    PrimaryCustomerId INT,
    PrimaryFirstName VARCHAR(50),
    PrimaryLastName VARCHAR(50),
    PrimaryEmailAddress VARCHAR(255),
    CoSignerCustomerId INT,
    CoSignerFirstName VARCHAR(50),
    CoSignerLastName VARCHAR(50),
    CoSignerEmailAddress VARCHAR(255))
GO
Service Broker Objects
With this system I use a single pair of service broker queues to handle all the data transfer. This way transactional consistency can be maintained as the data flows in. These SQL Service Broker objects should be created in both the Sample_OLTP and the Sample_Reporting database.
CREATE MESSAGE TYPE ReplData_MT
GO
CREATE CONTRACT ReplData_Ct (ReplData_MT SENT BY ANY)
GO
CREATE QUEUE ReplData_Source_Queue
GO
CREATE QUEUE ReplData_Destination_Queue
GO
CREATE SERVICE ReplData_Source_Service ON QUEUE ReplData_Source_Queue (ReplData_Ct)
GO
CREATE SERVICE ReplData_Destination_Service ON QUEUE ReplData_Destination_Queue (ReplData_Ct)
GO
In the OLTP database you create a route like this (just change the BROKER_INSTANCE to match your server).
CREATE ROUTE ReplData_Route WITH SERVICE_NAME='ReplData_Destination_Service', BROKER_INSTANCE='566C7F7A-9373-460A-8BCC-5C1FD4BF49C9', ADDRESS='LOCAL'
In the reporting database you create a route like this (just change the BROKER_INSTANCE to match your server).
CREATE ROUTE ReplData_Route WITH SERVICE_NAME='ReplData_Source_Service', BROKER_INSTANCE='A4EC5E44-60AF-4CD3-AAAD-C3D467AC682E', ADDRESS='LOCAL'
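To find the BROKER_INSTANCE values for your own server, you can query the sys.databases catalog view; each database's broker GUID is exposed in the service_broker_guid column:

```sql
-- Returns the broker instance GUID for each sample database.
-- Use the Sample_Reporting GUID in the OLTP route and the
-- Sample_OLTP GUID in the reporting route.
SELECT name, service_broker_guid
FROM sys.databases
WHERE name IN ('Sample_OLTP', 'Sample_Reporting');
```
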
Stored Procedures on the OLTP Database
In the OLTP database we need just a single stored procedure. This stored procedure will handle the sending of the message so that we don’t have to put that same code in each table.
CREATE PROCEDURE SendTriggerData
    @XMLData XML
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE ReplData_Source_Service
        TO SERVICE 'ReplData_Destination_Service'
        ON CONTRACT ReplData_Ct
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE ReplData_MT (@XMLData)
END
GO
OLTP database Triggers
The triggers that are on each table on the OLTP database are kept as small as possible so that we put as little additional load on the OLTP server as possible. Obviously there will be some additional load on the OLTP database, but we want to keep that to a minimum.
CREATE TRIGGER t_LoanApplication ON LoanApplication
FOR INSERT, UPDATE
AS
BEGIN
    DECLARE @xml XML
    SET @xml = (SELECT * FROM inserted AS LoanApplication FOR XML AUTO, ROOT('root'))
    EXEC SendTriggerData @xml
END
GO
CREATE TRIGGER t_Customer ON Customer
FOR INSERT, UPDATE
AS
BEGIN
    DECLARE @xml XML
    SET @xml = (SELECT * FROM inserted AS Customer FOR XML AUTO, ROOT('root'))
    EXEC SendTriggerData @xml
END
GO
Procedures on the Reporting Database
The reporting database is where the real work happens. Here we take the XML document, identify which table the data came from, then pass the XML document to a child procedure which processes the data and updates the table.
CREATE PROCEDURE ProcessOLTPData_LoanApplication
    @xml XML
AS
DECLARE @hDoc INT
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

UPDATE LoanReporting
SET CreateTimestamp = a.CreateTimestamp,
    LoanAmount = a.LoanAmount,
    SubmittedOn = a.SubmittedOn,
    ApprovedOn = a.ApprovedOn,
    LoanStatusId = a.LoanStatusId,
    PrimaryCustomerId = a.PrimaryCustomerId,
    CoSignerCustomerId = a.CoSignerCustomerId
FROM OPENXML (@hDoc, '/root/LoanApplication')
    WITH (ApplicationId INT '@ApplicationId',
          CreateTimestamp DATETIME '@CreateTimestamp',
          LoanAmount MONEY '@LoanAmount',
          SubmittedOn DATETIME '@SubmittedOn',
          ApprovedOn DATETIME '@ApprovedOn',
          LoanStatusId INT '@LoanStatusId',
          PrimaryCustomerId INT '@PrimaryCustomerId',
          CoSignerCustomerId INT '@CoSignerCustomerId') a
WHERE a.ApplicationId = LoanReporting.ApplicationId

INSERT INTO LoanReporting
    (ApplicationId, CreateTimestamp, LoanAmount, SubmittedOn, ApprovedOn,
     LoanStatusId, PrimaryCustomerId, CoSignerCustomerId)
SELECT ApplicationId, CreateTimestamp, LoanAmount, SubmittedOn, ApprovedOn,
       LoanStatusId, PrimaryCustomerId, CoSignerCustomerId
FROM OPENXML (@hDoc, '/root/LoanApplication')
    WITH (ApplicationId INT '@ApplicationId',
          CreateTimestamp DATETIME '@CreateTimestamp',
          LoanAmount MONEY '@LoanAmount',
          SubmittedOn DATETIME '@SubmittedOn',
          ApprovedOn DATETIME '@ApprovedOn',
          LoanStatusId INT '@LoanStatusId',
          PrimaryCustomerId INT '@PrimaryCustomerId',
          CoSignerCustomerId INT '@CoSignerCustomerId') a
WHERE NOT EXISTS (SELECT * FROM LoanReporting
                  WHERE a.ApplicationId = LoanReporting.ApplicationId)

EXEC sp_xml_removedocument @hDoc
GO
CREATE PROCEDURE ProcessOLTPData_Customer
    @xml XML
AS
DECLARE @hDoc INT
EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

UPDATE LoanReporting
SET PrimaryEmailAddress = EmailAddress,
    PrimaryFirstName = FirstName,
    PrimaryLastName = LastName
FROM OPENXML(@hDoc, '/root/Customer')
    WITH (CustomerId INT '@CustomerId',
          FirstName VARCHAR(50) '@FirstName',
          LastName VARCHAR(50) '@LastName',
          EmailAddress VARCHAR(255) '@EmailAddress') a
WHERE PrimaryCustomerId = a.CustomerId

UPDATE LoanReporting
SET CoSignerEmailAddress = EmailAddress,
    CoSignerFirstName = FirstName,
    CoSignerLastName = LastName
FROM OPENXML(@hDoc, '/root/Customer')
    WITH (CustomerId INT '@CustomerId',
          FirstName VARCHAR(50) '@FirstName',
          LastName VARCHAR(50) '@LastName',
          EmailAddress VARCHAR(255) '@EmailAddress') a
WHERE CoSignerCustomerId = a.CustomerId

EXEC sp_xml_removedocument @hDoc
GO
CREATE PROCEDURE ProcessOLTPData
AS
DECLARE @xml XML
DECLARE @handle UNIQUEIDENTIFIER, @hDoc INT

WHILE 1 = 1
BEGIN
    SELECT @xml = NULL, @handle = NULL

    WAITFOR (RECEIVE TOP (1)
                 @handle = conversation_handle,
                 @xml = CAST(message_body AS XML)
             FROM ReplData_Destination_Queue), TIMEOUT 1000

    IF @handle IS NULL
        BREAK

    EXEC sp_xml_preparedocument @hDoc OUTPUT, @xml

    IF EXISTS (SELECT * FROM OPENXML(@hDoc, '/root/LoanApplication'))
        EXEC ProcessOLTPData_LoanApplication @xml

    IF EXISTS (SELECT * FROM OPENXML(@hDoc, '/root/Customer'))
        EXEC ProcessOLTPData_Customer @xml

    EXEC sp_xml_removedocument @hDoc

    END CONVERSATION @handle
END
GO
You can now set the queue to run the ProcessOLTPData stored procedure as an activation procedure, and it will process the data as it comes in.
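As a minimal sketch, the activation setup on the reporting database might look like this (the MAX_QUEUE_READERS value here is just an example; tune it for your workload):

```sql
-- Run in Sample_Reporting: attach ProcessOLTPData to the destination
-- queue so incoming messages are processed automatically on arrival.
ALTER QUEUE ReplData_Destination_Queue
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = ProcessOLTPData,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS OWNER);
```

For a quick smoke test, insert a row into Sample_OLTP's LoanApplication table and then query Sample_Reporting's LoanReporting table; the row should appear once the activation procedure has drained the queue.
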
Hopefully someone finds this useful as a framework in their shop. If you have any questions please feel free to post them here.
So until just a few minutes ago, the problem that everyone had with the blog written by John Dunleavy, the owner of SQLTech Consulting, was that he was taking the first part of whitepapers and posting them with links to the original work, a practice which John defended.
However, Paul Randal then pointed out John's entry into this month's TSQLTuesday. It would appear that a good portion of John's post was taken directly from an older article which I wrote. Below are screenshots from John's blog. I've underlined the parts which are taken directly from an article I wrote a couple of years ago titled Optimize Disk Configurations in SQL Server. You'll find the matching content about halfway down.
Now if you compare those sections to the RAID level descriptions that I posted in my article, you'll find that they match way too closely to be an accident. Hell, the RAID 5 one matches word for word, and the RAID 10 one is pretty damn close.
The best part is that John apparently didn't finish my article; he just got to the parts he needed to swipe and stopped. In his RAID 10 description he says that if you are attached to a SAN you'll be using RAID 10 for everything, and he even says that RAID 0 is fine for ultra-high-write databases. Whereas in my article I say that RAID 5 or RAID 6 will work fine for most, then move up to RAID 10 if you need the higher write throughput.
Now, yes, you could say that it's RAID, how many ways are there to describe it, so it could just be chance. However, note that the punctuation matches, including the placement of the semicolons. It really isn't very likely that this was an accident.
Even if posting unattributed copies of others' work with links back were OK, flat-out stealing content from their site isn't.
Needless to say I was less than impressed when I saw this. In case you were wondering, stealing someone's content is not flattery. At all.
P.S. In case you are looking for the unchopped version of this blog post, I’ve included screen shots below in case his post is removed from his site.
Today’s conversation all started when I pointed out to John that posting other people’s white papers under your own name wasn’t exactly the best of ideas. Below is a screen shot of the post that caught my eye.
The biggest performance issue when virtualizing your SQL Server is storage. One way to fix this is to present storage directly to the virtual machine instead of placing the virtual machines storage within a virtual disk. In VMware this is done by using a Raw Device Mapping.
In a nutshell, what we are doing with a Raw Device Mapping is taking a LUN which is presented to the VMware host. Instead of formatting this LUN with a file system and making it a datastore within VMware, you detect it as normal but leave it unformatted. You can then attach the now-mounted device to the virtual machine as a SCSI device.
This can be done either when the virtual machine is being created, or it can be done after the fact. In the case of these screenshots the RAW device is being added to an existing virtual machine.
Edit the virtual machine’s properties and click the Add button at the top to add a new hardware device to the virtual machine. Select Hard Disk and click Next.
On the next screen select Raw Device Mappings instead of creating a new virtual disk, or using an existing one.
On the next screen select the raw unformatted device which you wish to present to the guest. (Yes, I realize that the LUN showing up here shows as 0 bytes, but this is just a demo; you'll want to select a LUN which has a LUN ID other than 0 and a size over 0 bytes.)
The next screen asks which datastore you want to store the LUN mapping on. Typically you'll store it with the virtual machine, but you can select any datastore to store the mapping on. (In my case I'm selecting to store it with the virtual machine.)
On the next screen you'll need to decide whether you want VMware to be able to take a snapshot of the raw device or not. Personally, when it comes to SQL Server I don't see this as a function that is all that useful. SQL has full and transaction log backups to handle point-in-time recovery, so I would say that most of the time you'll want to select Physical. If you need the ability to take snapshots of the raw device, then select Virtual.
On the next screen you need to select the SCSI address that you want to mount the disk to within the virtual machine. If you selected Virtual on the prior screen, then you'll also have the option to make the disk Independent as well as persistent or nonpersistent (these aren't shown, but they are the same options which are available when creating a virtual disk).
The last screen will present you with a basic summary of what is being added to the virtual machine.
After clicking Finish, click OK on the virtual machine properties. Log into the VM and rescan the SCSI bus as you would to detect any new disk being added to the virtual machine. Create a partition (with the offset set correctly) and format the partition normally. You can now access the drive, and the virtual machine will have direct access to the SAN over the Fibre Channel network.
So it turns out that the code that I posted last week for discounts to SSWUG has stopped working for some reason. SSWUG has created a new code for $30 off. The new code is DCSPVC10.
Everything else in my old post is still valid.