SQL Server with Mr. Denny


August 3, 2015  5:00 PM

PASS 2015 1st Timers Webcast

Denny Cherry
SQL Server

Like I’ve done for the last several years, I’ll be presenting this year’s PASS Summit 1st Timers webcast, which really needs a new name since it isn’t just for PASS Summit first timers anymore. During this webcast I’ll be giving some information about the convention center, specifically where to find things. Last year PASS moved a bunch of stuff around, so that was pretty important information. I don’t have the final plans for the Summit yet (but I will before the webcast).

This FREE webcast will be Monday September 7th, 2015 at 1pm Pacific Time.

To register for the session, please visit this url. Once you have registered you’ll be able to get the conference call details as well as download a calendar reminder.

If you haven’t attended the PASS Summit before, we’ll cover all the important things you need to know about the event, such as how to get from the airport to the convention center (hint: you don’t want to rent a car), where the good food in town is, how to find things inside the convention center (it’s a big place if you’ve never been there before), and much more.

So get signed up for the webcast, get the calendar entry put into your phone, and I’ll see you on Monday September 7th.

Denny

July 31, 2015  7:20 PM

Recommended reading from mrdenny for July 31, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: ShareIT_CO also known as ShareIT

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 24, 2015  3:57 PM

Recommended reading from mrdenny for July 24, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: TheSmilingDBA also known as Thomas LeBlanc

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 22, 2015  6:00 PM

Updating All Jobs To Check For AG Primary Replica

Denny Cherry
AlwaysOn Availability Groups, High Availability, Job scheduling, SQL job scheduler, SQL scripts, SQL Server

One of the really annoying things about moving existing SQL Server databases into an AlwaysOn Availability Group is setting up the SQL Agent jobs to work within the confines of the Availability Group. You have to manually create the jobs on each replica, then modify the jobs on each server to check whether the database for that job is currently on the primary.

Well, here’s a T-SQL script which will help solve this problem for you. It’ll add a job step to the beginning of each database job that checks whether it’s running on the primary replica or not. It figures out which database to check based on the databases used in the job’s steps (by looking at the database which is set for each job step).

If the databases are in multiple different Availability Groups, this still works fine. After the databases are set up in their Availability Groups, just run this script. If you add more jobs later you can run the script again, as it will skip any jobs which already have this job step added.

To be safe, back up your msdb database before running this. Nothing should go wrong, but you’ll notice that this script deletes the job steps and then puts them back in. You wouldn’t want to lose all your job steps, so it’s just safer to back up msdb before doing anything like this against it.
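For example, a quick safety backup of msdb before running the script could look something like this (the path here is just a hypothetical local folder; point it wherever you keep ad-hoc backups):

BACKUP DATABASE [msdb]
TO DISK = N'D:\Backups\msdb_before_AG_job_update.bak' -- hypothetical path, change as needed
WITH INIT, CHECKSUM, STATS = 10;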

USE [msdb]

CREATE TABLE #sysjobsteps(
[job_id] [uniqueidentifier] NOT NULL,
[step_id] [int] NOT NULL,
[step_name] [sysname] NOT NULL,
[subsystem] [nvarchar](40) NOT NULL,
[command] [nvarchar](max) NULL,
[flags] [int] NOT NULL,
[additional_parameters] [nvarchar](max) NULL,
[cmdexec_success_code] [int] NOT NULL,
[on_success_action] [tinyint] NOT NULL,
[on_success_step_id] [int] NOT NULL,
[on_fail_action] [tinyint] NOT NULL,
[on_fail_step_id] [int] NOT NULL,
[server] [sysname] NULL,
[database_name] [sysname] NULL,
[database_user_name] [sysname] NULL,
[retry_attempts] [int] NOT NULL,
[retry_interval] [int] NOT NULL,
[os_run_priority] [int] NOT NULL,
[output_file_name] [nvarchar](200) NULL,
[last_run_outcome] [int] NOT NULL,
[last_run_duration] [int] NOT NULL,
[last_run_retries] [int] NOT NULL,
[last_run_date] [int] NOT NULL,
[last_run_time] [int] NOT NULL,
[proxy_id] [int] NULL,
[step_uid] [uniqueidentifier] NULL
)
insert into #sysjobsteps
SELECT *
FROM msdb.dbo.sysjobsteps

DECLARE @job_id uniqueidentifier, @step_id int, @step_name sysname, @subsystem nvarchar(40), @command nvarchar(max) , @flags int, @additional_parameters nvarchar(max),
@cmdexec_success_code int, @on_success_action tinyint, @on_success_step_id int, @on_fail_action tinyint, @on_fail_step_id int, @server sysname, @database_name sysname,
@database_user_name sysname, @retry_attempts int, @retry_interval int, @os_run_priority int, @output_file_name nvarchar(200), @last_run_outcome int, @last_run_duration int,
@last_run_retries int, @last_run_date int, @last_run_time int, @proxy_id int, @step_uid uniqueidentifier

DECLARE jobs CURSOR FOR SELECT DISTINCT job_id
from msdb.dbo.sysjobs
WHERE NOT EXISTS (SELECT *
FROM msdb.dbo.sysjobsteps
WHERE sysjobsteps.step_name = 'Check Is AG Primary'
AND sysjobs.job_id = sysjobsteps.job_id)
open jobs
FETCH NEXT FROM jobs into @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
SET @database_name = (SELECT database_name
FROM (SELECT TOP (1) database_name, count(*) ct
FROM #sysjobsteps
WHERE job_id = @job_id
GROUP BY database_name
ORDER BY count(*) desc) a)

IF @database_name NOT IN (SELECT database_name
FROM sys.availability_databases_cluster)
BEGIN
GOTO SkipJob
END

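-- sys.fn_hadr_is_primary_replica() returns 1 when this instance currently hosts the
-- primary replica for the given database. The step built below raises an error when
-- that is not the case; combined with @on_fail_action = 1 (quit the job reporting
-- success), the job exits cleanly on secondary replicas, while @on_success_action = 3
-- lets it continue to the real job steps on the primary.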
SET @command = 'IF [master].sys.fn_hadr_is_primary_replica (''' + @database_name + ''') <> 1
RAISERROR(''Not the PRIMARY server for this job, exiting with SUCCESS'', 11, 1)'

DECLARE steps CURSOR FOR SELECT step_id
FROM #sysjobsteps
WHERE job_id = @job_id
ORDER BY step_id DESC
OPEN steps
FETCH NEXT FROM steps into @step_id
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC msdb.dbo.sp_delete_jobstep @job_id=@job_id, @step_id=@step_id

FETCH NEXT FROM steps into @step_id
END
CLOSE steps
DEALLOCATE steps

EXEC msdb.dbo.sp_add_jobstep @job_id=@job_id, @step_name='Check Is AG Primary',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_fail_action=1,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem='TSQL',
@command=@command,
@database_name='master',
@flags=0

DECLARE steps CURSOR FOR SELECT step_id+1, step_name, subsystem, command, flags, additional_parameters, cmdexec_success_code, on_success_action, on_success_step_id+1, on_fail_action, on_fail_step_id+1, server,
database_name, database_user_name, retry_attempts, retry_interval, os_run_priority, output_file_name
FROM #sysjobsteps
WHERE job_id = @job_id
order by step_id
OPEN steps
FETCH NEXT FROM steps into @step_id, @step_name, @subsystem, @command, @flags, @additional_parameters, @cmdexec_success_code, @on_success_action, @on_success_step_id, @on_fail_action, @on_fail_step_id, @server,
@database_name, @database_user_name, @retry_attempts, @retry_interval, @os_run_priority, @output_file_name
WHILE @@FETCH_STATUS = 0
BEGIN

EXEC msdb.dbo.sp_add_jobstep @job_id=@job_id, @step_name=@step_name,
@step_id=@step_id,
@cmdexec_success_code=@cmdexec_success_code,
@on_success_action=@on_success_action,
@on_fail_action=@on_fail_action,
@on_success_step_id=@on_success_step_id,
@on_fail_step_id=@on_fail_step_id,
@retry_attempts=@retry_attempts,
@retry_interval=@retry_interval,
@os_run_priority=@os_run_priority, @subsystem=@subsystem,
@command=@command,
@database_name=@database_name,
@flags=@flags

FETCH NEXT FROM steps into @step_id, @step_name, @subsystem, @command, @flags, @additional_parameters, @cmdexec_success_code, @on_success_action, @on_success_step_id, @on_fail_action, @on_fail_step_id, @server,
@database_name, @database_user_name, @retry_attempts, @retry_interval, @os_run_priority, @output_file_name
END
CLOSE steps
DEALLOCATE steps

EXEC msdb.dbo.sp_update_job @job_id = @job_id, @start_step_id = 1

SkipJob:

FETCH NEXT FROM jobs into @job_id
END
CLOSE jobs
DEALLOCATE jobs

DROP TABLE #sysjobsteps
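Once the script has run, a quick sanity check (this verification query is mine, not part of the script above) is to list the jobs that now start with the new step:

SELECT j.name AS job_name, s.step_id, s.step_name
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
WHERE s.step_name = 'Check Is AG Primary'
ORDER BY j.name;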


July 22, 2015  3:33 AM

PASS Summit 2015, Speaker Idol Is Back!

Denny Cherry

I’m pleased to announce that Speaker Idol is back for the PASS Summit in 2015. We’re using the same format as last year: three opening rounds, followed by a final round. Each of the three opening rounds will have 4 people competing, and in each of those rounds a winner and a runner-up will be selected. In the final round the three winners will present again, along with a wildcard selected by the judges from the three runners-up. The sessions should be 5 minutes long and can be on any topic (preferably something SQL related).

The goal of this year’s Speaker Idol competition is the same as last year’s: to give speakers who are known in their region a chance to be seen at the international level so they can improve their chances of being selected to speak in future years, to get some real-time feedback from other speakers about their presentation style, and to help them improve as speakers. To be eligible to compete, the following must apply:

  1. You have spoken at a SQL Saturday or SQL Rally event (or another large non-PASS event, such as SQL Bits, a TechEd conference, etc.).
  2. You have not spoken at the PASS Summit or PASS Business Analytics Conference for a breakout, spotlight or pre-con session. (If you gave a lightning talk and no other session you are eligible.)
  3. You will be in Seattle during the PASS Summit (attending PASS isn’t needed, and if you aren’t attending PASS we will get you to the session room for Speaker Idol).
  4. You are available to attend whichever Speaker Idol session you are assigned to.
  5. You have a 5 minute technical presentation which you can give.
  6. You sign up on the registration form before September 31st, 2015 at 11:59:59pm (23:59:59) Pacific time according to Denny Cherry’s Verizon Cell Phone.

To enter the 2015 Speaker Idol you’ll need to fill out this handy form. The big difference from last year is that in addition to submitting yourself, attendees can nominate speakers they’ve seen and liked at prior events. Just fill out the same form with the speaker’s name (and email address if you have it) and we’ll contact them to let them know they’ve been nominated to present at this year’s Speaker Idol.

Thanks to PASS for allowing me to run with this again.

Denny


July 20, 2015  6:00 PM

If You Thought Database Restores Were Slow, Try Restoring From an EMC Data Domain

Denny Cherry
Backup and restore, backup appliance, Data Domain, EMC, Restore Database, SQL Server

Recently I did something which I haven’t had to do for a VERY long time: restore a database off of an EMC Data Domain. Thankfully I wasn’t restoring a failed production system; I was restoring to a replacement production system in order to get log shipping set up.

I’ve worked in plenty of shops with Data Domains before, but apparently I’ve blocked out the memories of doing restores on them, because if your backups are done the way EMC wants them done to get the most out of the Data Domain (uncompressed SQL backups in this case), the restore process is basically unusable. The reason we were backing up the databases uncompressed was to allow the Data Domain to dedupe the backups as much as possible, so that the final backup stored on the Data Domain would be as small as possible and could be replicated to another Data Domain in another data center.

The database in this case is ~6TB in size, so it’s a big database. Running the restore off of the EMC Data Domain was painfully slow. I canceled it after about 24 hours, at which point it was at ~2% complete; doing a little bit of math, that restore was going to take about 25 days. While the restore was running we called EMC support to see if there was a way to get the Data Domain to let the restore run faster, and the answer was no, that’s as fast as it’ll run.

After stopping the restore, I backed up the same database to a local disk and restored it to the new server from there. This time the restore took ~8 hours to complete, which is a much more acceptable number.

If you are using EMC’s Data Domain (or any backup appliance), do not use that appliance as the only location for your SQL Server backups. These appliances are very efficient at writing backups and at replicating those backups to another site (which is what is being done in this case), but they are horrible at rehydrating those backups so that you can actually restore them. The proof is in the throughput of the restore commands. Here’s the output of some of the restores that were run; these are full backups, so there’s nothing for SQL Server to process here, it’s just moving blocks from point A to point B.

RESTORE DATABASE successfully processed 931 pages in 6.044 seconds (1.203 MB/sec).
RESTORE DATABASE successfully processed 510596 pages in 1841.175 seconds (2.166 MB/sec).
RESTORE DATABASE successfully processed 157903 pages in 440.849 seconds (2.798 MB/sec).
RESTORE DATABASE successfully processed 2107959 pages in 4696.428 seconds (3.506 MB/sec).
RESTORE DATABASE successfully processed 77307682 pages in 118807.557 seconds (5.083 MB/sec).
RESTORE DATABASE successfully processed 352411 pages in 816.810 seconds (3.370 MB/sec).
RESTORE DATABASE successfully processed 8400718 pages in 23940.799 seconds (2.741 MB/sec).
RESTORE DATABASE successfully processed 51554 pages in 111.890 seconds (3.599 MB/sec).
RESTORE DATABASE successfully processed 1222431 pages in 3167.605 seconds (3.014 MB/sec).

The biggest database there was restoring at 5 Megs a second. That’s 33 hours to restore a database which is just ~606,816 Megs (~592 Gigs) in size. Before you blame the SQL Servers or the network: all of these servers are physical servers running on Cisco UCS hardware, the network is all 10 Gig, and the storage on the new servers is a Pure Storage array. The proof that the network and storage were fine was the full restore done from the backup to disk, which was restored off of a UNC path still attached to the production server.

When testing these appliances, make sure that doing restores within an acceptable time window is part of your testing practice. If we had found this problem during a system-down situation, the company would probably have gone out of business; there’s no way the business could have afforded to be down for ~25 days waiting for the database to restore.
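If you want to keep an eye on how far along a long-running restore is (rather than waiting a day to find out it’s at 2%), a generic query against sys.dm_exec_requests like the one below shows the percent complete and an estimated finish time; nothing here is specific to the Data Domain.

SELECT r.session_id,
       r.command,
       r.percent_complete,
       DATEADD(SECOND, r.estimated_completion_time / 1000, GETDATE()) AS estimated_finish_time
FROM sys.dm_exec_requests r
WHERE r.command IN ('RESTORE DATABASE', 'BACKUP DATABASE');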

Needless to say, as soon as this problem came up, we provisioned a huge LUN to the servers to start writing backups to. We’ll figure out how to get the backups offsite (the primary reason that the Data Domain exists in this environment) another day (and in another blog post).

How could EMC fix this?

Step 1 would be to stop telling people that it can replicate very large databases from site to site. While it technically can, doing so while still maintaining some level of restore performance doesn’t seem possible at the moment.

Step 2 would be to stop telling people to disable SQL Server backup compression to make replication work. Again, while this does make replication work, the restore performance is horrible.

Step 3 would be to figure out and resolve the performance problem when reading files off the appliance, especially large files with huge amounts of deduplicated blocks within the backup. This rehydration problem is the performance killer here. It has to be possible: other array vendors do it within their normal LUNs that have deduplication, and EMC does it within their arrays which have deduplicated data, but on the Data Domain the performance sucks. Something needs to be done about this.

Denny


July 17, 2015  6:00 PM

Recommended reading from mrdenny for July 17, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: heigesr2 also known as Rick Heiges

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 15, 2015  6:00 PM

Multi-threading Backups

Denny Cherry
Backup and Recovery, Database performance, SQL Server

In the modern world of huge amounts of data, backing up databases within a reasonable amount of time has become harder and harder for people with massive amounts of data. Several of my clients have so many databases, with so much data in them, that they simply can’t complete full backups of the databases within a 24 hour window. This creates some major problems for them, as backups end up running all through the business day. In these cases, where there are lots of databases which each have large amounts of data, the best bet is to multi-thread the database backups so that multiple backups can be taken at once.

But doing that gets tricky, fast. If you create a job for each database you have to track all of those jobs and manually add new backup jobs when new databases are added. Then you need to track how long it takes to back up each database so that you aren’t running too many backups at once, while also making sure the system isn’t sitting idle during the maintenance window.

To get around this for these clients I put together a process that uses 8 database backup jobs per server, allowing 8 databases to be backed up at a time. As part of the process, the databases are sorted by size so that the duration of the backups is spread across the backup jobs as evenly as possible. Depending on the skew in your database sizes, the jobs may or may not run for the same duration.
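As a rough sketch of the idea (this is an illustration, not the actual logic from the downloadable script), the databases can be dealt out to the 8 backup jobs round-robin in size order, so each job gets a similar mix of large and small databases. Here the sizes are computed directly from sys.master_files; the real process uses the cached sizes described below.

;WITH db_sizes AS (
    SELECT d.name AS database_name,
           SUM(mf.size) * 8 / 1024 AS size_mb -- size is in 8KB pages
    FROM sys.databases d
    INNER JOIN sys.master_files mf ON mf.database_id = d.database_id
    WHERE d.name <> 'tempdb'
    GROUP BY d.name
)
SELECT database_name,
       size_mb,
       ((ROW_NUMBER() OVER (ORDER BY size_mb DESC) - 1) % 8) + 1 AS backup_job_number
FROM db_sizes
ORDER BY backup_job_number, size_mb DESC;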

My deployment script creates a new database called “BackupData”. You can change the name if you want, but you’ll need to update all of the stored procedures to use whatever database name you specify. This database contains a single table which holds a cached copy of the output of sp_helpdb. This is done to reduce the number of times that sp_helpdb is called, as it is an expensive stored procedure to run and the exact values aren’t needed every time. A separate job runs this stored procedure nightly to refresh the cached values.
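A minimal sketch of that caching approach is below. The table and procedure names are hypothetical (the downloadable script has its own), and to keep the column list simple I’m computing sizes from sys.master_files here instead of capturing the sp_helpdb output the way the real script does.

USE [BackupData];
GO

CREATE TABLE dbo.DatabaseSizeCache (
    database_name sysname NOT NULL PRIMARY KEY,
    size_mb bigint NOT NULL,
    captured_at datetime NOT NULL DEFAULT (GETDATE())
);
GO

CREATE PROCEDURE dbo.RefreshDatabaseSizeCache
AS
BEGIN
    SET NOCOUNT ON;

    -- Rebuild the cache; this is what the nightly SQL Agent job would call.
    TRUNCATE TABLE dbo.DatabaseSizeCache;

    INSERT INTO dbo.DatabaseSizeCache (database_name, size_mb)
    SELECT d.name,
           SUM(mf.size) * 8 / 1024 -- size is in 8KB pages
    FROM sys.databases d
    INNER JOIN sys.master_files mf ON mf.database_id = d.database_id
    GROUP BY d.name;
END;
GO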

When you run the SQL script (linked below) it’ll create a bunch of SQL Agent jobs as well as the needed database, table and various stored procedures. If you don’t want differential backups you can simply delete the differential jobs and change the full backups to run daily instead of weekly.

You’ll want to change the network paths listed on lines 160-164 of the script to the network share or folder that you want to back up to. In this case I’m using a network share. If you are going to use a network share you’ll want 10 Gig networking in place, SMB3 on both the SQL Server and the target server (Windows 2012 or newer), and storage that is fast enough to keep up with the writes. If you are writing to a virtual tape library like a Data Domain, you’ll want to make sure that it supports SMB3, as you’ll be writing a lot of data to it very quickly.
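For reference, a backup to a network share from one of the jobs ends up looking something like the statement below; the UNC path is a hypothetical stand-in for whatever you put on lines 160-164 of the script.

BACKUP DATABASE [YourDatabase]
TO DISK = N'\\backupserver\SQLBackups\YourDatabase_FULL.bak' -- hypothetical UNC path
WITH COMPRESSION, -- drop COMPRESSION if your backup target dedupes the data
     CHECKSUM, STATS = 5;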

This stored procedure code is a little rough and the documentation isn’t all that great, but it gets the job done.

You can download the SQL Script to make all this happen here.

Thanks,
Denny


July 8, 2015  6:00 PM

Be Careful When Starting Up Azure VMs Running SQL Server

Denny Cherry
Firewall configuration, SQL Azure, SQL Server, Windows Azure

So Microsoft has done something pretty dumb with the Azure VMs which run Microsoft SQL Server. By default the front end firewall (the one that allows or blocks traffic from the public Internet to the VMs) allows traffic to the default SQL Server port, 1433. At first this is fine, until you open port 1433 on the Windows firewall so that other VMs can connect to SQL Server. Now you’ve got a problem: the public firewall is open and the Windows firewall is open, so anyone who attempts to connect to port 1433 from the outside has direct access to the SQL Server instance.

So when creating VMs that will run SQL Server from the default SQL Server template, you’ll need to go into the Azure portal and change the firewall endpoints. To do this, edit the properties of the VM and open the settings, then edit the Endpoints.

If you see the “SQL Server” endpoint as shown below, and you’ve opened TCP port 1433 in the Windows Firewall on the VM, then the entire public Internet has access to your SQL Server VM.

[Screenshot: the VM’s endpoint settings showing the public “SQL Server” endpoint]

To remove it, mouse over the SQL Server endpoint, click the menu button shown below, then click “Delete” from the context menu that appears.

[Screenshot: the endpoint’s context menu with the “Delete” option]

Repeat this for each SQL Server VM that you’ve deployed using Microsoft’s SQL VM template.

If you’ve set up SQL Server VMs in Azure within the last couple of months, you’ll want to go check the Azure endpoints and make sure you don’t have a firewall hole that you weren’t expecting. I’ve spoken to the Azure team at Microsoft about this, and the default template is being fixed so that it isn’t set up this way anymore, if it isn’t fixed already.

Denny


July 1, 2015  10:00 AM

Splitting Files for SQL DW

Denny Cherry
Azure, SQL Azure, SQL Server, Windows Azure

When doing data warehouse loads into SQL DW, you’ll want to break the single large file that you extract into multiple smaller files in order to load into SQL DW (via Azure Blob Storage) as quickly as possible. There are a few ways you can do this. One way would be to handle it in the ETL that extracts the data from your source database; depending on how you are extracting the data, this can be anywhere from painful to REALLY, REALLY painful.

A much easier way is to extract the data to a single text file, then split that file using a command line tool. Now you are probably asking yourself where you might get such a handy command line tool. Thankfully you can download it right here. While doing some work for a client on SQL DW we put this tool together, and since it’s a command line tool you can easily build it into your ETL process. The only requirement is that you have .NET 4.0 installed on the machine running the tool. It runs very quickly and will create evenly sized files from your large source file. I used it recently on a ~16.7 Gig file and it processed the file in about 20 minutes (keep in mind I was using a slow 7200 RPM consumer hard drive). Smaller 2 Gig files processed in about a minute (again, on the same slow disk). As time permits we’ll work on making the software run even faster.

Denny

