SQL Server with Mr. Denny


November 5, 2015  11:55 PM

Congrats to Our PASS Booth Prize Winner

Denny Cherry
IT conferences and events, SQL Server

We have done our raffle for the PASS 2015 Consultants Corner prize, which is a $1000 Amazon gift card. Our winner this year is …

David Sun from MasterCard.

(We actually did the drawing a few days ago, but because David works for a large company there was a lot of checking to be done before this could be announced.)

Congrats David, I’m sure that the gift card will come in handy this year.

For those who came to see us at the PASS Summit, thank you for coming by and saying hello. If you didn’t make it to the PASS Summit, be sure to get signed up for our newsletter (which is about to start up again with some great technical information). You can check out the recordings of our PASS sessions on the PASS website as soon as they are published, and don’t forget to check out the Day 1 keynote, which includes videos from myself and Joey.

Denny

November 4, 2015  4:00 PM

In Azure CApS MatTers

Denny Cherry
Azure, PowerShell cmdlets, PowerShell Scripts, SQL Server, Windows Azure, Windows PowerShell

Recently I was doing some work for a client moving some vDisks from standard storage to premium storage. Overall it was a pretty easy migration to do, just requiring a little PowerShell to make it all happen.

The part that made it hard to get done is that the PowerShell parameters for file names on blob storage are case sensitive. This is a pain when the person who set up the virtual drives (not me) wasn’t consistent in how they set up the file names. In this case the three drives which I needed to move were named like this:

  • sql1.vhd
  • SQL2.vhd
  • Sql3.vhd

This means that it takes some work and some careful looking to ensure that you have the case of each file correct so that the scripts actually work as planned. I got through it, but not without a decent amount of grumbling at my computer.
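
One way to take the guesswork out of it is to pull the exact blob names, casing and all, straight out of the storage account before building the migration script. Here’s a minimal sketch using the classic Azure storage cmdlets; the storage account name, key, and container name are placeholders for your own values:

# List every VHD blob with its exact, case sensitive name
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage account key>"
Get-AzureStorageBlob -Container "vhds" -Context $ctx | Where-Object { $_.Name -like "*.vhd" } | Select-Object Name

Copy the names out of that output and the case in your script will always match what’s actually sitting in blob storage.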

So when setting up drives for VMs in Azure, keep your naming, including the case, consistent. You’ll thank me later.

Denny


October 28, 2015  4:00 PM

My Experience Moving WordPress From @GoDaddy to @Azure

Denny Cherry
Azure, SQL Server, Web hosting, Web hosting providers

Recently I got so fed up with the lack of answers from GoDaddy about the performance problems that had started randomly showing up that I decided to look into moving our website hosting from GoDaddy to Azure. As a partner we have a bunch of credits with Azure every month that we can use for production services, so we figured why not. And frankly the performance couldn’t be any worse. Page load times with GoDaddy were 5+ seconds, and their answer was always the same: “just disable your plugins and theme and it works fine, so it must be a problem with the plugins and theme”. Except that we had been using the exact same plugins and theme for months without issue.

I knew that moving all the various sites from GoDaddy would be a pain. We have 46 domain names which point to 13 different websites, so getting everything over in a way that didn’t impact the users was going to be a challenge.

Step 1 of the process was deciding whether we should use Windows or Linux to host the sites. As every one of the sites is either PHP (like www.dcac.co) or just static HTML (like www.getazureposh.com), we opted for Linux VMs within Azure. For HA reasons we have two Linux VMs, both of which are A1 VMs. These machines each have a single CPU core and 1.75 GB of RAM, and they sit behind an Azure load balancer so we’ve got HA. That gives us basically the same CPU power and RAM per VM that we had before (GoDaddy’s site says the plan we were on had 1 CPU and 1 GB of RAM), so we’ve technically doubled our total CPU power across all visitors, but a single user should see the same basic performance if the problem really is the PHP code in the plugins and/or theme.

For the database server we’re using MySQL, so we stuck it on a Linux VM as well, with a backup database server using MySQL replication to mirror the data to a second VM.

Our next decision was where we should put these VMs. We opted for Central US, as the bulk of our customers are based in the US, so we wanted the site centrally located to our users for the quickest network access. Because this is Azure, if we started to get a lot of customers in Europe (for example) I could simply drop some web servers in Europe and have them access the database servers here in the US (over a site-to-site VPN between the two Azure data centers), and pretty quickly and easily give the European users faster access to our website.

After that was all decided I built the VMs. Since I’m a little paranoid about security, I closed all the firewall holes that Microsoft likes to leave open for new VMs except for HTTP and HTTPS. All the SSH ports got closed as well, since I won’t be accessing the machines via SSH from the outside; we already have a VPN set up into our Windows Azure environment.

The next step was to get all the files for the sites (and the backups of the MySQL databases) downloaded. That was just a matter of using wget and connecting to the GoDaddy FTP server. It took a while to download everything, as some of the sites have a LOT of old junk lying around inside them. Once that was done, the databases were created and all the data was loaded in. Then Apache was configured, along with the configuration files for all the various websites.

Once that was done, DNS was moved over and everything basically started working as expected. There were a couple of glitches with SSL which needed to be addressed, but those didn’t really impact our users, just Joey and myself.

As for performance, everything is MUCH better today than it used to be. Our average CPU usage on the new machines in Azure is 1.4% with spikes up to 5%, and that’s while I’m going through and patching all the sites with the newest WordPress build. Webpage response time is under a second based on just using the website, where before we were getting all sorts of timeouts.

Now I’ve made it look like it was pretty easy, and for the most part it really was. Thankfully I was able to take my time and move things site by site, slowly getting each one moved across and making sure it was set up correctly before moving on to the next website.

Other than speed, have I really noticed a big difference? Not really. Using the website is basically the same. The only really annoying thing is waiting for files to replicate between the two web servers, as that only happens every 5 minutes (or VPNing in and forcing them to sync when I’m doing something specific). But as most everything that WordPress does is done within the database, that isn’t that big a deal.

Denny


October 21, 2015  4:00 PM

The MERGE Statement and Large Data Sets

Denny Cherry
SQL Server

Recently I was doing some work at a customer and they were having some performance problems with their ETL into their data warehouse (see, I do know at least some BI). One of the steps that was taking a long time was a MERGE statement which was updating existing rows and inserting new rows.

The target table in this case has several hundred million rows, so it’s a pretty good sized table.

It was a rather simple MERGE statement, so we tried changing it from MERGE to separate UPDATE and INSERT commands, because I remembered someone complaining at a conference about how performance went down when they started using MERGE.
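
The rewrite itself is pretty mechanical. Here’s a minimal sketch of the pattern; the table and column names (dbo.FactSales, staging.FactSales, SaleID and so on) are made up for the example, not the customer’s actual schema:

BEGIN TRANSACTION;

-- Update the rows which already exist in the target table.
UPDATE t
SET t.Quantity = s.Quantity,
	t.SaleAmount = s.SaleAmount
FROM dbo.FactSales t
INNER JOIN staging.FactSales s ON s.SaleID = t.SaleID;

-- Insert the rows which don't exist in the target table yet.
INSERT INTO dbo.FactSales (SaleID, Quantity, SaleAmount)
SELECT s.SaleID, s.Quantity, s.SaleAmount
FROM staging.FactSales s
WHERE NOT EXISTS (SELECT 1 FROM dbo.FactSales t WHERE t.SaleID = s.SaleID);

COMMIT TRANSACTION;

The logical result is the same as the MERGE, but the optimizer gets two simpler statements to build plans for instead of one big one.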

So we changed the stored procedure and let the new version run the next morning. And boy was it a good decision. We reduced the run times of that step by about 50%. We changed another few MERGE statements which use large tables and got other great improvements, in one case reducing the run time of the job step by 75%. A heck of an improvement.

While the MERGE statement was easier to manage, the goal is to make the system run fast. We did that in this case, very nicely.

So the lesson here is that if you are working with large data sets, MERGE isn’t for you.

Denny


October 16, 2015  8:21 PM

Recommended reading from mrdenny for October 16, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. Here are a few of my favorites.

PASS Elections: Fuzzy Math

Getting data into Azure Blob Storage

Idera Intends to Buy Embarcadero Technologies

Repairing a replication subscriber

Is a heap larger than a clustered table?

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


October 12, 2015  2:00 PM

Your PASS Speaker Idol 2015 Top 12 Are…

Denny Cherry
SQL Server

The time has come. It’s time to announce the 12 people who will be competing for the glory that comes with beating down 11 of your #sqlfamily members in public. These 12 fine folks will be competing for the grand prize … a speaking spot at the PASS Summit 2016, and an awesome blog graphic that I’ll be showing publicly for the first time during the Speaker Idol sessions at PASS this year.

The 12 people who are competing (in no specific order) are:

  • Luciano Caixeta Moreira
  • Rob Volk
  • Todd Kleinhans
  • Ronald Dameron
  • Amy Herold
  • William Durkin
  • Ginger Grant
  • Bill Wolf
  • Theresa Iserman
  • Ed Watson
  • Wes Springob
  • David Maxwell

These twelve poor people will be competing and being judged at the SQL PASS Summit in three groups, one on each day. On Friday afternoon the winner from each of the three sessions, plus one runner-up (selected by the judges), will compete for the prize. Our 12 contestants have been randomly placed into the three groups which you’ll find below.

On Wednesday (3:15pm – 4:30pm) we’ll have:

  • Todd Kleinhans
  • William Durkin
  • Ginger Grant
  • Ed Watson

On Thursday (4:45pm – 6:00pm) we’ll have:

  • Rob Volk
  • Amy Herold
  • Bill Wolf
  • Wes Springob

On Friday (2:00pm – 3:15pm) we’ll have:

  • Luciano Caixeta Moreira
  • Ronald Dameron
  • Theresa Iserman
  • David Maxwell

Then the finals will be on Friday (3:30pm – 4:45pm).

I look forward to seeing everyone at the Speaker Idol sessions while we cheer on our 12 great competitors. Good luck to all of them.

Denny


October 8, 2015  7:29 PM

Your PASS Speaker Idol 2015 Judges

Denny Cherry


I’m pleased to announce that the judges for PASS Speaker Idol 2015 are:

  • Mark Simms
  • Joey D’Antoni
  • Allan Hirt
  • André Kamman
  • Hope Foley

These judges will be joined by Karen Lopez at the final session so that we have a judge who comes with a fresh set of eyes.

Stay tuned for who our contestants are…

Denny


October 7, 2015  4:00 PM

Basic Database Mirroring Monitoring

Denny Cherry
Database mirroring, Disaster Recovery, DR, High Availability, Monitoring, SQL Server, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014

If you have database mirroring set up and running, you’ll notice that it has a lack of built-in monitoring. And if you have a lot of databases, having an alert for each potential state that each database can be in can flood you with email if the network between sites goes down and comes back up, especially if it happens a few times in a minute (the record that I’ve gotten using this sort of configuration is about 700 emails in one night of network maintenance).

So I’ve put together a simplified database mirroring monitor. It’ll still tell you if the network goes down and comes back up, but it’ll do so in a slightly nicer way. You’ll want to change the @operator_name value to a valid operator on your system before you run this, otherwise you’ll get an error because the operator doesn’t exist. I’ve set the delay between retries at 5 minutes, so even if the network is going nuts you’ll get at most 12 emails from each alert per hour. Depending on how stable your network is, you may want to adjust this as needed.

Never fear, this works on SQL Server 2005 – SQL Server 2016 (CTP 2 at least).

USE [msdb]
GO
DECLARE @name sysname
SET @name = @@SERVERNAME + N' - DBM State: Mirror Connection Lost'
EXEC msdb.dbo.sp_add_alert @name=@name,
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=300,
@include_event_description_in=0,
@category_name=N'Database Mirroring',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * from DATABASE_MIRRORING_STATE_CHANGE
WHERE State = 6',
@job_id=N'00000000-0000-0000-0000-000000000000'
EXEC msdb.dbo.sp_add_notification @alert_name=@name, @operator_name=N'SQLAdmins', @notification_method = 1
GO

USE [msdb]
GO

DECLARE @name sysname
SET @name = @@SERVERNAME + N' - DBM Perf: Oldest Unsent Transaction Threshold'
EXEC msdb.dbo.sp_add_alert @name=@name,
@message_id=32040,
@severity=0,
@enabled=1,
@delay_between_responses=1800,
@include_event_description_in=0,
@category_name=N'Database Mirroring',
@job_id=N'00000000-0000-0000-0000-000000000000'
EXEC msdb.dbo.sp_add_notification @alert_name=@name, @operator_name=N'SQLAdmins', @notification_method = 1
GO

USE [msdb]
GO

DECLARE @name sysname
SET @name = @@SERVERNAME + N' - DBM State: Mirroring Suspended'
EXEC msdb.dbo.sp_add_alert @name=@name,
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=300,
@include_event_description_in=0,
@category_name=N'Database Mirroring',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * from DATABASE_MIRRORING_STATE_CHANGE
WHERE State = 9',
@job_id=N'00000000-0000-0000-0000-000000000000'
EXEC msdb.dbo.sp_add_notification @alert_name=@name, @operator_name=N'SQLAdmins', @notification_method = 1
GO

USE [msdb]
GO

DECLARE @name sysname
SET @name = @@SERVERNAME + N' - DBM State: Principal Connection Lost'
EXEC msdb.dbo.sp_add_alert @name=@name,
@message_id=0,
@severity=0,
@enabled=1,
@delay_between_responses=300,
@include_event_description_in=0,
@category_name=N'Database Mirroring',
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
@wmi_query=N'SELECT * from DATABASE_MIRRORING_STATE_CHANGE
WHERE State = 5',
@job_id=N'00000000-0000-0000-0000-000000000000'
EXEC msdb.dbo.sp_add_notification @alert_name=@name, @operator_name=N'SQLAdmins', @notification_method = 1
GO
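
Once the alerts are in place, a quick way to see where every mirrored database currently stands (handy when one of these alerts does fire) is the sys.database_mirroring catalog view. A simple example query:

SELECT d.name,
	dm.mirroring_role_desc,
	dm.mirroring_state_desc,
	dm.mirroring_safety_level_desc,
	dm.mirroring_partner_instance
FROM sys.database_mirroring dm
INNER JOIN sys.databases d ON d.database_id = dm.database_id
WHERE dm.mirroring_guid IS NOT NULL
GO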

Happy monitoring,
Denny


October 1, 2015  1:08 PM

SQL PASS Summit 2015 Discount Code

Denny Cherry
discount coupons, IT conferences, IT conferences and events, SQL Server

Are you looking to register for the SQL PASS Summit but haven’t done so already?  Would you like to save $150 off the current registration price?  Would you like to save that $150 with no strings attached?

Have I got a deal for you.  Simply use the discount code EXSP150 when you register for the PASS Summit and you’ll save $150.

That’s it.

Denny


September 30, 2015  6:00 PM

Upgrading a SQL FCI to Premium Storage in Azure Without Downtime

Denny Cherry
Azure, Cluster computing, SQL Server, Windows Azure, Windows Cluster

Recently I was asked by a client to upgrade their SQL Server failover cluster from standard storage to Azure premium storage with as little downtime as possible. Because the SQL Server instance was already clustered, this was actually a pretty straightforward process.

The first step was to figure out which node of the cluster was running as the active node, so we could start with the passive node. The next step was to tell the cluster not to allow failovers of the clustered SQL Server instance. Next we opened the SIOS Data Keeper Cluster Edition GUI and broke the mirror for the disks that we were going to upgrade. Then I logged into the Azure portal and converted the VM from a G2 into a GS2 so that premium disks could be attached. After the VM restarted (don’t forget, this is the passive node, so there’s no outage for the restart) the old disks were removed from the VM and the new disks were added. The new disks were added via PowerShell like this:

Get-AzureVM -ServiceName "ServiceName" -Name "ServerName" | Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "ServerName-T" -LUN 0 -MediaLocation "http://Something.blob.core.windows.net/vhds/ServerName-t.vhd" | Update-AzureVM
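
Detaching the old standard storage disks can be scripted the same way; a quick sketch with the classic cmdlets (the service name, server name, and LUN numbers in these commands are placeholders for your own values):

Get-AzureVM -ServiceName "ServiceName" -Name "ServerName" | Remove-AzureDataDisk -LUN 0 | Update-AzureVM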

After the disks were all added, they were formatted and given the correct drive letters.

Next SIOS Data Keeper Cluster Edition was told to restart the mirror. This forces it to do a full sync, as there’s no data on the new drives yet. That takes a long time, as we are limited to reading the data at the speed of the old standard disks (500 IOPS). Once it was done (in this case there was about 200 GB of data to replicate across three disks) the cluster could be failed over (this is the only outage in the process).

We can now upgrade the second VM to support premium storage, then change out its disks and restart the replication. It’s a long process, but it works, and there’s just a single outage along the way.

Denny

