SQL Server with Mr. Denny


September 2, 2015  6:39 PM

PASS Summit 2015 1st Timers Webcast is Next Week

Denny Cherry
SQL Server

Just a reminder that if you are planning on attending the PASS Summit this year, you'll want to make sure to register for my 1st timers webcast, which is scheduled for September 7th, 2015 at 1pm Pacific Time. I know it's a holiday here in the US, but apparently I forgot that when I scheduled it, and it's one of the few days that I'm actually in the US during the month, so you take what you can get.

To register for the session, please visit this URL. Once you have registered you'll be able to get the conference call details as well as download a calendar reminder.

If you haven't ever attended the PASS Summit before, we'll cover all the important things you need to know about the event, such as how to get from the airport to the convention center (hint: you don't want to rent a car), where the good food in town is, how to find things inside the convention center (it's a big place if you've never been there before), and much more.

So get signed up for the webcast, get the calendar entry put into your calendar, and I’ll see you on Monday September 7th.

Denny

August 26, 2015  6:00 PM

Connecting To SQL Server Running On An Azure VM

Denny Cherry
Azure, Cloud Virtual Machine Security, SQL Server, Virtual Machine Security, Virtual Machines, Windows Azure

If you have set up a Windows Azure Virtual Machine running SQL Server, you may have noticed that you can't connect to the SQL Server running on the machine using SQL Server Management Studio on your workstation (desktop, laptop, etc.) from your home or office.

This is normal; it's set up this way by default.

First I’ll explain the why, then how to change this.

The Why


The reason this is disabled is that Microsoft doesn't want to leave the SQL Server exposed to the public Internet, where people can and will try to break into the instance, pretty much 24 hours a day, 7 days a week, until they get in. Microsoft is doing you a favor by not letting this happen by default.

Fixing It


Now, assuming that you want this access, there are a couple of ways to set it up.

Site to Site VPN


The first (and best) is to set up a site-to-site VPN between your office and the Windows Azure network. This requires a bit of configuration and a little understanding of networking (and access to make changes to your office's router). If you aren't the IT guy/gal, you'll need to talk to the IT guy/gal in order to get this done. If you are the IT guy/gal and you have access to the router, Microsoft has provided some scripts to make this much easier to complete.

The first thing you'll want to do is connect to your Azure account and browse to the virtual networks. Select the network that you want to modify, and the properties of the network will pop out on the next blade. It'll look something like this.

[Screenshot: virtual network properties blade]

You'll want to click on the VPN Connections section on the right, select site-to-site, check the "Create gateway immediately" check box, then click on Local Site and enter a name and the public IP for your office network, as well as tell Azure what the IP subnets for your internal office network are. It'll look something like this; then click OK to get back to the "New VPN Connection" blade.

[Screenshot: site-to-site VPN configuration in progress]

Click the "Optional Gateway Configuration" button and change the routing from Dynamic to Static. If you want to change the IP subnet for the private side of the VPN within Azure, you can do that here as well. You can also select the gateway size, Normal or High Performance, but the High Performance one will probably cost more money. If you are a large enterprise you probably need the High Performance option; if you are a small/medium business the Normal gateway will be just fine. Click OK all the way back down to the virtual network blade and wait for the network changes to be made. It can take 5-10 minutes for the VPN endpoint to be created and spun up.

After Azure is done doing its thing, the configuration section of the portal will look something like this.

[Screenshot: virtual network blade with the VPN gateway created]

Click on the “VPN Connections” panel and a new blade will open. At the top you can download the script to configure the office router to use the VPN connection.

Point to Site VPN


If you don't want to set up a site-to-site VPN so that all office users can access Azure, or if you don't have access to do so, you can set up a VPN from your specific machine to the Azure cloud. Just keep in mind that this option doesn't support any user-level authentication, so if you give someone else access to your Azure network this way, you won't be able to remove their access without revoking access for all users and setting everyone up from scratch.

The first thing you'll want to do is connect to your Azure account and browse to the virtual networks. Select the network that you want to modify, and the properties of the network will pop out on the next blade. It'll look something like this.

[Screenshot: virtual network properties blade]

You'll want to click on the VPN Connections section on the right, select Point-to-site, and tell Azure what IP subnet should be used for people who VPN in. Any private IP subnet will work as long as you haven't used it in one of your Azure networks already. The portal will verify that the IP subnet you enter will work. Something like "192.168.5.0/24" should work fine.

Click OK all the way back down to the virtual network blade and wait for the network changes to be made. This can take 5-10 minutes for the VPN endpoint to be created and spun up.

After Azure is done doing its thing, the configuration section of the portal will look something like this (but with the Point-to-site icon in color).

[Screenshot: virtual network blade with the VPN gateway created]

Click on the “VPN Connections” panel and a new blade will open. At the top you can download the VPN software that you will need to connect your computer to the Azure network securely.

Opening Access to the Virtual Machine from the Public Internet


If for some reason you don't want to VPN in, first you need to figure out why. If there's a really good reason not to VPN in, then continue with setting up direct Internet access to the SQL Server. But keep in mind: when I talk to clients about this sort of thing, there is almost never a good reason to set up access to the VM directly from the Internet.

To open an endpoint, browse to the VM in the Azure portal and open its properties, then click the "All Settings" option and select "Endpoints". It'll look something like this.

[Screenshot: VM settings blade showing Endpoints]

If you see a "SQL Server" endpoint with 0 ACL rules, then the work is half done (shown above). If there are already ACL rules, then you should be finished unless you need to add more.

If there is no SQL Server endpoint, click the "Add" button at the top of the Endpoints blade. Name the endpoint "SQL Server", select the TCP protocol, then set the ports to 1433 (or whatever TCP port you want to use, but 1433 is the default). Set up access rules for whoever needs access, block any subnets that don't need access, and then click OK back to the VM's properties.

[Screenshot: adding the SQL Server endpoint]
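If you aren't sure which TCP port your instance is actually listening on, one quick way to check from an existing connection is a query like this (it requires the VIEW SERVER STATE permission, and the port shows as NULL if you're connected over shared memory or named pipes instead of TCP):

-- Show the TCP port the current connection came in on
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;

If that comes back with something other than 1433, use that port number for the endpoint instead.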

Now What


At this point you can connect to the SQL Server instance through whichever method you've set up. If you are using either VPN option, you can just connect to the Virtual Machine's network name. If you are going through the public endpoint (again, this is REALLY NOT recommended), you'll need to connect to the machine's full DNS name.
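As a quick sanity check from a command prompt, a sqlcmd connection through the public endpoint would look something like the line below; the server name, port, and login are placeholders for your own values, not anything Azure creates for you.

sqlcmd -S <your VM's full public DNS name>,1433 -U <sql login> -P <password> -Q "SELECT @@SERVERNAME"

Over either VPN option you'd do the same thing, just with the Virtual Machine's network name instead of the public DNS name.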

Denny


August 19, 2015  6:00 PM

pssdiag has some useful parameters

Denny Cherry
SQL Server

Anyone who has ever called Microsoft's CSS team for support for a SQL Server has probably been told to run pssdiag against the server while reproducing the issue, or for some period of time. To run pssdiag we usually just run pssdiag.cmd, let it do its thing, then press Ctrl+C to stop it. But there are actually a few command line options available which can make using pssdiag a bit easier.

/Q – Delete the data from the output folder before running.
/E – Stop pssdiag after a given amount of time or at a specific time. If you want to run it for 10 minutes, for example, use "/E +00:10:00". If you want to stop it at 10pm, use "/E 22:00:00".
/B – Start the pssdiag capture after a given amount of time or at a specific time. If you want to start it in 10 minutes, for example, use "/B +00:10:00". If you want to start it at 10pm, use "/B 22:00:00".
/O – Write the output to a different folder, such as "/O T:\pssdiag\output".
/N – Whether to use the same folder every time pssdiag runs, or to keep all the captures. Using "/N 1" always uses the same folder. Using "/N 2" appends a number to the end of the folder so that you can run the capture over and over and keep your results each time.
/M – Specifies the machine(s) to run pssdiag against if you want to run it remotely, such as "/M ServerName Server2 Server3".
/C – Whether or not to compress the files. "/C 1" uses no compression (this is the default). "/C 2" uses NTFS compression. "/C 3" uses CAB file compression.

If you need to make pssdiag a little more automated so that you can capture overnight issues, these switches give you some really good options.
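For example, to kick off a capture at 10pm, stop it two hours later, write the output to a folder on another drive, keep each run's results in its own folder, and use NTFS compression, you could combine the switches like this (the output path is just an example):

pssdiag.cmd /B 22:00:00 /E +02:00:00 /O T:\pssdiag\output /N 2 /C 2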

Denny


August 14, 2015  4:00 PM

Recommended reading from mrdenny for August 14, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week's SQL Server person to follow on Twitter is: ijanric, also known as Ric Howe

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


August 12, 2015  6:00 PM

The concurrent snapshot for publication ‘X’ is not available because it has not been fully generated or the Log Reader Agent is not running to activate it. If generation of the concurrent snapshot was interrupted, the Snapshot Agent for

Denny Cherry
Replication, Snapshot, SQL error messages, SQL replication, SQL Server, SQL Server replication, SQL Server transaction logs, SQL transaction logs, Transaction logs

Recently I was working on SQL Server Replication for a client, specifically removing and re-adding a table from replication to fix a different problem. After putting the table back and running the snapshot agent to create a differential snapshot, I was greeted with the error message "The concurrent snapshot for publication 'X' is not available because it has not been fully generated or the Log Reader Agent is not running to activate it. If generation of the concurrent snapshot was interrupted, the Snapshot Agent for", and yes, that's all of the error that was shown.

Doing some Google searching gave me basically no useful information, other than that people had dropped the publication and recreated it, sometimes having to completely remove replication and put it back in order to make the error go away. Neither of those was a very good option, as this publication has some very large tables in it, and the distributor has 4 servers with about 80 publications, several of which have multi-billion row tables being replicated.

On a whim I decided to look at the log reader history to make sure that everything was running smoothly there. It wasn't. I was getting the message "Approximately 123500000 log records have been scanned in pass # 3, 0 of which were marked for replication." in the history. Now, this message I've seen before. It normally means that there are WAY too many VLFs in the publication database. The last time I saw this there were over 100k VLFs. This database only had 754.

So fixing this became much easier. Slow, but easy. Fixing it simply requires shrinking the transaction log, then manually growing it again. Because replication is enabled on the database, you can't just shrink the transaction log. You have to wait for replication to read the transactions from the log, then back up the log, then shrink the file. And you have to do this over and over again because the replication log reader is running so painfully slowly, especially on a system with a heavy workload like this one. Once all the transactions had been read by the log reader, I was able to get the transaction log shrunk down to three VLFs (basically as small as you can shrink the transaction log), and the log reader was happy, as was the distribution agent. I then manually grew the transaction log back to the correct size and everything was happy with the system again.
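For reference, here's a rough sketch of that loop; the database name (PubDB), log file logical name (PubDB_log), backup path, and final log size are all placeholders you'd swap for your own values.

-- See how many VLFs the transaction log currently has (one row per VLF)
DBCC LOGINFO ('PubDB');

-- Repeat these steps until the log reader has caught up enough
-- that the shrink actually removes VLFs
BACKUP LOG [PubDB] TO DISK = 'T:\Backups\PubDB_log.trn';
USE [PubDB];
DBCC SHRINKFILE (PubDB_log, 1);

-- Once the VLF count is down, grow the log back to its normal size in one step
ALTER DATABASE [PubDB] MODIFY FILE (NAME = PubDB_log, SIZE = 50GB);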

Basically, the error message is a really bad one. It says the snapshot isn't available, but what it actually means is that the log reader hasn't yet read the transaction which tells the distributor it can process the snapshot (because everything before that point in the log has been processed). If you get this error, run DBCC LOGINFO on your publisher and see what it kicks back. If there are a lot of VLFs, you have the answer to your problem. Odds are the forum threads I found all worked because removing and rebuilding replication takes a while, so the log reader was able to get through the part of the log it needed to while they were rebuilding everything manually.

Denny


August 3, 2015  5:00 PM

PASS 2015 1st Timers Webcast

Denny Cherry
SQL Server

Like I've done for the last several years, I'll be presenting this year's PASS Summit 1st Timers webcast, which really needs a new name as it isn't just for PASS Summit first timers anymore. But anyway, during this webcast I'll be giving some information about the convention center, specifically where to find things. Last year PASS moved a bunch of stuff around, so this was pretty important information last year. I don't have the final plans for the Summit yet (but I will before the webcast).

This FREE webcast will be Monday September 7th, 2015 at 1pm Pacific Time.

To register for the session, please visit this URL. Once you have registered you'll be able to get the conference call details as well as download a calendar reminder.

If you haven't ever attended the PASS Summit before, we'll cover all the important things you need to know about the event, such as how to get from the airport to the convention center (hint: you don't want to rent a car), where the good food in town is, how to find things inside the convention center (it's a big place if you've never been there before), and much more.

So get signed up for the webcast, get the calendar entry put into your phone, and I’ll see you on Monday September 7th.

Denny


July 31, 2015  7:20 PM

Recommended reading from mrdenny for July 31, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week's SQL Server person to follow on Twitter is: ShareIT_CO, also known as ShareIT

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 24, 2015  3:57 PM

Recommended reading from mrdenny for July 24, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week's SQL Server person to follow on Twitter is: TheSmilingDBA, also known as Thomas LeBlanc

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 22, 2015  6:00 PM

Updating All Jobs To Check For AG Primary Replica

Denny Cherry
AlwaysOn Availability Groups, High Availability, Job scheduling, SQL job scheduler, SQL scripts, SQL Server

One of the really annoying things about moving existing SQL Server databases into an Availability Group is setting up the jobs to work within the confines of the AlwaysOn Availability Group. This is because you have to manually create the jobs on each server, then modify the jobs on each server to check and see if the database for that job is on the primary server.

Well, here's a T-SQL script which will help solve this problem for you. It'll add a job step to the beginning of each database job that looks to see if the job's database is on the primary replica or not. It figures out the database to use based on the databases which are used in the job steps (by looking at the database which is set for each job step).
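The check step that gets added to each job boils down to T-SQL like this (with the job's database name substituted in by the script):

IF [master].sys.fn_hadr_is_primary_replica ('YourDatabaseName') <> 1
RAISERROR('Not the PRIMARY server for this job, exiting with SUCCESS', 11, 1)

The step is created with its on-failure action set to quit the job reporting success, so when the RAISERROR fires on a secondary replica the job simply exits cleanly without running its real steps.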

If the databases are in multiple different Availability Groups, this still works fine. After the databases are set up in their Availability Groups, just run this script. If you add more jobs later you can run this script again, as it will skip any jobs which already have this job step added.

To be safe, back up your msdb database before running this. Nothing should go wrong, but you'll notice that this script deletes the job steps then puts them back in. You wouldn't want to lose all your job steps, so it's just safer to back up msdb before doing anything like this against it.

USE [msdb]

CREATE TABLE #sysjobsteps(
[job_id] [uniqueidentifier] NOT NULL,
[step_id] [int] NOT NULL,
[step_name] [sysname] NOT NULL,
[subsystem] [nvarchar](40) NOT NULL,
[command] [nvarchar](max) NULL,
[flags] [int] NOT NULL,
[additional_parameters] [nvarchar](max) NULL,
[cmdexec_success_code] [int] NOT NULL,
[on_success_action] [tinyint] NOT NULL,
[on_success_step_id] [int] NOT NULL,
[on_fail_action] [tinyint] NOT NULL,
[on_fail_step_id] [int] NOT NULL,
[server] [sysname] NULL,
[database_name] [sysname] NULL,
[database_user_name] [sysname] NULL,
[retry_attempts] [int] NOT NULL,
[retry_interval] [int] NOT NULL,
[os_run_priority] [int] NOT NULL,
[output_file_name] [nvarchar](200) NULL,
[last_run_outcome] [int] NOT NULL,
[last_run_duration] [int] NOT NULL,
[last_run_retries] [int] NOT NULL,
[last_run_date] [int] NOT NULL,
[last_run_time] [int] NOT NULL,
[proxy_id] [int] NULL,
[step_uid] [uniqueidentifier] NULL
)
insert into #sysjobsteps
SELECT *
FROM msdb.dbo.sysjobsteps

DECLARE @job_id uniqueidentifier, @step_id int, @step_name sysname, @subsystem nvarchar(40), @command nvarchar(max) , @flags int, @additional_parameters nvarchar(max),
@cmdexec_success_code int, @on_success_action tinyint, @on_success_step_id int, @on_fail_action tinyint, @on_fail_step_id int, @server sysname, @database_name sysname,
@database_user_name sysname, @retry_attempts int, @retry_interval int, @os_run_priority int, @output_file_name nvarchar(200), @last_run_outcome int, @last_run_duration int,
@last_run_retries int, @last_run_date int, @last_run_time int, @proxy_id int, @step_uid uniqueidentifier

DECLARE jobs CURSOR FOR SELECT DISTINCT job_id
from msdb.dbo.sysjobs
WHERE NOT EXISTS (SELECT *
FROM msdb.dbo.sysjobsteps
WHERE sysjobsteps.step_name = 'Check Is AG Primary'
AND sysjobs.job_id = sysjobsteps.job_id)
open jobs
FETCH NEXT FROM jobs into @job_id
WHILE @@FETCH_STATUS = 0
BEGIN
SET @database_name = (SELECT database_name
FROM (SELECT TOP (1) database_name, count(*) ct
FROM #sysjobsteps
WHERE job_id = @job_id
GROUP BY database_name
ORDER BY count(*) desc) a)

IF @database_name NOT IN (SELECT database_name
FROM sys.availability_databases_cluster)
BEGIN
GOTO SkipJob
END

SET @command = 'IF [master].sys.fn_hadr_is_primary_replica (''' + @database_name + ''') <> 1
RAISERROR(''Not the PRIMARY server for this job, exiting with SUCCESS'', 11, 1)'

DECLARE steps CURSOR FOR SELECT step_id
FROM #sysjobsteps
WHERE job_id = @job_id
ORDER BY step_id DESC
OPEN steps
FETCH NEXT FROM steps into @step_id
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC msdb.dbo.sp_delete_jobstep @job_id=@job_id, @step_id=@step_id

FETCH NEXT FROM steps into @step_id
END
CLOSE steps
DEALLOCATE steps

EXEC msdb.dbo.sp_add_jobstep @job_id=@job_id, @step_name='Check Is AG Primary',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=3,
@on_fail_action=1,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem='TSQL',
@command=@command,
@database_name='master',
@flags=0

DECLARE steps CURSOR FOR SELECT step_id+1, step_name, subsystem, command, flags, additional_parameters, cmdexec_success_code, on_success_action, on_success_step_id+1, on_fail_action, on_fail_step_id+1, server,
database_name, database_user_name, retry_attempts, retry_interval, os_run_priority, output_file_name
FROM #sysjobsteps
WHERE job_id = @job_id
order by step_id
OPEN steps
FETCH NEXT FROM steps into @step_id, @step_name, @subsystem, @command, @flags, @additional_parameters, @cmdexec_success_code, @on_success_action, @on_success_step_id, @on_fail_action, @on_fail_step_id, @server,
@database_name, @databasE_user_name, @retry_attempts, @retry_interval, @os_run_priority, @output_file_name
WHILE @@FETCH_STATUS = 0
BEGIN

EXEC msdb.dbo.sp_add_jobstep @job_id=@job_id, @step_name=@step_name,
@step_id=@step_id,
@cmdexec_success_code=@cmdexec_success_code,
@on_success_action=@on_success_action,
@on_fail_action=@on_fail_action,
@on_success_step_id=@on_success_step_id,
@on_fail_step_id=@on_fail_step_id,
@retry_attempts=@retry_attempts,
@retry_interval=@retry_interval,
@os_run_priority=@os_run_priority, @subsystem=@subsystem,
@command=@command,
@database_name=@database_name,
@flags=@flags,
@output_file_name=@output_file_name

FETCH NEXT FROM steps into @step_id, @step_name, @subsystem, @command, @flags, @additional_parameters, @cmdexec_success_code, @on_success_action, @on_success_step_id, @on_fail_action, @on_fail_step_id, @server,
@database_name, @databasE_user_name, @retry_attempts, @retry_interval, @os_run_priority, @output_file_name
END
CLOSE steps
DEALLOCATE steps

EXEC msdb.dbo.sp_update_job @job_id = @job_id, @start_step_id = 1

SkipJob:

FETCH NEXT FROM jobs into @job_id
END
CLOSE jobs
DEALLOCATE jobs

DROP TABLE #sysjobsteps


July 22, 2015  3:33 AM

PASS Summit 2015, Speaker Idol Is Back!

Denny Cherry

I'm pleased to announce that Speaker Idol is back for the PASS Summit in 2015. We're using the same format as last year. There will be three opening rounds, followed by the final round. Each of the three opening rounds will have 4 people competing. In each of these three rounds a winner and a runner-up will be selected. In the final round the three winners will present again, along with a wildcard who will be selected by the judges from the three runners-up. The sessions should be 5 minutes long and can be on any topic (preferably something SQL related).

The goal of this year's Speaker Idol competition is the same as last year's: to give speakers who are known in their region a chance to be seen at the international level so that they can improve their chances of being selected to speak in future years, to get some real-time feedback from other speakers about their presentation style, and to help them improve as speakers.

To enter, you need to meet the following requirements:
  1. You have spoken at a SQL Saturday or SQL Rally event (or other large non-PASS event, such as SQL Bits, a TechEd conference, etc.).
  2. You have not spoken at the PASS Summit or PASS Business Analytics Conference for a breakout, spotlight or pre-con session. (If you gave a lightning talk and no other session you are eligible.)
  3. You will be in Seattle during the PASS Summit (attending PASS isn’t needed, and if you aren’t attending PASS we will get you to the session room for Speaker Idol).
  4. You are available to attend whichever Speaker Idol session you are assigned to.
  5. You have a 5 minute technical presentation which you can give.
  6. You sign up on the registration form before September 31st, 2015 at 11:59:59pm (23:59:59) Pacific time according to Denny Cherry’s Verizon Cell Phone.

To enter the 2015 Speaker Idol you'll need to fill out this handy form. The big difference from last year's Speaker Idol is that, in addition to being able to submit yourself, session attendees can submit speakers that they have seen at prior events and really liked. Just fill out the same form with the speaker's name (and email address if you have it) and we'll contact them and let them know that they've been nominated to present at this year's Speaker Idol.

Thanks to PASS for allowing me to run with this again.

Denny

