SQL Server with Mr. Denny


February 10, 2020  7:37 PM

Thank You Tech Target

Denny Cherry

I’ve been informed by Tech Target that they are going to be shutting down their blogging platform this month, so sadly this will be the last post that I put on this platform.  It’s been an amazing 13 years that the folks at Tech Target have been putting up with me, and I sincerely thank them for being willing to do so for so long.

I do plan on continuing to blog, but I’ll be doing it over on the DCAC website.  You’ll be able to find the newest blog posts at the bottom of the home page, or you can go straight to the Blogs page and view everything, not just from me but from all the other great bloggers on the DCAC team as well.

I’ve copied all my old content from my Tech Target blog onto my DCAC blog, so no content should be lost. I’m working on switching images over this week so there are no dead links, and hopefully, I’m able to get them all.

Thank you Tech Target, it’s been a great 13 years.

Everyone, don’t forget to update your links and RSS feeds to the DCAC page.

Denny

February 10, 2020  4:00 PM

Setting up to use External Tables in SQL Server 2019 BDC

Denny Cherry

SQL Server 2019 Big Data Clusters is one of the cool new features that’s available with the release of SQL Server 2019. The idea behind SQL Server 2019 Big Data Clusters (BDC) is that you can keep all your data on its native systems and have BDC point to the tables needed using what are called External Tables. These External Tables are basically pointers to the actual data.

Think about this scenario: you’ve got data in a SQL Server 2012 database, you’ve got other data in an Oracle database, and you need to create a report that includes columns from both sources.

In the olden days, you’d have to ETL this data from one server to another on some sort of schedule, typically to a third server (usually a data mart) so that the report could be run there.

With SQL Server 2019 BDC you don’t have to worry about creating and maintaining this ETL. You can simply use Azure Data Studio to spin up a SQL Server 2019 BDC instance (either in an Azure Kubernetes Service (AKS) environment or in an on-prem Kubernetes environment) and create external tables that point to your source data.

In order to create external tables, there’s a little bit of setup that you’ll need to do on the BDC instance.

The first step will be to create a database that you want to use. The normal CREATE DATABASE command is all you need here.

CREATE DATABASE test;

From there you'll need to create a master key in that database (after switching into the database you just created).

USE test;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = N'Pa$$word';

After the master key is created, you’ll need to create the credential that you’ll be using to log into the remote server.

CREATE DATABASE SCOPED CREDENTIAL [sql-auth]
WITH IDENTITY = N'bdc-login', SECRET = N'MyP@ssw0rd';

After the credential is created, you need to tell BDC where the remote data is going to be accessed from. You do this by creating an EXTERNAL DATA SOURCE (named [SqlDataSource] in the example below), which also specifies the login that we’re going to use to log in to the remote server.

CREATE EXTERNAL DATA SOURCE [SqlDataSource]
WITH (LOCATION = N'sqlserver://192.168.3.31', CREDENTIAL = [sql-auth]);

At this point, we’re ready to create our external tables. These are created using the CREATE EXTERNAL TABLE command. You specify the schema of the table, then in the WITH section we tell BDC what table we’re accessing the data from and what data source (server and authentication method) we’re going to use.

CREATE EXTERNAL TABLE dbo.Table1
(
ProductId int,
ProductName nvarchar(30)
)
WITH (LOCATION = 'SampleDb.dbo.Product', DATA_SOURCE = [SqlDataSource])
GO

Once the external table is set up, we can access it just like any other table. We can create multiple external tables, all pointing to different places, and join them together as needed to get the data we want.
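As a quick illustration, here’s a minimal sketch of what that looks like; dbo.OrderLines is a hypothetical local table used purely for the join example, it isn’t created anywhere above.

-- Query the external table just like a local table; the rows are
-- fetched from the remote SQL Server at query time.
SELECT TOP (10) * FROM dbo.Table1;

-- Join the external table to a hypothetical local table (dbo.OrderLines)
-- to build the kind of cross-source report described earlier.
SELECT ol.OrderId, t.ProductName, ol.Quantity
FROM dbo.OrderLines AS ol
INNER JOIN dbo.Table1 AS t ON t.ProductId = ol.ProductId;

Because the external table is resolved through the data source at query time, there’s no ETL job to schedule and the remote data is current whenever the report runs.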

One thing to keep in mind when you’re setting up BDC is database collation. If the collation of the source table is different from the collation of your BDC database, you’ll need to specify the source collation when you create your external table. That’s easily done like this, where we use the Chinese_PRC_CI_AS collation because that is the collation being used in our source tables.

CREATE EXTERNAL TABLE dbo.Table1
(
ProductId int,
ProductName nvarchar(30) COLLATE Chinese_PRC_CI_AS
)
WITH (LOCATION = 'SampleDb.dbo.Product', DATA_SOURCE = [SqlDataSource])
GO

Hopefully, this quick walkthrough on creating external tables on SQL Server 2019 BDC was helpful.

SQL Server Big Data Cluster requires a good amount of planning in order to have a successful deployment.  The team at Denny Cherry & Associates Consulting would love to help you plan and deploy SQL Server Big Data Clusters.  Contact the team and we’ll get the process started.

Denny


February 3, 2020  4:00 PM

Make sure you have your IP Scheme set before you deploy SQL Server 2019 Big Data Cluster

Denny Cherry

I was doing a POC of SQL Server 2019 Big Data Cluster (BDC) recently.  We did the POC in Azure as they didn’t have a Kubernetes environment on-prem that we could work with.  Everything they have in Azure is PaaS, so the Kubernetes environment (and the vNet that it requires) was brand new.  I set up the BDC using Azure Data Studio and everything was working fine.  We then went to set up the site-to-site VPN and found that the IP range that was used by default to set up the Kubernetes environment in Azure was already in use on their internal network (bad Microsoft, bad).

No problem, we just added in the correct IP range to the possible addresses for the vNet, added a new Subnet and moved the VMs over to the new subnet (which caused the VMs to reboot, but that was expected).

It turns out that BDC in SQL Server 2019 doesn’t like having the IPs changed for the AKS nodes.  The problem stems from the fact that BDC generates its certificates off of the IP address of the node, so if the IP address of the node changes (even if you are using DHCP for on-prem nodes and DHCP gives you a new IP address) your BDC won’t respond.

There are three possible solutions at this point.

  1. Change the IPs back to what they were before.
  2. Rebuild the certificates that BDC uses based on the new IP addresses
  3. Toss the BDC and build a new one

Option 1 wasn’t going to work, as the old IPs conflicted with the on-prem network.

Option 2 isn’t great because rebuilding the certificates is a totally undocumented process, so you have to figure out how to do it on your own. Given that our clients pay us by the hour, this is not really a good use of their money or my time (mostly their money).

Option 3 was a really good option in this case, as there was no data in the environment. Just to be safe I tossed the entire Kubernetes environment and started from scratch.

The moral of this story: make sure you never need to change the IP addresses of the Kubernetes (AKS or k8s) environment that hosts your SQL Server 2019 Big Data Cluster after the fact.  Doing so will make your Big Data Cluster unusable; so when it comes to the IP addressing of your container environment and your running SQL Server 2019 Big Data Cluster, plan ahead.

SQL Server Big Data Cluster requires a good amount of planning in order to have a successful deployment.  The team at Denny Cherry & Associates Consulting would love to help you plan and deploy SQL Server Big Data Clusters.  Contact the team and we’ll get the process started.

Denny


January 27, 2020  4:00 PM

SQL Server Performance Tuning Precon

Denny Cherry

On April 3rd, 2020 I’ll be presenting a Precon at SQL Saturday Croatia.

In this session, you will learn about SQL Server 2008 R2 through SQL Server 2017 performance tuning and optimization. Industry expert Denny Cherry will guide you through tools and best practices for tuning queries and improving performance within Microsoft SQL Server. The session walks through real-life performance problems that have been gathered and tuned using industry-standard best practices and real-world skills.

This session starts off by reviewing the various native tools which are available to you from Microsoft with your SQL Server license, and how all of these tools can be used to help you performance tune your SQL Server installation. These tools include SQL Profiler, Extended Events, Database Tuning Advisor and more. We will use these tools to identify problem areas within the SQL Server database engine and learn how to solve those problems.

We will discuss some scale-out options which are available with Microsoft SQL Server both within the database by using table partitioning to spread the workload to multiple disks, as well as AlwaysOn Availability Groups and how they can be used to scale a database out across multiple servers.

Registering

In order to register for the precon, click over to the registration page, fill out the form, and away you go. I’ll see you on April 3rd.

Denny


January 20, 2020  4:00 PM

Stop scheduling to go from testing straight to production

Denny Cherry

All too often I see the same basic pattern happening.  Software changes are written in dev, those changes move to QA, then the changes move to production.  The problem with this schedule is that there’s no time built in for problems to be found in QA, which would require sending the software back to dev, fixing the problems, and then trying QA again.

If we don’t build time into the schedule to address problems that QA finds then we have one of three options.

  1. Release software with known issues to production
  2. Cancel the release and send everything back to development
  3. Do the release late and send the software package back to development, then QA

None of these are particularly good options, as they involve either releasing known bad software (users will be mad) or missing the release date (users and/or management will be mad).

The better option is to build time into the schedule to send the package back to development to have problems fixed.  Developers are human and make mistakes; it happens, and it takes time to fix the problems that are found. This is why we have QA. It isn’t to sign off that software is perfect, it’s to find problems in the software so that they can be fixed.

Say that we are starting a dev sprint on Feb 3rd, 2020 (it’s a Monday).  The sprint goes until Feb 14th, 2020 (it’s a Friday).  Testing in QA is scheduled for a week, so it starts on Feb 20, 2020 and ends on Feb 24, 2020.  Then the production release is on Feb 27th, 2020.

Now, what happens if a problem is found on Feb 24th?  Do we push the release date out from Feb 27th, or do we release known bad software?  Neither of those options is going to be particularly popular.

What we should have between testing and the production release is at least a few days, maybe another week, for additional dev work and re-testing after QA.  If that time isn’t needed, great, the package sits for a week.  If the time is needed, then we have it. In either case, the release can happen on time and we can release known good software instead of software with known bugs.

This sort of change will require signoff from above, but it won’t really change the process that the developers go through when building the software.  What it will do is slightly reduce the number of releases that can happen each year, as each cycle is now a little bit longer than it was before.  But the end result will be more stable software and a better outcome from each software release; and that is of course the preferred end result.

Denny


January 13, 2020  4:00 PM

IT Career Energizer Podcast

Denny Cherry

A couple of weeks ago I was on the IT Career Energizer Podcast. During the course of the episode, we cover some career highlights, where I think IT is going in the next few years, and all sorts of other IT career-related things.

You can subscribe to the podcast on Apple (the episode is here) or you can get the podcast from Stitcher as well.

There’s a ton more information available via the show notes.

Hopefully, you’ll find the podcast useful and entertaining.

Denny


January 6, 2020  4:00 PM

Our EGPAF Click to Donate Campaign has wrapped up for 2019

Denny Cherry

Over the month of December (from the 1st through the 24th) we had a donation program set up with the Elizabeth Glaser Pediatric AIDS Foundation.  To help, all you had to do was click through to the page that we set up for the donation program.  This simple act would cause a $1 donation to be made. If you wanted to help out more (which several people did, and we thank you), you could click through and donate directly to EGPAF, and we’d then match those donations up to $10,000.  While we were running our donation program, our marketing firm, Mayfield Consulting, was running one as well, which followed along with ours.

Between the two programs we had a fantastic fundraiser, and we thank everyone for participating.  I’m thrilled to say that we were able to raise $15,662 for the Elizabeth Glaser Pediatric AIDS Foundation.

Thanks again everyone for helping us make this fundraiser a great success.  I’m sure that EGPAF will be able to use these donations to help save more lives.  This is a great way to finish up 2019; I hope everyone has a great New Year, and we’ll see everyone as we kick off the 20s in just a couple of days’ time.

If you haven’t had a chance to watch the video that we made with the team at EGPAF, the video is above.

Denny


December 23, 2019  4:00 PM

Dynamically Assigning an Availability Set to a VM in an ARM Template

Denny Cherry

Recently I was creating an ARM template for a client.  The idea for the ARM template was that we were going to create two VMs, and it would also create an Availability Set and assign the VMs to the Availability Set.  So far, no problem.

We then got a request to also create an ARM template for one VM without the Availability Set.  I didn’t want to create (and then have to manage) a second template; I wanted one template.  So I added a parameter called “SingleMachineInstall”, which accepts “Yes” or “No”, to the parameters section of the ARM template like this.

"SingleMachineInstall": {
  "type": "string",
  "allowedValues": [ "Yes", "No" ],
  "defaultValue": "No",
  "metadata": {
    "description": "Check this if this should be installed as a single VM. This will prevent the ILB, Availability Set and the second VM from being created."
  }
},

For all the objects that I didn’t want to create when this parameter had a value of “Yes” I simply added a condition statement to each object like this.

"condition": "[equals(parameters('SingleMachineInstall'), 'No')]",

Then came the complex part.  When there are two VMs being created, each VM needs to be dependent on the Availability Set. But when there’s only one VM, I’m not creating the Availability Set, so if the dependency is still there, the deployment will fail.  One thing that I found in my testing is that you can set a resource to be dependent on the same resource multiple times.  So in the dependsOn section of the VM template, I combined this with an IF statement. In the IF statement, if the parameter is set to “No” then we pass in the resourceId for the Availability Set; if the parameter is set to “Yes” then we pass in the NIC again, which makes the dependsOn succeed either way.

"dependsOn": [
  "[concat(parameters('VM1_Name'), '_nic0')]",
  "[if(equals(parameters('SingleMachineInstall'), 'No'), resourceId('Microsoft.Compute/availabilitySets', parameters('AvailabilitySet_Name')), concat(parameters('VM1_Name'), '_nic0'))]"
],

The final thing that we needed to figure out was the properties.availabilitySet value within the Virtual Machine configuration.  The problem with this value is that it expects a sub-property called id, like this.

"availabilitySet": {
  "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('AvailabilitySet_Name'))]"
},

Now, this couldn’t be wrapped in a simple IF statement, because then the id that is passed in would be a blank string, which throws an error.

There’s another syntax for this property which came in really handy here.  That syntax is to pass in the ID without specifying that it is an id, as shown here.

"availabilitySet": "[resourceId('Microsoft.Compute/availabilitySets', parameters('AvailabilitySet_Name'))]",

Now, this syntax is important to us because we can use it to avoid specifying an Availability Set when the parameter says not to create one.

We need one additional thing to make this work: a variable within the ARM template.  I called this variable “availabilitySet”.  Because of the way ARM works, no error is thrown if the Availability Set is never created.

"availabilitySet": {
  "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('AvailabilitySet_Name'))]"
}

Back within the VM section of the ARM template, we need to tell the ARM template to use the variable if it’s needed, and if it isn’t needed we send in a NULL value.  This way, if we need an Availability Set, we specify the child value; otherwise, we pass in nothing.

"availabilitySet": "[if(equals(parameters('SingleMachineInstall'), 'No'), variables('availabilitySet'), json('null'))]",

When I did this, I was using "apiVersion": "2019-03-01", so this may not work in later versions of the API; be warned about that.

When I was searching around on Google I couldn’t find any examples that showed how to do this. They may be out there, but I couldn’t find them.

Hopefully, this technique works for you in the few places where you need this sort of thing.

Denny


December 16, 2019  4:00 PM

Another Day, Another Ransomware Attack

Denny Cherry

This weekend New Orleans reported that they were suffering a cyber attack.  In response, the city turned off all their workstations and servers while they tried to contain the issue.  I have no issue with what the city of New Orleans had to do in order to contain the outbreak; you do what you need to do to stop it.

What I do have an issue with is that production servers were impacted by this at all. Things like ransomware are going to happen, but preparing for these sorts of problems ahead of time makes them much easier to deal with.  When it comes to something like ransomware we aren’t really that concerned with the workstations; we only really care about the servers, as that’s where the data that we need to protect is.

The first thing that we want to do in order to protect our servers is to prevent normal users from being able to RDP to the servers.  We probably don’t even want the IT teams to be able to RDP to the servers directly.  What we’ll want to set up is a jump box and that’s the only machine in the server environment which we can RDP to.  From there we can RDP from the jump box to other servers.  We can probably secure RDP access from one server to another as well.

The same applies to file share access.  There should be no servers that users have network file share access to (or any other network access).  File servers are going to be the exception to this, and they should be treated differently than the other servers.  Those file servers that users have file share access to should be treated just like the users are treated: file servers should have no RDP or file share access to the other servers in the environment.

Domain controllers should be treated similarly to file servers. Users and other servers will need file share (and other) access to the Domain Controllers in order for authentication to work as expected.  But users don’t need RDP access to the Domain Controllers.

On top of all of this, servers should not have access to connect to the Internet. The exception to this will be the Windows Update Server (WSUS) or whatever software is being used to patch the servers.  Unless there’s a specific requirement for software to have internet access, that access should not be opened up; access from the server network to the internet should be completely closed down.

While these steps aren’t going to give you 100% protection from things like ransomware, they’re going to give you a lot more protection than the “normal” setup where things are wide open.

Denny


December 9, 2019  4:00 PM

Microsoft wants bugs found in SQL Server and reported to Microsoft

Denny Cherry

There are two groups of people that use SQL Server. Those that submit bugs to Microsoft and those that blog about problems that they find.

Those that submit their feedback about bugs to Microsoft want the issues to be fixed.

Those that blog about potential bugs in order to be seen, just want to be famous.

Be the first group, not the second. Make SQL Server (or whatever product) better.

Denny

