SQL Server with Mr. Denny


December 9, 2019  4:00 PM

Microsoft wants bugs found in SQL Server and reported to Microsoft


There are two groups of people that use SQL Server: those that submit bugs to Microsoft, and those that blog about the problems that they find.

Those that submit their feedback about bugs to Microsoft want the issues to be fixed.

Those that blog about potential bugs in order to be seen just want to be famous.

Be the first group, not the second. Make SQL Server (or whatever product) better.

Denny

December 2, 2019  4:00 PM

DCAC Supporting Elizabeth Glaser Pediatric AIDS Foundation (@egpaf) #WorldAIDSDay – Redux


'Tis the season, and Denny Cherry & Associates Consulting has a quick way for you to make a difference. All you have to do is click through to our donation webpage, and that's it. For clicking through, DCAC will donate $1 to the Elizabeth Glaser Pediatric AIDS Foundation.

Every dollar raised through your participation matters. Having assisted 30 million pregnant women at more than 5,000 clinics in 19 countries, the Elizabeth Glaser Pediatric AIDS Foundation continues to seek an end to global pediatric HIV/AIDS through prevention and treatment programs, research, and advocacy.

We're working with the Elizabeth Glaser Pediatric AIDS Foundation in part because 500 children are newly infected with HIV each day, simply because their families lack access to the health services they need to prevent transmission. This is something that everyone at DCAC wants to help solve. We have come so far, but progress toward ending AIDS in children and youth is stalling. We must redouble our efforts or risk losing the progress we've gained!

Want to help out by DONATING MORE? EXCELLENT! We're matching donations! Click through here to donate to EGPAF! DCAC will match every dollar you choose to donate through our link, up to a total of $10,000 US, for donations made from December 1, 2019 through December 24, 2019 at 11:59:59 PM Pacific Time.

We announced this yesterday, which was World AIDS Day, but we wanted to announce it again today to make sure that as many people as possible see it. We urge you to click through to trigger the $1 donation and, if you can, make a donation of your own, which DCAC will match, to the Elizabeth Glaser Pediatric AIDS Foundation.

Denny


December 1, 2019  4:00 PM

DCAC Supporting Elizabeth Glaser Pediatric AIDS Foundation (@egpaf) #WorldAIDSDay


'Tis the season, and Denny Cherry & Associates Consulting has a quick way for you to make a difference. All you have to do is click through to our donation webpage, and that's it. For clicking through, DCAC will donate $1 to the Elizabeth Glaser Pediatric AIDS Foundation.

Every dollar raised through your participation matters. Having assisted 30 million pregnant women at more than 5,000 clinics in 19 countries, the Elizabeth Glaser Pediatric AIDS Foundation continues to seek an end to global pediatric HIV/AIDS through prevention and treatment programs, research, and advocacy.

We're working with the Elizabeth Glaser Pediatric AIDS Foundation in part because 500 children are newly infected with HIV each day, simply because their families lack access to the health services they need to prevent transmission. This is something that everyone at DCAC wants to help solve. We have come so far, but progress toward ending AIDS in children and youth is stalling. We must redouble our efforts or risk losing the progress we've gained!

Want to help out by DONATING MORE? EXCELLENT! We're matching donations! Click through here to donate to EGPAF! DCAC will match every dollar you choose to donate through our link, up to a total of $10,000 US, for donations made from December 1, 2019 through December 24, 2019 at 11:59:59 PM Pacific Time.

We're proud to be able to announce our donation page and matching fund on World AIDS Day, which is today, December 1st, 2019 (which is why we are posting on a Sunday). We urge you to click through to trigger the $1 donation and, if you can, make a donation of your own, which DCAC will match, to the Elizabeth Glaser Pediatric AIDS Foundation.

Denny


November 25, 2019  4:00 PM

Let the Holidays Begin


This week kicks off the holiday season, with Thanksgiving in the US on Thursday and Black Friday the day after.

[Photo: Locked Bike, via https://www.flickr.com/photos/123327536@N08/23891946434]

Enjoy the holiday season starting this week, and for those in IT who are working on Friday to support Black Friday, may you have a nice, simple day.

Denny


November 18, 2019  4:00 PM

SQL Server 2019 Big Data and Licensing


One of the big features of SQL Server 2019 is the new Big Data Cluster feature (BDC for short). BDC doesn't have its own licensing per se (it does, but only if you don't get enough "free" cores), as you get BDC for free when you license what is called the head node.

The first thing to remember about BDC is that there's the concept of a head node. This is what the users connect to, and it runs good old-fashioned SQL Server.

From a licensing perspective, you get the ability to run X number of BDC data cores when you purchase Y CPU cores of SQL Server 2019 for the head node.

The math for this is pretty straightforward. If you purchase SQL Server 2019 Enterprise Edition for the head node, you get 8 BDC cores for each SQL Server 2019 core that you purchase for the head node.

If you purchase SQL Server 2019 Standard Edition for the head node, you get 1 BDC core for each SQL Server 2019 core that you purchase for the head node.

So what this means is that if you have 4 cores on the head node and 32 BDC cores, then there is nothing extra to purchase if you buy Enterprise Edition. If, however, you purchase SQL Server 2019 Standard Edition for the head node, then an additional 28 BDC cores need to be purchased.

The next question at this point is: how much do extra BDC cores cost to run? The cores are sold on a subscription basis at $400 per core, per year (and the licenses are sold in two-core packs). So in our example, having SQL Server 2019 Standard Edition and needing 28 BDC cores would cost $11,200 per year to run BDC. Given that SQL Server 2019 Enterprise Edition costs $14,992 for four cores, while Standard Edition costs $14,344 plus the $11,200 for BDC, or $25,544 for SQL and BDC in the first year, buying Enterprise Edition is going to be the cheaper option for a system such as this. Depending on the number of head node cores and the number of BDC cores, the math here can get very interesting and will take some evaluation before selecting a licensing model.

To make this easier, let's look at a chart for the sample of 4 cores on the head node and 32 BDC cores. (You'll probably want Software Assurance on this, which is an extra cost that I'm not including here.)

License Type         License Cost    BDC Annual Cost    First-Year Cost    Additional Years Cost
Standard Edition     $14,344         $11,200            $25,544            $11,200
Enterprise Edition   $14,992         $0                 $14,992            $0
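If you want to sanity-check the math for your own configuration, the calculation is easy to script. This is just a sketch using the list prices quoted above (your actual pricing may differ), and it ignores the rounding up to two-core packs:

declare @headNodeCores int = 4;        -- SQL Server 2019 cores licensed on the head node
declare @bdcCores int = 32;            -- cores needed for the BDC nodes
declare @bdcPricePerCore money = 400;  -- annual subscription price per BDC core

-- Enterprise Edition includes 8 BDC cores per licensed head node core; Standard includes 1
select e.Edition,
       IncludedBdcCores = @headNodeCores * e.Ratio,
       ExtraBdcCores = x.Extra,
       AnnualBdcCost = x.Extra * @bdcPricePerCore
from (values (N'Standard Edition', 1), (N'Enterprise Edition', 8)) as e(Edition, Ratio)
cross apply (select Extra = iif(@bdcCores > @headNodeCores * e.Ratio,
                                @bdcCores - @headNodeCores * e.Ratio, 0)) as x;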

As you can see, depending on how you scale this, it could become an interesting conversation.

Denny


November 11, 2019  4:00 PM

How to test for query regressions when upgrading


One of the things that you want to test for when doing an upgrade of SQL Server is query regressions. Regressions happen when the new version of SQL Server makes a poor decision when running the same code that you ran before. Testing an application against the new version should always be done, even though everything is supposed to work perfectly. You never know until you test, and since Microsoft hasn't tested your specific workload, that leaves it up to you.

Prepping for the test

This testing does require a test machine to run your workload against, but since we have the cloud, doing testing like this is easy, and you can just destroy the VM when you’re done with your testing.

To do this testing, we're going to need a few things. The first is a machine to run our test databases on. In a perfect world, it'll be the same size as the production server that you're going to be running the new production system on. It doesn't have to be, but the closer, the better. We also need a backup of the production database that we're going to restore on the test machine.

We also need a SQL Profiler capture from the production system. If possible, you want the profiler capture to have every possible query that can be run. This could be guaranteed by capturing data for a day or two, or by someone clicking all the options in the app while the trace is running. You need a profiler trace because you need to play it back, and the easiest way to play back a capture is SQL Profiler. It takes a lengthy script to create the trace that is needed, but you can see the script at the bottom of this post. You will want to change the path that the trace is writing to. The script will output a number when it runs; you'll need this number to stop the trace when you are ready. The script will create a bunch of trace files that are 1 GB in size each, so make sure that you've got plenty of drive space for the capture.

Once you've captured the data, stop the trace (the stop script is at the bottom of this post as well) and move the capture to the test server. On the test server, you'll want to install the version of SQL Server that you're moving to (hopefully SQL 2017 or newer). Once SQL is installed, restore your production database, then turn on the query store for that database (in the database properties). You'll also want to increase the size of the query store; 1,000 MB should be enough, but if it's not, go ahead and make it bigger than that.
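If you'd rather script the restore and query store setup than click through the GUI, it looks roughly like this (the database name and file paths are placeholders for your own):

-- Restore the production backup onto the test server
restore database [YourAppDb]
from disk = N'c:\temp\YourAppDb.bak'
with move N'YourAppDb' to N'c:\data\YourAppDb.mdf',
     move N'YourAppDb_log' to N'c:\data\YourAppDb_log.ldf';

-- Turn on the query store and bump its size to 1,000 MB
alter database [YourAppDb] set query_store = on;
alter database [YourAppDb] set query_store (operation_mode = read_write, max_storage_size_mb = 1000);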

Once the query store is enabled (do NOT change the compatibility level yet), we’re ready for the first run of our profiler trace.

Checking for regressions

Open SQL Profiler on the test server and open the first file of the trace. Once that's done, click the "Replay" drop-down menu, and then click "Start." It'll then prompt you to connect to a server; enter the name of the test server. It's going to give you the option to use multiple threads, and you'll probably want to switch to that option.

Once you are happy with these settings, click “OK” to start the replay.  Depending on how big the trace is, this might take a while. At a client, when I did this recently, we captured data for about 15 minutes, and it took about 7 hours to replay.

Once the replay is done, change the compatibility level to whatever version you plan to change the database to (usually the newest compatibility level).
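For example, if you're moving to the SQL Server 2017 compatibility level, that change looks like this (the database name is a placeholder; use 150 for SQL Server 2019):

-- Check the current compatibility level first
select name, compatibility_level from sys.databases where name = N'YourAppDb';

-- Move the database to the SQL Server 2017 compatibility level
alter database [YourAppDb] set compatibility_level = 140;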

At this point, we need to run the trace replay again. Once it's done, we can go into the query store and see if anything regressed. Microsoft has made looking for regressed queries very easy; there's a query store report you can open up. If you open the "Regressed Queries" report, it'll show you all the queries whose plans regressed due to changes in the SQL Server engine. You can then investigate these queries and hint them or rewrite them to fix them. Once those queries are all fixed, the system can be upgraded.
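Hints and rewrites aren't the only fix; the query store can also force a regressed query back onto the plan it used before the compatibility level change. The query_id and plan_id values below are made-up examples; you'd read the real ones off the Regressed Queries report:

-- Force the last known good plan for a regressed query
exec sp_query_store_force_plan @query_id = 42, @plan_id = 17;

-- Undo the forcing later if you fix the query properly
exec sp_query_store_unforce_plan @query_id = 42, @plan_id = 17;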

In the case of a client system that I was working on recently, we had to hint a few queries, but those hints weren't available in the old version of SQL Server that we were moving off of, so they couldn't be applied until after the upgrade was done. We simply built that into the upgrade plan: there would be a couple more minutes of outage while we added hints to a bunch of procedures, and then we opened the gates and let users onto the system.

Depending on the regressions that are found, you might need some development time to see what's going on. (Hint: we can help you figure out the regressions, and what needs to be hinted versus what needs to be rewritten.)

Denny

 

To start the trace

-- Create a server-side trace. Option 2 (TRACE_FILE_ROLLOVER) rolls over to a new
-- file whenever the current file hits @maxfilesize (in MB, so 1 GB files here).
declare @p1 int
declare @maxfilesize bigint = 1024
exec sp_trace_create @p1 output,2,N'c:\temp\Replay',@maxfilesize,NULL
select @p1 -- note the trace ID this returns; you'll need it to stop the trace
-- The sp_trace_setevent calls below add the events and columns (logins, logouts,
-- existing connections, RPC, batch, cursor, and prepared SQL events) that the
-- Profiler replay feature needs.
exec sp_trace_setevent @p1,14,1,1
exec sp_trace_setevent @p1,14,9,1
exec sp_trace_setevent @p1,14,10,1
exec sp_trace_setevent @p1,14,11,1
exec sp_trace_setevent @p1,14,6,1
exec sp_trace_setevent @p1,14,12,1
exec sp_trace_setevent @p1,14,14,1
exec sp_trace_setevent @p1,14,3,1
exec sp_trace_setevent @p1,14,35,1
exec sp_trace_setevent @p1,15,3,1
exec sp_trace_setevent @p1,15,35,1
exec sp_trace_setevent @p1,17,3,1
exec sp_trace_setevent @p1,17,35,1
exec sp_trace_setevent @p1,10,3,1
exec sp_trace_setevent @p1,10,35,1
exec sp_trace_setevent @p1,12,3,1
exec sp_trace_setevent @p1,12,35,1
exec sp_trace_setevent @p1,15,11,1
exec sp_trace_setevent @p1,15,6,1
exec sp_trace_setevent @p1,15,9,1
exec sp_trace_setevent @p1,15,10,1
exec sp_trace_setevent @p1,15,12,1
exec sp_trace_setevent @p1,15,13,1
exec sp_trace_setevent @p1,15,14,1
exec sp_trace_setevent @p1,15,15,1
exec sp_trace_setevent @p1,15,16,1
exec sp_trace_setevent @p1,15,17,1
exec sp_trace_setevent @p1,15,18,1
exec sp_trace_setevent @p1,17,1,1
exec sp_trace_setevent @p1,17,9,1
exec sp_trace_setevent @p1,17,10,1
exec sp_trace_setevent @p1,17,11,1
exec sp_trace_setevent @p1,17,6,1
exec sp_trace_setevent @p1,17,12,1
exec sp_trace_setevent @p1,17,14,1
exec sp_trace_setevent @p1,10,9,1
exec sp_trace_setevent @p1,10,2,1
exec sp_trace_setevent @p1,10,10,1
exec sp_trace_setevent @p1,10,6,1
exec sp_trace_setevent @p1,10,11,1
exec sp_trace_setevent @p1,10,12,1
exec sp_trace_setevent @p1,10,13,1
exec sp_trace_setevent @p1,10,14,1
exec sp_trace_setevent @p1,10,15,1
exec sp_trace_setevent @p1,10,16,1
exec sp_trace_setevent @p1,10,17,1
exec sp_trace_setevent @p1,10,18,1
exec sp_trace_setevent @p1,12,1,1
exec sp_trace_setevent @p1,12,9,1
exec sp_trace_setevent @p1,12,11,1
exec sp_trace_setevent @p1,12,6,1
exec sp_trace_setevent @p1,12,10,1
exec sp_trace_setevent @p1,12,12,1
exec sp_trace_setevent @p1,12,13,1
exec sp_trace_setevent @p1,12,14,1
exec sp_trace_setevent @p1,12,15,1
exec sp_trace_setevent @p1,12,16,1
exec sp_trace_setevent @p1,12,17,1
exec sp_trace_setevent @p1,12,18,1
exec sp_trace_setevent @p1,13,1,1
exec sp_trace_setevent @p1,13,9,1
exec sp_trace_setevent @p1,13,11,1
exec sp_trace_setevent @p1,13,6,1
exec sp_trace_setevent @p1,13,10,1
exec sp_trace_setevent @p1,13,12,1
exec sp_trace_setevent @p1,13,14,1
exec sp_trace_setevent @p1,13,3,1
exec sp_trace_setevent @p1,13,35,1
exec sp_trace_setevent @p1,70,1,1
exec sp_trace_setevent @p1,70,9,1
exec sp_trace_setevent @p1,70,11,1
exec sp_trace_setevent @p1,70,6,1
exec sp_trace_setevent @p1,70,10,1
exec sp_trace_setevent @p1,70,12,1
exec sp_trace_setevent @p1,70,14,1
exec sp_trace_setevent @p1,70,3,1
exec sp_trace_setevent @p1,70,35,1
exec sp_trace_setevent @p1,53,1,1
exec sp_trace_setevent @p1,53,9,1
exec sp_trace_setevent @p1,53,11,1
exec sp_trace_setevent @p1,53,6,1
exec sp_trace_setevent @p1,53,10,1
exec sp_trace_setevent @p1,53,12,1
exec sp_trace_setevent @p1,53,14,1
exec sp_trace_setevent @p1,53,3,1
exec sp_trace_setevent @p1,53,35,1
exec sp_trace_setevent @p1,11,1,1
exec sp_trace_setevent @p1,11,9,1
exec sp_trace_setevent @p1,11,11,1
exec sp_trace_setevent @p1,11,6,1
exec sp_trace_setevent @p1,11,10,1
exec sp_trace_setevent @p1,11,12,1
exec sp_trace_setevent @p1,11,14,1
exec sp_trace_setevent @p1,11,3,1
exec sp_trace_setevent @p1,11,35,1
exec sp_trace_setevent @p1,74,1,1
exec sp_trace_setevent @p1,74,9,1
exec sp_trace_setevent @p1,74,11,1
exec sp_trace_setevent @p1,74,6,1
exec sp_trace_setevent @p1,74,10,1
exec sp_trace_setevent @p1,74,12,1
exec sp_trace_setevent @p1,74,14,1
exec sp_trace_setevent @p1,74,3,1
exec sp_trace_setevent @p1,74,35,1
exec sp_trace_setevent @p1,71,1,1
exec sp_trace_setevent @p1,71,9,1
exec sp_trace_setevent @p1,71,11,1
exec sp_trace_setevent @p1,71,6,1
exec sp_trace_setevent @p1,71,10,1
exec sp_trace_setevent @p1,71,12,1
exec sp_trace_setevent @p1,71,14,1
exec sp_trace_setevent @p1,71,3,1
exec sp_trace_setevent @p1,71,35,1
exec sp_trace_setevent @p1,72,1,1
exec sp_trace_setevent @p1,72,9,1
exec sp_trace_setevent @p1,72,11,1
exec sp_trace_setevent @p1,72,6,1
exec sp_trace_setevent @p1,72,10,1
exec sp_trace_setevent @p1,72,12,1
exec sp_trace_setevent @p1,72,14,1
exec sp_trace_setevent @p1,72,3,1
exec sp_trace_setevent @p1,72,35,1
exec sp_trace_setevent @p1,100,1,1
exec sp_trace_setevent @p1,100,9,1
exec sp_trace_setevent @p1,100,11,1
exec sp_trace_setevent @p1,100,6,1
exec sp_trace_setevent @p1,100,10,1
exec sp_trace_setevent @p1,100,12,1
exec sp_trace_setevent @p1,100,14,1
exec sp_trace_setevent @p1,100,3,1
exec sp_trace_setevent @p1,100,35,1
exec sp_trace_setstatus @p1,1

To stop the trace

declare @p1 int = 2 -- {whatever value was output by the select in the start script}
exec sp_trace_setstatus @p1,0 -- stop the trace
exec sp_trace_setstatus @p1,2 -- close the trace and delete its definition from the server


November 4, 2019  4:00 PM

When to use SQL DB vs SQL DW


There's a lot of confusion around when to use SQL DB or SQL DW, with a lot of people assuming that the two are interchangeable; so I wanted to try to clear that up.

SQL DW

Let's talk about SQL DW first. There are some pretty hard and fast rules around SQL DW and when it should be used. SQL DW should only be used for reporting workloads where you have a proper data warehouse design for your database. It is also only going to be effective when you have a data warehouse that is at least 2 TB in size.

SQL DW came from APS (which used to be called PDW). One of the things that made APS (and PDW) so successful was that you needed consulting hours from an APS consultant to implement it successfully. With SQL DW, you can just buy a SQL DW from the Azure Portal, and off you go. There's no need to work with a consultant; you can just build your tables and start reporting. However, if you don't fully grasp data replication, data partitioning, and query processing (among other things) in SQL DW, then the odds of a successful SQL DW implementation are going to be slim to none.
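To give a feel for what getting data replication and partitioning right means: every table in SQL DW is either hash-distributed across the 60 underlying distributions, spread round-robin, or replicated to each compute node, and you pick that when you create the table. A rough sketch (the table and column names are made up):

-- A large fact table, hash-distributed on the column it's most often joined on
create table dbo.FactSales
(
    SaleKey     bigint not null,
    CustomerKey int    not null,
    SaleAmount  money  not null
)
with (distribution = hash(CustomerKey), clustered columnstore index);

-- A small dimension table, copied to every compute node to avoid data movement on joins
create table dbo.DimCustomer
(
    CustomerKey  int           not null,
    CustomerName nvarchar(100) not null
)
with (distribution = replicate, clustered index (CustomerKey));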

SQL DB

SQL DB is going to be the best fit for a few different workloads. Any OLTP workload is going to be a good fit for SQL DB. If you're coming from an on-prem solution (SQL Server, Oracle, MySQL, etc.), then you may need to make some architecture changes in your application to get the best performance at scale. Not all applications are going to require changes to the database architecture, but some will. Going into a cloud migration project with the assumption that there will be some database architecture work is a good thing: if there is, you're ready for it; if there isn't, then great.

SQL DB by itself supports databases up to 4 TB in size. There is now a feature called Hyperscale which will let your databases in SQL DB grow as large as they need to be (there are some additional costs to use Hyperscale).

Data warehouses and other reporting workloads can go into SQL DB as well. Smaller data warehouses that simply aren't big enough for SQL DW (under 2 TB in size) are great to put in SQL DB. They'll perform well, you can use things like Power BI to report off of them, and the cost for SQL DB will be much more attractive compared to SQL DW.

Some Migration Numbers

Doing some architecture work can potentially save you a ton of money as well. I was doing a POC recently for a client that was looking at an Azure migration. They have ~80,000 databases that they were looking to move. Phase 1 of the project was to move the existing SQL Servers to VMs in the cloud, at a cost of ~$80k a month. By moving these databases into SQL DB (as Phase 2), they would be able to reduce their monthly cost to ~$37k a month, a saving of ~$43k a month. Assuming that they need to spend $20k a month on developers to make this change, and it takes the developers six months to do the work ($120k in development costs), that's roughly a 3-month ROI ($120k / $43k per month ≈ 2.8 months) before the company starts banking the savings.

What to use when

What it comes down to is that SQL DW is specialized, with specific requirements (data warehousing with a well-defined schema and at least 2 TB of data), while SQL DB is more of a catch-all for everything else.

Hopefully, that helps explain when to use each one.  If you’ve still got more questions, then the team at DCAC would love to help you out.

Denny

 


October 28, 2019  4:00 PM

Is it time for an SSIS checkup?


On November 15th, 2019, Meagan Longoria and Kerry Tyler will present their webcast "Is it time for an SSIS checkup?".

Database health checks are a familiar concept to many people. We check database and server settings, ensure security is appropriately implemented, and validate HADR plans to meet business needs. But do you ever assess the health of your SSIS projects? Do you know what things you should look for? Join us as we talk through what we think should be included in a data integration health check specific to SSIS, whether you run SSIS on your own machine or as an Integration Runtime within Data Factory.

We’ll discuss server configurations such as memory allocation and Kerberos, development processes and source control, and creating and populating non-production environments. Next, we’ll show you some design patterns and anti-patterns that can affect the quality and speed of your data integration processes. Then we’ll talk about appropriate settings and use of the SSIS catalog. We will also, of course, address data security and sensitivity. And we’ll finish with maintenance and support topics.

Organizations running SSIS should be able to easily answer questions such as “What is my longest-running package?”, “Who should be contacted when a package fails, and who is their backup?”, “Is the SSISDB being backed up?”, “What is our HADR plan?”, and “What are our SLAs for data being available to consumers?”.
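Several of those questions can be answered directly from the SSIS catalog. As a rough sketch (assuming the default SSISDB catalog), the "longest-running package" question looks something like this:

-- Average duration of successful executions, slowest packages first
select top (10)
    e.folder_name,
    e.project_name,
    e.package_name,
    avg(datediff(second, e.start_time, e.end_time)) as avg_duration_seconds
from SSISDB.catalog.executions as e
where e.status = 7 -- 7 = succeeded
group by e.folder_name, e.project_name, e.package_name
order by avg_duration_seconds desc;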

Don’t let your SSIS environment become neglected. Learn the concepts you need to know to keep it healthy.

Go over to the webcast’s webpage to sign up for the webcast.

Denny


October 21, 2019  4:00 PM

Heading to the #sqlpass Summit? Have Questions? We Have Answers.


Are you planning on heading to the PASS Summit starting on November 5th? Do you have questions about the PASS Summit or Seattle in general? We've got the answers for you, and we likely answered them when we presented our Attendee Orientation for the PASS Summit last week. If you missed the live broadcast, you can watch the recording, which has just been posted online.

So click over, watch the recording, get some tips about Seattle, and we'll see you at the Summit.

Denny


October 21, 2019  3:05 PM

What is skip-2? What does it mean to you? And what is compromised to make this happen?


Skip-2 is the name of a software package that lets what appear to be state-sponsored hackers attack your SQL Server and download, change, or delete data without triggering the auditing that you have configured. I won't go into the details of how skip-2 works, as the folks at ESET have done a great job with that writeup.

What I want to talk about is how they are able to make this happen.

The first thing to note is that this does not exploit a bug in SQL Server 2012 or SQL Server 2014 (the two versions that this targets).

In this exploit, a new DLL is injected into SQL Server, much in the same way an anti-virus product can inject its DLL into SQL Server. The injected DLL intercepts calls to functions like CPwdPolicyManager::ValidatePwdForLogin. It first checks to see if the hard-coded password of the attacker has been used. If it has, it sets a flag that the other intercepted function calls will check for, and it exits without sending the authentication request to the normal function that processes authentication requests. If you aren't using their hard-coded password, then it sends your authentication request over to the authentication function just like normal.

The other functions being intercepted handle things like reporting authentication success (they wouldn't want that to trigger, as you'd then know when they were logging into your server), as well as the functions around auditing, transaction success reporting, etc. (they are all in the ESET writeup).

How they do this is pretty ingenious: they have written their own DLL, and they attach it to SQL Server, loading it higher in the process hierarchy than the DLL that normally hosts those functions (it's more complex than this, but this is basically what's being done). This allows them to intercept the needed functions and inject their own code that causes those functions to be skipped when needed (their code runs in their functions before calling the real functions).

The question that has come up already today is why this only targets SQL 2012 and SQL 2014. My assumption (and keep in mind I haven't seen the attackers' code, and I sure don't have access to the SQL Server code) is that this was originally a targeted attack that the attackers decided could be used to get other useful data as well. Why does it only work against SQL 2012 and SQL 2014? I assume that either the functions are in different places, or there are other parameters that are needed for SQL 2016+ (or both), which makes this sort of attack not work.

Is this something that you have to worry about? Hopefully not. It only affects older versions of SQL Server, and in order to deploy it, the attacker would need to successfully breach your operating system so that the code can be loaded onto the SQL Server. This means that they would already have to have breached your network and gotten a foothold on a server somehow. If they've gotten far enough into your network to install this, you've got some big problems that you need to address, quickly.
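If you want a quick inventory check of which instances are even in scope, the product version is enough to tell; skip-2 targets the SQL 2012 (major version 11) and SQL 2014 (major version 12) code bases:

declare @version nvarchar(128) = cast(serverproperty('ProductVersion') as nvarchar(128));
declare @major int = cast(left(@version, charindex('.', @version) - 1) as int);

-- Flag instances running a version that skip-2 is known to target
select @version as product_version,
       case when @major in (11, 12) then N'Version targeted by skip-2'
            else N'Not a version skip-2 targets' end as skip2_scope;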

The question becomes: why isn't this a SQL Server bug? Because they aren't exploiting a bug in SQL Server to make this happen. Injecting a DLL into SQL Server is a supported, documented process. Attaching a DLL to another DLL is a problem in Windows, and the attacker is getting into the Windows OS to deploy this.

Upgrading to SQL 2016+ is going to be a great way to get around this, as SQL 2016+ isn’t impacted by Skip-2. <sales hat>We can help with that</sales hat>.

Denny

