SQL Server with Mr. Denny


December 7, 2016  4:00 PM

Settings of the Query Store GUI

Denny Cherry

The SQL Server Query Store was introduced in Azure SQL Database about a year ago and came to the on-premises product with SQL Server 2016. It is a fantastic tool that can help you find performance problems. However, when you look at the Query Store it may not be showing you the data that you are looking for. That's because the Query Store GUI doesn't know what time window you want it to look at, so by default it uses the settings that Microsoft programmed in. Getting a better view of the data that you have stored within the Query Store is just a matter of changing the settings within the Query Store GUI.

https://www.flickr.com/photos/mezuni/475143348/in/

With whatever Query Store report you are looking at within Management Studio, there's a "Configure" button in the upper right of the window. Clicking it gives you the various options for the report you are looking at; which options you see depends on the report. The settings I end up changing the most usually fall into the "Time Interval" category, so that I can change the time window that the Query Store is displaying data from.

For example, on the Regressed Queries report Management Studio defaults to showing queries from the last hour that have regressed from how they performed over the last month. However, if I'm looking at a customer's system I may want to see the queries that have changed in the last week compared to the last few months. This is where I'd set those settings.
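
If you'd rather pull that kind of data yourself, the Query Store catalog views will accept whatever time window you like. The query below is just a rough sketch (it is not the exact query the GUI runs, and averaging the pre-aggregated avg_duration values is an approximation): it lists the ten queries with the highest average duration over the last seven days.

-- Rough sketch: top 10 queries by average duration over the last 7 days.
-- avg_duration is stored in microseconds, so divide by 1000 for milliseconds.
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.count_executions) AS total_executions,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_runtime_stats_interval AS rsi
    ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
JOIN sys.query_store_plan AS p
    ON p.plan_id = rs.plan_id
JOIN sys.query_store_query AS q
    ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
WHERE rsi.start_time >= DATEADD(DAY, -7, SYSUTCDATETIME())
GROUP BY q.query_id, qt.query_sql_text
ORDER BY avg_duration_ms DESC;

Widening or narrowing the DATEADD filter is the T-SQL equivalent of changing the "Time Interval" settings in the GUI.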

Once you're done, click OK and you'll get updated reports with the data that you are looking for.

Nice and easy.

Denny

December 4, 2016  11:03 PM

Recommended reading from mrdenny for December 2, 2016

Denny Cherry
SQL Server, SQL Server 2016

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


November 30, 2016  4:00 PM

My VMware AirWatch Transaction Log File Keeps Growing

Denny Cherry

Anything that uses a database needs to have its database settings configured correctly, and that includes all of the VMware tools.  I've had a couple of customers contact me recently about VMware's AirWatch system because the transaction log on the database keeps growing.  The reason, in both cases so far, is that the database is using the default settings and the transaction log was never backed up.  I talked about this at Microsoft's TechEd conference a couple of years ago during my talk titled "Introduction to SQL Server Management for the Non-DBA".  If you prefer a VMware themed slide deck, I've given basically the same presentation at VMworld as well (registration required).

Fixing this is actually really simple.  You need to do one of two things.

  1. Change the recovery model from FULL to SIMPLE.
  2. Start taking transaction log backups of the database.

Now, how do you choose between these two options?  That's pretty easy.  Do you need point in time recoverability of the database?  If the answer to that question is yes, then select option 2.  If the answer is no, then select option 1.
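
If you want to check where a database stands before deciding, a quick sketch like the one below shows the current recovery model and the last time the transaction log was backed up, if ever (swap YourDatabaseName for the actual name of your AirWatch database):

-- Current recovery model and most recent log backup for the database
SELECT d.name,
       d.recovery_model_desc,
       MAX(b.backup_finish_date) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
   AND b.type = 'L'   -- 'L' = transaction log backup
WHERE d.name = 'YourDatabaseName'
GROUP BY d.name, d.recovery_model_desc;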

With most AirWatch systems you don't really care whether the database is restored to what it looked like yesterday or 5 minutes ago, so you usually don't need point in time recovery.  So just change the database from FULL to SIMPLE and you're good to go.  Doing this is actually really easy.  Connect to the SQL Server instance in SQL Server Management Studio, right click on the database and select Properties, change the recovery model from FULL to SIMPLE, then click OK.  That's it; the log won't grow any more.

If you need to reclaim the space because the disk is full, then you need to shrink the log file.  To do this, click the "New Query" button at the top of Management Studio and run the following command.

DBCC SHRINKFILE (2, 1024)  -- file_id 2 is normally the transaction log; 1024 is the target size in MB

That'll shrink the file down to 1 Gig and it "shouldn't" ever grow any larger than that. There's no outage involved in doing this, and no risk to the system.  Because of the way SHRINKFILE works you may need to run it a couple of times over a couple of days in order to actually get all the space back.
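
One assumption in that command is that file_id 2 is the transaction log, which is the default for most databases but isn't guaranteed. If you want to confirm it first, run something like this sketch in the AirWatch database:

-- List the log file(s), their file_id, and current size in MB
SELECT file_id, name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';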

If you don't have SQL Server Management Studio, or you can't find the Properties dialog, here's a script that'll do everything I've described.  Just run it in the AirWatch database (it could be named anything, so change the database name in the USE statement from YourDatabaseName to whatever your AirWatch database is called).

USE YourDatabaseName
GO
-- Switch whatever database we're in to the SIMPLE recovery model
declare @dbname sysname
set @dbname = db_name()

declare @sql nvarchar(1000)
set @sql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'

exec (@sql)
GO
-- Issue a couple of checkpoints so the log records can be marked as reusable
CHECKPOINT
GO
CHECKPOINT
GO
-- Shrink the transaction log (file_id 2) down to 1024 MB
DBCC SHRINKFILE (2, 1024)
GO

Denny


November 23, 2016  4:00 PM

Why Are Nested Views Horrible?

Denny Cherry

https://www.flickr.com/photos/1024/3407840726/

Nesting views in SQL Server by itself probably won't introduce performance issues. What it will do, however, is make it MUCH harder to find performance problems. When you have a top-level view calling 3 views, each of which calls 3 more views, you suddenly have a nightmare of views to unwind before you can resolve the performance issues.

There's another dirty little secret of views. Even if you aren't accessing columns from all of the tables referenced in the views, SQL Server may still need to query those tables. This increases I/O for your queries for basically no benefit. For this reason alone it often makes sense not to use views, and especially not nested views.
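
As a contrived sketch of the pattern (the table and view names here are hypothetical), picture a view stacked on top of another view that joins several tables:

-- Inner view joins three tables
CREATE VIEW dbo.vOrderDetails
AS
SELECT o.OrderID, o.OrderDate, c.CustomerName, p.ProductName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
JOIN dbo.Products AS p ON p.ProductID = o.ProductID;
GO
-- Outer view only exposes columns from Orders
CREATE VIEW dbo.vSalesSummary
AS
SELECT OrderID, OrderDate
FROM dbo.vOrderDetails;
GO
-- This query only needs data from Orders, but the plan is built from the
-- whole view stack and may still touch Customers and Products as well.
SELECT OrderID
FROM dbo.vSalesSummary
WHERE OrderDate >= '20160101';

Whether the optimizer can prune those extra joins depends on details like foreign keys and nullability, which is exactly the sort of thing that gets buried when the views are nested several layers deep.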

Now, there are plenty of reasons to use views in applications, however views shouldn't be the default way of building applications because they do have these potential problems.

While working with a client the other week we had to unwind some massively nested views. Several of these views were nested 5 and 6 levels deep, with multiple views being referenced by each view. Queries would take minutes to execute instead of the milliseconds that they should have been running in. The problems that needed to be fixed were all index based, but because of the massive number of views that needed to be reviewed it took almost a day to tune the single query.

Eventually the query was tuned and ran in about 200ms, but the cost to the customer was quite high.

Use views when they make sense, but don't use them every time. Like everything else in SQL Server, there is no one right answer for every problem. Views are one solution for a subset of problems, not the be-all and end-all solution.

Denny


November 16, 2016  3:30 PM

Big Announcements in SQL Server 2016

Denny Cherry

https://www.flickr.com/photos/epicfireworks/8058678846/

Today Microsoft announced some MAJOR changes to the SQL Server 2016 product. As of SQL Server 2016 SP1, which ships today, from a programmability perspective there is no difference between the editions anymore. All programmability features will be available in every edition, from LocalDB to Enterprise Edition (with a couple of exceptions, for various reasons, which I've tried to note below).

What’s Included?

Features such as table partitioning, Hekaton, ColumnStore, distributed partitioned views, Change Data Capture, Change Tracking, etc. will work everywhere. Basically the goal of this change is to let you deploy your database via a deployment script on any edition of SQL Server and have it just work, no matter whether you are deploying on Express or Enterprise Edition.

There are some realistic exceptions to this. Change Data Capture won't work in Express Edition or LocalDB, for example, because those editions don't include the SQL Server Agent.

This is a huge improvement for software vendors as they can now use all the features of SQL Server 2016 SP1 and above without needing to require that their customers buy Enterprise Edition.
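
If you want to confirm what an instance is running, and which of these formerly edition-specific features a database is already using, a quick check along these lines works (sys.dm_db_persisted_sku_features lists edition-specific features in use in the current database):

-- Which edition and service pack level is this instance on?
SELECT SERVERPROPERTY('Edition') AS edition,
       SERVERPROPERTY('ProductLevel') AS product_level,     -- SP1 or later for the new behavior
       SERVERPROPERTY('ProductVersion') AS product_version;

-- Which edition-specific features does the current database already use?
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;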

You may be asking yourself why you would buy Enterprise Edition after this announcement. The reason to buy Enterprise Edition hasn't really changed. Most people didn't buy Enterprise Edition to get any of these features. Customers typically buy Enterprise Edition to get access to more than 24 cores and more than 128 Gigs of RAM. This will still be the case moving forward. If you need to run SQL Server on massive hardware at massive scale, if you need scale-out HA using Always On Availability Groups, or virtualization installation rights (also known as host-based licensing), then you will still need to purchase SQL Server Enterprise Edition.

But Why?

The next question is "why is Microsoft doing this?" The reason is that Microsoft wants to make life simpler for people who develop software. They want ISVs to be able to install their software using a single codebase. They can start their customers on Express Edition, and as the customer grows simply upgrade the instance to Standard Edition, then later to Enterprise Edition, without the customer having to make ANY changes to the application database. Just buy the new SQL Server license and do a SKU upgrade.

Personally I don’t see this taking any Enterprise Edition license sales away from Microsoft (remember that Microsoft is in business to sell software). The primary reasons for buying Enterprise Edition are still there. There’s probably a couple of people that won’t need to buy Enterprise Edition, which is fine. Now they’ll be happier customers because they only need to buy the right sized license for their workload.

All this will be available in SQL Server 2016 SP1 as well as CTP1 of SQL Server v.Next (it doesn’t have a name yet) which also ships today.

How Much RAM?

When it comes to In-Memory OLTP (Hekaton) and ColumnStore there are probably some questions about how much memory you can use in editions other than Enterprise Edition. The rule will be that Hekaton and ColumnStore will each be allowed an additional 25% of the edition's memory cap, and this memory sits outside of that cap. So if you are using SQL Server Express Edition, which has a 1 Gig cap for SQL Server buffer pool memory today, you'll be able to use an additional 256 Megs of RAM for Hekaton and an additional 256 Megs of RAM for ColumnStore. For Standard Edition you can use 128 Gigs of RAM for the buffer pool, plus an additional 32 Gigs of RAM for Hekaton and an additional 32 Gigs of RAM for ColumnStore. When you use these features this effectively gives you a 50% increase in the amount of RAM you can use with these down-level editions.

With PolyBase there's an additional note. If you need to scale PolyBase out to a scale-out farm, the head node of the farm needs to be Enterprise Edition; the worker nodes of the PolyBase farm, however, can be Standard Edition. This is a massive cost savings for those using PolyBase for huge amounts of data processing. If you don't need the scale-out feature and just need normal PolyBase support within SQL Server, that will be available in all editions other than LocalDB.

Wow!

Just wow, I can’t imagine the work that went into getting this change in place.  And I’m not talking about the technical work to get this far, but the work of getting all the various people in management to approve such a radical change of taking what we have traditionally called “Enterprise Edition” features and moving them into not just Standard Edition, but into every edition of SQL Server 2016 SP1.  This is just huge.

Now there’s a few things which aren’t quite done yet because they’re going to take some more work, so we’ll hopefully see these changes rolling into SQL Server 2016 in later CUs or SPs.

In the meantime, enjoy the new features once you patch your SQL Server 2016 servers to SP1.

If you are running an older version of SQL Server don’t look for a new service pack that’ll back port these features to your older version.  The only way that you’re getting that to happen is with an upgrade to SQL Server 2016.  If you are looking for a consulting partner to help you with those upgrades, might I recommend Denny Cherry & Associates Consulting.  We are available to assist you with those projects right away.

Denny


November 9, 2016  2:00 PM

Why I love being an MVP

Denny Cherry

This week is the MVP summit and it seemed an appropriate time to talk about why I love being an MVP. (This post is going to be full of not so humble brags.)

The MVP summit is probably one of the best things about being an MVP. We as MVPs get direct access to members of the product team (in person at the Microsoft campus) where they show us outsiders all sorts of stuff that they are working on, or thinking about working on (different groups do this to different degrees, with the SQL Server team being one of the most open AFAIK).

https://www.flickr.com/photos/59632563@N04/6239670686/

Because of this openness I've been able to build close personal relationships with various members of the product team, and I'd even call several of them friends, as we've talked about non-work things when we see each other on work trips to conferences and other events. Without being a Microsoft MVP I may have met some of the folks on the product team, but we wouldn't have been stuck in the same room for a week, which gives us the opportunity to get to know each other. As this is my 9th year as an MVP, and my 9th MVP summit, I've had a good amount of time to get to know various members of the product team at this point, and hopefully I'll get to know more in future years (if I'm lucky enough to be renewed).

There are other great benefits of being an MVP, such as the MVP email lists where you can interact with MVPs from all over the world, in just about every country and just about every product group you can think of. There are the Product Group Interactions (PGIs), which are webcasts where the product group talks to the MVPs about some topic for an hour. And of course there's the MSDN subscription. All of these are great things which the MVP program and the product group give to the MVPs. But by far the most important to me are the relationships I've built up over the years, which give me some amazing insight into Microsoft and how the products are made (for good or bad), and the trust that the members of the product group place in me when they share information about the future of products (what little they like to share, even with MVPs).

Denny


November 4, 2016  8:48 PM

Recommended reading from mrdenny for November 4, 2016

Denny Cherry
AWS, Cisco UCS, Dell EMC, Vblock, VMware vSphere

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


November 2, 2016  4:00 PM

Length limit of the Run and RunOnce registry keys

Denny Cherry
Microsoft Windows

Microsoft has had the Run and RunOnce registry keys since Windows 95 and Windows NT 4.  But in the 20+ years that those keys have been there (and I've used them for a variety of things) I never knew that there was a limit on the length of the commands that you could put into those keys.

I found this while working on a client project where I needed to kick off some PowerShell automatically when the server restarted, to get it added to the domain as well as do a few other things.  But for some reason the key just wasn't running.

I did a LOT of digging around the Internet and stumbled across this old Stack Overflow post (from 2012), which in the comments mentions that there's a limit of 256 characters.  I threw the command into Microsoft Word and, lo and behold, my command was 350+ characters long.

The next step was to change the PowerShell script that created the command to put it into a batch file instead, and then set up the RunOnce key to run the batch file.  It worked on the next try.

So here it is, documented for all the world to see: sometime in the mid 1990s, someone at Microsoft capped the length of these commands at 256 characters, probably to save memory and because "why would anyone need a command longer than 256 characters?"  And today, in 2016, it bit me.

Denny


October 26, 2016  6:00 PM

When does it make sense to move to the cloud?

Denny Cherry
Amazon, Azure, Cloud Computing, Rackspace, Windows Azure

Moving to the cloud isn’t for everyone. When the cloud providers first launched their cloud platforms their approach, and marketing message, was for the most part that everything should be running in the cloud. And they were pushing that message hard.

Today, however, we have a much friendlier, cuddlier cloud, and the cloud providers understand that not everything will be running in the cloud. Hybrid configurations are not only allowed, they are encouraged (don't get me wrong, Microsoft, Amazon, VMware, Rackspace, etc. will all be thrilled if you'll put your entire platform in their cloud environments).


The reality is that not all workloads are a good fit for the cloud. If you are running highly sustained workloads, then the cloud probably isn't the right solution for your environment. The systems which work best in the cloud are the ones which can be converted to use the various PaaS services, not just the IaaS services. The PaaS services will usually provide a more cost effective hosting solution, especially for workloads which vary over time; for example, ramping up during the day and becoming almost idle overnight.

Even if running in a PaaS environment isn't an option, it may still be cost effective to run the workload in an IaaS environment. It all depends on how bursty the workload is that you plan on moving to the cloud.

The other metric that you need to consider before moving to the cloud is how long the ROI on the move will take. If you just purchased new hardware for hosting your production environment in a colo (or your own data center), then moving to the cloud probably isn't the best option for you, at least not right now. You need to get your money's worth from the hardware that you just invested in before you replace it with a cloud offering. If, however, your hardware has reached the end of its life cycle, then now is the perfect time to consider moving to the cloud, as one of the benefits of the move is that the cloud provider takes on the hardware replacement cost every few years instead of you having to take on those costs.

When looking at cloud provider costs, remember that the cloud providers are in a race to the bottom on price. Spinning up services in the various cloud platforms is getting cheaper than ever, especially if you can take advantage of long term contracts and pre-paid services, as these can knock the cost of moving to the cloud down even lower.

One big key to taking advantage of the cloud is to scale down when you don't need the capacity and scale up when you do. By scaling down resources when they aren't needed, depending on your workload you could end up running services at a lower price point about 50% of the time (16 hours of peak load 5 days a week, 8 hours of low load 5 days a week, and 24 hours of low load on the weekend breaks down to 80 peak-load hours and 88 low-load hours per week).

With these numbers in mind, moving services from on-premises to the cloud (we do recommend Microsoft Azure, as we are a Gold Cloud Platform partner with Microsoft) suddenly looks a lot more attractive.

Denny


October 19, 2016  8:10 PM

Obscurity is not Security

Denny Cherry
SQL Server

We've all heard the argument that using a different TCP port number for SQL Server, other than 1433, is more secure.  Here's the truth: it isn't.  Andy said it best here in a conversation on Twitter that happened in the #sqlhelp hash tag.  If you think that hiding your SQL Servers on a different TCP port number is going to make them harder to break into, you are sadly mistaken.

The two most common vectors for a data breach are employees and SQL injection.  Employees already know what port the SQL Server is listening on, because they already know how to connect to it.  If not, they can easily download a port scanner and scan the SQL Server for open ports, then connect to the ones that are open.  If it's a decent port scanner it'll tell them which port is SQL Server along with all the other open ports, so they'll know exactly which one to connect to.  You've slowed the attacker down by about 5-10 seconds, maybe.
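
In fact, anyone who can already log in doesn't even need the port scanner; a one-line sketch like this (it requires the VIEW SERVER STATE permission) asks SQL Server directly which TCP port it is listening on:

-- TCP port(s) in use by current connections; NULL rows are shared memory or named pipes
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;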

If the attacker is using SQL injection then they don’t care what port the SQL Server is running on. They are connecting to the web/app tier and just using whatever SQL Server connection information that the application has to get access to your database.  By changing the TCP port number you haven’t slowed them down at all.

Proper security requires understanding user and business requirements, and building a solution that meets those requirements while building proper protections into the system, so that if something does go wrong the exposure is limited.

Pretending that hiding servers and making them harder to find makes them more secure doesn't actually make them secure.  Spend time doing things which will make the environment actually secure, so that when the system is breached the data isn't all lost.  And if possible do this without making the environment harder to manage, which is all that you are really doing by changing the TCP port numbers.

Denny

