SQL Server with Mr. Denny


December 28, 2016  12:34 AM

2016 By The Numbers

Denny Cherry
Image credit: https://www.flickr.com/photos/oskay/8668965783

Well, 2016 was a hell of a year for blogging. I recorded just under 285k page views (284,553 as of writing this on the afternoon of December 27th), which is up about 40k or so from 2015.

This year’s top 10 posts look a bit different from last year’s.  There are actually 7 repeats, but the top two were written in 2016.

  1. Fixing a storage pool that doesn’t have a read-write server
  2. SQL Server on Linux?!?!?!
  3. What exactly is MSDTC, and when do I need it? (Number 2 in 2014, Number 2 in 2015)
  4. SQL Server 2014 Licensing (Number 1 in 2015)
  5. Big Announcements in SQL Server 2016
  6. Difference Between An Index and a Primary Key (Number 3 in 2015)
  7. What Does WITH (NOLOCK) Actually Mean (Number 5 in 2015)
  8. How To Configure MSDTC On Windows 2008 (Number 1 in 2014, Number 4 in 2015)
  9. The Process Could Not Execute sp_replcmds on x (Number 7 in 2015)
  10. Calculating Fibre Latency Correctly Is a Must When Trying To Determine Storage Performance Numbers (Number 6 in 2015)

This year was another first for my blogging. I had a single post this year hit not just 20k page views, but 30k page views.  That’d be #1 on this list, which topped out at 30,749 page views (plus a few more in the last 3 working days of the year).  Number 2 up there had 14,337, while the MSDTC post had 11,643.

I review these numbers every year (I actually look at them throughout the year as well) because it gives me some insight into what people are looking for. New posts obviously get some traffic (especially when I’m able to get blog posts written and posted within minutes of the announcement), but I find it interesting that a lot of these older posts continue to get a lot of traffic, like the MSDTC post up there, which has been a top-three post by page views for 3 years in a row.  Not bad for 15+ year-old technology or a 3-year-old blog post.

So based on this multi-year trending of data, I need to keep writing about random errors that I get when working on clients’ systems, and keep blogging about Microsoft announcements 5 seconds after they are announced.  That was pretty much my plan for 2017 anyway. 🙂

Don’t forget to check out DCAC’s Ascending to Azure webcast on January 5th, 2017 where we’ll talk about some of the successful migrations we’ve done from on-premises to Microsoft Azure.

Denny

December 21, 2016  4:00 PM

SQL Server for the New or Non-dba – Nashville Edition

Denny Cherry

I’m thrilled to be able to say that I’ll be presenting a pre-con named SQL Server for the New or Non-dba at the Nashville 2017 SQL Saturday, on January 13th, 2017 at 8am CST.

In this all-day session on Microsoft SQL Server we will learn how SQL Server works and what needs to be done to keep it up and running smoothly when you don’t have a full-time database administrator on staff to help you keep it running.

In this session we will cover a variety of topics including backups, upgrade paths, indexing, database maintenance, database corruption, patching, virtualization, disk configurations, high availability, database security, database mail, anti-virus, scheduled jobs, and much, much more.

After taking this full-day session on SQL Server you’ll be prepared to take the information that we go over, get back to the office, and get your SQL Servers patched and properly configured so that they run without giving you problems for years to come.

Be sure to go register for the pre-con, as registration is required.

See you at SQL Saturday Nashville.

Denny


December 14, 2016  2:00 PM

Ascending to Azure

Denny Cherry
Azure

I’m thrilled to announce that we’ll be presenting a free webinar called Ascending to Azure on January 5th, 2017 at 10am PST / 1pm EST. During this webcast the Denny Cherry & Associates Consulting team will review some of the challenges that we’ve seen with moving customer workloads to Azure and how we’ve been able to work through those challenges.

We’ll also talk about one of the migrations that DCAC did where we moved the customer’s entire on-prem environment to Azure. SQL Servers, web servers, domain controllers, everything. That left basically no servers on-prem, giving the client maximum flexibility for scale.

Want to attend? No problem. The webcast is free, you just need to register, save the calendar entry to your computer and show up on January 5th. Pretty simple. So go register now.

Denny



December 9, 2016  10:16 PM

Recommended reading from mrdenny for December 9, 2016

Denny Cherry
SQL Server, SQL Server 2016

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


December 7, 2016  4:00 PM

Settings of the Query Store GUI

Denny Cherry

The SQL Server Query Store was introduced in Azure SQL Database about a year or so ago, and came to the on-premises version of SQL Server in SQL Server 2016. This is a fantastic tool which can help you find performance problems. However, when looking at the Query Store it may not be showing you the data that you are looking for. This is because the Query Store GUI doesn’t know what time window you want it to look at, so by default it uses the settings which Microsoft programmed in. Getting a better view of the data that you have stored within the Query Store is just a matter of changing the settings within the Query Store GUI.

Image credit: https://www.flickr.com/photos/mezuni/475143348/in/

With whatever Query Store report you are looking at within Management Studio, in the upper right of the window there’s a “Configure” button.  If you click that it’ll give you the various options for that report.  What those options are will depend on which Query Store report you have open.  The settings I end up changing the most usually fall into the “Time Interval” category, so that I can change the time window that the Query Store is displaying data from.

For example, on the Regressed Queries report Management Studio will default to showing queries from the last hour that have regressed from where they were over the last month. However, if I’m looking at a customer’s system I may want to see the queries that have changed in the last week compared to the last few months.  This is where I’d set those settings.

Once done click OK, and you’ll get updated reports with the data that you are looking for.
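
If you’d rather pull a custom time window with T-SQL instead of the GUI, here’s a minimal sketch against the Query Store catalog views (the seven-day window and the TOP 20 cutoff are just example values that I’ve picked):

-- Top 20 queries by total duration over the last 7 days,
-- pulled from the Query Store catalog views.
SELECT TOP (20)
    q.query_id,
    qt.query_sql_text,
    SUM(rs.count_executions) AS executions,
    SUM(rs.avg_duration * rs.count_executions) AS total_duration_us
FROM sys.query_store_query q
JOIN sys.query_store_query_text qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
JOIN sys.query_store_runtime_stats_interval rsi
    ON rsi.runtime_stats_interval_id = rs.runtime_stats_interval_id
WHERE rsi.start_time >= DATEADD(DAY, -7, SYSUTCDATETIME()) -- intervals are stored in UTC
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_duration_us DESC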

Nice and easy.

Denny


December 4, 2016  11:03 PM

Recommended reading from mrdenny for December 2, 2016

Denny Cherry
SQL Server, SQL Server 2016

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


November 30, 2016  4:00 PM

My VMware AirWatch Transaction Log File Keeps Growing

Denny Cherry

Anything that uses a database needs to have the database settings configured correctly, and this includes all the VMware tools.  I’ve had a couple of customers contact me recently about VMware’s AirWatch system because the transaction log on the database keeps growing.  The reason in both cases so far is that the database is using the default settings and the transaction log was never backed up.  I talked about this at Microsoft’s TechEd conference a couple of years ago during my talk titled “Introduction to SQL Server Management for the Non-DBA”.  If you prefer a VMware themed slide deck, I’ve given basically the same presentation at VMworld as well (registration required).

Fixing this is actually really simple.  You need to do one of two things.

  1. Change the recovery model from FULL to SIMPLE.
  2. Start taking transaction log backups of the database.

Now, how do you decide which of these to do?  That’s pretty easy.  Do you need point-in-time recoverability of the database?  If the answer to that question is yes, then select option 2.  If the answer to that question is no, then select option 1.

With most AirWatch systems you don’t really care if the database is restored to what it looked like yesterday or 5 minutes ago, so you usually don’t need point-in-time recovery.  So just change the database from FULL to SIMPLE and you’re good to go.  Doing this is actually really easy.  Connect to the SQL Server in SQL Server Management Studio.  Right click on the database and select Properties.  Change the recovery model from FULL to SIMPLE, then click OK.  That’s it; the log won’t grow any more.
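
If you do need point-in-time recovery and choose option 2 instead, a minimal sketch of a log backup looks like this (the database name and backup path are placeholders; in practice you’d schedule this as a SQL Server Agent job that runs every few minutes):

-- Back up the transaction log so the log space can be reused.
BACKUP LOG [YourDatabaseName]
TO DISK = N'D:\Backups\YourDatabaseName_log.trn'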

If you need to reclaim the space because the disk is full, then you need to shrink the log file.  To do this, click the “New Query” button at the top of Management Studio and run the following command.

-- File id 2 is typically the log file; 1024 is the target size in megabytes (1 Gig).
DBCC SHRINKFILE (2, 1024)

That’ll shrink the file down to 1 Gig and it “shouldn’t” ever grow any larger than that. There’s no outage to doing this, and no risk to the system.  Because of the way that SHRINKFILE works you may need to run it a couple of times over a couple of days in order to actually get all the space back.

If you don’t have SQL Server Management Studio, or you can’t find the properties window, here’s a script that’ll do everything that I’ve described.  Just run this in the AirWatch database (it could be named anything, so change the database name in the USE statement from YourDatabaseName to whatever your AirWatch database is called).

USE YourDatabaseName
GO
-- Grab the current database name so the ALTER DATABASE can be built
-- dynamically, no matter what the AirWatch database is actually named.
declare @dbname sysname
set @dbname = db_name()

declare @sql nvarchar(1000)
set @sql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'

exec (@sql)
GO
-- A couple of checkpoints help clear the log so the shrink can reclaim space.
CHECKPOINT
GO
CHECKPOINT
GO
-- Shrink the log file (file id 2) down to 1 Gig.
DBCC SHRINKFILE (2, 1024)
GO

Denny


November 23, 2016  4:00 PM

Why Are Nested Views Horrible?

Denny Cherry
Image credit: https://www.flickr.com/photos/1024/3407840726/

Nesting views in SQL Server by itself probably won’t introduce performance issues. What it will do, however, is make it MUCH harder to find performance problems. When you have the top view calling 3 views, each of which calls 3 more views, you suddenly have a nightmare of views to unwind to resolve the performance issues.

There’s another dirty little secret of views. Even if you aren’t accessing columns from all the tables which the view references, SQL Server usually still needs to access those tables. This increases IO for your queries, for basically no reason. For this reason alone it often makes sense not to use views, and specifically not nested views.
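
As a contrived sketch of that problem (the table and view names here are made up for illustration):

-- The inner view joins Orders to Customers.
CREATE VIEW dbo.vOrdersWithCustomers AS
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
GO
-- The outer view nests the inner one.
CREATE VIEW dbo.vRecentOrders AS
SELECT OrderId, OrderDate, CustomerName
FROM dbo.vOrdersWithCustomers
WHERE OrderDate >= DATEADD(DAY, -30, GETDATE())
GO
-- This query never returns a Customers column, but unless the optimizer
-- can prove the join redundant (say, via a trusted foreign key), the plan
-- still has to touch Customers through the nested views.
SELECT OrderId, OrderDate
FROM dbo.vRecentOrders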

Now there are plenty of reasons to use views in applications; however, views shouldn’t be the default way of building applications because they do have these potential problems.

While working with a client the other week we had to unwind some massive nested views. Several of these views were nested 5 and 6 levels deep, with multiple views being referenced by each view. When queries would run they would take minutes to execute instead of the milliseconds that they should be running in. The problems that needed to be fixed were all index based, but because of the massive number of views that needed to be reviewed it took almost a day to tune the single query.

Eventually the query was tuned, and the query was able to be run in about 200ms, but the cost to the customer was quite high.

Use views when they make sense, but don’t use them every time. Like everything else in SQL Server, there is no one right answer for every problem. Views are one solution for a subset of problems, not the end-all solution.

Denny


November 16, 2016  3:30 PM

Big Announcements in SQL Server 2016

Denny Cherry
Image credit: https://www.flickr.com/photos/epicfireworks/8058678846/

Today Microsoft announced some MAJOR changes to the SQL Server 2016 product. As of SQL Server 2016 SP1, which ships today, from a programmability perspective there is no difference between the editions anymore. All programmability features will be available in all editions from LocalDB to Enterprise Edition (there are a couple of exceptions for various reasons, which I’ve tried to note below).

What’s Included?

Features such as table partitioning, Hekaton, ColumnStore, distributed partitioned views, Change Data Capture, Change Tracking, etc. will work everywhere. Basically the goal of this change was to make it so that you can deploy your database via a deployment script on any edition of SQL Server and it would just work, no matter if you are deploying on Express or Enterprise Edition.
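
As a concrete sketch of what that means, a deployment script using a formerly Enterprise-only feature like table partitioning now runs unchanged on any edition (the object names here are made up for illustration):

-- Before SQL Server 2016 SP1 this script required Enterprise Edition;
-- as of SP1 it works everywhere from Express to Enterprise.
CREATE PARTITION FUNCTION pfOrderDate (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01', '2017-01-01')
GO
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY])
GO
CREATE TABLE dbo.OrderHistory (OrderId int NOT NULL, OrderDate date NOT NULL)
ON psOrderDate (OrderDate)
GO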

There are some realistic exceptions to this. Change Data Capture won’t work in Express Edition or LocalDB, for example, due to the fact that these editions don’t include the SQL Server Agent.

This is a huge improvement for software vendors as they can now use all the features of SQL Server 2016 SP1 and above without needing to require that their customers buy Enterprise Edition.

You may be asking yourself why you would buy Enterprise Edition after this announcement. The reason to buy Enterprise Edition hasn’t really changed. Most people didn’t buy Enterprise Edition to get any of these features. Customers typically buy Enterprise Edition to get access to more than 16 cores and to more than 128 Gigs of RAM. This will still be the case moving forward. If you need to run SQL Server on massive hardware at massive scale, if you need scale-out HA using Always On Availability Groups, or virtualization installation rights (also known as host-based licensing), then you will still need to purchase SQL Server Enterprise Edition.

But Why?

The next question is “why is Microsoft doing this?” The reason is that Microsoft wants to make life simpler for people that develop software. They want to make it so that ISVs can install their software using a single codebase. They can start their customers on Express Edition, and as the customer grows they simply upgrade their instance to Standard Edition, then later to Enterprise Edition, without having the customer make ANY changes to the application database. Just buy the new SQL Server license and do a SKU upgrade.

Personally I don’t see this taking any Enterprise Edition license sales away from Microsoft (remember that Microsoft is in business to sell software). The primary reasons for buying Enterprise Edition are still there. There’s probably a couple of people that won’t need to buy Enterprise Edition, which is fine. Now they’ll be happier customers because they only need to buy the right sized license for their workload.

All this will be available in SQL Server 2016 SP1 as well as CTP1 of SQL Server v.Next (it doesn’t have a name yet) which also ships today.

How Much RAM?

When it comes to In-Memory OLTP (Hekaton) and ColumnStore there are probably some questions about how much memory you can use in editions other than Enterprise Edition. The rule will be that Hekaton and ColumnStore will each be allowed to use 25% of the edition’s memory cap, and this memory will be outside of that cap. So if you are using SQL Server Express Edition, which has a 1 Gig cap for SQL Server pool memory today, you’ll be able to use an additional 256 Megs of RAM for Hekaton and an additional 256 Megs of RAM for ColumnStore. For Standard Edition you can use 128 Gigs of RAM for pools, plus an additional 32 Gigs of RAM for Hekaton and an additional 32 Gigs of RAM for ColumnStore. When using these features this effectively gives you a 50% increase in the amount of RAM you can use with these down-level editions.

With PolyBase there’s an additional note. If you need PolyBase scale-out, where you spread the PolyBase configuration across a scale-out farm, the head node of the farm needs to be Enterprise Edition; however, the worker nodes of the farm can be Standard Edition. This is a massive cost savings for those using PolyBase for huge amounts of data processing. If you don’t need the scale-out feature and just need normal PolyBase support within SQL Server, that will be available in all editions other than LocalDB.

Wow!

Just wow, I can’t imagine the work that went into getting this change in place.  And I’m not talking about the technical work to get this far, but the work of getting all the various people in management to approve such a radical change of taking what we have traditionally called “Enterprise Edition” features and moving them into not just Standard Edition, but into every edition of SQL Server 2016 SP1.  This is just huge.

Now there are a few things which aren’t quite done yet because they’re going to take some more work, so we’ll hopefully see these changes rolling into SQL Server 2016 in later CUs or SPs.

In the meantime, enjoy the new features once you patch your SQL Server 2016 servers to SP1.

If you are running an older version of SQL Server don’t look for a new service pack that’ll back port these features to your older version.  The only way that you’re getting that to happen is with an upgrade to SQL Server 2016.  If you are looking for a consulting partner to help you with those upgrades, might I recommend Denny Cherry & Associates Consulting.  We are available to assist you with those projects right away.

Denny


November 9, 2016  2:00 PM

Why I love being an MVP

Denny Cherry

This week is the MVP summit and it seemed an appropriate time to talk about why I love being an MVP. (This post is going to be full of not so humble brags.)

The MVP summit is probably one of the best things about being an MVP. We as MVPs get direct access to members of the product team (in person at the Microsoft campus) where they show us outsiders all sorts of stuff that they are working on, or thinking about working on (different groups do this to different amounts, with the SQL Server team being one of the most open AFAIK).

Image credit: https://www.flickr.com/photos/59632563@N04/6239670686/

Because of this openness I’ve been able to build close personal relationships with various members of the product team, and I’d even call several of them friends, as we have talked about non-work things when we see each other on work trips to conferences and other events. Without being a Microsoft MVP I may have met some of the folks on the product team, but we wouldn’t have been stuck in the same room for a week, allowing us the opportunity to get to know each other. As this is my 9th year as an MVP, and my 9th MVP summit, I’ve had a good amount of time to get to know various members of the product team at this point, and hopefully I get to know more in future years (if I’m lucky enough to be renewed).

There are other great benefits of being an MVP, such as the MVP email lists where you can interact with MVPs from all over the world, in just about every country, in just about any product group that you can think of. There are the Product Group Interactions (PGIs), which are webcasts where the product group talks to the MVPs about some topic for an hour. And of course there’s the MSDN subscription. All of these are great things which the MVP program and the product group give to the MVPs. But by far the most important to me is the relationships which I’ve built up over the years, which give me some amazing insight into Microsoft and how the products are made (for good or bad), and the trust that members of the product group place in me with information about the future of products (what little they like to share, even with MVPs).

Denny

