SQL Server with Mr. Denny


August 7, 2013  2:00 PM

Back To Basics: Statistics

Denny Cherry

Statistics are magical little objects within the database engine that have the ability to make your queries run fast or painfully slow.  The reason that statistics are so important is that they tell the database engine what data exists within the database table and how much of it there is.  The problem with statistics comes from how often they are updated.  By default, in all versions and editions of SQL Server, the statistics on a table are updated once 20% of the rows plus 500 rows have changed.  So if a table has 10,000 rows in it we need to change 2,500 rows (2,000 rows is 20%, plus an additional 500 rows) for the statistics to be updated.  With smaller tables like this, having out of date statistics usually doesn’t cause too many problems.  The problems really come into play with larger tables.  For example, if there are 50,000,000 rows in a table, we would need to change 10,000,500 rows before the statistics were automatically updated.  Odds are it is going to take quite a while to change that many rows.  To fix this we can manually tell SQL Server to update the statistics by using the UPDATE STATISTICS command.
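As a minimal sketch of what that looks like (the table and statistic names here are hypothetical, not from any particular system), updating statistics manually can be done a few different ways:

-- Update every statistic on a single table with a full scan of the data.
-- dbo.Orders is a hypothetical table name.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
GO

-- Or update just one statistic on that table using a sampled scan instead.
-- IX_Orders_OrderDate is a hypothetical statistic/index name.
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH SAMPLE 25 PERCENT;
GO

-- Or update the statistics on every table in the current database.
EXEC sp_updatestats;
GO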

Within the statistic there are up to 200 values which are sampled from the column the statistic is built on.  The statistic itself contains a few different columns: it shows a series of values from the column, and for each of those values it contains the count of the number of rows between that value and the next one in the statistic.  From this information SQL Server is able to build the execution plan which is used to access the data.  When the data within the statistic is out of date, SQL Server doesn’t make the correct assumptions about how much data there is and what the best way to access that data is.  When the statistic gets updated, SQL Server is able to make better assumptions, the execution plan becomes better, and SQL Server is able to get the data faster.
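If you want to see the header, density vector and histogram that make up a statistic, DBCC SHOW_STATISTICS will show them; again the table and statistic names below are just hypothetical examples:

-- Show the header, density vector and up-to-200-step histogram for one statistic.
-- The histogram columns include RANGE_HI_KEY, RANGE_ROWS and EQ_ROWS.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate');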

Denny

July 31, 2013  2:00 PM

Making people pay their own way to a job interview is not the way to get good talent

Denny Cherry

I get job postings emailed to me all the time from various recruiters.  Usually they are, we’ll call them, OK.  But sometimes the requirements just to get to the interview are stupid.  Every once in a while, usually about twice a year, I get an email that says “remote candidates are welcome, but the candidate would have to pay their own way to get to the job interview”.  Now, when you are talking about paying for gas to get from your house to their office, that’s fine.  However, these jobs are often not in the city, or even the state, that I live in.  So let me get this straight: you want me to pay to fly out to see you, so that you can tell me that you don’t want me to work for you.  That’s really not how this works.

If you’ve exhausted the talent in your local city and you need to get talent from out of town, then it’s on you to pay for the travel to get the person to the interview.

Even if there were a position open at a company in my local area, if the posting said this I probably wouldn’t even consider the job.  It tells me that as a company you don’t respect my time or my resources.  From this I assume that you won’t want to pay for any of my training so that I can better support the company’s systems.  I can also assume that you’ll expect me to work on projects on nights and weekends (I’ve got no problem with nights and weekends for emergency system-down issues, but not for projects that weren’t properly planned).

If you are a company that puts these sorts of silly statements in your job descriptions, and you are wondering why you can’t get any candidates, stuff like this is why.

Denny


July 24, 2013  7:00 AM

Blocking Is Not Bad

Denny Cherry

When dealing with SQL Server databases we have to deal with locking and blocking within our application databases.  All too often we talk about blocking as being a bad thing.  However, in reality blocking isn’t a bad thing.  SQL Server uses blocking to ensure that only one person is accessing some part of the database at a time.  Specifically, blocking is used to ensure that while someone is writing data no one else can read that specific data.

While this presents as a royal pain in that users’ queries run slower than expected, the reality is that we don’t want users accessing incorrect data, and we don’t want to allow two users to change the same bit of data.  Because of this we have locking, which then leads to blocking within the database.
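If you want to see who is currently blocked and by whom, a minimal sketch against the standard dynamic management views looks like this (nothing here is specific to any one application):

-- List requests that are currently being blocked, along with who is blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS blocked_query
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;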

All of this is done to ensure that data integrity is maintained while the users are using the application, so that the data within the database stays accurate and correct.

Without locking and blocking we wouldn’t have data that we could trust.

Denny


July 17, 2013  7:00 AM

Nolock and your financial application

Denny Cherry

The NOLOCK table hint gets used way, way too frequently.  The place that I hate seeing it the most is in financial applications, where I see it way too often.

Developers who are working on financial applications need to understand just how important not using NOLOCK is.  Using NOLOCK isn’t just a go-faster button; it changes the way that SQL Server lets the user read the data which they are trying to access.  With the NOLOCK hint in place the user is allowed to read pages which other users already have locked for changes.  This allows the user’s query to return incorrect data.
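A minimal two-session sketch of what that looks like in practice; dbo.Accounts and its columns are hypothetical names used only for illustration:

-- Session 1: start a transaction and change a balance, but don't commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts
   SET Balance = Balance - 500
 WHERE AccountId = 42;
-- The rows holding this change are now locked.

-- Session 2: with NOLOCK the query reads the uncommitted balance anyway.
SELECT AccountId, Balance
  FROM dbo.Accounts WITH (NOLOCK)
 WHERE AccountId = 42;

-- Session 1: roll the change back.  Session 2 has already reported a balance
-- that never actually existed (a dirty read).
ROLLBACK TRANSACTION;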

If the user is running a long-running query that reads lots of rows which are in the process of being changed, the user could get duplicate rows or missing rows.  This can obviously cause all sorts of problems with the user’s report, as the data won’t actually be accurate.  In reports that internal staff are running this is bad enough; if it is your external users who are getting incorrect data, for example because account debits and credits are being processed while the user is requesting data, they could suddenly see all sorts of invalid numbers.

If you are working with a financial application and you are seeing NOLOCK hints in there, you’ll want to work on getting rid of them, and for the ones which must remain for some reason, make sure that the business users understand exactly how the data that they are looking at can be incorrect and shouldn’t be trusted.

If the application is using the NOLOCK hint to solve performance problems, those problems need to be resolved in other ways, typically by fixing indexing problems on the tables which are causing some sort of index or table scan.
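One hedged way to start hunting for those indexing problems is the missing index DMVs; this just lists the suggestions SQL Server has recorded since the last restart, and they should be treated as hints to investigate rather than indexes to create blindly:

-- Missing index suggestions recorded by the optimizer, roughly ordered by
-- how much work they might save.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS rough_benefit
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY rough_benefit DESC;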

Denny


July 10, 2013  7:00 AM

Preventing Locking, Blocking and Deadlocks in the vCenter database

Denny Cherry

As our VMware environments become larger and larger, with more and more hosts and guests, more thought needs to be given to the vCenter database, which is typically running within a SQL Server database.

With the vCenter database running within Microsoft SQL Server (which is the default) there will be lots of locking and blocking happening as the queries which the vCenter server runs aggregate the data into the summary tables.  The larger the environment, the more data needs to be aggregated every 5 minutes, hourly, daily, etc.

The problem here is that in order for these aggregations to run, the source and destination tables have to be locked.  This is normal data-integrity behavior within the SQL Server database engine.

Thankfully there is a way to get out of this situation: enable a setting called snapshot isolation for the vCenter database.  This setting changes the way that SQL Server handles concurrency by allowing people to write to the database while at the same time allowing other people to read the prior versions of the rows being changed, thereby preventing readers from being blocked.  SQL Server does this by making a copy of each row as it is modified and putting that copy into the version store in the tempdb database.  Any user that attempts to query the rows being changed will instead be given the older committed version from the tempdb database.

If you’ve seen problems with the vCenter client locking up and not returning performance data when the aggregation jobs are running, this will make these problems go away.

Turning this feature on is pretty simple.  In SQL Server Management Studio simply right-click on the vCenter database and find the “Allow Snapshot Isolation” setting on the Options page.  Change the setting from False to True and click OK (the screenshot below shows the AdventureWorks2012 database, but you’ll get the idea).

[Screenshot: Database Properties dialog, Options page, with Allow Snapshot Isolation set to True]

If you’d rather change the settings via T-SQL it’s done via the ALTER DATABASE command shown below.

ALTER DATABASE [vCenter] SET ALLOW_SNAPSHOT_ISOLATION ON
GO
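If you’d like to confirm that the change took effect, the setting can be checked from sys.databases afterwards:

-- Verify that snapshot isolation is now allowed for the vCenter database.
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = 'vCenter';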

Hopefully this will help fix some performance problems within the vCenter database.

Denny


July 5, 2013  5:02 PM

Recommended reading from mrdenny for July 05, 2013

Denny Cherry

This week I’ve found some great things for you to read; these are a few of my favorites.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


July 3, 2013  7:00 AM

Backups & the Buffer Pool

Denny Cherry

As we know, with Microsoft SQL Server everything is read from disk and loaded into the buffer pool before it is processed by the query engine.  So what happens to the buffer pool when backups are taken?

The answer is that nothing happens to the buffer pool.

When SQL Server is backing up data, it simply takes the data from the data files and writes it to the backup file.  During the backup process the dirty pages are written to disk by the checkpoint process, which is triggered by the backup database process.

Because the backup process simply reads the data files and writes them to the backup location, there’s no need to cache the data in the buffer pool, as this data isn’t being queried by a normal SQL query.
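If you want to see this for yourself, a minimal sketch is to count the buffer pool pages per database before and after a backup; the DMV query below is standard, and the results should barely move when only a backup has run in between:

-- How many buffer pool pages does each database currently have in memory?
-- Run this, take a full backup, then run it again and compare the counts.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS pages_in_buffer_pool
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY pages_in_buffer_pool DESC;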

Denny


June 26, 2013  2:00 PM

Performance Tuning a Spotlight for SQL Server Query

Denny Cherry

The other day I was looking at parallel query plans on a customer’s system and I noticed that the bulk of the parallel queries on the system were coming from Spotlight for SQL Server.

The query in question is used by Spotlight to figure out when the most recent full, differential and log database backups were taken on the server.  The query itself is pretty short, but it was showing a query cost of 140 on this system.  A quick index created within the msdb database solved this problem, reducing the cost of the query down to 14.  The query cost was reduced because a clustered index scan of the backupset table was changed into a nonclustered index scan of a much smaller index.

The index I created was:

USE [msdb]
GO
CREATE NONCLUSTERED INDEX mrdenny_databasename_type_backupfinishdate
ON dbo.backupset (database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70, ONLINE = ON, DATA_COMPRESSION = PAGE);
GO

Now if you aren’t running Enterprise edition you’ll want to turn the online index building off, and you may need to turn the data compression off depending on the edition and version of SQL Server that you are running.
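For example, a stripped-down version of the same index for editions that don’t support those options might look like this (same key columns, just without online index builds and page compression):

USE [msdb]
GO
-- Same index without the Enterprise-only ONLINE and DATA_COMPRESSION options.
CREATE NONCLUSTERED INDEX mrdenny_databasename_type_backupfinishdate
ON dbo.backupset (database_name, type, backup_finish_date)
WITH (FILLFACTOR = 70);
GO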

If you are running Spotlight for SQL Server I’d recommend adding this index, as it will fix the performance of one of the queries which Spotlight runs against the database engine pretty frequently.  I’d recommend adding it to all of the SQL Servers which Spotlight for SQL Server monitors.

Denny


June 19, 2013  5:00 PM

A week of using my Surfii (when you have more than one Microsoft Surface)

Denny Cherry

So while at the TechEd North America conference Microsoft gave the attendees, speakers and booth staff the ability to purchase some Microsoft Surfii (surf-I) at some major discounts.  The Surface RT was available for $99 US and the Surface Pro was just $399 US.  Needless to say I purchased both of them.

I’ve been using them for about a week now and I’ve got to say that I’m really liking them for the most part.  The devices are pretty light, even with the keyboard attached.  I spent the entire flight home from my layover in Houston to San Diego doing some writing on just the Pro via the type cover, and while it was a little cramped and took a little getting used to, within about 30 minutes I was basically adjusted to it.  The only thing that really kept screwing me up is the lack of a right-click key on the keyboard, which I’m very used to using, as I’m not big on using the mouse to right-click when I’m writing a book or article.  On the flight I was using the device with the kickstand out, and I found that it was at a really good angle for me to see everything on the screen without any eye strain or problems like that.

Another annoyance I found was that when the type cover isn’t flat on a hard surface and you try to right-click with it, you’ll end up left-clicking.  In the airport I was sitting with the Surface on my lap with the kickstand out, and about half the time when I tried to right-click I got a left click instead.  Very annoying.

After getting home with the device I didn’t do much heavy work with it besides getting some software installed.  I was really impressed with the quality of the WiFi antenna in it.  Using it downstairs and in my back yard there’s a great WiFi signal, while only one of my two laptops works well downstairs in the same spot in the living room.

Recently I used the Surface Pro as a notebook with the pen that comes with it.  I needed to figure out which sprinklers in my yard went with which zone (there are 15 zones in the timer, 11 controllers, and no documentation) in our new house.  I grabbed my Surface Pro, grabbed the pen (which docks nicely to the power port when you aren’t charging the Pro) and fired up OneNote.  I just wrote in it like I would on a piece of paper, but in this case it’s a never-ending piece of paper because it just keeps going instead of making you flip the page.  Because I’ve got OneNote configured to sync everything to the cloud, everything just syncs up to OneNote on my laptops and desktop, so all my notes were instantly available on my desktop when I sat down.  Now I can’t say anything about the quality of the handwriting, but that’s all me, not the device (it isn’t any better on paper).

Because the Surface Pro is just Windows 8, I installed Cubby on it and configured it to sync the My Documents folder to all my other machines.  This gives me all my scripts, documents, articles, presentations, etc. on the Surface Pro just like on my desktop and laptops.  I’ve also got all the normal VPN applications installed, as well as SQL Server Management Studio, so that I can do basically whatever client work I need to from this little device.

Now the Surface RT isn’t a full-blown copy of Windows 8.  To me it is more of an iPad replacement than anything else.  So on that guy I’ve installed the same stuff that is on my iPad: games.  Free ones whenever possible.  As a platform for playing those kinds of games it is working pretty well.  I can’t comment on using it for actual work-type things, as I’ve got no plans for doing that on the device.  One thing that I will say is that I wish the pen worked on the Surface RT as well as the Surface Pro, but it appears that whatever the pen talks to only exists on the Surface Pro.

For the prices I paid, these items were a steal.  If I were paying full price I’d have to think more about purchasing them (which should be obvious, as I didn’t buy them until I got them for a damn good price at TechEd).

Denny


June 10, 2013  12:00 PM

SQL Server 2014 Standard Edition High Availability

Denny Cherry

With all the announcements about SQL Server 2014 this last week, there have been a lot of questions about what’s going to happen for SQL Server 2014 and the non-shared-storage high availability options, as we are now one step closer to database mirroring being removed from the product.  You’ll see several blog posts coming out this morning from a variety of people, all with their own opinions about what should be included.  These opinions are mine and mine alone.

In SQL Server 2005 Standard Edition and up we had database mirroring, which supported a single mirror with synchronous mirroring only; asynchronous mirroring was an Enterprise Edition feature.  I would like to see this same feature set moved into the SQL Server 2014 Standard Edition product as well.  How I would see this working would be the normal AlwaysOn Availability Group configuration that we have today, but only supporting a single replica.  I can see synchronous data movement being the only data movement option, which would allow for local onsite HA without giving you the ability to do geographically distributed disaster recovery, as that requires asynchronous data movement.

If Microsoft wanted to do something really nice for their Standard Edition customers they could allow for a second replica which would be an Azure database replica and that would allow for Disaster Recovery within Standard Edition while pushing Azure (which we all know is a big deal for Microsoft these days).

So there you have it, that’s what I would like to see in the SQL Server 2014 Standard Edition version of the product.  Do I expect to see it?  Honestly, I’m really not sure.  Microsoft has been very tight-lipped about what is coming in the Standard Edition, mostly because these decisions haven’t been made yet.  Once they are made, some people will be happy and others won’t be, but that’ll be what we have to deal with until we do this all over again in a couple more years when the next version of SQL Server is released.

Denny

