SQL Server with Mr. Denny


June 10, 2015  5:58 PM

Recommended reading from mrdenny for June 05, 2015

Denny Cherry
SQL Server

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: way0utwest, also known as Steve Jones

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

June 3, 2015  6:00 PM

Why don’t the memory usage numbers from Task Manager and vSphere match?

Denny Cherry
Performance Monitor, Performance monitoring, SQL Server, Task Manager, VMware vCenter

This question comes up a lot. Windows Admins and DBAs see one number in Task Manager and the VMware Admins see another number in vSphere. And often the VMware Admins want to reduce the amount of memory that the server has available because of the memory utilization number. Here we see two screenshots from a server at a client’s site, one from Task Manager and one from vSphere.

[Screenshots: Task Manager and vSphere memory usage]

You can see that according to vSphere the VM is using about 37 Gigs of its memory, when the VM has 60 Gigs allocated to it.  Looking at Task Manager we see that 43 Gigs is in use, with Windows saying that 18 Gigs is available.  When I look at the PLE for this server, the number is shockingly low, just 44 seconds (the server is a replication distributor so that actually makes sense).  But we can still see a major disparity in the numbers between Task Manager and vSphere.
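(If you want to check PLE yourself, a quick query against sys.dm_os_performance_counters will show it; the object_name prefix varies for named instances, which is why the sketch below uses LIKE.)

-- Server-wide Page Life Expectancy, in seconds.
SELECT [object_name],
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
  AND [object_name] LIKE N'%Buffer Manager%';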

So who’s right?

That’s the problem, they both are.

Task Manager is right because that’s the amount of RAM in use within the OS.  vSphere is also right because it’s showing the amount of memory which has been recently used (I’m not really sure what recently actually means in this instance).  The problem is that the vSphere number isn’t measuring things in a way that makes sense for a database server (SQL, Oracle, DB2, MySQL, etc.).

Database platforms store data in memory in order to make that data available for use as needed without hitting the disks again.  Just because that data hasn’t been used in an hour (well outside what VMware considers to be “Active”) doesn’t mean that the data should be purged from RAM, forcing the database to read the page back from disk.  After all, RAM is cheap and high speed disk isn’t.  If the VM admins force the amount of RAM to be lowered, they don’t get to then complain that SQL Server is pushing the disks harder, because that is exactly what will happen.
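(If you’re curious where that RAM is actually going, a rough sketch like the one below, using sys.dm_os_buffer_descriptors, shows how much of the buffer pool each database is holding.)

-- Buffer pool usage by database; each buffer descriptor represents one 8 KB page.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;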

The Active Guest Memory value is useful for servers running IIS, application servers, etc.  For SQL Server it’s useless, as databases manage their memory very differently from most other applications.  The VM admins are going to need to start trusting that when the DBAs request 64 Gigs of RAM they actually know what they are talking about.
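(For what it’s worth, SQL Server’s own view of its memory is exposed through sys.dm_os_process_memory, which is usually a better number to argue from than either Task Manager or vSphere. A minimal sketch:)

-- Physical memory the SQL Server process is actually using,
-- plus whether Windows has signaled memory pressure.
SELECT physical_memory_in_use_kb / 1024  AS physical_memory_in_use_mb,
       locked_page_allocations_kb / 1024 AS locked_pages_mb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;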

Thanks,

Denny


May 29, 2015  6:00 PM

Recommended reading from mrdenny for May 29, 2015

Denny Cherry

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: heigesr2, also known as Rick Heiges

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


May 25, 2015  6:00 PM

Database Point In Time Recovery vs. Storage Point In Time Recovery

Denny Cherry

It’s no wonder that people in our industry have such a hard time figuring out which platforms can actually perform which features. Probably one of the most confusing is the phrase “Point In Time Recovery”. Both database people and storage (and virtualization) people use this phrase, and it means two VERY different things depending on who is saying it.

Database People

When database people use this phrase, they mean that they want to be able to restore the database to any point in time.

Storage and Virtualization People

When storage and virtualization people use this phrase, they mean that they can restore the system to the point in time that a snapshot was taken.

 

The test that I always give people is this: if I get a phone call today saying that the database needs to be restored to January 17, 2015 at 1:34:14pm, can your solution meet that requirement without having known about it back on January 17th?  If the answer is yes, then you have point in time recovery; if the answer is no, then you do not have point in time recovery.

The problem is really just a phrasing issue more than anything else.  In reality both the database backup solutions and the storage/virtualization backup solutions (think array snapshots, Veeam, etc.) can do point in time restores.  Database backups done with transaction log backups can restore to any point in time, while the storage and virtualization folks can restore to pre-defined points in time.  While the phrases are the same, the actual meaning behind them is VERY different.
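(To make the database side of that concrete, here’s a rough sketch of a point in time restore; the database name, paths and timestamp are made up, and it assumes the full recovery model with an unbroken log backup chain.)

-- Restore the full backup, leaving the database ready for log restores.
RESTORE DATABASE SalesDB
    FROM DISK = N'D:\Backups\SalesDB_Full.bak'
    WITH NORECOVERY, REPLACE;

-- Roll the log forward and stop at the exact moment requested.
RESTORE LOG SalesDB
    FROM DISK = N'D:\Backups\SalesDB_Log.trn'
    WITH STOPAT = N'2015-01-17T13:34:14',
         RECOVERY;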

When talking to storage and virtualization folks who want to take the ability to do backups away from the database team, we need to get a clear understanding between everyone involved as to what is expected of the database backups, from a technical perspective and not from a marketing / sales / English language perspective.

Denny


May 22, 2015  5:49 PM

Recommended reading from mrdenny for May 22, 2015

Denny Cherry
CDC, SQL Azure, SQL Server, SQL Server 2005, VMware vSphere, Windows Azure

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: jscottmoss, also known as Scott Moss

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


May 20, 2015  4:00 PM

India Here I Come

Denny Cherry
IT conferences, SQL, SQL Server

I’m thrilled to report that I’ll be presenting at the first SQL Server conference in India later this year. The conference organizer Amit Bansal made the official announcement last week. A new conference is a huge undertaking for the organizers, and bringing in speakers from outside of India is a huge commitment as international airfare isn’t cheap.

The conference is August 27th-29th in Bangalore, India and includes a day of precons, one of which I’ll be delivering. The event team is currently putting together a survey for the local community to see which precon session they’d like to see the most, so once the results of that survey are in I’ll know what session I’m presenting.

I look forward to seeing India for the first time and I look forward to meeting new people at the conference.

Once I’ve presented my first session there I’ll have officially presented on four continents (North and South America, Europe and now Asia). Only three more to go (granted, one of them is a lot harder than the other two).

Denny


May 15, 2015  8:45 PM

Recommended reading from mrdenny for May 15, 2015

Denny Cherry

This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: MilosSQL, also known as HP

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


May 13, 2015  4:00 PM

Know Your Monitoring Scripts

Denny Cherry
SQL

If you are running monitoring scripts against your SQL Server databases, you need to know what those scripts do, as even a pretty basic monitoring script can cause some pretty big problems. Recently I was called to look at a large SQL Server whose CPU had run up to almost 100% and where users were getting huge numbers of timeouts.

Looking at the server using sp_whoisactive I could see massive waits in tempdb, all on PAGELATCH wait types. Looking at sys.dm_exec_requests I could see that everything that was waiting was waiting on page 2:1:128. This was a pretty strange page ID for everything to be waiting on, so I busted out DBCC PAGE and looked at the header for the page. This page belongs to the system object sysobjvalues, which stores metadata about tables. The only reason that a system object like this should be locked is if there’s an open transaction which created an object and was never committed or rolled back.
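(For anyone who wants to follow along, this is roughly what that looks like; DBCC PAGE is undocumented but widely used, and page 2:1:128 breaks down as database_id 2 (tempdb), file 1, page 128.)

-- Send DBCC PAGE output to the client instead of the error log.
DBCC TRACEON (3604);

-- Arguments: database_id, file_id, page_id, print option (0 = header only).
-- The header's "Metadata: ObjectId" tells you which object owns the page.
DBCC PAGE (2, 1, 128, 0);

DBCC TRACEOFF (3604);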

So I switched to the tempdb database and ran DBCC OPENTRAN. This showed a user transaction that had been hanging around for about 90 minutes (it took a while for me to get the call and get VPNed in), and the session belonged to a user, not the application. They checked with the user and he didn’t have any sessions running in SSMS.
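(That check is nothing fancy; DBCC OPENTRAN reports the oldest active transaction in the current database, including its SPID and start time.)

USE tempdb;
GO
DBCC OPENTRAN;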

So I ran DBCC INPUTBUFFER for the session with the open transaction and saw a query which was querying various DMVs. So at some point this query window was used to begin a transaction, then create a temp table, then do stuff (either with the temp table or without it). The transaction was never rolled back or committed, so SQL Server hung onto its locks on those system objects, effectively preventing the application from creating temp tables or other temporary objects.
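(If you want to repeat the same steps, it’s roughly this; session 72 is a made-up example, use whatever SPID DBCC OPENTRAN reported.)

-- Show the last batch the session submitted.
DBCC INPUTBUFFER (72);

-- Killing the session rolls back its open transaction and releases its locks.
-- KILL 72;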

As soon as I killed this session, which forced it to roll back, the application processes were able to begin flowing through the system again and everything got back to normal within about a minute or so.

So it just goes to show that you don’t have to be running some complex query against the SQL Server for all hell to break loose. One open transaction with a few small locks taken in just the right place can cause some serious problems pretty fast.

Denny


May 8, 2015  11:22 PM

Recommended reading from mrdenny for May 08, 2015

Denny Cherry
SQL, SQL Server

Microsoft Ignite was great, but I still found the time this week to find some great things for you to read. These are a few of my favorites that I’ve found this week.

This week’s SQL Server person to follow on Twitter is: murilocmiranda, also known as Murilo Miranda

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


May 6, 2015  6:00 PM

Should I use Backup Compression or NTFS Compression for Backups?

Denny Cherry
SQL

There are a few different ways to compress your database backups when backing them up to disk. The two easiest are database backup compression (using the COMPRESSION option in the BACKUP DATABASE command) and turning on NTFS compression on the folder. Recently I was working with a client doing a database migration; their old scheme used NTFS compression while I was using SQL backup compression on the server they were migrating to.

So this sounded like a perfect time to do a little testing while other backups and restores were running.

The backup files that I’m playing with are from a SQL 2014 database. The database was backed up to a database backup file using SQL compression, then to a database backup file without using compression. Then the uncompressed backup file was copied to the old Windows 2008 R2 server to see how well NTFS handled the compression.
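(If you want to run the same comparison yourself, it’s roughly this; the database name and paths here are made up.)

-- Backup using SQL Server's native backup compression.
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_compressed.bak'
    WITH COMPRESSION, INIT;

-- The same backup without compression, for comparison.
BACKUP DATABASE SalesDB
    TO DISK = N'D:\Backups\SalesDB_uncompressed.bak'
    WITH NO_COMPRESSION, INIT;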

The database itself is about 4768.63 MB in size, with 1180.09 MB of free space within the database.

The SQL compressed database backup was about 38 Megs, give or take. The uncompressed database backup was about 3.5 Gigs, which is basically what you’d expect as that’s how much data is in the database.

[Screenshot: SQL compressed and uncompressed backup file sizes]

So the SQL backup compression here is pretty damn good. Taking those same files and copying them to a folder which has NTFS compression enabled, we see that the uncompressed backup has been compressed to 1.28 Gigs, while the compressed backup stays at 38 Megs (NTFS compression saves all of 512 bytes according to the screenshots below).

[Screenshots: uncompressed and SQL-compressed backups in the NTFS-compressed folder]
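(You don’t have to eyeball file sizes in Explorer either; msdb records both the raw and compressed sizes for every backup, so a query along these lines gives you the ratio directly.)

-- Compression ratio for the most recent backups, straight from msdb.
SELECT TOP (10)
       database_name,
       backup_finish_date,
       backup_size / 1048576.0            AS backup_size_mb,
       compressed_backup_size / 1048576.0 AS compressed_size_mb,
       backup_size / NULLIF(compressed_backup_size, 0) AS compression_ratio
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;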

So if you are looking for a way to store your backups in a compressed format, SQL backup compression is going to be the way to go. The good news is that SQL backup compression was first introduced in SQL Server 2008, but as an Enterprise Edition feature. It was moved into Standard Edition in SQL Server 2008 R2, so almost every production SQL Server out there should be able to use it (or at least most of them).
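(If you’re not sure whether your instance qualifies, SERVERPROPERTY will tell you the edition and version.)

-- Backup compression needs Enterprise Edition on SQL Server 2008,
-- or Standard/Enterprise on 2008 R2 and later.
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version;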

Denny

(As always when looking at compression metrics from someone, your mileage may vary depending on the kind of data in the database, the alignment of the moon, the color of the sky, etc.)

