SQL Server with Mr. Denny


July 25, 2011  2:00 PM

Any testing on the effect of Hiding NX/XD flag from guest with SQL Server on ESX?

Denny Cherry

During the webcast which I did for the SQL PASS Virtualization Virtual Chapter on July 13th, 2011, a question came up that I didn’t have the answer for.  The question was whether I had done “Any testing on the effect of Hiding NX/XD flag from guest with SQL Server on ESX?”.  I hadn’t done any testing of this, so I kicked the question over to a friend of mine in the VMware world, Gabrie van Zanten (blog | @gabvirtualworld).  Gabrie hasn’t done any performance testing of hiding this value either, but he told me that he hasn’t heard of any performance problems with this setting that would change how it is generally used for a SQL Server guest specifically.

Denny

July 19, 2011  2:00 PM

Want to try out Windows Azure for free for 30 days?

Denny Cherry

If you’ve been wanting to try out Windows Azure and haven’t been able to yet, now is the time.  I’ve been given an access code that you can use which will give you 30 days of full Windows Azure / SQL Azure for free.  And the best part is that there is no credit card required.  Just drop in the code at sign-up, give the thing a test drive, and see if it’ll work for your needs.

  1. Go to http://www.windowsazurepass.com
  2. Enter the country you live in
  3. Enter the code “DENNYCHERRY”

That’s all there is to it.

Your free 30 day Windows Azure account includes:

Windows Azure

  • 3 Small Compute Instances
  • 3 GB of Storage
  • 250,000 Storage Transactions

SQL Azure

  • Two 1 GB Web Edition Databases

AppFabric

  • 100,000 Access Control Transactions
  • 2 Service Bus Connections

Data Transfers

  • 3 GB In
  • 3 GB Out

Denny

P.S. The site to sign up kind of sucks. After selecting your country and entering the code you’ll be asked to sign into Microsoft Live to set up your Azure account. This screen doesn’t look like it worked correctly, but it did.  Sign in and it’ll activate your free 30 day trial.

Keep in mind that you can only use the 30 day Azure trial once per Live account, so if you’ve already tried it with a different account you’ll need to create a new Live account.


July 18, 2011  2:00 PM

Interview with our new whipping boy Adrian Bethune

Denny Cherry

While at Tech Ed 2011 I spent a good amount of time talking with Adrian Bethune (@canon_sense) who is the new product manager for SQL Server Manageability, originally hired onto the team by the magnificent Dan Jones (blog | twitter) who was smart enough to run screaming from the team towards an awesome new position at Microsoft.  Adrian was crazy nice enough to take some time and sit down with me for a bit for an interview so that he could be introduced to the community at large (and so that everyone knows where to throw their rotten fruits and vegetables).

[Denny] As someone I’ve met a couple of times now, I know a little about your history at Microsoft, but the good folks of the SQL community don’t know much about you, as you’ve done a pretty good job of staying out of the public eye until now. Can you tell us a little about your life at Microsoft and how you got here?

[Adrian] I finished my CS degree at the University of Illinois [UIUC] in 2007 and came to work in the build and test infrastructure team here in SQL Server for a few years to get some experience building enterprise scale services and applications that get deployed and used right away. In the infrastructure team I worked on the test and build automation systems that pump millions of tests on hundreds of builds every day. The coolest projects I worked on included designing, deploying, and migrating to a next-gen distributed build system as well as automated storage management and provisioning services that work on top of high-end hardware. As Microsoft’s SQL Server strategy shifted to include the cloud and to focus on reducing the cost of developing and maintaining SQL Server, I saw a great opportunity in the SQL Manageability team to get into the thick of it, so I joined the team last June.

[Denny] DACPAC has a pretty sordid history, with the v1 release being looked upon less than favorably (it may have been compared to a steaming pile of something, or Windows ME, etc.). What brought you to the team and made you want to take this project on?

[Adrian] While the first implementation did leave something to be desired, as you subtly point out, four important points drew me to this area. First, the entire DAC concept is new and is therefore more of a startup environment than the typical monolithic product development team where you get pigeon-holed very quickly. Right out of the gate we were heads-down on shipping a major feature as soon as possible – in-place upgrades – in VS 2010 SP1 and SQL 2008 R2 SP1. Second, the concept of DAC and the services it provides is appealing even if the first implementation is not ideal. The way I see it, DB developers have become accustomed to having to develop on this cumbersome stateful beast by continuously executing differential scripts which modify the state of their application (schema). With the push towards minimizing the cost of managing databases, developers and DBAs need serious tools that help reduce the burden of managing and deploying databases so they can focus on working on real innovation. DAC has the potential to become one of the key pillars in the drive to drop costs. Third, the engineering team behind DAC is staffed with some top development and test talent. The DAC team is a serious engineering team that has a passion for demonstrable quality and the drive to push multiple agile releases, so it’s a fun and exhilarating team to work with. Over the next few months you’ll see some exciting announcements and developments in the DAC space, both with and for a multitude of partners and products within Microsoft as well as integration into the tooling and services for SQL Azure. Lastly, the partnerships and engagements within SQL have been fantastic. DAC is not just a SQL Manageability initiative; it’s a SQL initiative with some great work from the Engine team on providing a set of compilation services to validate the DAC model as well as moving the needle towards containing the database. Together with the Engine team we will provide a symmetrical model for a database (application) in the runtime environment (contained databases) and the logical definition (DAC – dacpac). Check out the DAC/CDB session from TechEd for more info on the roadmap – http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DBI306. In the session you’ll see how the Engine and DAC teams are working towards a common vision to make developing and managing cheaper.

[Denny] From what you’re saying it sounds like some Microsoft products will begin using DAC and DACPAC to deploy applications. Does this include customer shipped applications such as Dynamics and SCOM or just internal applications? (We can edit this to not list any specific products if needed.)

[Adrian] Besides several internal teams picking up and using DAC services for their own purposes, shipping products will also be integrating with it. Publicly, System Center Virtual Machine Manager 2012 has already shipped a beta with DAC integration. At TechEd, the AppFabric team announced their new composite application model initiative/project, which also integrates DAC as the data-tier component for applications. Expect to see more products integrate DACFx in the coming months. That’s all I can say for now.

[Denny] If people have feedback on Data Tier Applications (DAC) or DACPAC what’s the best way to get that to the team?

[Adrian] The broader community can always engage us with connect bugs or on the MSDN forums but for MVPs and folks I interact with, feel free to shoot me a personal mail.

[Denny] Knowing the abuse that we’ve given our good friend Dan Jones (a.k.a. DACPAC Dan) did that make you hesitant to take on DAC and DACPAC?

[Adrian] Sure, it would give any reasonable person pause, however, my own personal estimation of the potential value of DAC and the chance that we could align with our partner teams in the Engine, Juneau, Visual Studio to provide a single surface area for development which enables some key management features trumped my reservations. While I can’t disclose much more than we talked about at TechEd, I can say that the reality has met the potential and it’s exciting to see how the future is shaping up.

[Denny] So when you aren’t being abused by the MVPs, and you are permitted to actually leave the confines of building 35, how do you fill the 10 minutes of daily free time that Steve Ballmer allocates to you?

[Adrian] From time to time they do let us out but only enough so people don’t file missing persons reports. In my spare time I hang out with the wife, dabble with gadgetry, swim, read quite a bit (Sci-Fi typically) and follow economic and political news and trends.

[Denny] Are there any other projects that you are working on that you can share with us that’ll be included in the SQL Server “Denali” release or maybe even earlier?

[Adrian] After ramping up, I spent the latter half of last year working on shipping DAC v1.1 that includes in-place upgrades as soon as possible, which means we actually shipped in Visual Studio 2010 SP1 and will ship in SQL Server 2008 R2 SP1 (CTP available today). Once we shipped 1.1, I worked on getting the import/export services up and running and we shipped a CTP currently available on www.sqlazurelabs.com which you may have seen at TechEd. In parallel, I am working on an import/export service for SQL Azure which will provide import/export as a service (rather than from client side tools) that will import or export to/from Azure BLOB storage without the need for client side tools. Apart from that, I’ve been very busy working on partnership engagements within Microsoft because DAC provides a nice and cheap way for other product teams to operate on and with SQL Server and SQL Azure.

[Denny] I’m interested in this Azure Import/Export utility. The BLOB storage that this will integrate with (keeping that I don’t know much about Azure besides the SQL Azure part), how would one get a file uploaded to that automagically? Can you FTP files to it, or is there an API which has to be used, etc?

[Adrian] There is an API you can use, however, there are quite a few tools which will synchronize folders between your client machine and your BLOB storage account. That’s the easiest way to get your files into the cloud. I won’t mention any specific tools broadly to avoid favoritism/politics, however a quick search for “azure storage tools” is a good starting point. Keep in mind that the only time you need to transfer the import/export artifact – a BACPAC – between your client and the cloud is when you want to migrate or move your database between the cloud and on-prem environments. Otherwise, you can just keep your files in the cloud in your BLOB account and use our services to operate over them. Sounds like a good topic to cover in a session…

[Denny] If v2 of DACPAC blows up, would you prefer to be slow roasted over gas or open coals?

[Adrian] That depends. Is the purpose to inflict pain or are you of the cannibalistic persuasion? Honestly, as MVPs are some of the most seasoned SQL consumers, we’d love to hear your feedback on the new upgrade engine as well as the overall application lifecycle experience that DAC enables. We are a nimble team, and if there’s a great opportunity to incorporate fixes to our services for the Denali release we’ll take it. Unfortunately, because we were so focused on DAC 1.1, we didn’t have enough time to deliver a lot of DAC value in Denali CTP1; however, CTP3, coming this summer, will be fully DACified and include all the latest and greatest, including SQL Engine validation, in-place upgrades, and full support for SQL Azure application scoped objects including permissions and roles!

[Denny] It is pretty clear that DAC and DACPAC are geared mostly towards SQL Azure, as they support the current SQL Azure feature set. Can you tell us a bit about why the decision was made to push DAC and DACPAC as an on-premise solution instead of keeping the focus on SQL Azure until they were ready to support a fuller on-premise feature set?

[Adrian] Fantastic question. The reason it was positioned as an on-premise solution is because the SQL Azure story was still being written. If you rewind back to the days when SQL Server 2008 R2 was working towards release, SQL Azure started out with a simple DB concept and was then reset to provide full relational features. At that time, we really weren’t sure if we wanted to dock the DAC roadmap to Azure because the SQL Azure story was in flux. So the fallback position was to tie the DAC story to the box product, because we weren’t able to really commit to a direction for DAC and Azure. Since then, we’ve been straightening out the story in a big way with partners and at TechEd.

[Denny] When we were hanging out at Tech Ed 2011 you seemed like you wanted to become more involved in the community. Did I guess this one right? Will you be joining us at events like PASS and the MVP Summit for some “learning and camaraderie”?

[Adrian] Yes, I certainly hope to join you at PASS and to have another couple of sessions at the next MVP Summit, but I don’t know with certainty yet.

[Denny] The most important question: would you prefer to be known as “DACPAC Adrian” or “DACPAC Dan 2.0”?

[Adrian] The former. There’s already a “DACPAC Dan 1.0” and we haven’t tested any side by side or upgrade scenarios. :)

I’d like to thank Adrian for being a sucker good sport and agreeing to sit down with me, even knowing the beatings that I’ve given Dan over DACPAC v1.  I hope that everyone enjoyed reading this interview as much as I enjoyed talking with Adrian.

All joking aside Adrian is a great guy, and a lot of fun to hang out with, and he’s got some great vision for Data Tier Applications and DACPAC.  I just hope he’s able to pull off what he’s got planned.  If not, we’ll be having a BBQ at the next MVP summit and Adrian will be the “guest of honor”.

Denny


July 14, 2011  6:33 PM

Updated SSMS Shortcut Key file

Denny Cherry

I’ve updated the SQL Server “Denali” settings file to include the CTRL+E and CTRL+X hot keys.  You can find the new file in the same location as the old one so you only need to download it once.

Denny


July 14, 2011  2:56 AM

Resetting SSMS shortcut keys in SQL Server “Denali” CTP 3

Denny Cherry

If you did what I did when you installed CTP 3 of SQL Server “Denali” (I uninstalled CTP 1 and then installed CTP 3) you’ll be in for a rude surprise: a lot of the shortcut keys that you’ve been using for years don’t work any more.  This is, to say the least, VERY annoying, and shouldn’t have ever happened.  If you upgrade instead you won’t have this problem, as the upgrade keeps your old shortcut key settings.

It isn’t all that hard to fix, provided that you know what each and every hot key was supposed to be: go into Tools > Options, then Environment > Keyboard > General, and reset all the keys that are hosed up.

Or you can download the settings file that I’ve put together, which has (I think) all the screwed up hot keys corrected.  Save this file on your computer somewhere and uncompress it, then click on Tools > Import and Export Settings.  Run through the wizard (be sure to save your current settings in case it really screws something up) then import my file.  Now all your shortcut keys “should” work as expected.

If you find a shortcut key that I missed please put it in the comments below, or let me know in some other way and I’ll get the file updated.

Denny

P.S. I’ve sent a nasty note to some of my SQL Server development team contacts.  I’ve also submitted a connect item so that feedback can be publicly collected.


July 12, 2011  2:00 PM

Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’.

Denny Cherry

So I hit the error in the title the other night while converting a heap to a clustered index on a production system.  After this error popped up, some other nasty stuff showed up in the error log, which I’ve included below.

Error: 9002, Severity: 17, State: 4.
The transaction log for database ‘%d’ is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 5.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 1.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 5.
During undoing of a logged operation in database %d, an error occurred at log record ID (87:9933:1). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

At this point crash recovery ran and the database started back up.

The first error is that the log filled up, followed by the nasty part.

The good part is that this is going to be a pretty rare problem to run into.  This will only happen when you run an update operation against a heap, or add a clustered index to a heap, and you run out of transaction log space.  The reason that this can happen is that updates against a heap, and clustered index operations, are a two phase operation.  First the deletes of the old rows are done, then the inserts of the new rows are done.  When the insert of the new rows is started an internal flag is set which states that the insert operation is a “must complete” operation.  When an unexpected error happens between the delete operation and the subsequent insert operation, the database is taken offline as a precautionary measure to ensure consistency and prevent concurrent operations against the records that were being modified.

The downside to this problem is that it can show its ugly face in SQL Server 2008 and SQL Server 2008 R2, and there is no fix available for it in either version.  The only way to avoid the problem is to change the table from a heap to a clustered index, or to add more transaction log space.
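
If you want to confirm that log space is the root cause, the error text above points you at sys.databases.  Here’s a minimal sketch of checking the log reuse wait and adding log space; the database name, log file name, and size are placeholders, not recommendations:

    -- Check why log space can't be reused (LOG_BACKUP, ACTIVE_TRANSACTION, etc.)
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'YourDatabase';

    -- Give the transaction log more headroom before re-running the heap update
    -- or the clustered index build (file name and size are examples only)
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 20000MB);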

I have been told that this has been fixed in SQL Server “Denali” as “Denali” will handle the heap updates differently than prior versions.  If you have SQL Server 2005 you shouldn’t see this problem.

In my case the problem was solved by letting the database come back online and then creating the clustered index.  My database was in simple recovery mode, so after crash recovery the log was marked for reuse automatically, meaning that there was enough room in the transaction log for the clustered index creation to complete.

If you run into this problem, hopefully this helps explain what is happening and why.

Denny


July 11, 2011  2:00 PM

Transaction log growth best practices

Denny Cherry

There is a lot of talk online about transaction log growth and what you should do. There are two answers to this question, and in this post I’ll give you both answers.

The theoretical answer is that you want to grow your transaction logs in 8 gig chunks. This will maximize the amount of space that makes up each VLF within the transaction log file, and it will allow SQL Server to minimize the number of VLFs that are created, which is a performance improvement. Unfortunately there is a problem with this, which leads us to the practical answer…

The practical answer is that there is a bug in SQL Server which causes the database engine to grow the transaction log by only 32k the first time you issue a growth which is a multiple of 4 gigs (which 8 gigs obviously is). Because of this bug, which has not yet been fixed in any released version of SQL Server, it is recommended to grow the transaction log by a slightly smaller number, such as 8000 megs at a time. The VLFs will be almost as large, keeping log performance where it is expected to be, without having SQL Server create a 32k log growth and a 32k VLF.

With this bug there is no error or message that the log was grown by 32k instead of the requested amount. You can check the log growth information by looking at the DBCC LOGINFO output.
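
A quick way to see whether a growth actually produced the large VLFs you asked for, or fell into the 32k bug, is to look at the per-VLF output (the database name below is a placeholder; DBCC LOGINFO is undocumented but widely used for this):

    USE YourDatabase;   -- placeholder database name
    GO
    -- Returns one row per VLF; FileSize is the size of that VLF in bytes.
    -- A healthy 8000 meg growth should show a handful of large VLFs,
    -- not a stray 32k one tacked onto the end.
    DBCC LOGINFO;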

Now don’t forget that you don’t want to auto grow by a large size. If you do, transactions may be killed by client side timeouts while waiting for the log to grow, as the transaction log doesn’t support instant file initialization no matter what. You’ll therefore want to pre-grow the transaction log to the needed size ahead of time, with a smaller auto grow size that your storage can accept quickly. Personally I like to use a 1 gig or 1000 meg log auto grow size for emergency auto grows. This gives me enough log space to work with, without having to wait too long for the storage to write the file.
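
Putting the two answers together, here’s a hedged sketch of pre-growing the log in chunks just under a 4 gig multiple and leaving a modest emergency auto grow; all of the names and sizes are examples, so size them for your own workload:

    -- Pre-grow the log in 8000 meg steps up to the size you expect to need
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 8000MB);
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 16000MB);

    -- Leave a 1000 meg auto grow as a safety net for emergencies only
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 1000MB);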

Denny


July 7, 2011  2:00 PM

How HyperBac worked for me

Denny Cherry

I recently installed HyperBac on a client’s SQL Server to reduce the amount of space needed to store their backups.  Specifically we were looking to reduce the amount of space the log backups took, but I was curious to see how much the full backups would be reduced by as well.

The full backups were ~53 Gigs in size, and the transaction logs averaged about 800 Megs every 12 minutes.  After installing HyperBac, we reduced the full backups down to ~14 Gigs, and the log backups down to about 150 Megs every 12 minutes.  Additionally the full backup went from 57 minutes down to 14 minutes, greatly reducing the amount of time that the production disks have extra IO load on them.

Installation and configuration were very simple.  I just installed the software on the server (thankfully with no reboot required) and changed the backups to use .hbc as the file extension instead of .bak and .trn.
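
Since HyperBac keys off the file extension, the only change to the backup jobs themselves was the target file name.  A minimal sketch, with the database name and path as placeholders:

    -- Full backup written with the .hbc extension so HyperBac compresses it
    BACKUP DATABASE YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_Full.hbc'
    WITH INIT, STATS = 10;

    -- Log backups get the same treatment
    BACKUP LOG YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_Log.hbc'
    WITH INIT, STATS = 10;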

Overall the space used for one day’s worth of backups went from 107 Gigs to 35 Gigs.  This allowed us to save a very large amount of space on the backup drive, removing the need to purchase more storage for that drive for a long time to come.  All in all, well worth it for the few hundred dollar price tag of the software.

Denny


July 5, 2011  2:00 PM

What does WITH (NOLOCK) actually mean

Denny Cherry

Contrary to popular belief, WITH (NOLOCK) isn’t actually the SQL Server “Turbo” button.  While it may appear that way, there’s actually some stuff going on behind the scenes that makes it look like one.  Understanding what is happening is critical to using WITH (NOLOCK) correctly.

When you use the NOLOCK query hint you are telling the storage engine that you want to access the data whether or not the data is locked by another process.  This is why it can make it appear that the query is simply running faster: you are no longer waiting for other processes to complete their writes, you are simply reading whatever is in the buffer pool or on disk at the time that you get to it.  This leads to a problem called dirty reads, meaning that you may not get the same values if you were to run the query again.  This isn’t necessarily a bad thing, just something to be aware of.  Usage of the NOLOCK hint may be just fine in your application, or it may be incredibly bad.
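
For reference, the hint goes on each table you want read without waiting on locks; here’s a minimal sketch against a hypothetical Orders table:

    -- Reads whatever is on the page right now, even if another session
    -- has uncommitted changes in flight (a dirty read)
    SELECT OrderID, CustomerID, OrderTotal
    FROM dbo.Orders WITH (NOLOCK)
    WHERE OrderDate >= '20110701';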

When other users are making changes (insert, updates and deletes all do this) to the data in the tables within the database, they take locks on the pages which they are modifying.  These locks tell the SQL Server that no one else can use these pages until the change has been completed.  Once these locks have been released your select statement can continue until it either finishes or comes to another page which is locked.

When using the NOLOCK hint you ignore the locks and read whatever data is available at the time.  This can cause problems when joining on foreign keys, as you can end up with different data being returned if the foreign key value is updated while your query runs (depending on the execution plan that your SELECT statement uses).

Many people believe that one reason NOLOCK works is because the execution plan changes when you use the NOLOCK hint.  This isn’t true.  The NOLOCK hint isn’t used by the query optimizer, but instead by the data access engine; it isn’t used to change the way that the data is found, it simply changes the way the locks are handled as the data is read from the buffer cache and/or the disk.

Hopefully this will give a little insight into what is happening when you use the “SQL Turbo button”.

Denny


July 1, 2011  2:00 PM

SQL Server Consolidation at #SQLExcursions

Denny Cherry

The fifth and final session that I’ll be presenting at SQL Excursions is on SQL Server Consolidation.  There are lots of ways to consolidate SQL Server databases, and sometimes the wrong method of consolidation is selected.  Knowing all of the options when working on a SQL Server consolidation project is key to a successful project.  In this session we’ll look at how to evaluate which consolidation option is best for different kinds of databases, so that you can maximize your SQL Server infrastructure and minimize consolidation costs without impacting post-consolidation performance.

I hope that the sessions that Tom and I are presenting at SQL Excursions sound interesting and would help you and your employer succeed in your future projects.  Personally I feel that the costs to go to an event like this are minimal compared to the massive savings that a company could realize over time from the information which their DBA would be able to bring back to the office.

If you haven’t signed up yet, now is the time.

See you in Napa September 22nd-24th,

Denny

