SQL Server with Mr. Denny


August 1, 2011  2:00 PM

SQL Server won’t start due to LUNs being unavailable

Denny Cherry

This question came up on ServerFault a while back, and I wanted to expand on the solution a bit more. The basic problem the person was having was that their iSCSI disks were taking a long time to come online, which was causing SQL Server to crash when the server was rebooted.

The solution that I came up with was to change the SQL Server service to make it depend on the disk driver, ensuring that the disks are online before SQL Server attempts to start. Fortunately this is a pretty quick registry change which can fix the problem.

To fix this, open the registry editor (regedt32) and navigate to HKEY_Local_Machine\system\CurrentControlSet\Services\MSSQLSERVER\ and find the DependOnService value (if it isn’t there for some reason, create it just as shown with a type of REG_MULTI_SZ). Edit the DependOnService value and set it to “Disk”. Click OK, close the registry editor and restart the server.
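If you’d rather script the change than edit the registry by hand, the same dependency can be set from an elevated command prompt with sc.exe. This is just a sketch for a default instance (a named instance’s service would be MSSQL$InstanceName), and keep in mind that depend= replaces the entire dependency list, so include anything sc qc already shows:

  REM Make the SQL Server service depend on the Disk driver service
  sc config MSSQLSERVER depend= Disk

  REM Confirm the DEPENDENCIES line now lists Disk
  sc qc MSSQLSERVER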

Once the value has been set, the SQL Server service will no longer start until after the disk driver has been started.

This will work for all versions of SQL Server that are installed on a version of Windows which has the registry (so probably SQL Server 6.5 and up).

As this is a registry change, don’t be making this change on all your servers.  Only tweak the registry if you know what you are doing.  If you kill your system by editing the registry it’s your fault, not mine.  I don’t want to hear about it if you delete a bunch of keys by accident, on purpose, because you didn’t read this correctly, etc.

Denny

August 1, 2011  2:00 PM

Slide decks from Dallas Tech Fest Sessions

Denny Cherry

This last Saturday I spoke at Dallas Tech Fest, where I had two great groups in my two sessions.  As promised here are the slide decks for the two sessions which I presented.

Index Internals

Optimizing SQL Server Performance in a Virtual Environment

I hope everyone had as good a time at the conference as I had.

Denny


July 28, 2011  2:00 PM

HA vs DR, what’s it all mean

Denny Cherry

People often get HA (High Availability) and DR (Disaster Recovery) mixed up.  There are a couple of reasons that this happens.  First is the fact that there aren’t any clear guidelines which separate the two.  There are standard terms which are used to help define HA and DR, but there’s nothing which says this is how you build an HA environment and this is how you build a DR environment.  The reason for this is that every system out there is different, and every system out there has a different set of requirements.

All too often when I talk to people about HA and DR they pick the technology that they want to use before they finish defining the requirements of the HA or DR platform. This presents a problem because after the technology is picked the system is pigeonholed into the solution which has been selected, often without fully defining the RTO and RPO.

The RTO and RPO are the Recovery Time Objective and the Recovery Point Objective. The RTO is the maximum amount of time the business can accept for getting the system back online after a critical system failure. The RPO is the maximum amount of data, measured in time, which the business can afford to lose while bringing the system back online after a critical system failure (an RPO of 10 minutes, for example, means up to 10 minutes of committed work may be lost). Neither of these are numbers which can be defined by anyone in the IT department. Both of these numbers need to be defined by the business unit. If the numbers aren’t defined by the business unit that owns the system, the numbers basically don’t mean anything. The reason I say that is that IT probably doesn’t have a good understanding of the monetary losses from 10 minutes of data loss, or from the system being down for 24 hours while it is being brought back online.

Different situations require different solutions. Not every solution in a single shop needs to be the same. Your most important systems should have one kind of solution, with a low RTO and RPO, while the less important systems have a higher RTO, and possibly a higher RPO. The companies that try to build a single HA solution (and/or a single DR solution) are the companies who are destined to have their HA, and specifically their DR, solutions fail, usually in a fantastic blaze of glory.

When looking at your HA and DR solutions don’t look just within SQL Server. There are a variety of other technology solutions which can be used when designing HA and especially DR solutions. This is especially true in the storage space when it comes to data replication. Look at your vendor solutions as well as solutions from third party providers. While there aren’t many third party data replication solutions, there are some out there that can be leveraged, such as EMC’s RecoverPoint appliances. But like all the solutions which are available, these aren’t an end-all-be-all solution either. They, like all the options which are available, should be used where they make sense and not everywhere just because they were purchased.

Microsoft’s new feature in SQL Server “Denali” called “AlwaysOn” (aka HADR, HADRON), while marketed as a total HA/DR solution for SQL Server databases, is a pretty good looking solution. However I don’t think that it’ll be the end-all-be-all solution. It’s going to have limitations that have to be worked around, just like any software based solution is going to. It will, however, make for a powerful tool in the toolbox which is available to us as IT administrators.

When it comes time to design your company’s HA and DR strategy don’t get locked into thinking about one specific technology. Look at all the options that are available to you, and learn about them so that you can make the decision to implement the correct solution for the specific task at hand. For one system you might use database mirroring for your HA platform and log shipping for your DR platform. For another you might use clustering for your HA platform and mirroring for your DR platform. For another you might use clustering for both your HA and DR platforms. It really and truly all depends on the needs of the system which you are designing for.

The big argument that I hear from companies that have a single HA solution and a single DR solution is that because there is only one solution it is much easier to train staff on how to manage that one platform. And that is certainly true; teaching someone one thing is much easier than teaching them 5 things. However your IT staff isn’t a group of monkeys working from memory when working on these systems (if they are, please send pictures). When DR tests are done and when systems are failed over to DR, specific run books are used to ensure that everything comes online correctly and in the correct order. So if everything is going to be laid out in a run book anyway, why not have a few different technologies in use when it makes sense?

The whole point of this, I suppose, is don’t get locked into a single solution. Use the right tool for the right job, even if it takes a little more time, or a little more money, to get set up. In the long run using the right tool for the right job will make keeping your database applications online a much easier process.

Denny


July 25, 2011  2:00 PM

Any testing on the effect of Hiding NX/XD flag from guest with SQL Server on ESX?

Denny Cherry

During the webcast which I did for the SQL PASS Virtualization Virtual Chapter on July 13th, 2011 a question came up that I didn’t have the answer for. The question was whether I had done “any testing on the effect of hiding the NX/XD flag from the guest with SQL Server on ESX”. I hadn’t done any testing of this, so I kicked the question over to a friend of mine in the VMware world, Gabrie van Zanten (blog | @gabvirtualworld). Gabrie hasn’t done any testing of the performance effects of hiding this flag either, but he told me that he hasn’t heard of any performance problems with this setting that would change how you would use it for a SQL Server guest specifically.

Denny


July 19, 2011  2:00 PM

Want to try out Windows Azure for free for 30 days?

Denny Cherry

If you’ve been wanting to try out Windows Azure and haven’t been able to yet, now is the time. I’ve been given an access code that you can use which will give you 30 days of full Windows Azure and SQL Azure for free. And the best part is there is no credit card required. Just drop in the code at sign up, give the thing a test drive, and see if it’ll work for your needs.

  1. Go to http://www.windowsazurepass.com
  2. Enter the country you live in
  3. Enter the code “DENNYCHERRY”

That’s all there is to it.

Your free 30 day Windows Azure account includes:

Windows Azure

  • 3 Small Compute Instances
  • 3 GB of Storage
  • 250,000 Storage Transactions

SQL Azure

  • Two 1 GB Web Edition Databases

AppFabric

  • 100,000 Access Control Transactions
  • 2 Service Bus Connections

Data Transfers

  • 3 GB In
  • 3 GB Out

Denny

P.S. The site to sign up kind of sucks. After selecting your country and entering the code you’ll be asked to sign in to Microsoft Live to set up your Azure account. This screen doesn’t look like it worked correctly, but it did. Sign in and it’ll activate your free 30 day trial.

Keep in mind that you can only use the 30 day Azure trial once per Live account, so if you’ve already tried it with a different account you’ll need to create a new Live account.


July 18, 2011  2:00 PM

Interview with our new whipping boy Adrian Bethune

Denny Cherry

While at Tech Ed 2011 I spent a good amount of time talking with Adrian Bethune (@canon_sense) who is the new product manager for SQL Server Manageability, originally hired onto the team by the magnificent Dan Jones (blog | twitter) who was smart enough to run screaming from the team towards an awesome new position at Microsoft.  Adrian was crazy nice enough to take some time and sit down with me for a bit for an interview so that he could be introduced to the community at large (and so that everyone knows where to throw their rotten fruits and vegetables).

[Denny] As someone I’ve met a couple of times now, I know a little about your history at Microsoft, but the good folks of the SQL community don’t know much about you, as you’ve done a pretty good job staying out of the public eye, until now. Can you tell us a little about your life at Microsoft and how you got here?

[Adrian] I finished my CS degree at University of Illinois [UIUC] in 2007 and came to work in the build and test infrastructure team here in SQL Server for a few years to get some experience building some enterprise scale services and applications that get deployed and used right away. In the infrastructure team I worked on the test and build automation systems that pump millions of tests on hundreds of builds every day. The coolest projects I worked on included designing, deploying, and migrating to a next-gen distributed build system as well as automated storage management and provisioning services that work on top of high-end hardware. As Microsoft’s SQL Server strategy shifted to include the cloud and to focus on reducing the cost of developing and maintaining SQL Server, I saw a great opportunity in the SQL Manageability team to get into the thick of it, so I joined the team last June.

[Denny] DACPAC has a pretty sordid history, with the v1 release being looked upon less than favorably (it may have been compared to a steaming pile of something, or Windows ME, etc.). What brought you to the team and made you want to take this project on?

[Adrian] While the first implementation did leave something to be desired as you subtly point out, four important points drew me to this area. First, the entire DAC concept is new and is therefore more of a startup environment than the typical monolithic product development team where you get pigeon-holed very quickly. Right out of the gate we were heads-down on shipping a major feature as soon as possible – in-place upgrades – in VS 2010 SP1 and SQL 2008 R2 SP1. Second, the concept of DAC and the services it provides is appealing even if the first implementation is not ideal. The way I see it, DB developers have become accustomed to having to develop on this cumbersome stateful beast by continuously executing differential scripts which modify the state of their application (schema). With the push towards minimizing the cost of managing databases, developers and DBAs need serious tools that help reduce the burden of managing and deploying databases so they can focus on working on real innovation. DAC has the potential to become one of the key pillars in the drive to drop costs. Third, the engineering team behind DAC is staffed with some top development and test talent. The DAC team is a serious engineering team that has a passion for demonstrable quality and the drive to push multiple agile releases therefore it’s a fun and exhilarating team to work with. Over the next few months you’ll see some exciting announcements and developments in the DAC space, both with and for a multitude of partners and products within Microsoft as well as integration into the tooling and services for SQL Azure. Lastly, the partnerships and engagements within SQL have been fantastic. DAC is not just a SQL Manageability initiative, it’s a SQL initiative with some great work from the Engine team on providing a set of compilation services to validate the DAC model as well as moving the needle towards containing the database. Together with the Engine team we will provide a symmetrical model for a database (application) in the runtime environment (contained databases) and the logical definition (DAC – dacpac). Check out the DAC/CDB session from TechEd for more info on the roadmap – http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DBI306. In the session you’ll see how the Engine and DAC teams are working towards a common vision to make developing and managing cheaper.

[Denny] From what you’re saying it sounds like some Microsoft products will begin using DAC and DACPAC to deploy applications. Does this include customer shipped applications such as Dynamics and SCOM or just internal applications? (We can edit this to not list any specific products if needed.)

[Adrian] Besides several internal teams picking up and using DAC services for their own purposes, shipping products will also be integrating with it. Publicly, System Center Virtual Machine Manager 2012 has already shipped a beta with DAC integration. At TechEd, the AppFabric team announced their new composite application model initiative/project which also integrates DAC as the data-tier component for applications. Expect to see more products integrate DACFx in the coming months. That’s all I can say for now.

[Denny] If people have feedback on Data Tier Applications (DAC) or DACPAC what’s the best way to get that to the team?

[Adrian] The broader community can always engage us with connect bugs or on the MSDN forums but for MVPs and folks I interact with, feel free to shoot me a personal mail.

[Denny] Knowing the abuse that we’ve given our good friend Dan Jones (a.k.a. DACPAC Dan) did that make you hesitant to take on DAC and DACPAC?

[Adrian] Sure, it would give any reasonable person pause. However, my own personal estimation of the potential value of DAC, and the chance that we could align with our partner teams in the Engine, Juneau, and Visual Studio to provide a single surface area for development which enables some key management features, trumped my reservations. While I can’t disclose much more than we talked about at TechEd, I can say that the reality has met the potential and it’s exciting to see how the future is shaping up.

[Denny] So when you aren’t being abused by the MVPs, and you are permitted to actually leave the confines of building 35, how do you fill the 10 minutes of daily free time that Steve Ballmer allocates to you?

[Adrian] From time to time they do let us out but only enough so people don’t file missing persons reports. In my spare time I hang out with the wife, dabble with gadgetry, swim, read quite a bit (Sci-Fi typically) and follow economic and political news and trends.

[Denny] Are there any other projects that you are working on that you can share with us that’ll be included in the SQL Server “Denali” release or maybe even earlier?

[Adrian] After ramping up, I spent the latter half of last year working on shipping DAC v1.1 that includes in-place upgrades as soon as possible, which means we actually shipped in Visual Studio 2010 SP1 and will ship in SQL Server 2008 R2 SP1 (CTP available today). Once we shipped 1.1, I worked on getting the import/export services up and running and we shipped a CTP currently available on www.sqlazurelabs.com which you may have seen at TechEd. In parallel, I am working on an import/export service for SQL Azure which will provide import/export as a service (rather than from client side tools) that will import or export to/from Azure BLOB storage without the need for client side tools. Apart from that, I’ve been very busy working on partnership engagements within Microsoft because DAC provides a nice and cheap way for other product teams to operate on and with SQL Server and SQL Azure.

[Denny] I’m interested in this Azure Import/Export utility. The BLOB storage that this will integrate with (keeping in mind that I don’t know much about Azure besides the SQL Azure part), how would one get a file uploaded to that automagically? Can you FTP files to it, or is there an API which has to be used, etc?

[Adrian] There is an API you can use, however, there are quite a few tools which will synchronize folders between your client machine and your BLOB storage account. That’s the easiest way to get your files into the cloud. I won’t mention any specific tools broadly to avoid favoritism/politics, however a quick search for “azure storage tools” is a good starting point. Keep in mind that the only time you need to transfer the import/export artifact – a BACPAC – between your client and the cloud is when you want to migrate or move your database between the cloud and on-prem environments. Otherwise, you can just keep your files in the cloud in your BLOB account and use our services to operate over them. Sounds like a good topic to cover in a session…

[Denny] If v2 of DACPAC blows up, would you prefer to be slow roasted over gas or open coals?

[Adrian] That depends. Is the purpose to inflict pain or are you of the cannibalistic persuasion? Honestly, as MVPs are some of the most seasoned SQL consumers, we’d love to hear your feedback on the new upgrade engine as well as the overall application lifecycle experience that DAC enables. We are a nimble team, and there’s a great opportunity to incorporate fixes for our services in the Denali release. Unfortunately, because we were so focused on DAC 1.1, we didn’t have enough time to deliver a lot of DAC value in Denali CTP1; however, CTP3 coming this summer will be fully DACified and include all the latest and greatest, including SQL Engine validation, in-place upgrades, and full support for SQL Azure application scoped objects including permissions and roles!

[Denny] It is pretty clear that DAC and DACPAC are geared mostly towards SQL Azure, as they support the current SQL Azure feature set. Can you tell us a bit about why the decision was made to push DAC and DACPAC as being an on premise solution instead of keeping the focus for it on SQL Azure until it was ready to support a fuller on premise feature set?

[Adrian] Fantastic question. The reason it was positioned as an on-premise solution is because the SQL Azure story was still being written. If you rewind back to the days 2008 R2 was working towards release, SQL Azure started out with this simple DB concept and was then reset to provide full relational features. At that time, we really weren’t sure if we wanted to dock the DAC roadmap to Azure because the SQL Azure story was in flux. So the fallback position was to tie the DAC story to the box product because we weren’t able to really commit to a direction for DAC and Azure. Since then, we’ve been straightening the story in a big way with partners and at TechEd.

[Denny] When we were hanging out at Tech Ed 2011 you seemed like you wanted to become more involved in the community. Did I guess this one right? Will you be joining us at events like PASS and the MVP Summit for some “learnin’ and camaraderie”?

[Adrian] Yes, I certainly hope to join you at PASS and have another couple of sessions at the next MVP Summit, but I don’t know with certainty yet.

[Denny] The most important question: would you prefer to be known as “DACPAC Adrian” or “DACPAC Dan 2.0”?

[Adrian] The former. There’s already a “DACPAC Dan 1.0” and we haven’t tested any side by side or upgrade scenarios. :)

I’d like to thank Adrian for being a sucker good sport and agreeing to sit down with me, even knowing the beatings that I’ve given Dan over DACPAC v1.  I hope that everyone enjoyed reading this interview as much as I enjoyed talking with Adrian.

All joking aside Adrian is a great guy, and a lot of fun to hang out with, and he’s got some great vision for Data Tier Applications and DACPAC.  I just hope he’s able to pull off what he’s got planned.  If not, we’ll be having a BBQ at the next MVP summit and Adrian will be the “guest of honor”.

Denny


July 14, 2011  6:33 PM

Updated SSMS Shortcut Key file

Denny Cherry

I’ve updated the SQL Server “Denali” settings file to include the CTRL+E and CTRL+X hot keys.  You can find the new file in the same location as the old one so you only need to download it once.

Denny


July 14, 2011  2:56 AM

Resetting SSMS shortcut keys in SQL Server “Denali” CTP 3

Denny Cherry

If you did what I did when you installed CTP 3 of SQL Server “Denali” (which was to uninstall CTP 1 and then install CTP 3) you’ll be in for a rude surprise: a lot of the shortcut keys that you’ve been using for years don’t work any more. This is, to say the least, VERY annoying, and shouldn’t have ever happened. If you upgrade instead you won’t have this problem, as the upgrade will keep your old shortcut key settings.

It isn’t all that hard to fix. Provided that you know what each and every hot key is supposed to be, you can go into Tools > Options, then Environment > Keyboard > General, and reset all the keys that are hosed up.

Or you can download the settings file that I’ve put together that has (I think) all the screwed up hot keys corrected. Save this file on your computer somewhere and uncompress it, then click on Tools > Import and Export Settings. Run through the wizard (be sure to save your current settings in case it really screws something up) then import my file. Now all your shortcut keys “should” work as expected.

If you find a shortcut key that I missed please put it in the comments below, or let me know in some other way and I’ll get the file updated.

Denny

P.S. I’ve sent a nasty note to some of my SQL Server development team contacts.  I’ve also submitted a connect item so that feedback can be publicly collected.


July 12, 2011  2:00 PM

Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’.

Denny Cherry

So I hit the error in the title the other night while converting a heap to a clustered index on a production system. After this error popped up, some other nasty stuff showed up in the error log, which I’ve included below.

Error: 9002, Severity: 17, State: 4.
The transaction log for database ‘%d’ is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
Database %d was shutdown due to error 9002 in routine ‘HeapPageRef::Update’. Restart for non-snapshot databases will be attempted after all connections to the database are aborted.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 5.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 4.
During undoing of a logged operation in database %d, an error occurred at log record ID (99:4216:617). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

Error: 9001, Severity: 21, State: 1.
The log for database %d is not available. Check the event log for related error messages. Resolve any errors and restart the database.

Error: 3314, Severity: 21, State: 5.
During undoing of a logged operation in database %d, an error occurred at log record ID (87:9933:1). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

At this point crash recovery ran and the database started back up.

The first error is that the log filled up, followed by the nasty part.

The good part is that this is going to be a pretty rare problem to run into. This will only happen on update operations against a heap, or when adding a clustered index to a heap, and only when you run out of transaction log space. The reason that this can happen is that updates against a heap, and clustered index operations, are a two phase operation. First a delete of the old rows is done, then an insert of the new rows is done. When the insert of the new rows is started an internal flag is set which states that the insert operation is a “must complete” operation. When an unexpected error happens between the delete operation and the subsequent insert operation, the database is taken offline as a precautionary measure to ensure consistency and prevent concurrent operations against the records that were being modified.

The downside to this problem is that it can show its ugly face in SQL Server 2008 and SQL Server 2008 R2, and there is no fix available for it in either version. The only way to avoid the problem is to change the table from a heap to a clustered index, or to add more transaction log space.
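Since only heaps are exposed to this, it’s worth knowing which tables in a database are still heaps. A quick way to list them (just a sketch, run in the database in question) is:

  -- List user tables that are still heaps (index_id = 0 means no clustered index)
  SELECT s.name AS schema_name, t.name AS table_name
  FROM sys.tables t
  JOIN sys.schemas s ON s.schema_id = t.schema_id
  JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id = 0;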

I have been told that this has been fixed in SQL Server “Denali” as “Denali” will handle the heap updates differently than prior versions.  If you have SQL Server 2005 you shouldn’t see this problem.

In my case the problem was solved by letting the database come back online and then creating the clustered index. My database was in simple recovery mode, so after crash recovery the log was marked for reuse automatically, meaning that there was enough room in the transaction log for the create clustered index operation to complete.
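If you want to confirm that the log has freed up before retrying the index build, something along these lines works (the database, table and column names here are just placeholders):

  -- Check why the log cannot be reused, and how full it currently is
  SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDatabase';
  DBCC SQLPERF(LOGSPACE);

  -- Then retry the conversion from heap to clustered index
  CREATE CLUSTERED INDEX IX_MyTable_Id ON dbo.MyTable (Id);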

If you run into this problem, hopefully this helps explain what is happening and why.

Denny


July 11, 2011  2:00 PM

Transaction log growth best practices

Denny Cherry

There is a lot of talk online about transaction log growth and what you should do. There are two answers to this question, and in this post I’ll give you both answers.

The theoretical answer is that you want to grow your transaction logs in 8 gig chunks. This maximizes the amount of space that makes up each VLF within the transaction log file, which allows SQL Server to minimize the number of VLFs that are created, which is a performance improvement. Unfortunately there is a problem with this, which leads us to the practical answer…

The practical answer is that there is a bug in SQL Server which causes the database to grow the transaction log by 32k the first time you issue a growth which is a multiple of 4 gigs (which 8 gigs obviously is). Because of this bug, which has not yet been fixed in any released version of SQL Server, it is recommended to grow the transaction log by a slightly smaller number such as 8000 megs at a time. The VLFs will be almost as large, keeping log performance where it is expected to be, without having SQL Server create a 32k log growth and a 32k VLF.

With this bug there is no error or message that the log was grown by 32k instead of the requested amount. You can check the log growth information by looking at the DBCC LOGINFO output.
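For example, growing a log out in steps of just under 8 gigs and then checking the VLF layout might look like this (the database and log file names are placeholders):

  -- Grow the log in two 8000 MB steps instead of one big jump
  -- (MODIFY FILE takes the new total size, so each statement is one growth)
  ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
  ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 16000MB);

  -- DBCC LOGINFO returns one row per VLF; fewer, larger VLFs is the goal
  DBCC LOGINFO;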

Now don’t forget that you don’t want to auto grow by this large a size. If you do, transactions may be killed by client side timeouts while waiting for the log to grow, as the transaction log doesn’t support instant file initialization no matter what. You’ll therefore want to pre-grow the transaction log to the needed size ahead of time, with a smaller auto grow size that your storage can handle quickly. Personally I like to use a 1 gig or 1000 meg auto grow size for emergency auto grows. This gives me enough log space to work with, without having to wait too long for the storage to write out the file.
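Setting that smaller emergency auto grow size is a one liner (again, the names are placeholders):

  -- Pre-size the log ahead of time, then leave a modest 1000 MB emergency auto grow
  ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 1000MB);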

Denny

