Have you been using SQL Server “Denali” and want to get your voice heard? Now’s your chance. TechTarget is looking for SQL Server “Denali” users (either CTP 1 or CTP 3) to interview for an article they’re working on about the release.
To get your voice heard, contact Jason Sparapani (jsparapani AT techtarget DOT com) and he’ll take care of you.
That’s right, SQL People: the .NET folks at the NETDA user group in Redmond, WA have invited me to come and speak at their user group meeting on Monday, August 8th, 2011 in the Building 41 cafeteria on the Microsoft campus.
I’ll be giving a session which I’ve titled “Getting Back to the Basics of SQL Server”, where we’ll go back over the basics of accessing data within SQL Server. The slide deck isn’t very exciting, so I’ll get it posted after the meeting along with the sample code that I’ll be showing. The topics I’ll be talking about will be useful no matter which version of SQL Server you are working with, from SQL Server 6.5 to SQL Server “Denali” (to be fair, some of the stuff is SQL Server 2008+).
The doors open at 6:30pm and the free pizza starts at 6:45pm; I’ll be starting at about 7:10pm. Once we’re done, some of us may go out for beers afterwards (at least us out-of-town folks).
Hopefully I’ll see you there,
I’m pleased to say that I’m one of the great speakers who will be presenting at Dallas Tech Fest on August 12th and 13th at the University of Texas at Dallas. Sadly I won’t be able to join everyone on the 12th, as I’ve got commitments in Redmond until 5:30pm Pacific Time on Friday, but I’m taking a flight that night so I can see everyone on Saturday and present my two sessions.
Currently I’m presenting “Indexing Internals” at 12:45pm on Saturday, and then at 2:15pm I’ll be presenting “Optimizing SQL Server Performance in a Virtual Environment”. The schedule is subject to change, but hopefully it won’t, as I really like those time slots. Tickets for the two-day conference are just $100 and are on sale through August 10th. Just go to the Dallas Tech Fest home page and you’ll see the Eventbrite order form right there in the middle of the page. It should be an awesome event; I can’t wait to speak there for the first time.
Anyway, I can’t wait to see everyone; I’ll be there until Sunday afternoon, when I have to head back to DFW to fly home.
See you there,
Denny Cherry here, your PASS party planner. The PASS Summit is getting closer and closer, just 12 weeks away or so. You’ve probably got the basics figured out: whether you’re going, which hotel you’ll be at, getting your flights set up, and whether you’re joining the pre-PASS fun at SQL Saturday 92 in Oregon the week before.
However, there is so much more to the SQL PASS experience than just going to sessions at the conference. While the official conference day ends at 5 or so, that is just the beginning of the day for the seasoned conference attendee. You should be spending about 5-10 minutes in your hotel room between the time the sessions end and when you go to sleep (which is hopefully around 2-3 in the morning); any more than this and you are doing something wrong (in my opinion). The reason I cap this at about 10 minutes is that there are lots of people to see and talk to, and plenty of parties to attend.
Every year I hear from people that they didn’t know what to do after hours, or where to meet up with people, so I’ve taken it upon myself to fix this. If you are spending time in your hotel room doing anything besides sleeping or fixing some emergency production problem (god knows these can happen), stop it.
Some nights there are official parties like the Welcome reception on Tuesday night, the exhibitor reception on Wednesday night, and the Microsoft Customer Appreciation Party (which I would assume will be Thursday).
There will also be some unofficial things to look forward to. For the shutterbugs, there’s the photo walk, usually on Monday, headed up by Pat Wright (blog | @sqlasylum), which is a great time and a great way to meet some new people. The photo walk started a few years ago with just a couple of people and has turned into a pretty large group that goes walking around town just taking pictures and chatting. You sure don’t need to be a great photographer to go; I’ve been, and most of my pictures are awful (check out my Facebook photo page (soul-sucking registration required) for reference material). Sadly I didn’t get to go last year as I was busy presenting a pre-con. This year hopefully I’ll be able to attend with my point-and-shoot all ready to go (this will be an upgrade for me; two years ago I just used my cell phone camera).
The big party event of the week is also the most unofficial party of the week: the karaoke party at Bush Garden (no, not Busch Gardens the theme park; Bush Garden the karaoke bar in the International District). SQL Karaoke will be Wednesday night, starting at 9:30pm and running until 2am, unless the fire marshal shuts us down early, which hasn’t happened, yet. We start at 9:30pm because that’s when the DJ shows up. The bar opens before that, so feel free to show up and grab a table. Tables are hard to find as the place is a little small, so if you plan on sitting, get there early. For those that don’t like to sing, don’t worry; singing isn’t required to attend SQL Karaoke, just the desire to have a good time and watch some people make fools of themselves. There have been a few videos taken and posted online over the years. If you want to get a general idea of the insanity check them out here, here, here, here, here (this is our favorite DJ singing our favorite song), and here (do you get the idea that we really like SQL Karaoke?). If you don’t come to check out the singing, at least come to check out the couch in the men’s room (I’m not kidding).
When all else fails, there’s always a default place at PASS to find some SQL people to hang out with: the Tap House Grill on the corner of 6th and Pike, just across from the Sheraton hotel (about a 1/4 block down 6th). The Tap House is always popular with the SQL folks thanks to its location less than 2 blocks from the Seattle convention center, and less than a block from the Sheraton, where a good portion of the PASS attendees will be staying (including myself).
I can’t tell you how much I recommend that you check out these after events, along with the other ones which will be put together as we get closer to the PASS Summit. There are also a number of private parties which happen on various nights at PASS, some hosted by Microsoft, some by vendors like Quest, Red Gate, SQL Sentry, etc., which you’ll need an invite to attend. All of the public events will be published on the passsummitevents.info website (there isn’t much up there yet for the 2011 summit, so if you are planning something be sure to get it posted), so be sure to check there as we get closer to PASS to see what parties you can fit into your schedule. Joseph Guadagno (blog | @jguadagno), who set up the passsummitevents.info site for us, was nice enough to put together a blog post on his site on how to use the site and how to set up events. There’s even an iPhone app for the site. I highly recommend hitting at least one or two of the after events and meeting some people. Everyone is welcome; even the locals who can’t attend the summit can join in the after parties.
I’ll see you somewhere at the PASS Summit. I’ll be the guy running around with a lampshade on his head and a shot of Jäger in his hand.
This question came up on ServerFault a while back, and I wanted to expand on the solution a bit more. The basic problem the poster was having was that their iSCSI disks were taking a long time to come online, which was causing SQL Server to crash when the server was rebooted.
The solution I came up with was to change the SQL Server service to make it dependent on the disk drivers, ensuring the disk drivers are online before SQL Server attempts to start. Fortunately this is a pretty quick registry change.
To fix this, open the registry editor (regedt32) and navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSSQLSERVER\ and find the DependOnService value (if it isn’t there for some reason, create it just as shown with a type of REG_MULTI_SZ). Edit the DependOnService value and set it to “Disk”. Click OK, close the registry editor, and restart the server.
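If you’d rather script the change than click through the registry editor, here’s a minimal sketch using Python’s winreg module (my illustration of the same change, not part of the original fix); it assumes the default instance’s service name of MSSQLSERVER and that you run it from an elevated prompt.

```python
import winreg

# Service key for a default SQL Server instance; a named instance lives under
# MSSQL$InstanceName instead, so adjust the path to match your server.
SERVICE_KEY = r"SYSTEM\CurrentControlSet\Services\MSSQLSERVER"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICE_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # DependOnService is a REG_MULTI_SZ, so the value is a list of strings.
    # Note: this overwrites any existing dependencies; if the value already
    # exists, read it first and append "Disk" rather than replacing it.
    winreg.SetValueEx(key, "DependOnService", 0, winreg.REG_MULTI_SZ, ["Disk"])
```

You can double check that the dependency took by running sc qc MSSQLSERVER from a command prompt and looking at the DEPENDENCIES line.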
Once the value has been set, the SQL Server service will no longer start until after the disk drivers have been started.
This will work for all versions of SQL Server that are installed on a version of Windows which has the registry (so probably SQL Server 6.5 and up).
As this is a registry change, don’t go making this change on all of your servers. Only tweak the registry if you know what you are doing. If you kill your system by editing the registry it’s your fault, not mine. I don’t want to hear about it if you delete a bunch of keys by accident, on purpose, because you didn’t read this correctly, etc.
This last Saturday I spoke at Dallas Tech Fest, where I had two great groups in my two sessions. As promised, here are the slide decks for the two sessions which I presented.
I hope everyone had as good a time at the conference as I had.
People often get HA (High Availability) and DR (Disaster Recovery) mixed up. There are a couple of reasons that this happens. First is the fact that there aren’t any clear guidelines which separate the two. There are standard terms which are used to help define HA and DR, but there’s nothing which says this is how you build an HA environment and this is how you build a DR environment. The reason for this is that every system out there is different, and every system out there has a different set of requirements.
All too often when I talk to people about HA and DR, they pick the technology they want to use before they finish defining the requirements of the HA or DR platform. This presents a problem because after the technology is picked, the system is pigeonholed into the solution which has been selected, often without the RTO and RPO ever being fully defined.
The RTO and RPO are the Recovery Time Objective and the Recovery Point Objective. The RTO is the amount of time it takes to get the system back online after a critical system failure. The RPO is the amount of data which can be lost while bringing the system back online after a critical system failure. For example, an RPO of 15 minutes means the business has accepted losing up to 15 minutes of data in a failure, which in turn dictates things like how often transaction log backups have to run. Neither of these numbers can be defined by anyone in the IT department; they need to be defined by the business unit. If the numbers aren’t defined by the business unit that owns the system, the numbers basically don’t mean anything. The reason I say this is that IT probably doesn’t have a good understanding of the monetary loss from 10 minutes of data loss, or from the system being down for 24 hours while it is brought back online.
Different situations require different solutions. Not every solution in a single shop needs to be the same. Your most important systems should have one kind of solution, with a low RTO and RPO, while the less important systems have a higher RTO, and possibly a higher RPO. The companies that try to build a single HA solution (and/or a single DR solution) are the companies destined to have their HA, and especially their DR, solutions fail, usually in a fantastic blaze of glory.
When looking at your HA and DR solutions, don’t look just within SQL Server. There are a variety of other technology solutions which can be used when designing HA and especially DR solutions. This is especially true in the storage space when it comes to data replication. Look at your vendor’s solutions as well as solutions from third party providers. While there aren’t many third party data replication solutions, there are some out there that can be leveraged, such as EMC’s RecoverPoint appliances. But like all of the solutions which are available, these aren’t an end-all-be-all solution either. They, like all of the options available, should be used where they make sense, and not everywhere just because they were purchased.
Microsoft’s new feature in SQL Server “Denali” called “Always On” (a.k.a. HADR, HADRON), while marketed as a total HA/DR solution for SQL Server databases, is a pretty good-looking solution. However, I don’t think it’ll be the end-all-be-all solution. It’s going to have limitations that have to be worked around, just like any software-based solution. It will, however, make for a powerful tool in the toolbox available to us as IT administrators.
When it comes time to design your company’s HA and DR strategy, don’t get locked into thinking about one specific technology. Look at all of the options available to you, and learn about them so that you can implement the correct solution for the specific task at hand. For one system you might use database mirroring for your HA platform and log shipping for your DR platform. For another you might use clustering for your HA platform and mirroring for your DR platform. For yet another you might use clustering for both your HA and DR platforms. It really and truly all depends on the needs of the system you are designing for.
The big argument I hear from companies that have a single HA solution and a single DR solution is that, because there is only one solution, it is much easier to train staff on how to manage that one platform. And that is certainly true; teaching someone one thing is much easier than teaching them five things. However, your IT staff isn’t a group of monkeys working from memory when working on these systems (if they are, please send pictures). When DR tests are done, and when systems are failed over to DR, specific run books are used to ensure that everything comes online correctly and in the correct order. So if everything is going to be laid out in a run book anyway, why not have a few different technologies in use when it makes sense?
The whole point of this, I suppose, is don’t get locked into a single solution. Use the right tool for the right job, even if it takes a little more time, or a little more money, to get set up. In the long run, using the right tool for the right job will make keeping your database applications online a much easier process.
During the webcast I did for the SQL PASS Virtualization Virtual Chapter on July 13th, 2011, a question came up that I didn’t have the answer to. The question was whether I had done “any testing on the effect of hiding the NX/XD flag from the guest with SQL Server on ESX”. I hadn’t done any testing of this, so I kicked the question over to a friend of mine in the VMware world, Gabrie van Zanten (blog | @gabvirtualworld). Gabrie hasn’t done any testing of the performance effects of hiding this flag either, but he told me that he hasn’t heard of any performance problems with this setting that would change its general use for a SQL Server specifically.
If you’ve been wanting to try out Windows Azure and haven’t been able to yet, now is the time. I’ve been given an access code that will give you 30 days of full Windows / SQL Azure for free. And the best part is there is no credit card required. Just drop in the code at sign-up, give the thing a test drive, and see if it’ll work for your needs.
- Go to http://www.windowsazurepass.com
- Enter the country you live in
- Enter the code “DENNYCHERRY”
That’s all there is to it.
Your free 30 day Windows Azure account includes:
- 3 Small Compute Instances
- 3 GB of Storage
- 250,000 Storage Transactions
- Two 1 GB Web Edition Databases
- 100,000 Access Control Transactions
- 2 Service Bus Connections
- 3 GB In
- 3 GB Out
P.S. The site to sign up kind of sucks. After selecting your country and entering the code, you’ll be asked to sign into Microsoft Live to set up your Azure account. This screen doesn’t look like it worked correctly, but it did. Sign in and it’ll activate your free 30 day trial.
Keep in mind that you can only use the 30 day Azure trial once per Live account, so if you’ve already tried it with a different account you’ll need to create a new Live account.
While at TechEd 2011 I spent a good amount of time talking with Adrian Bethune (@canon_sense), the new product manager for SQL Server Manageability, originally hired onto the team by the magnificent Dan Jones (blog | twitter), who was smart enough to run screaming from the team towards an awesome new position at Microsoft. Adrian was crazy enough (and nice enough) to take some time and sit down with me for a bit for an interview, so that he could be introduced to the community at large (and so that everyone knows where to throw their rotten fruits and vegetables).
[Denny] As someone I’ve met a couple of times now, I know a little about your history at Microsoft, but the good folks of the SQL community don’t know much about you, as you’ve done a pretty good job of staying out of the public eye, until now. Can you tell us a little about your life at Microsoft and how you got here?
[Adrian] I finished my CS degree at the University of Illinois [UIUC] in 2007 and came to work in the build and test infrastructure team here in SQL Server for a few years, to get some experience building enterprise-scale services and applications that get deployed and used right away. In the infrastructure team I worked on the test and build automation systems that pump millions of tests on hundreds of builds every day. The coolest projects I worked on included designing, deploying, and migrating to a next-gen distributed build system, as well as automated storage management and provisioning services that work on top of high-end hardware. As Microsoft’s SQL Server strategy shifted to include the cloud and focus on reducing the cost of developing and maintaining SQL Server, I saw a great opportunity in the SQL Manageability team to get into the thick of it, so I joined the team last June.
[Denny] DACPAC has a pretty sordid history, with the v1 release being looked upon less than favorably (it may have been compared to a steaming pile of something, or Windows ME, etc.). What brought you to the team and made you want to take this project on?
[Adrian] While the first implementation did leave something to be desired, as you subtly point out, four important points drew me to this area. First, the entire DAC concept is new, and is therefore more of a startup environment than the typical monolithic product development team where you get pigeon-holed very quickly. Right out of the gate we were heads-down on shipping a major feature as soon as possible – in-place upgrades – in VS 2010 SP1 and SQL 2008 R2 SP1. Second, the concept of DAC and the services it provides is appealing even if the first implementation is not ideal. The way I see it, DB developers have become accustomed to having to develop on this cumbersome stateful beast by continuously executing differential scripts which modify the state of their application (schema). With the push towards minimizing the cost of managing databases, developers and DBAs need serious tools that help reduce the burden of managing and deploying databases so they can focus on real innovation. DAC has the potential to become one of the key pillars in the drive to drop costs. Third, the engineering team behind DAC is staffed with some top development and test talent. The DAC team is a serious engineering team with a passion for demonstrable quality and the drive to push multiple agile releases, so it’s a fun and exhilarating team to work with. Over the next few months you’ll see some exciting announcements and developments in the DAC space, both with and for a multitude of partners and products within Microsoft, as well as integration into the tooling and services for SQL Azure. Lastly, the partnerships and engagements within SQL have been fantastic. DAC is not just a SQL Manageability initiative; it’s a SQL initiative, with some great work from the Engine team on providing a set of compilation services to validate the DAC model as well as moving the needle towards containing the database. Together with the Engine team we will provide a symmetrical model for a database (application) in the runtime environment (contained databases) and the logical definition (DAC – dacpac). Check out the DAC/CDB session from TechEd for more info on the roadmap – http://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DBI306. In the session you’ll see how the Engine and DAC teams are working towards a common vision to make developing and managing databases cheaper.
[Denny] From what you’re saying it sounds like some Microsoft products will begin using DAC and DACPAC to deploy applications. Does this include customer-shipped applications such as Dynamics and SCOM, or just internal applications? (We can edit this to not list any specific products if needed.)
[Adrian] Besides several internal teams picking up and using DAC services for their own purposes, shipping products will also be integrating with it. Publicly, System Center Virtual Machine Manager 2012 has already shipped a beta with DAC integration. At TechEd, the AppFabric team announced their new composite application model initiative/project, which also integrates DAC as the data-tier component for applications. Expect to see more products integrate DACFx in the coming months. That’s all I can say for now.
[Denny] If people have feedback on Data Tier Applications (DAC) or DACPAC what’s the best way to get that to the team?
[Adrian] The broader community can always engage us with Connect bugs or on the MSDN forums, but for MVPs and folks I interact with, feel free to shoot me a personal mail.
[Denny] Knowing the abuse that we’ve given our good friend Dan Jones (a.k.a. DACPAC Dan), did that make you hesitant to take on DAC and DACPAC?
[Adrian] Sure, it would give any reasonable person pause; however, my own personal estimation of the potential value of DAC, and the chance that we could align with our partner teams in the Engine, Juneau, and Visual Studio to provide a single surface area for development which enables some key management features, trumped my reservations. While I can’t disclose much more than we talked about at TechEd, I can say that the reality has met the potential and it’s exciting to see how the future is shaping up.
[Denny] So when you aren’t being abused by the MVPs, and you are permitted to actually leave the confines of Building 35, what sort of things do you fill the 10 minutes of daily free time that Steve Ballmer allocates to you with?
[Adrian] From time to time they do let us out but only enough so people don’t file missing persons reports. In my spare time I hang out with the wife, dabble with gadgetry, swim, read quite a bit (Sci-Fi typically) and follow economic and political news and trends.
[Denny] Are there any other projects that you are working on that you can share with us that’ll be included in the SQL Server “Denali” release or maybe even earlier?
[Adrian] After ramping up, I spent the latter half of last year working on shipping DAC v1.1, which includes in-place upgrades, as soon as possible; we actually shipped in Visual Studio 2010 SP1 and will ship in SQL Server 2008 R2 SP1 (CTP available today). Once we shipped 1.1, I worked on getting the import/export services up and running, and we shipped a CTP, currently available on www.sqlazurelabs.com, which you may have seen at TechEd. In parallel, I am working on an import/export service for SQL Azure which will provide import/export as a service that will import or export to/from Azure BLOB storage without the need for client-side tools. Apart from that, I’ve been very busy working on partnership engagements within Microsoft, because DAC provides a nice and cheap way for other product teams to operate on and with SQL Server and SQL Azure.
[Denny] I’m interested in this Azure import/export utility. The BLOB storage that this will integrate with (keeping in mind that I don’t know much about Azure besides the SQL Azure part), how would one get a file uploaded to it automagically? Can you FTP files to it, or is there an API which has to be used, etc.?
[Adrian] There is an API you can use, however, there are quite a few tools which will synchronize folders between your client machine and your BLOB storage account. That’s the easiest way to get your files into the cloud. I won’t mention any specific tools broadly to avoid favoritism/politics, however a quick search for “azure storage tools” is a good starting point. Keep in mind that the only time you need to transfer the import/export artifact – a BACPAC – between your client and the cloud is when you want to migrate or move your database between the cloud and on-prem environments. Otherwise, you can just keep your files in the cloud in your BLOB account and use our services to operate over them. Sounds like a good topic to cover in a session…
[Denny] If v2 of DACPAC blows up, would you prefer to be slow roasted over gas or open coals?
[Adrian] That depends. Is the purpose to inflict pain or are you of the cannibalistic persuasion? Honestly, as MVPs are some of the most seasoned SQL consumers, we’d love to hear your feedback on the new upgrade engine as well as the overall application lifecycle experience that DAC enables. We are a nimble team, and if there’s a great opportunity we can incorporate fixes to our services for the Denali release. Unfortunately, because we were so focused on DAC 1.1, we didn’t have enough time to deliver a lot of DAC value in Denali CTP1; however, CTP3, coming this summer, will be fully DACified and include all the latest and greatest, including SQL Engine validation, in-place upgrades, and full support for SQL Azure application-scoped objects including permissions and roles!
[Denny] It is pretty clear that DAC and DACPAC are geared mostly towards SQL Azure, as they support the current SQL Azure feature set. Can you tell us a bit about why the decision was made to push DAC and DACPAC as an on-premises solution instead of keeping the focus on SQL Azure until they were ready to support a fuller on-premises feature set?
[Adrian] Fantastic question. The reason it was positioned as an on-premises solution is because the SQL Azure story was still being written. If you rewind back to the days when 2008 R2 was working towards release, SQL Azure started out with this simple DB concept and was then reset to provide full relational features. At that time, we really weren’t sure if we wanted to dock the DAC roadmap to Azure because the SQL Azure story was in flux. So the fallback position was to tie the DAC story to the box product, because we weren’t able to really commit to a direction for DAC and Azure. Since then, we’ve been straightening out the story in a big way with partners and at TechEd.
[Denny] When we were hanging out at TechEd 2011 you seemed like you wanted to become more involved in the community. Did I guess this one right? Will you be joining us at events like PASS and the MVP Summit for some “learnin’ and camaraderie”?
[Adrian] Yes, I certainly hope to join you at PASS, and to have another couple of sessions at the next MVP Summit, but I don’t know with certainty yet.
[Denny] The most important question, would you prefer to be known as “DACPAC Adrian” or “DACPAC Dan 2.0”?
[Adrian] The former. There’s already a “DACPAC Dan 1.0” and we haven’t tested any side by side or upgrade scenarios.
I’d like to thank Adrian for being a sucker (er, good sport) and agreeing to sit down with me, even knowing the beatings that I’ve given Dan over DACPAC v1. I hope that everyone enjoyed reading this interview as much as I enjoyed talking with Adrian.
All joking aside, Adrian is a great guy, a lot of fun to hang out with, and he’s got some great vision for Data Tier Applications and DACPAC. I just hope he’s able to pull off what he’s got planned. If not, we’ll be having a BBQ at the next MVP summit and Adrian will be the “guest of honor”.