SQL Server with Mr. Denny


November 3, 2011  5:06 PM

SQL Server 2012 Licensing Changes

Denny Cherry
Microsoft licensing, SQL Server, SQL Server 2012, SQL Server licensing

So you may have heard earlier today about the license changes that are coming out for SQL Server 2012.  I know that the official announcement can be a little hard to get through and the changes can be a little confusing at first (or even second or third) glance.  The SQL Server licensing team was kind enough to spend some time sitting down with me to work through how the license changes will be affecting customers.  Please note that any prices that I quote in here are list (retail) prices and are shown in US dollars.  If you have an Enterprise Agreement with Microsoft you will probably be paying less.

Editions

First let’s start with the edition list.  SQL Server is reducing the number of editions from 6 down to 4 (I’m excluding all the different appliances here).  Those editions will be Express, Standard, BI, and Enterprise Edition.  The Express edition will still be a free edition with limited functionality, and I wouldn’t expect its available features to change.  As for the other editions, the specific features that will be in each one haven’t been announced yet.  Look for that sort of feature matrix to be available sometime between now and when SQL Server 2012 goes GA in 2012.  Until that feature matrix is released, any specific feature-to-edition information that you hear is just a rumor.

Web Edition

The Web edition will still exist even though I didn’t list it above.  I didn’t list it because the Web edition will no longer be available for purchase via enterprise license agreements or retail channels.  The only way that you will be able to get the Web edition of SQL Server 2012 will be through a hosting provider like Amazon EC2, RackSpace, Dream Host, Genesis Hosting Solutions, etc.  These hosting providers will be able to provide you with the Web edition of SQL Server 2012 through their hosting provider license.

Workgroup Edition

The Workgroup edition is no more.  I don’t have any specific information about how upgrades will work for customers who have purchased the Workgroup edition under their enterprise agreements.  If you are one of those customers, you’ll want to contact your sales rep or Microsoft Partner about what options are available to you when it comes time to upgrade that server to SQL Server 2012.

Standard Edition

The Standard edition is still around and kicking.  Standard edition will be available in both a Server+CAL license model and a CPU Core license model.  The costs for a server license or a CPU Core license remain the same as they were under SQL Server 2008 R2 (I’ll explain the core licenses later on).  However, if you are using the Server+CAL license model the CALs are going up in price by 27%, from $164 (US) to $208.28 (US).

There are some things to note about Standard Edition which are changing from prior versions.  First, there is a hard limit of 16 cores for Standard Edition.  This is a license limitation, so if the server has more than 16 cores SQL Server will only use the first 16.

A big change to the Standard Edition high availability story is that Database Mirroring remains in Standard Edition but has been marked as deprecated.  This means that it is still available and supported, but it will be removed from the product in a future release, typically 2-3 major releases from now.

As for the other features supported in the Standard edition of SQL Server 2012, not a lot of detail has been made available yet.  For the complete support matrix you’ll have to wait until we get closer to the release of the product.

The features that I do know will NOT be available in the standard edition are:

  • Data Quality Services
  • Master Data Services
  • Power View
  • PowerPivot for SharePoint Services
  • Semantic Model
  • Advanced Auditing
  • Transparent Data Encryption
  • Column Store Index
  • Data Compression
  • Table/Index Partitioning
  • Always On

BI Edition

There is a new edition between the Standard Edition and the Enterprise Edition called the BI Edition.  The BI Edition will include all the features of the Standard edition plus some extras.  Licensing for the BI edition will only be available in a Server+CAL model; there is no CPU core license available for the BI Edition.  If you need a CPU license for the BI Edition’s features you’ll need to look to the Enterprise Edition (as it includes all the BI edition features).  The BI Edition doesn’t include only the BI stack.  It includes the database engine as well, with a slightly higher cap on the number of cores that are supported.  While the Standard edition supports 16 cores for the database engine, the BI edition supports up to 20 cores for the database engine.  All of the BI services (SSAS, SSIS, SSRS) will support up to the OS maximum number of cores.

The BI edition will include the following features:

  • Data Quality Services
  • Master Data Services
  • Power View
  • Power Pivot for SharePoint Services
  • Semantic Model
  • Advanced Analytics

The BI Edition will not include the following features:

  • Advanced Auditing
  • Transparent Data Encryption
  • Column Store Index
  • Data Compression
  • Table/Index Partitioning
  • Always On

The BI Edition is listed as having “Basic” High Availability.  I haven’t gotten an answer yet on what “Basic” High Availability actually is.

The cost for the BI Edition server license will be the same as the SQL Server 2008 R2 Enterprise Edition server license which is $8,592.

If you need an internet-facing SSAS cube you cannot use a BI Edition license; you’ll need to look at purchasing an Enterprise Edition license for that server.

Enterprise Edition

The Enterprise Edition is still around and is the big daddy of the SQL Server stack again, as the Data Center Edition of the product is being removed.  This means that the Enterprise Edition will hold all the new features that are being introduced.  When it comes to licensing Enterprise Edition, there is no longer a Server+CAL model available for new purchases.  You will only be able to purchase CPU Core licenses.  If you are upgrading an existing server that has Server+CAL using your Software Assurance or Enterprise Agreement, you can continue to use the Server+CAL license model for as long as you continue to pay for Software Assurance.  If you terminate your Software Assurance you’ll then need to purchase the correct number of CPU Core licenses for that server (again, I’ll be talking about the CPU Core licenses further down).

The Enterprise Edition of SQL Server 2012 will include all the features of the BI Edition as well as the Standard Edition.

Datacenter Edition

The Datacenter Edition of SQL Server is going away.  All users of the Datacenter edition will be switched to the Enterprise Edition of SQL Server 2012.  There will be some special benefits when you make this change, as you switch your CPU licenses of SQL Server 2008 R2 to CPU Core licenses of SQL Server 2012, which I’ll cover below.

CPU Core Licenses

By now you’ve probably noticed that I’ve mentioned CPU Core licenses and not CPU licenses.  That’s right, Microsoft has decided to follow the unholy beast (aka. Oracle) into CPU core licensing hell.  It’s actually not that bad, and for most of your servers the costs should stay the same.

The CPU Core licenses (available only for Standard and Enterprise edition) are sold in two-core “packs” (I don’t know the actual term for them yet, so I’ve decided to call them “packs”).  If you have quad core CPUs you need two of these packs per socket.  These license packs will cost half of what a SQL Server 2008 R2 CPU license cost.  The catch here is that you have to purchase at least 4 core licenses per CPU.  The reason that Microsoft decided not to just sell the licenses in 4-core “packs” is that the people with 6 core CPUs would then be screwed, having to buy more licenses than they actually needed.

If you have two sockets with 2 cores each, you need 4 license “packs” (8 core licenses)

If you have two sockets with 4 cores each, you need 4 license “packs” (8 core licenses)

If you have two sockets with 6 cores each, you need 6 license “packs” (12 core licenses)

If you have two sockets with 8 cores each, you need 8 license “packs” (16 core licenses)
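
If you want to sanity-check that math for your own servers, the rule works out to: license the larger of 4 or the actual core count for each socket, then divide the total by two to get the number of two-core packs.  Here’s a minimal T-SQL sketch of that calculation using the four examples above; the table and values are just illustrations of the rule as I understand it, not anything Microsoft publishes.

-- Sanity check of the two-core "pack" math described above.
-- Rule: each socket is licensed for at least 4 cores, sold in two-core packs.
DECLARE @servers TABLE (sockets int, cores_per_socket int)
INSERT INTO @servers VALUES (2, 2), (2, 4), (2, 6), (2, 8)

SELECT sockets,
    cores_per_socket,
    sockets * CASE WHEN cores_per_socket < 4 THEN 4 ELSE cores_per_socket END AS core_licenses,
    (sockets * CASE WHEN cores_per_socket < 4 THEN 4 ELSE cores_per_socket END) / 2 AS two_core_packs
FROM @servers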

SQL Server in a VM

When you are running SQL Server within a virtual machine and using CPU Core licenses, you will need to license at least 4 cores for the VM.  If you have more than 4 vCPUs assigned to the VM you’ll need a CPU Core license for each vCPU.
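
The same 4-core floor applies to each VM, so the math per VM is simple.  A quick sketch of the rule as I understand it (not an official formula):

-- Core licenses needed for a single VM: at least 4, then one per vCPU beyond 4.
DECLARE @vCPUs int = 6
SELECT CASE WHEN @vCPUs < 4 THEN 4 ELSE @vCPUs END AS core_licenses_needed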

Host Based Licensing

SQL Server 2012 will still include host based licensing as an option for those customers who have Software Assurance and an Enterprise Agreement.  The host based licensing works just like it did before: you purchase enough Enterprise Edition CPU Core licenses for the host and you can run as many virtual machines running SQL Server as you would like.

Upgrading using your Software Assurance Rights

This is probably the section that you were looking forward to (or dreading) the most.

When you upgrade from SQL Server 2008 R2 (or below) to SQL Server 2012 using your Software Assurance rights you can continue to use your existing license model until the end of your Software Assurance cycle.  This means that if you have CPU licenses under SQL Server 2008 R2 you can continue to use those CPU licenses under SQL Server 2012 until your Software Assurance expires.  Once it expires you will need to true up on the number of CPU Cores.  When this true-up happens each CPU license that you had before will count towards a minimum of 4 CPU Cores (if you have Datacenter Edition CPU licenses of SQL Server 2008 R2 you will be credited a minimum of 8 CPU Cores for each CPU license).  If you have a large server that needs upgrading, talk to your Microsoft Account Manager or Microsoft Partner about the core credit that you’ll get for your legacy CPU licenses.  You’ll notice the magical word “minimum” here.  This is because the Microsoft Account Managers and Microsoft Partners have the ability to give you more credit than these minimums.

If you have a SQL Server today with two sockets and you upgrade it to SQL Server 2012, then later need to add two more sockets to the server, you can still do that.  Contact your Microsoft Account Manager (if you have Software Assurance or an Enterprise Agreement then you have a Microsoft Account Manager; your boss probably knows how to contact them) or Microsoft Partner and they will be able to sell you CPU licenses for existing servers only.  The SKU won’t be published, but it will be available internally for them to get to you.

Server Licenses and Upgrading

If you have a Standard Edition server that uses Server+CAL licensing that you are upgrading to SQL Server 2012, do keep in mind the hard limit of 16 cores for Standard Edition.

If you have an Enterprise Edition server that uses Server+CAL licensing that you are upgrading to SQL Server 2012, don’t freak out.  There is an upgrade path for you even though there is no Server+CAL license available for SQL Server 2012, provided that you have Software Assurance for the SQL Server license.  If you do, then you can upgrade that server and continue to use the Server+CAL model (don’t forget that your CALs need upgrading as well if they don’t have Software Assurance) until your Software Assurance expires.  At that point you would need to downgrade to the BI edition or the Standard Edition, or move to the CPU Core license model to remain on Enterprise Edition (look at the features you need before making this decision).  Now the catch here is that the SQL Server 2012 Enterprise Edition Server+CAL option (which you can only upgrade into, not purchase new) is ONLY available for servers that have 20 cores or less.  If you have a SQL Server with more than 20 cores running Enterprise Edition under a Server+CAL license model you’ll need to talk to your Microsoft Account Manager or Microsoft Partner to see what options are available to you.

Costs For Buying New

For Standard Edition using the Server+CAL license, the cost will remain the same as SQL Server 2008 R2 ($898).  When using the CPU Core license model, if you have 4 cores or less per CPU socket the cost will remain the same as a CPU license for SQL Server 2008 R2 ($7,171 US).  If you have more than 4 cores per socket the costs will go up depending on the number of cores that you have.  For every two cores in the server you’ll need a CPU Core “pack” which will cost $3,585.50.

For the BI Edition your only option is Server+CALs.  The SQL Server 2012 BI Edition server license will be the same as the SQL Server 2008 R2 Enterprise server license cost ($8,592).

For the Enterprise Edition your only option is the CPU Core license model.  For a server with 4 cores or less per CPU the cost will remain the same as a CPU license for SQL Server 2008 R2 ($27,495).  If you have more than 4 cores per socket the costs will go up depending on the number of cores that you have.  For every two cores in the server you’ll need a CPU Core “pack” which will cost $13,747.50.
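
To put those numbers in concrete terms, here’s a quick back-of-the-napkin calculation for a two socket, eight core per socket server using the list prices above (remember these are retail US prices; Enterprise Agreement pricing will differ):

-- Back-of-the-napkin list-price math for a 2 socket x 8 core server (16 cores).
DECLARE @cores int = 16
DECLARE @std_pack money = 3585.50     -- Standard Edition two-core pack
DECLARE @ent_pack money = 13747.50    -- Enterprise Edition two-core pack

SELECT (@cores / 2) * @std_pack AS standard_list_cost,    -- $28,684
    (@cores / 2) * @ent_pack AS enterprise_list_cost      -- $109,980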

If you have an Enterprise Agreement, check with your Microsoft Account Manager or Microsoft Partner for actual pricing information.  Customers with an Enterprise Agreement should be paying less than these numbers show.  Of course Software Assurance will make the numbers go up, but it includes its own set of benefits.

Questions?

I hope that this answers your questions about the SQL Server 2012 licensing model that will be coming out.  If it didn’t, please post your questions here or on Twitter and I’ll either answer them or, if I can’t, get them to someone who can and get the answer published either in the Q&A here or as a new blog post.

My Opinion

Now you may think from this rather long post that I like everything that I’ve talked about here, which isn’t the case.  This blog post is simply to explain what is coming in as clear a way as possible, given that the presentations on this topic that I’ve seen to date have left me with a lot more questions than answers.  After all, I did spend the better part of a day on the phone with two people at Microsoft to get all of this explained in such a way that I understood it.  As not everyone out there will get that chance, I put together this blog post to explain what I learned.  Personally I don’t think that killing off the Workgroup edition was a good idea.  Lots of people were using it, including everyone running SBS 2008 R2 (or whatever it is called).

I’m not a big fan of the core based licensing model that we have coming with SQL Server 2012, but I do understand why Microsoft is going to this model.  I think we were pretty lucky to have gotten away with CPU based licenses for as long as we did.  Personally I really think that moving away from a Server+CAL license model for Enterprise Edition is a bad plan, given that things like the Utility Control Point require Enterprise Edition (I’m assuming this will still require EE).  If you want to manage a large company with UCP that requires a pretty beefy Enterprise Edition server which is dedicated to UCP, yet this server will now cost A LOT compared to just buying an Enterprise Edition Server+CAL license for your UCP server.

I think that a lot more information is going to be needed before any real decision can be made on how much I like or don’t like these changes.  When the full feature to edition matrix is released that will be a big deciding factor in how well some of these changes will go over.

Obviously the knee-jerk reaction of “SQL Server 2012 is going to cost me more money, so the change sucks” is an easy one to take; hell, that’s where I started from (you should have seen the first emails I sent to Microsoft).  But when you look at some of the flexibility that they have put into getting people upgraded, specifically the ability to convert a CPU license to at least 4 core licenses (or at least 8 if coming from Datacenter edition), it isn’t all bad.  This is when it is going to be very important to have a good working relationship with your Microsoft Account Manager or Microsoft Partner, as the ability to upgrade for as little as possible will be sitting squarely in their hands.  I would reach out to this person now and get to know them.  Bake them some cookies, send them a bottle of booze, whatever it takes to get on their good side.

Denny

October 31, 2011  2:00 PM

VMware’s vCenter and SQL Server Database Backups

Denny Cherry

VMware’s vCenter system uses SQL Server as its default database platform.  One thing that needs to be looked into for a lot of VMware installations is the backup setup for the vCenter databases.  Often I find the vCenter databases configured in SIMPLE recovery.  Now this doesn’t mean that the database is simple to recover in the event of a failure.  What it actually means is that there is no point in time recovery of the vCenter databases.  This might be OK for smaller systems as there probably aren’t a whole lot of changes being made to the vCenter farm other than performance data.  As long as you are OK with losing a day’s worth of performance data, keeping the vCenter databases in SIMPLE recovery might be OK.

However for larger farms lots of changes are probably being made to the configuration of the farm on a daily basis.  Because of this you won’t want to run the database in SIMPLE recovery.  Instead you’ll want to run the database in the FULL recovery mode.  This enables the ability to do point in time recovery of the database once you start doing database transaction log backups.  This means that when (note that I didn’t say “if”) the hard drives on the vCenter server fail and cause the database to be lost or become corrupt, you’ll only lose a few minutes of data at the most.

To ensure that you don’t lose a lot of data from the vCenter database you will need to begin doing transaction log backups.  This will do a couple of things.  First, it’ll make it so that you can restore the database successfully.  Second, it’ll make sure that the hard drive on your vCenter server doesn’t fill up with one giant transaction log file.  If you don’t have a DBA in your shop, and you’ve got a SQL Server with a license that you paid for (something other than the SQL Express that VMware installs if needed), you can set up SQL Server Maintenance Plans to handle the backups using SQL Server Management Studio.  You can always bring in a SQL Server consultant to set up the backups and make sure that everything is configured correctly and working if you don’t have a SQL Server DBA handy.  If you’ve got a DBA in your shop you’ll probably want to have them set up SQL Server backups as per your normal company policies, so that the databases are backed up and those backups are tested to ensure that they are functional.
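
If you’d rather script this in T-SQL than click through Maintenance Plans, a minimal sketch looks like the following.  I’m assuming the default vCenter database name of vcdb and an example backup path; adjust both for your environment.

-- Switch the vCenter database to FULL recovery so log backups are possible.
ALTER DATABASE vcdb SET RECOVERY FULL
GO
-- A full backup has to come first; the log backups build on it.
BACKUP DATABASE vcdb TO DISK = 'D:\Backups\vcdb_full.bak'
GO
-- Then back up the transaction log on a schedule (a SQL Agent job works well).
BACKUP LOG vcdb TO DISK = 'D:\Backups\vcdb_log.trn'
GO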

Personally I like to set the recovery mode for the vCenter databases to FULL when working at clients, but it’s totally up to you, as long as you understand what the risks are with each of the options available to you.  (Yes I know that I didn’t cover the BULK_LOGGED recovery model, for the purposes of VMware’s vCenter databases the BULK_LOGGED recovery model and the FULL recovery model are the same.)

Hopefully I have helped demystify the SQL Server backup concepts for you a little bit.


October 27, 2011  10:20 PM

Learn Storage and Virtualization online

Denny Cherry

Now you as a database professional have the chance to learn Storage and Virtualization from home, work, or wherever is convenient for you.  In partnership with SSWUG.org I’ve recorded an 11-session class titled “Storage and Virtualization for the DBA” which is being delivered online via SSWUG’s virtual conference platform, allowing you to view the videos on demand on your schedule between November 15th and December 14th.  If you want to follow up with the class and watch the videos later, a DVD of the class is available as well.

These classes from SSWUG include daily office hours with the instructor.  Because of a trip to Europe for SQL Saturday and SQL Server Days, the office hours won’t start until November 20th.  However any questions can be emailed to me and I’ll get back to you as quickly as I can.  You’ll receive more information about the office hours when you sign up for the class.

The cost for the class is very reasonable at just $199 for those who are not SSWUG members and the DVD is another $199, so get registered now.

I’ll see you online during the office hours, I look forward to your questions.

Denny


October 27, 2011  2:00 PM

Out of office messages, gmail and mailing lists

Denny Cherry

Email is pretty much the most annoying necessary evil out there.  Another annoying necessary evil is the out of office message.  Normally these aren’t too annoying overall, as they let you know that someone you’ve sent a message to isn’t going to be responding for a while.

Where they start getting a bit of a pain is when you are using gMail and you are on mailing lists, specifically mailing lists that are setup correctly so that when you hit reply the response goes back to the mailing list not to the person who sent the email.  Thanks to the Microsoft MVP program I’m on several of these email lists.  With out of office replies everyone on the mailing list quickly wants to kill you because every time a new person emails the mailing list everyone on the list gets your out of office reply.  When you are on a mailing list of 400 people this gets old, fast.

GMail has an out of office feature but it is “lacking” to say the least.  You basically get to turn it on and off, set dates for auto on and off, and set the subject and the message.  The two big options after that allow you to only send your out of office messages to people in your contact list or on your domain.  The contact list option in my case is pretty useless.  I get emails all the time from people who aren’t in my contact list and I want them to get the out of office message, and half the people on the mailing lists (if not more) exist in my contact list as I email them off list as well as on (not to mention that I need their phone numbers so I can properly stalk them during the PASS and MVP summits).  So that option is useless.  I really don’t need the last option as the only other person with a mrdenny.com email address is my wife Kris and she knows where I’m at.

So my big problem becomes how to use out of office without all the other MVPs wanting to kill me.

And I’ve found the solution, and without having to move all my mailing list email traffic to a separate mailbox.  First go into the options for gMail and select the “Labs” tab.  Then find the “Canned Responses” option and turn this on.  We don’t care about the main functionality of this feature, but we do care about the fact that this lets us setup a filter and have it send a reply automatically.  So set this “lab” to enabled and click save.

Now compose a new email including just a subject and body of the message.  Then click the new “Canned responses” drop down (next to attachments) and tell it to save the canned response.  Name it something useful like “Out Of Office” or something so you can easily find it.

Then go back into your mail settings and go to the “Filters” tab.  Create a new filter and set the to address to exclude the mailing lists that you are on.  In my case I get email from a couple of different email list domains so I just excluded everything from those domains, as you can see in the graphic.  I also excluded myself from the filter as I don’t want to be emailed my own out of office message.  Leave all the other boxes blank and click the “Test Search” button to make sure that everything but your mailing list emails is found, then click “Next Step >>”.  On the second page check the new “Send canned response” check box and select the out of office message you just saved.  Then click the “Update Filter” button.  Whatever you do, do NOT check the “Also apply filter to N conversations below” box (in my case N actually says 12000; your number will vary).  If you check that box before you click the “Update Filter” button you will send everyone that has ever emailed you the out of office message that you just set up.  Not exactly a good way to become popular.

At this point you are done, and you’ve got an out of office setup that will ignore your mailing lists.  This may take a little playing to get right.  There is one thing that the normal gMail out of office does that this technique doesn’t, which is nice: the normal gMail out of office will only email someone your out of office message once every four days, where this emails them every time.  That’s a minor annoyance compared to pissing off everyone on the mailing lists that I’m on.

Don’t forget to delete the filter (as there is no way to disable a filter) when you get back from vacation so that you don’t keep sending out of office messages to people.

Denny


October 24, 2011  2:00 PM

Working with replication snapshots too large to fit on your distributor

Denny Cherry

I ran across an interesting problem the other day.  I was setting up replication for a couple of VERY large tables.  These two tables between them are about 553 Gigs in size.  There isn’t enough room on the distributor, and there is a slow link between the distribution server and the subscriber.  After 6 days of the snapshot trying to be pushed to the subscriber I killed it because it just wasn’t getting anywhere fast.

Now normally I would just copy the snapshot to the subscriber and run the distribution agent on the subscriber while the snapshot loads (I thought I had a blog post about this but apparently I don’t, so I’ll blog about that later).  The problem with doing that in this case is that the distributor doesn’t have enough drive space to create the snapshot, and being that the servers are hosted with RackSpace, adding 600 Gigs of SAN drive space to this cluster would cost about $12,000 a month with a one year contract on the new storage.  Fortunately there’s another cluster that has enough DAS space available, so I can simply point the replication agent to use a network share on that drive instead by using the sp_changepublication procedure as shown below.

-- Run at the publisher; points the snapshot at an alternate folder
-- on another server's network share.
exec sp_changepublication
    @publication = 'PublicationName',
    @property = 'alt_snapshot_folder',
    @value = '\\Server\NetworkShare\'

However when I get to the subscriber I don’t want to read from the network share.  I want to copy the files to the local disk and read from there.  There’s no switch for this when running the distribution agent manually, so before starting the distribution agent on the subscriber you have to manually edit the dbo.syspublications table on the subscriber, specifically editing the alt_snapshot_folder column so that it points to the local path that the files are stored in on the subscriber.  You can then run the distribution agent on the subscriber and the snapshot will load.  Once the snapshot is loaded, kill the distribution agent on the subscriber and start it on the distributor as normal, and all will once again be well with the world.
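
For the subscriber-side edit, something along these lines is what I’m describing.  This is just a sketch; the path and publication name are examples, and you should look at the table first to confirm which column holds the publication name on your build.

-- Run in the subscription database on the subscriber.
-- Look at the current rows first to find the right one.
SELECT * FROM dbo.syspublications

-- Point alt_snapshot_folder at the local path you copied the snapshot files to
-- (example path and publication name; adjust for your environment).
UPDATE dbo.syspublications
SET alt_snapshot_folder = 'D:\ReplSnapshot\'
WHERE name = 'PublicationName'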

A question that I get when I talk about doing this is why copy the files over the network instead of just letting the distribution agent load the snapshot remotely.  I’ve found (and I’ve done this several times to deal with slow networks) that the time spent compressing the snapshot files (and they compress really well as they are just text) and copying them over the network is almost always way less than the time it would take to load the snapshot remotely from the distributor.

In the case of this project the publisher and distributor were in Dallas and the subscriber was in New York City.  But I’ve used this same technique from Orange County, CA to Dallas, TX; from Dallas, TX to Los Angeles, CA; and from Los Angeles, CA to Beijing, China.  In every case it’s a little slow and a little annoying, but it works (in the other cases the distributor did have enough space to hold the snapshot so I didn’t have to modify the dbo.syspublications table, but everything else I did was the same).

Kendal Van Dyke (blog | @SQLDBA) has another great technique that he posted a couple of years ago.

Denny



October 20, 2011  2:00 PM

What changing the recovery mode of your vCenter database really means

Denny Cherry

Apparently one of the things that VMware’s support department will often recommend is changing the recovery mode of the vCenter database within SQL Server from FULL to SIMPLE, because it will magically make a lot of disk problems go away.  While it’s true that your disk space issues on the vCenter database will go away at that point, this isn’t magic, and you have just lost some functionality that you weren’t expecting to lose.

The big thing that you lose when doing this is that you can no longer restore the SQL Server database to a specific point in time if you have a system failure.  If you don’t need the ability to restore to a specific point in time, and restoring to your latest full or differential backup is good enough (you do know that you need to back up the vCenter databases, right?), then changing the recovery mode from FULL to SIMPLE is just fine.  However if you do need that ability (and I would recommend that you have it, so that if the vCenter server fails or someone deletes data you can get back as much trending data as possible) then you will want to leave the recovery mode as FULL, but set up transaction log backups (hopefully you have a DBA or a database consultant that can help you with this) so that they run every few minutes (I typically recommend every 10-15 minutes).  Just keep in mind when selecting how often they should run that in the event of a server failure you may lose up to that amount of data from the database.  So if you aren’t allowed to lose more than 10 minutes worth of data, don’t back up the log every 15 minutes; back it up every 10, or maybe every 8 just to be safe.  Make sure that you get it in writing from your management that this is the config that they want, so if you get yelled at you are covered.
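
The log backup itself is just one statement that you’d put on a schedule (via a SQL Agent job or a Maintenance Plan).  A minimal sketch, assuming the default vcdb database name and an example path; in practice you’d generate a unique file name for each backup.

-- Schedule this every 10-15 minutes (SQL Agent job or Maintenance Plan).
BACKUP LOG vcdb TO DISK = 'D:\Backups\vcdb_log.trn'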

If you haven’t ever backed up the transaction log your disk is probably pretty full (or totally full).  In this case you will want to shrink the database just once.  But before you do you need to know what size you should shrink it to.  This can be much easier if a DBA does it, but I’ve got faith in you to not screw it up.

First connect to the SQL Server using Management Studio.  Take a backup of the transaction log to disk by right clicking on the database, selecting Tasks from the context menu then selecting Backup from the pop out menu.  Select transaction log backup, set a file to backup the log to (you’ll need a lot of disk space for this) and make sure that there’s only one file listed in the file list then start the backup.  This will probably take a while.

If you don’t care about losing the log for now, change the database from full to simple recovery and back.  Do this by running this script.

-- Flipping to SIMPLE breaks the log chain and lets the log be truncated;
-- flipping back to FULL re-enables point in time recovery going forward.
ALTER DATABASE vcdb
SET RECOVERY SIMPLE
GO
ALTER DATABASE vcdb
SET RECOVERY FULL
GO

Now do a full backup as shown above (just do a full database backup instead of a transaction log backup).

Now with everything running as normal, wait for however long it’ll be between transaction log backups.  Then connect to the SQL Server in SQL Server Management Studio.  Right click on the vCenter database (vcdb) and select Reports from the context menu, then “Disk Usage” from the list of reports.  At the top it’ll give you the size of the transaction log in MB, and the graph on the right will show you a percentage breakdown of how full the transaction log file is.  Take the Used percentage and double it.  Now take that percentage and multiply it by the size of the transaction log.  That’s the minimum size you should set your transaction log to.  Personally I would round up to the next 8 gig size above that.  If you need 150 Megs of log space, set the log to 8 Gigs in size.  If you need 10 Gigs of log space, set the log to 16 Gigs of space.  A little extra space used is fine; it is better for the performance of the database if we don’t need to grow this transaction log.  You can shrink the transaction log by using the DBCC SHRINKFILE statement as shown.

-- Logical log file name and target size in MB (8192 MB = 8 Gigs);
-- check sys.database_files if your log file has a different logical name.
DBCC SHRINKFILE ('vcdb_log', 8192)

Keep in mind that the target size, in this case 8 Gigs, is passed in in megs (8192).  Do NOT for any reason create any sort of scheduled task to regularly shrink any database file.  This will do nothing good and will cause performance problems of its own.  If anyone has told you to set up a job which shrinks the database file regularly they are WRONG.
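
If you’d rather pull the log size and used percentage from T-SQL instead of the Disk Usage report, DBCC SQLPERF will give you the same numbers to plug into the math above:

-- Returns one row per database with Log Size (MB) and Log Space Used (%).
DBCC SQLPERF(LOGSPACE)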

Denny



October 17, 2011  9:37 PM

Been to PASS and now you are feeling a little depressed?

Denny Cherry

Going to the PASS summit is an amazing experience, as you’ve probably noticed from all the wrap up posts that you’ve seen. However the downside to PASS is the sad reality that after the summit has ended you have to get back to normal work and your normal life. But that isn’t actually true. You can extend that awesome PASS feeling by attending other events like SQL Excursions, SQL Saturdays, etc., where you can continue to get your SQL learning on, network with other SQL Server professionals, improve your skills for your current role, move your career forward, have a great time and make new friends for life.

Whatever you do, don’t make the SQL PASS experience a short one-week-a-year experience; make it something that you do throughout the year, every year. Your employer will thank you as you develop new skills to help them now, and you and your family will thank you as you work on moving your career to the next level.

For you managers reading this, don’t worry that your employees will jump ship when they get back from a conference. While some might, if they do they were going to leave anyway and not sending them wasn’t going to stop them. Odds are you’ll probably get some great leads for potential new SQL professionals when you send your people to conferences. I’ve even heard of people getting home from a conference with a new job and giving their old employers a great referral for a replacement.

Hopefully everyone is able to slowly get back into regular life, until the next time we all meet.

Denny


October 17, 2011  2:00 PM

Quick and dirty way to run a batch over and over

Denny Cherry

Everyone has their favorite way of running a block of code over and over again.  If you have to run an entire batch over and over again there is nothing as quick and easy as using the “GO n” syntax.  When you put a number after the GO batch separator, SQL Server Management Studio (as well as SQLCMD) will run that entire batch however many times you have specified.

Recently I needed to create some dummy data in a table, so instead of writing a loop that counted the number of times the loop ran, I simply used code like this.

-- Create a table to hold the dummy data.
CREATE TABLE SampleData
(id int identity(1,1),
SomeData varchar(100))
go
-- This entire batch runs 100,000 times thanks to the number after GO.
insert into SampleData
(SomeData)
values
('This is my sample data')
go 100000

When run, the batch with the insert statement ran 100,000 times generating all the data that I needed with just 7 extra keystrokes.

Denny



October 14, 2011  6:52 PM

Something old, something new, something borrowed, something blue and a Confio umbrella

Denny Cherry

Since I didn’t make it to the SQL PASS keynotes this morning, I figured that I’d tell you about another little event that happened late last night.  Last night there was a SQL PASS first: there was a wedding here at the PASS summit.  Tamera Clark (blog | @tameraclark) and Terry Tyler (blog | @airbornegeek), who are both DBAs and first time attendees, decided a while back that if they both were able to come to the SQL PASS summit they would get married at the SQL PASS summit.  This wasn’t exactly a rash decision on their part as they’ve been living together for many years. Conveniently enough I’m a registered minister, and they asked me to perform the ceremony.

The wedding was very short and was held down by the water overlooking Waterfront Park here in Seattle, WA, just a short walk from the Washington State Convention Center and all our hotels.  Microsoft sponsored the rehearsal dinner (there was a Microsoft sponsored Tweet Up that everyone in the wedding party went to) beforehand.  Confio was kind enough to provide the “bouquet” (it was a Confio umbrella that was given out to first timers, so it was new, borrowed as well as blue).  And PASS was kind enough to sponsor the wedding reception (the annual PASS party over at Game Works) which the happy couple and several of the wedding guests went to.

I’ve got to say, this was definitely a first for the SQL PASS summit and for me, and I’m so glad that they offered to let me be a part of their wedding.  Pat Wright was kind enough to take some pictures before, during and after the wedding ceremony, which either he or Tamera and Terry will be posting online somewhere when they get a chance.  Please join me in congratulating the happy couple either here, on their blogs, or on Twitter.

Hopefully next year’s SQL PASS summit can be just as awesome as this year’s was (really “is”, as it isn’t over yet).  It sure as hell will be hard to beat this year.

Denny


October 13, 2011  4:50 PM

SQL PASS 2011 Keynote Day 2

Denny Cherry

Welcome to the SQL PASS Summit keynote on Day 2.  Today was another day of announcements, made by Quentin Clark, the Microsoft Corporate VP in charge of the SQL Server database engine.

Quentin started his talk with how SQL Server 2012 will improve the way that SQL Server provides many 9s of uptime with the introduction of Always On.  The StreamInsight feature now also includes an HA option which will allow you to keep the StreamInsight system processing all the time.

The first demo for today’s keynote was my favorite SQL Server 2012 feature, Always On.  In the demo they showed a three node cluster which hosted the production application, with the cluster spanning two data centers.  During the demo the presenter quickly and easily set up a readable secondary.  After setting it up, the Availability Group was marked within the Always On routing configuration as a readable secondary, and the Reporting Services reports were configured to automatically route their connections to the readable secondary in order to keep the reporting workload off of the production SQL Server.  (I’ll be blogging more about this feature later.)

In the second part of the keynote Quentin talked about ColumnStore indexes and how this feature will make running reports and data analysis much, much quicker.  Quentin then talked about how PowerPivot has been enhanced, specifically when implemented along with SharePoint, which will allow users to create their own reports within the IT controlled platform of SharePoint (which is great unless you don’t have / can’t afford SharePoint).

The next demo was for the Data Quality Services (DQS) and Master Data Services (MDS) features.  DQS is a pretty cool feature which allows you to clean up data problems, like an incorrect lat/long based on the address of a location, by using data from the SQL Azure data marketplace.  During the demo there were a couple of stores which were located in the middle of the water around Seattle.  By using DQS the stores’ locations were quickly and easily moved from the middle of the water to the correct locations in Seattle.

Quentin then talked about the Fast Track program which allows hardware vendors to sell pre-packaged, pre-configured systems with SQL Server on them which can make it easier for customers to purchase their SQL Server servers.  Another great program that Microsoft has is the appliance solution where you can get a server from the vendor up and running in just a couple of hours ready for data to be loaded into it.

Microsoft introduced the HP Enterprise Database Consolidation Appliance.  This system is basically a private cloud appliance which is a rack of servers with 192 cores, 400 disk drives (supporting up to 60k SQL Server IOs) and 2 TB of RAM.  This can be purchased as a half rack or a full rack appliance.  As I understand this appliance this is effectively a Windows Hyper-V cluster where you can fire up VMs within the system quickly and easily.  Where the system really saves you is the ability to deploy as a single unit quickly and easily without having to manually test all the components as HP will be testing and certifying that the hardware is working correctly before shipping the unit.  From the support side of things you get a single phone number to call in order to get support for both the hardware as well as the software.

The next announcement for today was a traditional ODBC driver for the Linux platform, so that applications running on non-Microsoft platforms can connect to a SQL Server database without any real change to the application.  Quentin also announced that Change Data Capture is now supported for SSIS as well as Oracle (not really sure what this means or how this works).

Semantic Search was the next feature which was demoed.  Semantic Search is used along with the new file table feature, which can load files into the database through the file system.  Semantic Search can then read the files and not just make the values searchable; it can understand the data within the files, so that you can find files based on specific searches and then find documents related to what you just found, using not just specific search terms but the original document itself to find the matches.

Another name change for this week is that project Juneau is now called SQL Server Data Tools.

You can now, from within SQL Server Management Studio, right click on a database and migrate the database directly to SQL Azure.  This is done by creating a DACPAC (v2.0) and pushing it to Azure with just a few clicks.  Another Azure feature which is now supported is the ability to back up a local or Azure database to Azure file storage by creating what is called a bacpac, which is then copied to the Azure file storage.  So far we haven’t seen a way to do this through T-SQL.  You can also use this technology to back up your Azure database to the Azure file storage, then use SSMS to connect to the file storage and restore the database to a local SQL Server instance for dev and testing.

SQL Azure now supports federations on the database side of the Azure platform in order to dynamically scale the database platform quickly and easily through the management portal without any application changes.  With the next year-end release of SQL Azure, larger databases up to 150 Gigs will be supported, as well as the ability to use a different collation (this uses contained databases).

Available today in CTP, Microsoft has introduced the SQL Azure Reporting CTP as well as the SQL Azure Data Sync CTP.  The Reporting CTP allows you to run SSRS reports from the SQL Azure cloud.  The Data Sync CTP (which is the final CTP) allows you to easily sync data between a local instance and an Azure instance, or from an Azure instance to a SQL instance at another cloud provider.

Hopefully you are as excited about some of these new features as I am, as some of this stuff just looks really, really cool.

Denny

