So it turns out that the discount code that I posted last week for SSWUG has stopped working for some reason. SSWUG has created a new code for $30 off. The new code is DCSPVC10.
Everything else in my old post is still valid.
I’ll be presenting several sessions at SQL Saturday, as are several other great speakers. However, they are still looking for additional speakers, both new and experienced, to give presentations at the event. They are also looking for additional sponsors to help make the event even better.
So if you have been looking for an event to give your first presentation at, or if you have a stack of presentations, this would be a great event for you to speak at.
If you’ve been looking for some free SQL Server training in the Southern California area, this is the event for you. There will be hours of free SQL Server training all available for just showing up (and for $10 to cover the lunch).
I hope to see you there.
Were you planning on attending the SSWUG vConference this April 7-9, 2010, but wanted to save some extra cash in the process? Have I got the perfect solution for you. When signing up, use VIP code DCHERRYSPVC10 and this will save you an extra $30 off the registration cost of $249.00. If you sign up now, you’ll get the early bird rate of $190, minus the $30 off, which brings the price down to a very reasonable $160.
Conference registration includes 6 month membership/extension to SSWUG.org, all materials, 3 days of sessions, 45 days of on-demand viewing and a lot more!
Conference details are stolen straight off of the sign up page.
I’ve got three sessions which I just finished recording earlier in the week, and I’m planning on being online during them to handle any questions which come up via the Live Q&A. (I may have a scheduling conflict, as I’ll be in Redmond at the Microsoft offices for part of one of the days, but hopefully we can work around that.)
See you there.
The timing on this post might seem a little strange, but I’ve been meaning to write this for a while and I finally got a chance to do it.
Back when Hyper-V 1.0 was released it wasn’t all that great of a product. It showed some promise, but it really wasn’t there. I had all sorts of people (mostly from Microsoft) telling me that it was way better than ESX and that I needed to give it a shot. My personal feeling was that it wasn’t anywhere near where ESX was, and for my production environment I needed the better product, so we went with ESX 3.5.
Well a while back Microsoft released Hyper-V 2.0, and it is a much better release than v1 was. I’d even be willing to stack it up next to VMware’s ESX 3.5, which was VMware’s competing version at the time of release. Put next to ESX 3.5 you would have two well matched products. Both include a real time online migration solution (vMotion for ESX and Live Migration for Hyper-V). Both support being put into a high availability cluster. Both support pass through disks so that the guest OS has direct access to the fibre channel storage.
However shortly after Hyper-V 2.0 was released, VMware released vSphere 4.0, which is the successor to ESX 3.5, and with vSphere 4.0 they’ve blown the doors off of Hyper-V yet again.
vSphere gives us FT, or Fault Tolerance, which basically runs a single VM on two machines with only one of the machines being active at a time. In the event that one host fails, the other host begins running the VM actively with no outage to the guest. Users connected to the guest won’t even know that the guest has switched to another machine.
VMware has also introduced some interesting features as experimental which means that we will probably see them show up as full on features in a future release. This includes the ability to map an HBA directly to a virtual machine to give the VM actual direct access to the HBA. At the moment that HBA can only be mapped to a single VM, but hopefully in the next release they will fix that.
Now don’t get me wrong, I think that Hyper-V has come a long way since the v1 release. Do I think that Hyper-V is an Enterprise Ready solution? Yes I do. Do I think Hyper-V is ready to be called the winner in the virtualization server space? No, not at all. I think it is anyone’s game still before we have a clear winner. Hyper-V has a big selling point: the cost to get into Hyper-V is free, as long as you don’t want to cluster it. If you do, you’ll need to purchase a management tools license for each host machine. Now with VMware you’ll want the management tool whether you cluster the machines or not, but it’s a single purchase from VMware at least.
What it really comes down to is that you need to fully evaluate both platforms and do a solid CBA (Cost Benefit Analysis) as well as a full feature analysis before picking a platform for your enterprise, because once you pick one platform, moving from one to the other is very tough to do.
I read this on the net tonight and thought it was an awesome idea.
A friend and colleague at NetApp let me know they were doing a drive for St. Baldricks – a campaign where people shave their heads to help raise funds for fighting cancer in children. So, in the spirit of competition driving positive things and staying above the fray, we made a little wager.
If they can crack $10K before mid-day tomorrow, my NetApp brothers will shave “EMC” into their heads – as of this moment, they have raised $3,600 from EMCers.
So in the spirit of fanning the flames, I’m posting a link to it. So if you work for EMC or NetApp (or know someone who does) hit the link and donate, donate, donate.
There’s a lot of talk about the new SQL Server 2008 R2 pricing. To give you an idea of why people are complaining…
(Keep in mind that all prices on this page are per-CPU licensing.)
| SQL 2008 R2 | $7,499 | $28,479 | $57,498 |
Now you are getting a lot of new features in SQL 2008 R2 over what was included in SQL Server 2005 and SQL Server 2008. However, these prices are quite high, especially on the lower end, when you compare them to the other database, you know, Oracle.
| Difference from SQL 2008 | $7,501 | $22,501 |
Now you’ll notice that I didn’t compare Oracle’s Standard Edition One to a SQL Server edition. There isn’t really a SQL Server edition that compares with this edition, as it falls somewhere between the Workgroup and Standard editions. It is lower than the SQL Server Standard edition as it only supports two CPUs, but it is better than the SQL Server Workgroup edition as it isn’t subject to the Workgroup edition’s RAM limit.
Now when it comes to features SQL Server is going to be the clear winner. SQL Server includes things like Replication, Auditing, and so on. As best I can tell (god knows I’m not an Oracle expert) these features aren’t available as part of the non-Enterprise editions of Oracle. And if you want them on Oracle you had better be able to pay for them.
A few of the extras that you can buy for your Enterprise edition Oracle server include:
- Advanced Data Compression – $10,000 per CPU
- Advanced Security – $10,000 per CPU
- OLAP – $20,000 per CPU
If you want to connect your Oracle database to another database platform, that’ll cost you as well. As best as I can figure this is basically Oracle’s version of linked servers. (These prices are per database server not CPU.)
- SQL Server access – $15,000
- Sybase access – $15,000
- Informix access – $15,000
- Teradata access – $95,000
- Websphere MQ – $40,000
Now Oracle doesn’t have a Datacenter edition or a Parallel Data Warehouse edition, so there’s really no way to directly compare Oracle to the SQL Server Datacenter or Parallel Data Warehouse editions. To get close we have to take the Oracle Enterprise Edition at $40,000, add in Partitioning at $10,000, and add in RAC (Real Application Clusters) at $20,000, for a total of $70,000.
Now when dealing with the Oracle Enterprise edition you have to keep in mind that you aren’t paying per socket (Oracle licenses the Standard and Standard One editions per socket) but per CPU core. According to the Oracle Pricing Book, if you are using SPARC multi-core processors then each core is charged as .75 of a CPU, and if you are using x86 or x64 multi-core processors then each core is charged as .5 of a CPU.
So if you have a quad chip quad core server, which is a pretty standard database server these days, that’s 16 cores, so you are paying for a total of 8 CPU licenses. Assuming that you have Enterprise Edition with no optional features, that is $320,000 in license fees. SQL Server 2008 R2 Enterprise Edition comes out at $229,992, which makes SQL Server 2008 R2 $90,008 less for that server.
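As a sanity check on the arithmetic, here’s the same math worked out as a small T-SQL sketch. The figures are just the list prices quoted in this post, not an official price calculator.

```sql
-- Worked example of the licensing math above. All figures are the
-- list prices quoted in this post, not an actual price quote.
DECLARE @cores int = 16;                    -- quad chip, quad core server
DECLARE @core_factor decimal(3, 2) = 0.5;   -- x86/x64 multi-core factor
DECLARE @oracle_ee_per_cpu money = 40000;   -- Oracle Enterprise Edition, per CPU license
DECLARE @sql_per_cpu money = 57498;         -- SQL Server 2008 R2 per-CPU figure used above
DECLARE @sockets int = 4;

SELECT @cores * @core_factor                      AS oracle_cpu_licenses,  -- 8
       @cores * @core_factor * @oracle_ee_per_cpu AS oracle_cost,          -- 320,000
       @sockets * @sql_per_cpu                    AS sql_server_cost;      -- 229,992
```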
Now, on the lower end database server side Oracle is going to start to win on price. If you need a dual chip SQL Server you’ll probably want the Standard Edition (otherwise you are limited to 4 Gigs of RAM). SQL Server Standard for two CPUs will come out to $14,998 for the server, but Oracle Standard Edition One comes out to $9,990, making Oracle $5,008 less than SQL Server. Now if your database server needs four CPUs then Oracle Standard Edition will come in at $60,000 where the SQL Server 2008 R2 Standard Edition will come in at $29,996, making the SQL Server license $30,004 less expensive. Apparently Oracle Standard doesn’t support more than 2 CPUs, based on the information put in the comments below. But with the 6 core CPUs out and the 8 core CPUs coming out soon, having a 2 socket server is probably going to become a more and more popular option.
I think that if Microsoft is going to keep the Standard edition pricing where it is, they should increase or remove the memory limit of the Workgroup edition so that it is a better competitor with the Oracle Standard Edition One product. With the memory limit removed, the Workgroup edition would be superior to Oracle Standard Edition One, as the SQL Server Workgroup edition would win on features.
P.S. All prices are based on posted list prices as announced by Microsoft at the SQL PASS summit, or from the Oracle Pricing Book. These should be considered list prices, and if you pay these prices for either product you aren’t trying hard enough to get a discount.
UPDATE: I’ve corrected the Oracle pricing and commented out the information about the quad chip Standard Oracle Server as apparently a quad chip Oracle server requires Enterprise Edition.
I presented three sessions at SQL Saturday 33 this last weekend. Here are the slide decks and sample code that I used during my presentations.
It was great presenting this weekend.
So today is #TSQL2sDay, and this month’s topic is Disk IO. Storage is something I love working with, so I figured why not, I’ll post something.
Hopefully everyone knows that your storage solution is the most critical component when it comes to keeping your database up and running day after day. If your disks are too slow, then your database will be slow, and there just isn’t anything you can do about that besides adding more disks to the storage system.
However figuring out that the problem is actually slow disks can be a problem in itself, especially if you work in a large company and don’t have access to the servers themselves or the storage array that is hosting the databases.
Within SQL Server you have a couple of places you can look. The easiest is the sys.dm_io_pending_io_requests DMV. If you have a lot of rows being returned that show pending IO, then you may have a problem, as SQL may be trying to push more IO than your storage solution can handle.
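For example, a query along these lines (a sketch; it assumes you have VIEW SERVER STATE permission) joins the pending IO requests back to the database files, so you can see which files the stalled IO belongs to:

```sql
-- Pending IO requests, mapped back to the physical database files.
SELECT pio.io_type,
       pio.io_pending,               -- 1 = still waiting on the OS/storage
       pio.io_pending_ms_ticks,      -- how long the request has been pending
       mf.database_id,
       mf.physical_name
FROM sys.dm_io_pending_io_requests pio
JOIN sys.dm_io_virtual_file_stats(NULL, NULL) vfs
    ON pio.io_handle = vfs.file_handle
JOIN sys.master_files mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id;
```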
Another DMV you can look at is sys.dm_io_virtual_file_stats. This will tell you how many IO requests have been processed since the instance was started, as well as how many of those requests were stalled. Using these numbers requires doing some math to see how you’re doing over the current runtime of the instance. As the instance has been up for longer and longer, these numbers can get harder and harder to make sense of.
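A sketch of the kind of math involved: dividing the cumulative stall time by the number of requests gives a rough average latency per database file since the instance started.

```sql
-- Rough average IO latency per database file since instance startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads
       END AS avg_read_stall_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY avg_read_stall_ms DESC;
```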
Within Windows you’ll be using our good friend Performance Monitor to see what’s going on. There are a few counters which are really critical to look at. They include the reads and writes per second, the seconds per read and write, and the queuing counters.
The reads and writes per second will tell you how many requests are going between the server and the disks per second. If these numbers are very high and stay there, then you are pushing the disks very hard. If this is the case, stop here, and make sure that you don’t have any indexes that need to be built, and that all your statistics are up to date.
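A couple of quick checks along those lines (a sketch to run in the database in question; the 30% fragmentation threshold here is just a common rule of thumb, not gospel):

```sql
-- Indexes with heavy fragmentation in the current database.
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ps
JOIN sys.indexes i
    ON ps.object_id = i.object_id
   AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 30;

-- When each statistic on the user tables was last updated.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;
```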
The seconds per read and write are very critical numbers. These tell us how fast the disks are processing each read and write request. These numbers should be very, very low, somewhere in the .00n range, with the smaller the number the better. If you are seeing numbers which are higher than .010 then you may be pushing your disks too hard. Anything over 1 second and your SQL Server is probably dying for more disks.
The disk queuing numbers are also very important. These will tell you how many commands are backing up while the disks are processing the other commands which were given to them. The general rule is that the queue shouldn’t ever exceed twice the number of disks which are actively serving the data. So if you have a 10 disk RAID 10 array, the queue should go no higher than 10, as there are only 5 disks serving the data, but those same 10 disks in a RAID 5 array are OK with a disk queue of about 18, as there are 9 disks actively serving the data.
Now this doesn’t mean that you should always have a disk queue. Disks work best when data is sent to or read from them in bursts, not in a constant massive stream. This means that you want to aim for an average queue length of 0, with occasional spikes up.
On the array
If you are working with your basic local disk array, then there isn’t much you’ll be able to look at past the server unless your RAID card has metrics which it can expose to you through the diagnostic tools which come with it.
However if you are working with a SAN solution, your SAN will have some diagnostics available to your SAN admin. These diagnostic numbers will give you the full story, as you’ll be able to see what Windows sees from the server, as well as what the array is seeing.
When looking at the array itself you can see not only the performance of the LUN which is presented to Windows, but also the performance of each specific disk under the LUN. This will allow you, for example, to see if a specific spindle under the LUN is causing the slowdown, perhaps because it is failing.
Getting the full picture is very important when it comes to looking at storage performance issues. This means looking at the performance numbers from all sides so that you can get a full understanding on exactly where the performance problem may be coming from.
I forgot to put in the link to Mike’s post about T-SQL Tuesday, so here it is.
Every once in a while you have to kill a SPID in SQL Server. And on a rare occasion the SPID will go into rollback, but won’t actually complete the rollback and go away. While this is annoying, there isn’t actually anything bad going on. The SQL Server is running just fine; however, you won’t be able to get rid of this SPID without restarting the SQL instance.
Typically when I’ve seen this the client application has been disconnected from the SQL Server. From what I understand, what is happening is:
- The SPID is killed
- The SQL Server rolls back the transaction
- The client is informed of the rollback
- The client acknowledges that the rollback is complete
- SQL terminates the SPID
Every time that I’ve seen this on my servers the client has already disconnected, due to a reboot, network drop, client crash, etc., which stops the SQL Server from telling the client that the rollback is complete. This breaks something between steps 3 and 4, leaving the process sitting there.
The upside to this problem is that the rollback is complete and the transaction has been completely rolled back and closed, so it isn’t holding any locks. The downside is that you’ll need to restart the SQL instance in order to get rid of the process. Killing the process again won’t do anything for you, as it will only tell you that there are 0 seconds remaining and that the rollback is at 0%.
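You can see this for yourself with KILL ... WITH STATUSONLY (the SPID number here is just an example):

```sql
-- Report the rollback progress of an already-killed SPID.
-- For one of these stuck processes the output never moves past
-- 0% complete / 0 seconds remaining.
KILL 52 WITH STATUSONLY;
```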
If you have one of these processes show up on you and you have to leave it for a day or two until you can restart the instance, there shouldn’t be any harm in this, as the process is idle. It is using a small amount of memory, but it isn’t using any CPU. Upon restart of the instance it won’t add any time to the instance restart, as the transaction has already been rolled back, so the instance will come back online quickly.