I find myself with a little extra time between my last session of the day, which was a SQL 2008 session, and the Grand Opening Reception, so I figured that a blog post was in order.
The SQL 2008 session was a what’s-coming-in-SQL-2008 overview. Most of the information was stuff that I already knew and have covered here in some form or another: FILESTREAM, new data types, etc. One new feature was presented, though: the Remote Blob Storage Architecture (RBSA). It’s not actually part of SQL Server itself but of the SQL Native Client (the SQL 2008 SQL Server driver). The RBSA is an API which lets the application write a file directly to the file system within the context of a SQL Server transaction. That way, if the transaction is rolled back, the file is deleted automatically as part of the client transaction rollback.
This is the first that I have heard of it, so I’ll have to find out more about it and post what I learn. The only thing that Google can find is an MSDN forum thread from March 2008. There was some very rough VB.NET code presented as an example. I’ll try and get my hands on that, clean it up a bit, and see what I can figure out. As soon as I do I’ll get it posted.
If anyone has any additional information about the RBSA please feel free to post it. I’m sure that I’m not the only one who would like to see more information about it.
I’ve got 5 or 6 sessions tomorrow, but they are more focused on VMware and the EMC CLARiiON product line. So be sure to check back tomorrow if you are interested in those.
Here are some of the key points from the keynotes from EMC world.
Gartner says that by 2011 there will be 1,337 exabytes of information.
179 Exabytes of information has been created so far this year.
In the next two years we will see a trend moving further away from using Tape for backup and recovery. It will still be used for long term archiving, but day to day restores will come from disk.
Turning raw data into useful information is becoming more and more of a challenge for IT departments and business units.
What I’m gathering from the keynotes is that the amount of data is exploding. I see this as a great thing for us DBAs, as the bulk of the data that companies have is going to be stored within the database. I know that the company which I work for (Awareness Technologies) creates about 110 gigs of information per day. Under our product’s retention policy we keep this data for 14 days (unless the customer pays us to keep it longer). This means that we have about 1.5 terabytes of information within our data center that our customers are looking at on a daily basis, with an extremely high rate of data change on top of it.
Sites with high data change rates have specific challenges, some of which I will be talking about (specifically how they relate to Microsoft SQL Server) on 6/11/2008 at 12:00 PM PST when I present a geekSpeak session, “geekSpeak: Spatial Data in SQL Server 2008 with Denny Cherry”. I’ll post the link to this webcast when I get it.
Since my laptop battery is dying I’ll have to wrap this up for now. Check back later for more.
While waiting for the first keynote to start I figured that this would be as good a time as any to write a quick post.
I just got out of the first session of the morning. I decided to catch the VMware VI3 session which went over the new features of ESX 3.5.
It was a very good session packed with great info about the new features of VMware ESX 3.5 which came out a few months ago.
3.5 introduces Storage VMotion. It’s similar to the regular VMotion, which moves VMs from one host to another, but it’s designed to move a VM’s storage from one disk (LUN, array, etc.) to another disk on the same host without taking the VM down. In prior versions of VMware, to move a VM from one disk to another you would need to power the machine down and then move it. It can now be done as a live operation. It does temporarily double the memory and CPU requirements of the guest OS, and since it is a CPU- and disk-intensive operation it should be done off hours, but it’s a great way to move from a full disk to an empty one.
Another great technology which I was introduced to is VDM, or Virtual Desktop Manager. As I understand this technology, it allows you to take a single Windows XP guest OS and let many people connect to it. The VDM places a connection pool in front of the Windows XP guest OS, and as each person connects it takes a snapshot of the guest OS and lets the new user work from that snapshot. This allows many people to use a single virtual machine without increasing the amount of disk space required, while reducing the IO load that your Windows XP (or Vista) guests place on the host.
As the Keynotes proceed I’ll try and go over what they are talking about.
Funniest comment of the concert would have to be “This is the first all you can eat buffet rock show I’ve ever been to.”
[kml_flashembed movie="http://www.youtube.com/v/er3xnXS_9vs" width="425" height="350" wmode="transparent" /]
The video was taken by my wife (who also had a great time, thanks for coming with me). What you don’t see (but can hear) is John Rzeznik talking to one of the people serving the food about 2 feet from the front of the stage. If anyone has better shots or video please post them and links to them.
I’ll continue to post from the conference as best I can.
I’ve added another picture to the group.
The message type is the first of the Service Broker-specific objects. The message type defines the name of the message, and performs any validation that the Service Broker needs to do on the contents of the message prior to sending it. If the message is being sent to another database (on the same server or a different server) then the message type must exist in both databases.
Validation can be one of four options.
- None (my personal favorite, especially for sample code)
- Empty
- Well_Formed_XML
- Valid_XML With Schema Collection

The first two are pretty straightforward: None means no validation is done, and Empty means the message must be empty. Well_Formed_XML requires that the body be a well-formed XML document. The fourth option requires not only that the XML be well formed, but that it fit a pre-existing XML schema collection (an XSD which has been loaded into the SQL Server).
The syntax to create a Message Type is pretty basic.
CREATE MESSAGE TYPE [MessageTypeName]
VALIDATION = NONE;
That’s it. If you need to change the owner of the message type, use the AUTHORIZATION clause.
When using the validation of NONE it is up to the receiving code or application to verify that the data within the message exists and is valid.
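For reference, here is what the other validation levels look like in T-SQL. The message type and schema collection names below are just made-up examples, not objects from any real system:

```sql
-- Each validation level, from loosest to strictest (names are illustrative)
CREATE MESSAGE TYPE [SampleNoValidation]
    VALIDATION = NONE;             -- anything goes, even binary data

CREATE MESSAGE TYPE [SampleEmpty]
    VALIDATION = EMPTY;            -- the message body must be empty

CREATE MESSAGE TYPE [SampleXML]
    VALIDATION = WELL_FORMED_XML;  -- the body must parse as XML

-- Requires the XML schema collection to already exist in the database
CREATE MESSAGE TYPE [SampleSchemaXML]
    VALIDATION = VALID_XML WITH SCHEMA COLLECTION [SampleSchemaCollection];
```

The stricter the validation, the more CPU the Service Broker spends checking each message before it is sent, which is one reason NONE is handy for demos.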
Next week I’ll be stepping into the lion’s den of sorts. I’ll be off to Las Vegas to attend EMC World. I went last year, and I’m pretty sure that I was about the only full time DBA there. The storage folks treated me pretty nice, even after they found out that I was a DBA.
Hopefully when I get back I’ll have all sorts of new storage tips and tricks to share with everyone.
If anyone else is brave enough to attend let me know here. Maybe we can meet up at one of the events.
Back in the old days one of the big reasons that people enforced such strict rules on data normalization was that it greatly increased the amount of information you could fit on the hard drive. Back when everything was measured in kilobytes or megabytes, storing redundant information simply wasn’t an option. You had to cram as much information as possible into that little database.
These days space isn’t the issue. Storage is very cheap, and if you have the money you can get absolutely massive amounts of it. 1 TB hard drives are easily found, and when you start talking about RAIDing drives together the storage limits start to become just insane. 100 TB of storage just isn’t what it used to be.
The problem now becomes that, with so much data for the database platform to search through, we need to make sure that the indexes which SQL Server is searching are as small as possible so that we can get those indexes read from the disk and into memory as quickly as possible.
Say we have an Employee table which looks like this.
CREATE TABLE Employee
(EmployeeId INT,
 FirstName VARCHAR(50),
 LastName VARCHAR(50),
 Department VARCHAR(50)) -- column list is illustrative; the point is the department name repeated on every row
Now this table is clearly not set up as a normalized table. We are assuming that this table has best-case indexing. If we have 1,000 employee records and we are going to search the Department column, the index must be read from disk. Assuming that the department names are nice long bureaucratic names which average 30 characters each, that is about 30,000 bytes (more if you take the fill factor into account) which needs to be read from the disk. Now compare a relational version of the table.
CREATE TABLE Employee
(EmployeeId INT,
 FirstName VARCHAR(50),
 LastName VARCHAR(50),
 DepartmentId INT) -- department name moved to its own table

CREATE TABLE Department
(DepartmentId INT,
 DepartmentName VARCHAR(50))
When we search the Employee table we are now only loading an integer value from the disk into memory: a 4-byte INT per row, or about 4,000 bytes for our 1,000 rows. Knowing that SQL Server reads data from the disk in 8 KB pages, even with a 50% fill factor the index takes only a page or two on disk, and the search can be completed in a couple of read operations. Our original table’s 30,000-byte index, by contrast, needs about four data pages (3.7 to be specific) with no fill factor, and roughly eight if we use the same 50% fill factor.
With tables of this size this isn’t much. But as your database grows, the number of additional data pages which have to be read for a simple search of the table grows right along with it, and the gap between the wide key and the narrow key just keeps getting bigger.
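If you want to check the page math yourself, a quick back-of-the-envelope query (assuming roughly 8,060 usable bytes per 8 KB page, SQL Server’s maximum in-row data per page) looks like this:

```sql
-- Rough page counts for the two index key sizes discussed above
SELECT CEILING((1000 * 30) / 8060.0) AS VarcharKeyPages, -- 30-byte keys: ~4 pages
       CEILING((1000 * 4)  / 8060.0) AS IntKeyPages;     -- 4-byte keys: 1 page
-- Halve the fill factor and both page counts roughly double.
```

The absolute numbers are tiny here; the point is the ratio, which stays the same no matter how many rows the table holds.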
The SQL Server Service Broker is a fantastic new addition to SQL Server. For those who have used Microsoft Message Queue (MSMQ) the service broker will be easy to understand as it’s the same basic concept. Messages are sent from one place to another, within a database, from database to database, or from server to server.
While the configuration can be a daunting task, once the service broker is setup it is a very solid system which can handle a large message load.
The Service Broker gives you guaranteed delivery, with each message processed exactly once and always in the order received. Messages can be processed on demand (you write software which queries the queue on a schedule) or automatically via queue activation.
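As a preview of the pieces involved, a minimal end-to-end conversation within a single database looks something like the sketch below. All of the object names here are made up for illustration:

```sql
-- Minimal Service Broker setup: message type, contract, queues, and services
CREATE MESSAGE TYPE [SampleMessage] VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [SampleContract] ([SampleMessage] SENT BY INITIATOR);
CREATE QUEUE SenderQueue;
CREATE QUEUE ReceiverQueue;
CREATE SERVICE [SenderService] ON QUEUE SenderQueue ([SampleContract]);
CREATE SERVICE [ReceiverService] ON QUEUE ReceiverQueue ([SampleContract]);

-- Open a conversation and send a message
DECLARE @handle UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [SenderService]
    TO SERVICE 'ReceiverService'
    ON CONTRACT [SampleContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE [SampleMessage] (N'<SomeData>1</SomeData>');

-- Pull the message off the queue on demand
RECEIVE TOP (1) conversation_handle, message_type_name,
    CAST(message_body AS XML) AS message_body
FROM ReceiverQueue;
```

In a real system you would loop on RECEIVE (or use queue activation) and end the conversation when done, but this shows how the objects fit together.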
Check back for information about all the various Service Broker objects and how to configure each of them.
Quest Software has just published the first of a few videos which I recorded with them.
You can download or view the video from here.
I’ve got to tell you, I had an absolute blast recording the video.
The nice folks at the Inland Empire .NET User Group have invited me to come and speak to them. This is a great speaking opportunity for me as they are about 20 minutes from my house.
I won’t be speaking there until December 9, 2008 (it’s amazing just how far in advance some of this stuff gets scheduled).
I’ll be giving two presentations at the meeting. The first will be the ever-popular Query Tuning, and the second will be a talk on the SQL Server 2008 Resource Governor. The address and directions to the meeting can be found on the IE .NET User Group web site. If you are going to attend their meetings, they have an RSVP link on the site.
I’ll post about this meeting again closer to the meeting.