Here are some of the key points from the EMC World keynotes.
Gartner says that by 2011 there will be 1,337 exabytes of information.
179 exabytes of information have been created so far this year.
In the next two years we will see a trend moving further away from using Tape for backup and recovery. It will still be used for long term archiving, but day to day restores will come from disk.
Turning raw data into useful information is becoming more and more of a challenge for IT departments and business units.
What I’m gathering from the keynotes is that the amount of data is exploding. I see this as a great thing for us DBAs, as the bulk of the data that companies have is going to be stored within a database. I know that the company I work for (Awareness Technologies) creates about 110 gigs of information per day. Under our product policy we keep this data for 14 days (unless the customer pays us to keep it longer). This means that we have about 1.5 terabytes of information within our data center that our customers are looking at on a daily basis, with an extremely high data change rate to go with it.
Sites with high data change rates have specific challenges, some of which I will be talking about (specifically how they relate to Microsoft SQL Server) on 6/11/2008 at 12:00 PM PST when I present the session “geekSpeak: Spatial Data in SQL Server 2008 with Denny Cherry”. I’ll post the link to this webcast when I get it.
Since my laptop battery is dying I’ll have to wrap this up for now. Check back later for more.
While waiting for the first keynote to start I figured that this would be as good a time as any to write a quick post.
I just got out of the first session of the morning. I decided to catch the VMware VI3 session which went over the new features of ESX 3.5.
It was a very good session packed with great info about the new features of VMware ESX 3.5 which came out a few months ago.
3.5 introduces Storage VMotion. It’s similar to the regular VMotion, which moves VMs from one host to another, but it’s designed to move a VM’s storage from one disk (LUN, array, etc.) to another disk within the same host without taking the VM down. In prior versions of VMware, to move a VM from one disk to another you would need to power the machine down and then move it. It can now be done as a live operation. It does temporarily double the memory and CPU requirements of the guest OS, and it is a CPU- and disk-intensive operation that should be done off hours, but it’s a great way to move from a full disk to an empty one.
Another great technology which I was introduced to is VDM, or Virtual Desktop Manager. As I understand this technology, it allows you to take a single Windows XP guest OS and let many people connect to it. The VDM places a connection pool in front of the Windows XP guest OS, and as people connect it takes a snapshot of the guest OS and lets each new user work from that snapshot. This allows many people to use a single virtual machine without increasing the amount of disk space required, while reducing the IO load that your Windows XP (or Vista) guests put on the host.
As the Keynotes proceed I’ll try and go over what they are talking about.
Funniest comment of the concert would have to be “This is the first all you can eat buffet rock show I’ve ever been to.”
[kml_flashembed movie="http://www.youtube.com/v/er3xnXS_9vs" width="425" height="350" wmode="transparent" /]
The video was taken by my wife (who also had a great time, thanks for coming with me). What you don’t see (but you can hear) is John Rzeznik talking to one of the people serving the food about 2 feet from the front of the stage. If anyone has better shots or video please post them and links to them.
I’ll continue to post from the conference as best I can.
I’ve added another picture to the group.
The message type is the first of the service broker specific objects. The message type defines the name of the message, and performs any validation that the service broker needs to do on the contents of the message prior to sending the message. If the message is being sent to another database (on the same server or a different server) then the message type must exist on both databases.
Validation can be one of four options.
- None (my personal favorite, especially for sample code)
- Empty
- Well_Formed_XML
- Valid_XML With Schema Collection
The first two are pretty straightforward. None = no validation is done. Empty = the message must be empty. Well_Formed_XML requires that the message be a well-formed XML document. The fourth option requires not only that the XML be well formed, but that it fit within a pre-existing XML schema collection (an XSD which has been loaded into the SQL Server).
The syntax to create a Message Type is pretty basic.
CREATE MESSAGE TYPE [MessageTypeName]
VALIDATION = NONE
That’s it. If you need to set the owner of the object, use the AUTHORIZATION clause.
When using the validation of NONE it is up to the receiving code or application to verify that the data within the message exists and is valid.
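For the strictest option, the message type is bound to an XML schema collection. Here is a minimal sketch of what that looks like; the schema and the OrderSchema/OrderMessage names are made up for illustration:

```sql
-- Load a trivial XSD into the server as a schema collection
CREATE XML SCHEMA COLLECTION OrderSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="Order">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="OrderId" type="xsd:int" />
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>
</xsd:schema>';
GO

-- Messages of this type must now validate against the schema above
CREATE MESSAGE TYPE [OrderMessage]
VALIDATION = VALID_XML WITH SCHEMA COLLECTION OrderSchema;
GO
```

Keep in mind that schema validation happens on every send and receive, so it adds CPU overhead to each message.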
Next week I’ll be stepping into the lion’s den of sorts. I’ll be off to Las Vegas to attend EMC World. I went last year, and I’m pretty sure that I was about the only full time DBA there. The storage folks treated me pretty nice, even after they found out that I was a DBA.
Hopefully when I get back I’ll have all sorts of new storage tips and tricks to share with everyone.
If anyone else is brave enough to attend let me know here. Maybe we can meet up at one of the events.
Back in the old days one of the big reasons that people enforced such strict rules on data normalization was that it would greatly increase the amount of information you could fit on the hard drive. Back when everything was measured in kilobytes or megabytes, storing redundant information simply wasn’t an option. You had to cram as much information as you could into that little database.
These days space isn’t the issue. Storage is very cheap, and if you have the money you can get absolutely massive amounts of storage. 1 TB hard drives are easily found, and when you start talking about RAIDing drives together the storage limits become just insane. 100 TB of storage just isn’t what it used to be.
The problem now becomes that, with so much data for the database platform to search through, we need to make sure that the indexes which SQL Server is searching are as small as possible so that we can get those indexes read from the disk and into memory as quickly as possible.
Say we have an Employee table which looks like this.
CREATE TABLE Employee
(EmployeeId INT PRIMARY KEY,
LastName VARCHAR(50),
Department VARCHAR(50)) -- department name stored inline (columns assumed for illustration)
Now this table is clearly not set up as a normalized table. We are assuming that this table has best-case indexing. If we have 1,000 employee records and we are going to search the Department column, the index must be read from disk. Assuming that the department names are nice long bureaucratic names which average 30 characters each, we have about 30,000 bytes (more if you take the fill factor into account) which need to be read from the disk. Now suppose we had a relational version of the table.
CREATE TABLE Employee
(EmployeeId INT PRIMARY KEY,
LastName VARCHAR(50),
DepartmentId INT) -- key into a separate Department table (columns assumed for illustration)
When we search the Employee table we are now only loading an integer value from the disk into memory, roughly 8,000 bytes of data once you include the per-row index overhead. Knowing that SQL Server reads data from the disk in 8 KB pages, even with a 50% fill factor the index only takes two pages on the disk, so the search can be completed in two read operations. The original table’s 30,000 bytes of department names, on the other hand, need roughly four data pages with no fill factor, and about eight if we use the same 50% fill factor.
With tables of this size this isn’t much. But as your database grows, the number of additional data pages which have to be read for a simple search of the table grows right along with the row count.
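If you want to see for yourself how many pages an index actually occupies, sys.dm_db_index_physical_stats will report it. A quick sketch (the Employee table name is this post's example, and the query assumes the table exists in the current database):

```sql
-- Page count and average fullness for every index on the Employee table
SELECT i.name AS index_name,
       ps.page_count,
       ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Employee'),
                                    NULL, NULL, 'DETAILED') ps
JOIN sys.indexes i
    ON ps.object_id = i.object_id
   AND ps.index_id = i.index_id;
```

The 'DETAILED' scan mode is what populates the page-fullness column; it reads the whole index, so run it off hours on big tables.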
The SQL Server Service Broker is a fantastic new addition to SQL Server. For those who have used Microsoft Message Queue (MSMQ) the service broker will be easy to understand as it’s the same basic concept. Messages are sent from one place to another, within a database, from database to database, or from server to server.
While the configuration can be a daunting task, once the service broker is setup it is a very solid system which can handle a large message load.
The Service Broker gives you guaranteed delivery, with each message processed exactly once and always in the order received. Messages can be processed on demand (you write software which queries the Service Broker on a schedule) or automatically via queue activation.
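The on-demand side of that can be as simple as a loop that calls RECEIVE. A minimal sketch, assuming a queue named dbo.TargetQueue has already been created:

```sql
-- Wait up to five seconds for the next message on the queue;
-- RECEIVE removes the message as part of returning it
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        message_body
    FROM dbo.TargetQueue
), TIMEOUT 5000;
```

Queue activation does the same work for you by firing a stored procedure whenever messages arrive, which I'll get to when I cover queues.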
Check back for information about all the various Service Broker objects and how to configure each of them.
Quest Software has just published the first of a few videos which I recorded with them.
You can download or view the video from here.
I’ve got to tell you, I had an absolute blast recording the video.
The nice folks at the Inland Empire .NET User Group have invited me to come and speak to them. This is a great speaking opportunity for me as they are about 20 minutes from my house.
I won’t be speaking there until December 9, 2008 (it’s amazing just how far in advance some of this stuff gets scheduled).
I’ll be giving two presentations at the meeting. The first will be the ever popular Query Tuning, and the second will be a talk on the SQL Server 2008 Resource Governor. The address and directions to the meeting can be found on the IE .NET User Group web site. If you are going to attend their meetings they have an RSVP link on the site.
I’ll post about this meeting again closer to the meeting.
As we all know, SQL Server will, for good or bad, cache execution plans. Up until now it’s been very tough to see the cached execution plans. You pretty much had to grab the query or procedure (with the parameters) and run it in Query Analyzer and have it return the execution plan. The only other option was to use SQL Profiler to capture the query along with the execution plan it used.
Starting in SQL Server 2005 you have the ability to query the system catalog and see what query plans are stored in the cache, which queries trigger each plan to be used, and how many times each cached plan has been used.
The first DMV to look at is sys.dm_exec_cached_plans. This DMV gives you the size of the plan, the handle of the plan (which is binary), and the number of times the plan has been used.
The next DMV to look at is sys.dm_exec_requests. This DMV gives you the currently running processes, wait information, all the session settings for the processes, etc. It also includes the handle for all the plans which are currently being run. If you join this sys.dm_exec_requests to sys.dm_exec_cached_plans you can see exactly which processes are using which plans. Because this shows you the spid (the session_id column) and the plan_handle you can see what users are running which commands.
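That join might look something like this sketch (column choice is mine; both DMVs expose plenty more):

```sql
-- Which sessions are currently running which cached plans,
-- and how often each of those plans has been reused
SELECT r.session_id,
       r.status,
       cp.usecounts,
       cp.size_in_bytes
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_cached_plans cp
    ON r.plan_handle = cp.plan_handle;
```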
The next object is a system table function called sys.dm_exec_query_plan. This function accepts a single input parameter of a plan handle. There are several columns in the output, the most important of which is query_plan. This is the actual XML plan which is stored in the SQL Server.

The last object we’ll be looking at is the sys.dm_exec_plan_attributes table function. This function also accepts a single input parameter of a plan handle. It returns the various session settings which were in place when the plan was created. This is important information to have when working with query plans, as changing a single ANSI connection setting will force SQL Server to create a new execution plan.
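Putting the pieces together, CROSS APPLY feeds each plan handle from the cache into the two table functions. A sketch:

```sql
-- For each cached plan: reuse count, the XML plan itself, and the
-- SET options that were in effect when the plan was compiled
SELECT cp.usecounts,
       cp.size_in_bytes,
       qp.query_plan,
       pa.value AS set_options
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'set_options';
```

Clicking the query_plan XML in Management Studio's results grid opens it, which is a handy way to get at the graphical plan.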
Don’t forget how to view the XML execution plan in the UI.