Yes, provided that your transaction logs are on their own LUN, separate from the other files, since the write cache is enabled and disabled at the LUN level. By default, read and write cache are enabled for every LUN created on the array.
There aren’t too many cases where you would want to disable the write cache on a LUN, except perhaps a data warehouse LUN where no data is updated and only new rows are written. The reason is that these writes are sequential, and the array will bypass the write cache when it detects sequential writes anyway: they can be done directly to disk about as quickly as they can be written to cache, because once the head gets into position, neither the head nor the spindle needs to move very far between write operations.
A question that comes up when building a new virtual SQL Server is how the disks should be laid out when using the default VMDKs (VMware) or VHDs (Hyper-V). Should the disks be on a single LUN, on different LUNs, etc.?
I’m sure it will surprise no one when I say that it depends. On a virtual database server where the disk IO load is high, you will want to separate the virtual disks out just like you would in the physical world. If the virtual database server has low or minimal IO then, as in the physical world, it may be OK to put the virtual disks on the same LUN.
It is important to look not just at the virtual machine’s disk load, but at the load of the other virtual machines which will be sharing the LUN(s), as well as what those other servers’ disks are doing. If you have the logs from one server on a LUN, you don’t want to put the data files from another virtual SQL Server onto that LUN, or you’ll have disk performance issues to contend with. For virtual database servers which have very high IO requirements, you will want to dedicate a LUN to each of the virtual disks, just like you would in the physical world, assuming that you don’t use iSCSI or Raw Device Mappings (VMware) / Pass-Through Disks (Hyper-V).
Hopefully this helps clear some stuff up.
If you have a SQL Server and you are doing transaction log backups on the database server, you’ll notice that 99% of the messages in your ERRORLOG file are worthless. All they say is that a log was backed up successfully. Personally, I don’t care that every 15 minutes all 30 databases had their logs backed up without error; I only care about the errors or problems during the backups. I can easily enough write a script that queries the msdb database to figure out when the last good backup was.
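For example, here’s a minimal sketch of such a script, using the standard msdb backup history tables:

    SELECT d.name AS database_name,
           b.type AS backup_type, -- D = full, I = differential, L = log
           MAX(b.backup_finish_date) AS last_good_backup
    FROM sys.databases d
    LEFT JOIN msdb.dbo.backupset b
           ON b.database_name = d.name
    GROUP BY d.name, b.type
    ORDER BY d.name, backup_type;

A database that shows NULL for last_good_backup has never been backed up at all, which is usually the first thing you want to know.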
Fortunately, there is a trace flag which can fix this little problem: trace flag 3226. Simply add -T3226 as a startup parameter for the SQL Server instance and restart the instance, and SQL Server will suppress the backup-succeeded messages from being written to the log. Any errors thrown by the backup processes will still be logged as expected. This makes the logs not only much easier to read, but also much, much smaller.
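If you want to try the flag out before scheduling a restart, you can also turn it on globally from a query window. Keep in mind that a flag enabled this way only lasts until the next restart, which is why the startup parameter is still what makes it permanent:

    DBCC TRACEON (3226, -1); -- -1 enables the flag globally, not just for this session
    DBCC TRACESTATUS (3226); -- confirm that the flag is on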
As far as I can tell this trace flag works on all versions from 2000 to “Denali”.
Many thanks to whoever it was at the PASS Summit that pointed this little bit of awesomeness out to me.
I’m a couple of days late getting this post out, thanks to being at EMC World, but here it is.
This year PASS is asking for your help in deciding which sessions should be presented at the PASS Summit in Seattle later this year. To do this they have created the Session Preference Tool, which allows you to mark the sessions you think you would like to attend. The selection committees will be taking the numbers from this tool into account when making their selections.
Hopefully you’ll like the session abstracts that I’ve submitted and vote for them.
Yesterday was EMC World day 2, and it was another great day at the conference. I started the day a little late as I was out pretty late on Monday night at a party. I was still able to get some great sessions in during the day, however.
The first session that I hit was “SQL Server on VMware – Architecting for Performance”, which was a bit of a letdown. The first half of the session was mostly a SQL Server consolidation 101 session, and I didn’t agree with a lot of the points the speaker made about solution design. One example is her recommendation to set max server memory at ~500 megs below the memory allocated to the VM. Personally, I feel that the max server memory setting should be set about 2-4 gigs below the amount of memory allocated to the VM (depending on what other software is installed on the VM, how much SQLCLR is used, etc.). There were also recommendations to enable lock pages in memory on all servers and to disable the balloon driver, which I didn’t agree with either.
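To put a number on my rule of thumb, here’s what it would look like on a hypothetical VM with 16 gigs of memory (the 12 GB value is just my illustration, not a number from the session):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Leave roughly 4 GB for the OS, SQLCLR, and anything else on the box
    EXEC sp_configure 'max server memory (MB)', 12288;
    RECONFIGURE;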
The second session that I went to was “VNX Block Oriented Performance”, which was a great session. During this 500-level session the speaker talked about the hardware layout of the new EMC VNX storage array, specifically exactly how much data can be pushed through each internal component of the VNX and VNXe storage arrays. I’m not going to put the numbers in this post, as I want to download the deck from the EMC World website so I can double-check all the numbers first. With all the info the speaker was giving out, there was no way I could type it all fast enough on my laptop, much less on the iPad I was using to take notes.
The third session that I went to had a crazy long title. In my notes I titled it as “Building a highly available enterprise data warehouse using a bunch of shit”. The session was all about using the EMC GreenPlum database to build a distributed data warehouse and leveraging some of their other products, like the VMAX, Data Domain, and SRDF replication, for DR and backup protection. If you don’t know what GreenPlum is, it is a very scalable data warehouse product based on the PostgreSQL platform. The system is configured as a fully redundant system which is scaled out by adding more x86 servers to the farm. The system scales easily into the petabyte range, and EMC says they have several customers with multi-petabyte databases running within GreenPlum. The nice thing about GreenPlum is that it comes as a software package you can install on your own hardware, but also as a preconfigured appliance in a full rack. The next version will allow you to chain multiple appliance racks together to create a massive GreenPlum appliance farm. It posts some pretty impressive data load and query rates, and I would love to put it side by side with the Microsoft Parallel Data Warehouse to see how they stack up against each other on load times and data processing times, but I’m guessing that neither company will loan me one of these massive devices for a few weeks.
After the sessions were done came the hard work: figuring out which parties to attend that night. I started with a dinner thrown by my great VAR, Ahead IT. From there I moved to the Emulex party and ended the night at Brocade’s party. Don’t get me wrong, I wasn’t a perfect angel at these parties, but I did take it easy, as I have to be able to do this all again next week at TechEd, and there is the official EMC party tonight.
I’ve run out of time to blog as I’ve got to get to another session, so I’ll wrap up here. Sorry for any spelling errors and the lack of links; I’m using my iPad to write this and may not have caught all the problems.
So today was the first official day of EMC World. I had a great time meeting some fantastic people like Chuck Hollis from EMC.
There were also some fantastic sessions. This year EMC is doing something new and putting the sessions online through an iPad app, as well as on their Facebook page.
The first session that I went to today was listed as a Federated Live Migration hands-on workshop, which I thought was a VMware session on doing live migration between data centers; that sounded really cool. It turned out to be a hands-on lab about doing live migrations from a DMX to a new VMAX array, which sounded pretty cool as well, so I stuck with it. The concept is pretty neat: when you have an EMC DMX array and you purchase a new EMC VMAX array, you can migrate the data from the old array to the new array with no downtime. There were about 10-15 minutes of slides, then we moved on to the lab. Sadly, one of the two EMC DMX arrays we were using for the lab had stopped accepting logins, so half of us (including me) couldn’t do the lab. We walked through the lab process, but other than that it was kind of a bust.
The second session that I made it to was a little disappointing as well, ending up more basic than I personally was looking for. This session was “What’s new in VNX operating environment”. As the VNX has been released and is shipping, I assumed that this would be pretty in-depth and would go over the hardware changes between the CX4 and the VNX. This, however, wasn’t the case. The session did go over some of the new features of the VNX array, such as the new EMC Storage Integrator, a Windows MMC application which allows Windows admins to provision and manage storage from the server without needing a login to the EMC VNX array. A similar plugin is also being released for VMware’s vCenter server.
Unlike the CX line of arrays, the VNX supports being both a NAS and a SAN in a single unit. The array is basically a SAN with a NAS bolted on. When a LUN is created you’ll be asked if you want a block-level LUN, which would be a traditional LUN, or a file system volume, which would be a NAS mount point. When you create the NAS mount point, a LUN is created which is automatically mounted to the NAS, and the NAS then formats the LUN.
They also talked about the FAST cache which is available within the VNX array. This cache takes their flash drives and mirrors them in a RAID 10 set which is then used as a second-stage cache. If the same block is touched three times, the block is loaded into the FAST cache, so that on the fourth request the block is served from the cache instead of from the spinning disks. Blocks can be moved into the FAST cache because of either reads or writes, as the cache is writable. You can add up to 2.1 TB of FAST cache per VNX array. Sequential workloads won’t benefit much from the FAST cache, however, because it doesn’t work with pre-fetched data.
The really cool thing about FAST cache for SQL Server databases is that all FAST cache work is done in 64k blocks, the same IO size that SQL Server uses. The downside that I see is that during a reindex process you might see stale data loaded into the FAST cache as the 64k blocks are read and written back during an index rebuild, especially if an update statistics is then done, which would be a third operation on the block and cause it to be moved into the FAST cache. I’ll need to get my hands on an array and do a lot of testing to see how this works.
One thing which I thought was really interesting was a graph that was shown where EMC tested Microsoft Exchange on an iSCSI setup, an NFS setup, and a Fibre Channel setup. In these setups iSCSI had the slowest reads of the three, followed by NFS, with Fibre Channel being the fastest. For writes, NFS was the slowest, iSCSI was next, and Fibre Channel was again the fastest. For IOPs throughput, Fibre Channel had the most bandwidth, followed by NFS, with iSCSI being the slowest. (I don’t have the network speeds which were used in the tests.)
Check back tomorrow for more information about day 2.
So today began EMC World 2011. It started in true EMC style with a great party, sponsored by UniSys. The party was held at the hotel pool, which seems a little strange, but EMC made it work. It was a little windy last night; apparently not even EMC can control the weather.
I met some great people at the party tonight, and I met up with some friends that I’ve made at EMC World in prior years.
This week looks to have a great lineup of sessions, based on what I’ve seen in the session schedule. I’ll be posting what I can from the sessions during the week.
EMC World 2011 is back in Las Vegas this week. The official hotels are pretty pricey; if you are looking for a much cheaper place to stay on the Strip, I’m at the Imperial Palace, which isn’t anything fancy. It’s a basic room and shower, surely nothing special, but for $30 a night instead of the $250 a night that the official hotels want, it’s a damn good deal. The best part is, it’s only a 10-minute walk to the convention at the Venetian, which is worth saving the $1,000 in my mind.
Enterprise environments are moving more and more towards service-oriented functional groups: SAN as a service, database as a service, etc. So the SAN guys are saying DBAs shouldn’t worry about this; they use gigantic arrays with hundreds of spindles, they guarantee performance, etc. What are your thoughts on this?
I would say that the SAN admins are nuts if they think the DBAs aren’t going to worry about storage performance. There is no magic SAN dust which keeps everything running fast all of the time. If you put enough load on the array, performance will be impacted no matter how many disks there are. Physics says that eventually the little disks can only spin so quickly before they break apart.
In my opinion, if they tell you not to worry about performance, then they aren’t worried about the performance, which means that they aren’t doing their job.
So this year EMC World is back in Las Vegas, and I’m damn happy that I’m going again. Instead of taking the quick flight from Ontario to Las Vegas like I normally would, I’ll be getting there a little differently: I’ll be riding my motorcycle.
As I can only go about 130 miles on a tank of gas, I’ll have to stop a couple of times to fill up each way. I’ve created a Google map to show my route. I’ll be riding from home to Barstow, then to Stateline, NV, then on to Vegas. If you see a pretty blue motorcycle with a leather-clad rider along this route, with a license plate that says “mrdenny”, give me a wave.
If you aren’t driving in from California and are attending EMC World, I’ll see you in Vegas at the welcome reception.
If I have a 3PAR SAN, are there any specific recommendations, as it does its custom striping across the drives? Or any other specifics related to 3PAR?
The biggest recommendation that I have (pretty much the only one, since I haven’t worked with a 3PAR in a while) is to ensure that your LUNs are sized correctly, so that they are evenly spread out across all the disks in the disk set that you are using.
For example, if you have 100 disks in the disk set (I don’t remember the 3PAR term here), then you have 50 disks available (as the other 50 are the mirrors). If you have a 1 gig chunk size, you want to ensure that all your LUN sizes are divisible by 50 gigs. The basic math is ChunkSize × ActiveDisks; then keep multiplying that number until you reach the first size larger than the size you need. You will end up wasting a little bit of space, but you will have a faster array.
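To make the math concrete (the 1 gig chunk and disk counts here are placeholders, not 3PAR’s actual values): with 50 active disks and a 1 gig chunk, the stripe unit is 1 × 50 = 50 gigs. If you need a 220 gig LUN, round up to the next multiple of 50: 5 × 50 = 250 gigs. You waste 30 gigs, but every chunk of the LUN is spread evenly across all 50 active disks.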