SQL Server with Mr. Denny


July 14, 2010  11:00 AM

While you don’t get a ‘Free Lunch’, you can do some good work with some great stuff.

Denny Cherry

For those that were wondering what I did with my third MSDN license, I’ve donated it to a project that Arnie Rowland is putting together.  The full details of the project are on Arnie’s blog.

The gist of the project is:

To recap, we are inviting unemployed or underemployed developers to propose a software project for a non-profit agency, school, or church. The idea is that we will provide a package of the latest software, tools, and training resources to help you improve your skills, get up to date with current technologies, gain practical experience, potentially earn a recommendation for your efforts, and in general, enjoy the feeling of accomplishing something useful for others. We are not giving out a ‘free lunch’, just supporting your efforts to personally gain from your own ‘sweat equity’.

The selected project will receive:

Please submit your project information via Google Docs.

The selection for July will occur on July 30, 2010, so make sure you've got your project submitted before then.

Denny

July 13, 2010  11:00 AM

Gettin’ Schooled #TSQL2sDay

Denny Cherry

This month's T-SQL Tuesday topic is about learning and teaching.  It's been a while since I've written one of these, so I decided this was a good one to use to get back into it.  The guidelines are pretty wide open, so I went with how I learn best and how I like to teach.

Learning

I do my best learning by doing.  That's not to say I don't get something from lectures, but I get more from doing.  I know a lot of people out there who are different from me in this regard.  What really helps me the most is when I'm doing a lab and something goes wrong.  That's when I can really dig into the product, or whatever it is I'm working with, and see how it works.

Teaching

When it comes to teaching, an interactive lecture style is really the only way I know how to do it.  It lets me do my best to customize the session to the needs of the group sitting in front of me so that they are able to get the most out of it.  And that's really the goal: that the people listening to the session get the most possible information out of it.

Denny


July 12, 2010  11:00 AM

Spin, spin, spin, it’s all about the spin

Denny Cherry

So I recently got an email from my NetApp sales rep telling me how awesome the Flash Cache is on the NetApp arrays.  The email was really short and to the point.

Not sure if you’ve heard about how we’re leveraging cache to augment storage performance.

Here’s a recent article…

NetApp Customers Purchase More Than a PetaByte of Flash Cache for Greater Performance and Storage Efficiency.

Then there was a link to a press release about how NetApp customers have purchased more than a petabyte of Flash Cache for their systems.

If you don't know what the NetApp Flash Cache is, it's a flash-based IO card (kind of like a Fusion-io card) that the NetApp array uses as a read cache.  Each Flash Cache card gives you either 256 or 512 GB of cache that is used to speed up reads, and you can put up to 4 TB of Flash Cache in a NetApp array.
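To make the read-only part concrete, here's a toy sketch of a flash-style read cache.  This is purely illustrative and has nothing to do with NetApp's actual caching algorithm; it just shows the behavior described above, where reads get cached and accelerated while writes go straight through to the disks.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy LRU read cache: only reads are served from (and populate) the
    cache; writes go straight to the backing disks and simply invalidate
    any stale cached copy. Illustrative only, not NetApp's algorithm."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()              # block_id -> data
        self.hits = self.misses = 0

    def read(self, block_id, read_from_disk):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)    # mark as recently used
            return self.blocks[block_id]
        self.misses += 1
        data = read_from_disk(block_id)          # slow path: spinning disk
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)      # evict least recently used
        return data

    def write(self, block_id, data, write_to_disk):
        write_to_disk(block_id, data)            # writes are not accelerated
        self.blocks.pop(block_id, None)          # drop any stale cached copy
```

Repeated reads of a hot working set come out of flash instead of the spinning disks, which is the benefit the press release is selling.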

There are two ways you can take the statement that NetApp customers have purchased more than a petabyte of Flash Cache (which is VERY expensive to purchase).  The first is that NetApp customers have such high IO loads that they need this cache layer to get the performance level they're after.  The second is that, because NetApp arrays are all RAID 6 (yes, I know NetApp calls it RAID-DP, but the DP just stands for dual parity, which is RAID 6), customers need this cache layer to get the write performance that others can get with RAID 10.

Given how many NetApp customers are purchasing the Flash Cache (5,000 units have been sold since September 2009, and it's shipping in 20% of the units that you can cram it into, according to the NetApp press release), I'm inclined to believe it's more about the latter than the former.  If that many customers needed this level of performance this soon after the option became available, it tells me that the NetApp arrays just weren't able to give people the level of performance they needed until the Flash Cache came along.

But that’s just my take on the marketing spin.

Denny

UPDATE 7/12/2010: Corrected the post to show that NetApp’s flash cache only speeds up reads.  Thanks to TechMute for pointing out the error in the post.


July 8, 2010  11:00 AM

So what’s up with this storage array that doesn’t use any sort of RAID?

Denny Cherry

So a few years ago a new storage concept was introduced to the market.  That platform is now known as the IBM XIV platform.  What makes this system so different from every other storage platform on the market is that it doesn't have any hardware-level RAID like a traditional storage array does.  Instead, the system assigns 1 MB chunks of space in pairs to disks throughout the system so that there is always a redundant copy of the data.

The Hardware

The hardware that makes up the system is relatively standard.  Each shelf of storage is actually a 3U server with up to two CPUs (depending on whether the shelf is a data module or an interface module; I'll explain the differences in a little bit) and 8 GB of memory for use as both system memory and read/write cache.  Because of this architecture, as you add more shelves you also add more CPUs and more cache to the system.

As I mentioned above, there are two kinds of shelves: interface modules and data modules.  They are effectively the same equipment, with some slight additions for the interface modules.  Each module is a single-socket quad-core server with 8 GB of RAM and four 1 Gb Ethernet ports for back-end connectivity (I have it on good authority that this will be increasing to a faster back end in the future).  Each shelf contains twelve 1 TB or 2 TB SATA hard drives spinning at 7200 RPM.  The interface modules add a second quad-core CPU, four 4 Gb Fibre Channel ports, and two 1 Gb iSCSI ports.

The system comes with a minimum of 6 shelves, which gives you 2 interface modules and 4 data modules.  From there you can upgrade to a 9-shelf system, which gives you 4 interface modules and 5 data modules.  After the 9-shelf system you can upgrade to anywhere from 10 to 15 shelves, with new interface modules being added at 11 and 13 shelves.  There's a nice chart in this IBM PDF (down on page 2) which shows how many Fibre Channel and iSCSI ports you get with each configuration.

All these modules are tied together through two redundant 1 Gb network switches, which use iSCSI to talk back and forth between the shelves.  My contacts at IBM tell me that they haven't ever had a customer max out the iSCSI backplane, and given how distributed the system is I can see that, but if things don't balance across the interface modules just right I can see the potential for a bottleneck here (my contacts tell me that the next hardware version of the product should have a faster backplane, so this is something they are addressing).  There's a nice picture in this IBM PDF which shows how the modules talk to each other, and how the servers talk to the storage modules.

The really nice thing about this system is that as you grow it you add processing power and cache as well as Fibre Channel and iSCSI ports, which should really help eliminate any bottlenecks.  The downside I see here is that the cost to get into the system is probably a little higher than some of the competitors' products, as you can't get a system with fewer than 6 shelves.

How it works

From what I've seen, this whole thing is pretty cool when you start throwing data at it.  When you create a LUN and assign it to a host, the system doesn't really do a whole lot.  As the write requests start coming in, it writes two copies of the data at all times to the disks in the array.  One copy of the data is written to an interface module, and one copy is written to a data module.  This way, even if an entire interface module were to fail, there would be no loss of data.  That's right: looking back at the hardware config, we can lose all 12 disks in a shelf and not lose any data, because that data is duplicated on a data module.  So if we had the largest 15-shelf system with all 6 interface modules, we could lose 5 of those 6 interface modules and not lose any data on the system.  Now, if we had a heavily loaded system we might start to see performance problems as we max out the Fibre Channel front-end ports or the 1 Gb back-end interconnect ports until those interface modules are replaced, but that's probably an acceptable problem to have as long as the data is intact.

Because there is no RAID, there's no parity overhead to deal with, which keeps everything nice and fast.  And because the disks aren't paired up one to one like they would be in a series of RAID 1 arrays, if a disk fails the rebuild is much quicker, since the data is coming from lots of different source disks; the odds of a performance problem during that rebuild operation are next to nothing.
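Here's a rough sketch of how that kind of mirrored chunk placement and many-to-many rebuild could work.  This is my own simplified illustration, not IBM's actual distribution algorithm; the module counts and the random placement are just assumptions so the code runs.

```python
import random
from collections import defaultdict

# Hypothetical layout: 15 shelves (modules) x 12 disks, with the first 6
# shelves standing in for interface modules and the rest for data modules.
INTERFACE_MODULES = range(0, 6)
DATA_MODULES = range(6, 15)
DISKS_PER_MODULE = 12

def place_chunk(rng):
    """Place one 1 MB chunk as a mirrored pair: one copy on an interface
    module, one copy on a data module, so losing a whole module (all 12
    of its disks) never removes both copies."""
    primary = (rng.choice(INTERFACE_MODULES), rng.randrange(DISKS_PER_MODULE))
    mirror = (rng.choice(DATA_MODULES), rng.randrange(DISKS_PER_MODULE))
    return primary, mirror

def build_layout(num_chunks, seed=42):
    rng = random.Random(seed)
    return {chunk: place_chunk(rng) for chunk in range(num_chunks)}

def rebuild_sources(layout, failed_disk):
    """When one disk fails, the surviving copies of its chunks are scattered
    across many other disks, so a rebuild reads from lots of spindles at
    once instead of hammering a single mirror partner."""
    sources = defaultdict(list)
    for chunk, (copy_a, copy_b) in layout.items():
        if copy_a == failed_disk:
            sources[copy_b].append(chunk)
        elif copy_b == failed_disk:
            sources[copy_a].append(chunk)
    return sources

if __name__ == "__main__":
    layout = build_layout(num_chunks=100_000)
    failed = (0, 3)                  # disk 3 in interface module 0
    sources = rebuild_sources(layout, failed)
    print(f"chunks to rebuild: {sum(len(c) for c in sources.values())}")
    print(f"distinct source disks feeding the rebuild: {len(sources)}")
```

Run it and the surviving copies of the failed disk's chunks turn out to be spread across pretty much every data-module disk in the box, which is why the rebuild can pull from so many spindles at once.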

The system is able to keep everything running very quickly because every LUN is evenly distributed across every disk.  When you create a LUN within the management tool, it sizes the LUN for you for maximum performance.  While this will cost you a few gigs of space here and there, the performance benefits are going to greatly outweigh the lost storage space, especially when you remember that these are SATA disks, so the cost per gig is already very low.

My Thoughts on the System

Now, I've only had a couple of hours to work with one of these units.  I'd really like to get access to one for a week or two so I could pound on it, beat the crap out of it, and see what I can really make it do (hint, hint, IBM).

The potential IO that this system can serve up to a host server, such as a SQL Server, is massive.  Once you load up a few high-IO servers against it, the system should be able to handle the load pretty well.  The odds of getting a physical hot spot on one disk are pretty low, since the LUNs aren't laid out in the same manner on each disk (in other words, the first meg of every LUN isn't on disk 1, the second meg of every LUN isn't on disk 2, and so on).

The management tools for the XIV system are pretty cool.  They make data replication between two arrays very easy: it's just a quick wizard and the data is moving between the systems.  One place where the management tools put this system a step above other arrays is that the XIV is aware of how much space has been used in each LUN.  This makes disk management much easier at companies where the storage admins don't have server access and the server admins don't have access to the storage array, since each side can monitor free space from their own tools, which gives a better chance of someone spotting full disks and doing something about it before it becomes a problem.

Like every system with RAID, if you lose the right two disks you'll have some data loss.  If you have a standard RAID 5 array and you lose any 2 disks in the array, you lose all the data on the array.  If you have a RAID 10 array and you lose a matching pair of disks, you lose everything on the array.  With the XIV system, if you lose two disks you'll probably be OK, as the odds that you lose two disks that hold the same 1 MB block of data are very slim, but if you did lose those two disks before the system was able to rebuild, you could lose the data on that LUN, or at least some of it.  Now, IBM's docs say that the system rebuilds from a failed disk to a hot spare within 40 minutes or less (page 4), but I'd want to see that happen under a massive load before I would put my stamp on it.

Overall, I would say that the XIV platform looks pretty stable.  From what I've heard about the next generation of the hardware, it appears that most if not all of the issues I see with the platform will be resolved.  The one thing I'd really like to see would be three copies of each block of data throughout the system, as the odds of losing three disks all containing the same 1 MB block of data would be next to zero.  Maybe this will be a configuration option with the 2 TB disks, or maybe when the 3 TB disks come out (whenever that happens).  But then again, I'm a DBA, so I love multiple copies of everything.

Now I’m sure that some of the other storage vendors have some opinions about the XIV platform, so bring it on folks.

Denny


July 7, 2010  1:52 AM

And the winner of an MSDN license is…

Denny Cherry

The winner of my little mini contest is @ocjames.

He’s got a great idea for a project, so now it’s on us to make sure that he builds it.  Here’s what he submitted as his idea.

Talk about a step up in difficulty from finding out your email address! haha

I would leverage an MSDN ultimate license to attempt to build a kick ass SQL Server DBA repository. I’m not talking about a single table holding a list of all the SQL Server instances you manage either. The ultimate goal would be an automated process that gathers information about all of the instances in the environment daily. This information can be viewed on demand from reporting services or web pages. There would be configurable alerting rules that would email the DBA distribution list. An example would be for databases without a backup in x amount of days. The features would be selectable so you can get a basic amount of information without any changes on the production instances.
There are similar programs / scripts that get you comparable information but I have found most of them either only gets some of the information DBA’s need or require configuration on each instance. I’m hoping to setup something that provides all the information with minimal footprint. This would be a great tool in troubleshooting issues as you can easily identify any login changes, sudden database file growth, or schema changes regardless of the SQL Server instance version.

Darn forgot to add that my app and source code would be available freely to the SQL Server community. Hopefully people would contribute and make the application even more useful for everyone!
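Just to make the backup-age rule he mentions concrete, here's a rough sketch of that kind of check against a single instance.  This is my own illustration, not @ocjames's code; the connection string, driver name, and seven-day threshold are placeholder assumptions, and the real tool would loop over every instance in its inventory and email the alerts instead of printing them.

```python
import pyodbc

# Placeholder connection details for a single monitored instance.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=MYSQLSERVER;Trusted_Connection=yes")
MAX_FULL_BACKUP_AGE_DAYS = 7  # assumed threshold; would be configurable

# Databases whose newest full backup (msdb.dbo.backupset, type 'D') is older
# than the threshold, or that have never been backed up at all.
SQL = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name AND b.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -?, GETDATE())
"""

def databases_missing_backups():
    conn = pyodbc.connect(CONN_STR)
    try:
        rows = conn.cursor().execute(SQL, MAX_FULL_BACKUP_AGE_DAYS).fetchall()
        return [(row.name, row.last_full_backup) for row in rows]
    finally:
        conn.close()

if __name__ == "__main__":
    for name, last_backup in databases_missing_backups():
        # The real tool would send this to the DBA distribution list.
        print(f"ALERT: {name} - last full backup: {last_backup or 'never'}")
```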

Later this week I’ll announce what I’m doing with the third license.

Denny


July 6, 2010  5:56 PM

Time for another MSDN Ultimate Giveaway!

Denny Cherry

The first MSDN giveaway that I did today was just too easy.  Time for something that takes a little more work.

Put together a quick blog post (or feel free to put it in a comment here) about the really kick-ass software you think you could develop with this free MSDN license.  (If you do a blog post, be sure to do a pingback to this post so I can find it.)

I’m not going to hold you to it, but hopefully you’ll actually make the software.

This is for a full-blown MSDN Ultimate license.  It comes with everything that a paid-for MSDN license comes with except: no MSDN Magazine, no support calls, and no free Office 2010 license.  You get the rest of the Microsoft software suite for development and testing.

I'll take the names of the people who respond, put them into a hat, and pick one at random.

All comments and blog posts need to be posted by 6 PM Pacific time today (the winner will be announced shortly after that on my blog and Twitter).  Be sure that I know how to get a hold of you, or that your contact info is on the about page of your blog or something.

Good luck,

Denny

PS. If you’ve already won a license from me, no you can’t win a second one.


July 6, 2010  11:00 AM

The Windows 7 book that I worked on has finally arrived.

Denny Cherry

A while back I was asked to pick up a chapter in a Windows 7 book.  It took a while to get my copy of the book, but it has finally shown up.  The book is titled “Microsoft Windows 7 Administrator’s Reference: Upgrading, Deploying, Managing and Securing Windows 7“.  So far the book has 2 reviews on Amazon, and they are both very positive.

Hopefully some more reviews will be posted.

I’ve just had Amazon add the book to my Author page as well.

Denny


July 1, 2010  11:00 AM

Tell me more about this pre-con thing at #sqlpass.

Denny Cherry

The abstract for my SQL PASS Summit 2010 Pre-Con has been posted on the SQL PASS website.  Here's the abstract for my session:

This session will be a two-part session in which we will focus on two of the biggest topics in the DBA field: how to properly design your storage and virtualization solutions. Storage can be one of the biggest bottlenecks when it comes to database performance. It’s also one of the hardest places to troubleshoot performance issues, because storage engineers and database administrators often do not get along. We’ll be digging into LUNs, HBAs, and the fabric, as well as the storage itself. In the second half of the day we’ll look into the pros and cons of moving SQL Servers into a virtual environment, specifically when it’s a good idea and when it’s probably not. Like everything in the database world, there are no hard-set answers as to whether virtualization is a good idea or not. We’ll look into how to tie the virtual platforms to the storage array so that you can maximize storage performance for your SQL Servers and the virtual environment.

In order to register for my pre-con (or any of the fantastic pre- and post-con sessions), simply register for the PASS Summit; on the third page or so you'll be given a list of the available Pre-Conference and Post-Conference sessions.

Hopefully you’ll join me on Monday November 8th, 2010 for 7 awesome hours of “Storage and Virtualization For The DBA”.

Denny


June 28, 2010  11:00 AM

Slide Decks from this weekend's #SoCalCodeCamp

Denny Cherry

I've just posted the slide decks for my sessions from this weekend's SoCal Code Camp.  I'd like to thank everyone who gave me great feedback on how to improve the sessions.

Exploring the DAC and everyone’s favorite feature the DACPAC

Storage for the DBA

There’s more to know about storage?

For those of you at the storage sessions, watch this blog for my announcement about the longer storage presentation that I’ll be doing up in Irvine.

Denny


June 24, 2010  4:59 PM

PASS PreCon Followup

Denny Cherry

So apparently I need to actually read ALL the emails from PASS instead of letting my ADD kick in.  I've been selected for a Pre-Con on Monday, November 8th, 2010.  You see, PASS sends you a few emails when you are selected.  The first tells you which pre-con and spotlight sessions have been accepted.  The second has the speaker contract and, apparently, tells you when your pre-con will be.

I read the first, saw the second and simply opened the attachment.  That’ll teach me.

Many thanks to Allen White (Blog | Twitter), who told me to stop running around like a puppy about to piddle himself and actually read the damn email.

I’ll hopefully be seeing everyone bright and early on Monday the 8th for my Pre-Con session.

Denny

