SQL Server with Mr. Denny


April 13, 2011  1:19 AM

Some job descriptions just have to be shared

Denny Cherry

So I’m going along doing my work thing, shrinking databases, rebooting servers at random, etc., and I get an email, part of which I’ve included below, asking if I’d be interested in this job opening.  I’ve cut out the names, etc. to protect the stupid.

Below is the job posting, followed by the response that I sent back.

Industry Confidential
Location Palo Alto, CA
Job Title SQL Server DBA
Duration 03 month(s)(Possible Extension)
Profile/ Job description
  • The SQL Database Administrator is part of a data center team providing application and infrastructure support for a large healthcare engagement. The SQL Database Administrator is responsible for managing and monitoring database platforms which support multiple OLTP and OLAP applications. THE SQL Database Administrator will also be responsible for installing and configuring a high availability, and disaster recovery environments and planning upgrades and patches as required to maintain each environment.

Specific responsibilities will include:

  • Working experience with SLA’s
  • Monitor software database license process
  • Monitor performance activity
  • Familiar with Clustered environment and load balancing when required
  • Hands on experience with Mirroring, replication and recovery models
  • Hands on experience on performance monitoring and database counters
  • Manage technical action reports to resolve bugs/issues with DBMS software
  • Support capacity planning
  • Participate in implementing database strategies and solutions
  • Provide database maintenance activities
  • Monitor database logs for potential problems
  • Participate in planning software upgrades and patches
  • Provide outage resolution support and perform analysis to resolve recurring database issues
  • Install and configure DBMS
  • Provide database support for software development/application projects
  • Verify database backups have completed successfully
Basic Qualification:
  • 2+ years Database Administrator experience
  • 2+ years SQL 2000, 2005, 2010
  • 2+ years MCITP – Microsoft SQL 2008 Database Administrator
  • 2+ years MCDBA 2003/2005
  • Adaptive Server Enterprise Administrator professional certification
  • MCITP: SQL Server 2008 Database Administrator
  • MCDBA 2003/2005 Certification
Professional Skill Requirements
  • Operations Management
  • Project Management
  • Service Level Agreement (SLA) Management
  • Problem Solving and Escalation Management
  • Service Performance Management and Metrics
  • Estimation and Planning
Travel/Shift Hours/OT Requirements Minimal travel.

  • Work location is Palo Alto and candidates must live within a 100 mile radius of client site locations. Will require shift work and on-call duty and off-hours and weekend work as needed, as this project is a 24 x 7 support team.
Preferred Skills:
  • Detail Oriented
  • Quality and Process Improvement
  • Microsoft Office Proficient

If you would like to pursue this position, please reply to this e-mail and provide us details in following areas. Also, please attach a copy of your updated resume, in word format, elaborating your projects in tune with the client’s requirement. This will assist us in placing you with one of our clients.

Here’s what I sent back.

You should probably work with your client to get some actual requirements.

If you are going to list the Industry as “Confidential” you probably shouldn’t mention it in the job description.

There is no such thing as SQL 2010.
There is no certification as the MCDBA 2003.
There is no certification as the MCDBA 2005.

You want someone with way more than 2 years’ experience installing high availability and disaster recovery servers.

People with 2 years’ experience won’t have experience with mirroring or replication.  They probably won’t have much experience with server monitoring or looking at performance monitor counters (there is no such thing as database counters).  Someone with 2 years’ experience will have no idea how to open tickets with Microsoft, or how to work with the Microsoft CSS support team to get problems resolved.  The odds of someone with 2 years’ experience being able to correctly identify a new bug in the database engine are next to 0 (if not lower than 0).

It is barely possible for people to have the SQL Server 2008 certifications for two years.  They were only released a little over two years ago.  If someone only has two years’ experience with SQL Server, they probably aren’t ready to take the certification exams, much less the MCITP.

You are asking for a Sybase (Adaptive Server Enterprise) certification, yet you never mention Sybase anywhere in the job description.

Please learn at least something about the positions you send out so that you don’t look like a total moron. Based on this job description I would NEVER allow you to represent me for any job posting. Open job positions are not high priority. Production servers being offline, that’s a high priority email. The government closing, that’s a high priority email. A family member being sick and in the hospital, that’s a high priority email. This is not.

And yes, this email was sent as a high priority email.

Denny

April 11, 2011  2:00 PM

Join me at “SQL in the City” presented by Red Gate Software

Denny Cherry

The fine folks at Red Gate Software have made the critical mistake of asking me to be a presenter at the Los Angeles “SQL in the City” event at the Skirball Center on October 28th, 2011.

I’ll be joined by a few familiar faces including Steve Jones (blog | twitter), Brad McGehee (blog | twitter) and Grant Fritchey (blog | twitter).  This is a free event, including what I’m sure will be a great lunch provided by Red Gate.  So be sure to register today before all the seats are taken.

The session that I’ll be presenting will be on data security and data encryption.  Knowing Steve, Brad and Grant, I’m sure that they all have some excellent sessions lined up for the day as well.

Denny


April 7, 2011  2:00 PM

Slide decks for Desert Code Camp (#dcc11)

Denny Cherry

I presented three sessions at Desert Code Camp 2011.1, and I had a great time doing so.  In case you wanted to grab a copy of the slide decks, here they are.

Getting SQL Service Broker Up and Running

SQL Server Indexing for the .NET Developer

Where should I be encrypting my data

Thanks,
Denny


April 4, 2011  2:00 PM

Can the tempdb data and log be put on one RAID 1 physical disk, or should they be kept on separate physical disks?

Denny Cherry

It really depends on the amount of load that you put on the tempdb database.  Everything done within the tempdb database is logged in the transaction log just like in any other database, so if the tempdb database has a very high IO load it may make sense to put the tempdb log on a separate disk.  If, however, the tempdb database isn’t used very much by your databases, then putting the log on the same disk as the tempdb data file should be fine.
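If you aren’t sure how busy tempdb actually is, the file level IO stats will tell you.  Here’s a rough sketch of the kind of query I mean (the numbers are cumulative since the last restart, so look at them while a representative workload is running):

SELECT  mf.name            AS file_name,
        mf.type_desc       AS file_type,     -- ROWS = data file, LOG = log file
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.num_of_bytes_written,
        vfs.io_stall_read_ms,
        vfs.io_stall_write_ms
FROM    sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN    sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
ORDER BY mf.type_desc, mf.file_id;

If the write stalls on the log file are climbing right along with the data files’, the log is fighting the data files for the same spindles and a separate disk starts to make sense.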

If you present multiple disks to the tempdb data files, with a subset of the tempdb data files on each disk, then present a separate disk for the tempdb log file as well, just to ensure that the log isn’t causing uneven IO load on any of the data disks.  Remember that creating multiple log files won’t do anything for you the way multiple data files do, as log files are written sequentially, not round robin the way the data files are.
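If you do decide to move the tempdb files around, it’s just a couple of ALTER DATABASE statements and a restart of the instance.  A quick sketch, assuming the default logical file names (tempdev and templog) and some made up drive letters:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = N'L:\TempDBLog\templog.ldf');

-- The new locations take effect the next time the SQL Server service is
-- restarted, and the old files can be deleted after that.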

Denny


March 31, 2011  2:00 PM

Answering forum questions can now get you cold hard Amazon Gift cards

Denny Cherry

Tech Target has upped the perks of posting on their IT Knowledge Exchange forum (the same site that hosts my blog).  Tech Target recently announced that by posting on their site and earning badges by gaining points, you can get sent Amazon Gift cards for earning those badges.  Not a bad deal if you ask me.

The prizes which they announced are:

  • Bronze Member Badge: Sticker and ITKnowledgeExchange t-shirt
  • Silver Member Badge: $25 Amazon.com Gift Card
  • Gold Member Badge: $50 Amazon.com Gift Card
  • Platinum Member Badge: $100 Amazon.com Gift Card

Not surprisingly there are some rules which are also posted.

  • Must be active within the past six months to qualify for initial prize.
  • For prizes that must be shipped, allow 2 to 4 weeks for delivery. Shipping only permitted to United States, Canada, and Europe.
  • For further TechTarget contest rules, please see the official contest rules.

But that’s a pretty slim set of rules for the chance to get some pretty major cash in Amazon gift cards.  (Don’t forget that you can use these Amazon gift cards to pick up a copy of my book “Securing SQL Server“.)


March 28, 2011  2:00 PM

MySpace’s failures had next to if not nothing to do with the Microsoft stack or being in LA

Denny Cherry

If you were on twitter in the last couple of days you may have seen some links to a blog post titled “MySpace’s death spiral: insiders say it’s due to bets on Los Angeles and Microsoft” in which the author tries to show that because MySpace used the Microsoft stack (ASP.NET, IIS, Windows, and SQL Server), this was a major cause of MySpace’s demise.

The Technology

As someone who used to work on MySpace’s infrastructure team (specifically, I was one of the Senior level Database Administrators), I can say that it was not the Microsoft technology stack which was MySpace’s problem.  What caused MySpace’s demise was a piss poor UI (User Interface) and, because of this poor UI design, the slow page load times.  It really didn’t matter what platform the site was built on; crap code is crap code.

Now before you assume that I’m a Microsoft Fan Boy, I’m not.  You should flip through my blog and you’ll see that I’m actually quite critical of Microsoft, and specifically of the SQL Server product for which I’m a Microsoft MVP.  I have a reputation within the Microsoft SQL Server product group of telling them the truth about their products, both the good and the bad.

The Los Angeles Area

The author also claims that they failed because there weren’t enough .NET programmers in the Los Angeles area who could work in startup mode.  This is just crap.  I’ve spent most of my career working at startup shops, all of which were .NET shops, and there were always enough .NET programmers who could work at startup speed.  What MySpace needed was a management team that understood that programming new features requires more than a few hours to do correctly, and requires a solid, scalable test environment to test in.  The developers at MySpace weren’t given either of these to work with.  While the production database environment (for example) had several hundred database servers, the testing environment had only a single database server.  Rolling changes weren’t really tested before they went into production, and there was NO load testing that I saw at all.  The basic excuse was that they couldn’t generate a production workload, so they didn’t load test.

The Management

A major failure of the MySpace management was that they couldn’t retain the talent which they had; they didn’t appear to value their employees.  They were constantly hiring .NET developers, and they were losing .NET developers just as fast if not faster than they could hire them.  In the 6 or 7 months that I worked there I was the 3rd or 4th DBA to leave the company.  I’ve never seen DBAs come and go as frequently as they did there.  During my tenure at MySpace I personally never felt like anything other than another cog in the wheel, with the exception of my last few days when they begged me to stay on board.

One of the points in the article which is valid (sort of) is that Los Angeles is a large area, so commuting to the office can suck (it would take me at least an hour to get to work, longer if I took mass transit).  There is an easy way around this: allow your employees to work from home, and this problem goes away.  However, MySpace (more specifically the parent company, Fox Interactive Media) had a strict no-working-from-home policy.  Killing this one policy would have made working there a lot easier, as employees could have lived anywhere in the area and only come into the office when it was absolutely needed.

In Conclusion

Were there problems at MySpace? Yes.

Was the problem Microsoft? No.

Was the problem being in Los Angeles? No.

Was the problem the management at MySpace? Yes.

Could these have been overcome? Yes.

Will they be, allowing MySpace to make an amazing comeback? My guess would be probably not. At least not without a major house cleaning.

Denny


March 25, 2011  2:05 PM

Microsoft has made the decision to unleash me on Tech Ed North America

Denny Cherry

Apparently someone at Microsoft has completely lost their mind.  They have signed me up as a speaker for Tech Ed NA 2011.  I’ll be presenting a session titled ‘What’s New in Manageability for Microsoft SQL Server Code-Named “Denali”‘.

This is a 300 level session, so it will be full of all sorts of “Denali” goodness.  Exactly what, I’m keeping a secret for now; you’ll just have to come to the session to find out.  That, or hang out with me at Dev Connections, Rally, etc., and I’m sure I’ll let it slip. :)

This will be my first time speaking at Tech Ed, so needless to say I’m very thrilled to be given this opportunity.  This just goes to show that if you are loud, obnoxious, rude, etc. you can still get the KICK ASS speaking gigs.  My session is currently still listed as TBD, but you can find it in the Tech Ed Catalog.  As of when I’m writing this (Thursday night) it has me listed as “Denham Cherry”, but hopefully they’ll get that fixed to Denny like it should be.

See you in Atlanta in May.

Denny


March 25, 2011  2:00 PM

SQL Backup Compression and Backup Dedup are mortal enemies

Denny Cherry

If you have a modern tape backup solution you probably have some sort of dedupe process in your backup environment.  What these deduplication processes normally do is look at the data being written to the backup platform and remove duplicate values in order to save space on the tape backup system.

However, if you are using backup compression, either the native backup compression or a 3rd party backup compression tool, it will pretty much completely kill the effectiveness of the deduplication application.

How the deduplication systems work

There are two different kinds of deduplication systems out there (which come as an appliance or as a software package): those which do fixed length deduplication and those which do variable length deduplication.  The fixed width deduplication systems use a fixed width process to look for duplicate data.  They take the file and break it down into fixed width strings (different vendors use different string lengths, but for the sake of argument we’ll assume a 1k fixed width string).  Then everywhere that string is duplicated, a pointer is put in which points to the original 1k string.

The variable length systems do basically the same thing, but they don’t use a fixed length string.  Because they use variable length strings they have a better chance of removing smaller duplicate values, possibly as small as every word if needed.

For example, if you have a table with employee names in it and you are using a deduplication system which has a fixed width string size of 1k, you probably won’t get any deduplication.  However, if you have a system which uses variable length strings, and there are duplicate names (such as within the last name column), then you will get some deduplication, with the amount depending on the data within the database file.

Why doesn’t this work with compression?

The reason this doesn’t work with a compressed database backup (with either fixed or variable length deduplication) is that every value will be unique.  When the data is compressed, every row which has the last name “Smith” in it will already have had the duplicates removed.  Besides that, databases (at least OLTP databases) already have the bulk of the duplicate data removed via the process of normalization.

A couple of weeks ago, I took a compressed database backup (some of the data within the backup was encrypted) to Quantum’s lab here in Southern California so we could see just how well it would dedupe a compressed backup.  We achieved what the guys at Quantum considered to be the impossible: we got 0% dedupe of our compressed backup file.

As the DBA, why do I care?

As DBAs we care about this because we want to play nice with the rest of the Enterprise environment and not waste resources if we don’t need to.  Because of this you will want to work with your storage and backup teams to come up with the best plan for your systems.  That may mean taking backups which are not compressed so that the deduplication process can deduplicate the file, as depending on your data set that may be more efficient.  On the SQL Server side this will make your backups take a little longer, but as long as the backups still fit within your backup window that may be OK.  You’ll only be able to find this out after working with your backup and/or storage admins to find out what will work best in your specific environment.
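For reference, here’s what the two options look like in T-SQL.  The database name and backup path below are made up, and WITH NO_COMPRESSION just makes the choice explicit instead of relying on the instance’s backup compression default:

BACKUP DATABASE MyDatabase
TO DISK = N'\\BackupTarget\SQL\MyDatabase_compressed.bak'
WITH COMPRESSION, STATS = 5;    -- small file, but the dedupe appliance finds almost nothing to remove

BACKUP DATABASE MyDatabase
TO DISK = N'\\BackupTarget\SQL\MyDatabase_uncompressed.bak'
WITH NO_COMPRESSION, STATS = 5; -- bigger file and a longer backup window, but plenty of duplicate blocks for the appliance to chew on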

Now I’ll be going back to Quantum in the next couple of days to try and dedupe an uncompressed backup file, to see how well that works compared to the traditional SQL Server compression.  I’ll be sure to post an update after we run this second set of tests, with some example numbers comparing the two.

Denny


March 25, 2011  3:41 AM

Join me Friday morning as I talk SQL Server and Clustering with Keith Combs

Denny Cherry

Friday morning I’ll be joining Keith Combs on “Talk TechNet” at 9:00am on Friday, March 25th, 2011.  You should register now (sorry about the late notice, I thought I had posted about this already).

Denny


March 24, 2011  2:00 PM

What is faster, SAN or DAS (local disk), and most importantly why?

Denny Cherry

So here’s an answer to the great myth of SAN: it isn’t always going to be faster than local disk (or DAS, JBOD, etc.).  The reason for this is actually pretty straightforward.

Local disk is cheap.  That’s the reason in a nutshell.  Let me see if I can’t explain in a little more detail.

Because local disk is so cheap (I can buy a 300 Gig SAS drive from Dell for $469), we can easily throw a lot of them at the problem, getting really, really fast storage really, really cheaply (at least compared to a SAN).  Throwing 10 or 20 disks at a server is only ~$4,700 or ~$9,400 respectively, which in the grand scheme of things isn’t all that much.

Those same 300 gig disks in an EMC array, as an example, will retail for ~$2,500 each (~$25,000 for 10 or ~$50,000 for 20).  So why would I purchase SAN storage instead of buying a ton of local disks?  The local disk is faster and cheaper, so where is the benefit?

The benefit from the SAN storage comes in a few places.

1. Centralized Administration

2. Better use of resources

3. Lower power utilization

4. Lower TCO

Let’s look at each of these one at a time (and yes, there is some overlap).

Centralized Administration

Instead of having to get bad disk alerts from all the servers that the company owns, I get them all from one place.  Instead of having to connect to each server to manage its storage configuration, I have a single point that I can do it all from.

Better use of resources

When using local disk I have to purchase storage for each server as a one-off purchase.  If a server I bought 6 months ago doesn’t need anywhere near the amount of storage that I purchased for it, I’m stuck.  That storage will sit there eating power and doing nothing while I go out and purchase new storage for my next server.

When using a storage array, each server only has what it needs.  If a server needs more storage later, that can be easily assigned.  If a server has more storage than it needs you can shrink the LUN (only a couple of vendors can do this so far, the rest will catch up eventually) and that storage can be easily reallocated to another server in the data center.  If a server needs faster storage, or is on storage which is just too fast and the faster storage could be better utilized somewhere else, these changes can be made on the array, with no chance of data loss and with no impact to the system.

Lower Power Utilization

This goes back to the better use of resources point above.  When you have shelves of disks sitting around doing nothing, or next to nothing, those disks still need to be powered.  Power costs money, which affects the bottom line.  When you can re-utilize the disks the overall power costs are lower, especially when multiple servers are all sharing the spindles.

Lower TCO

This goes back to the Power Utilization point above.  When you are using more power, you are generating more heat.  The more heat you generate, the more cooling you need to keep everything up and running.  Along with this, and tied into the better use of resources, when you need 50 Gigs of storage you use 50 Gigs of storage.  When you need 1 TB of storage you use 1 TB of storage, no more, no less.  So while you have to purchase a bit more up front (which is always recommended so that you can get the best possible prices), you’ll only end up consuming the storage that you actually need.  If you do chargebacks this will be very important.

Storage arrays also provide all sorts of extra goodies.  The array itself can help with your backup and recovery process.  It can present full data sets to your Dev/QA/Test/Staging systems without consuming the space of full copies, via the built-in snapshot technologies.  When migrating or upgrading from one server to another, the storage array can make this very easy.

Migrating between servers is just a matter of disconnecting the LUN(s) from the old server, and attaching them to the new server.

Upgrading SQL Server?  That’s no problem.  Disconnect the LUNs from the old server and take a snapshot of the LUNs.  Then attach the LUNs to the new server.  You can then fire up the database engine, and in the event of a failure to upgrade the databases, just roll back the snapshot and attach the LUNs back to the original system, or take another snapshot and try attaching the databases again.
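Once the LUNs show up on the new server, re-attaching a database is just pointing the engine at the files that are already sitting on that storage.  A quick sketch, with made up database and file names:

CREATE DATABASE SalesDB
ON (FILENAME = N'S:\SQLData\SalesDB.mdf'),
   (FILENAME = N'S:\SQLLogs\SalesDB_log.ldf')
FOR ATTACH;    -- the data and log files are already on the LUN that was moved over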

Want to keep a copy of every database that you have in the company at your DR site, no matter the version of SQL Server?  Storage based replication can replicate the data for any application from one array to another; it doesn’t matter whether that application supports replication or not.  Every time a new or changed block is written to the array, the array will grab that block and send it over the wire to the remote array.  This can be done in real time (synchronously) or on a delay specified by the admin (asynchronously).

Hopefully this opened up the array a little for you, and gave you some insight into how the magic box works.

Denny

