SQL Server with Mr. Denny


September 19, 2011  7:13 PM

Slide Decks for #sqlsat95

Denny Cherry

This last weekend was SQL Saturday 95, and it was a great event.  I spoke in five sessions: three were my own, and two were panels (one panel I did solo because my co-presenter got sick and couldn’t make it).  You’ll find the slide decks for my three sessions below.

Table Indexing for the .NET Developer

Indexing Internals

SQL Server Clustering 101

I hope that everyone had as good a time at SQL Saturday as I did.

Denny

P.S. Something to note: don’t leave early just because your raffle ticket has been drawn.  We had so many great prizes that we ran out of raffle tickets and had to start drawing evals from the box for prize winners, so you might have won something else if you had stuck around till the end.

September 15, 2011  8:41 AM

Heading to Europe in November for 2 great events

Denny Cherry

I’ve managed to put together a nice two-city European tour this November, thanks to the SQL Server community and some good friends over there.

My first stop will be November 12th, 2011 for SQL Zaterdag IV in the Netherlands (the site is in Dutch and hasn’t been set up for the next event yet).  This is a good old-fashioned SQL Saturday, except that it isn’t listed on the SQL Saturday website; something about the SQL Saturday site only being in English and everyone over there speaking Dutch.  Hopefully they don’t expect a presentation in Dutch; if they do, it’s going to be a really long hour.  I suppose I’ll have to learn how to say “Hello, where can I get a beer?” at the very least.  Good thing there’s a 12-hour flight to get over there.

My second stop will be SQL Server Days over in Belgium, presented by SQLUG.be and Microsoft Belux, where the organizers have asked me to come and speak this November 14th and 15th.  I’m not on the schedule yet, and I don’t even know which sessions I’ll be presenting (they haven’t been picked yet), but I’m going to be there and having a great time with all the attendees (as soon as my sessions are picked I’ll post them here).  I can’t wait to see everyone there.  SQL Server Days is a two-day conference with Microsoft speakers on the 14th and community speakers on the 15th.

This is a great opportunity for me because I’ll get to meet some new people who have hopefully read some of my stuff before, so I can get some new feedback and make some new friends.  (Meeting new people is one of the things I love about presenting in new places.)

There are some great speakers that I’m really looking forward to seeing again, or for the first time, such as Kevin Kline, Chris Webb, Dandy Weyn and Jennifer Stirrup.  The cost for SQL Server Days is VERY reasonable: if you register in September it’s €79, after that it’s €99.

So if you are in Europe and you’ve got the time, you should come check out these two great events.  As soon as I’ve got more information about the sessions that I’ll be presenting I’ll be sure to get that information published.

Hopefully I’ll see you at one or both events,

Denny


September 12, 2011  7:08 PM

SQL Karaoke is getting sponsored, and you get the benefits

Denny Cherry

So this year for the first time ever we have some sponsors for SQL Karaoke at the SQL PASS summit. A few fine companies have offered to open up the bar for our enjoyment. There will be a limited number of wristbands available for drinks, so be sure to find me Wednesday during the summit to get yours (more details about when and where I’ll be handing these out to follow as we get closer to the summit).  Many thanks to our sponsors NEC and Genesis Hosting Solutions.  Their support of our event is nothing short of awesome.

NEC is a regular at the SQL PASS summit, so be sure to go to their booth and give them some love. NEC offers a wide range of products, including great storage arrays and some of the most highly available servers around, which are great for running SQL Server on.

Genesis sadly won’t be at the SQL PASS summit this year, but they were kind enough to sponsor us anyway.  If you see Genesis Hosting at another conference, be sure to show them the love at their booth.  Genesis Hosting Solutions offers a wide variety of hosting and virtualization solutions designed to meet the needs of any company, from needing just a few VMs to hosting an entire virtual production or DR facility with hundreds or thousands of virtual machines.

Thanks again to both sponsors for helping us out.

See you at the summit,

Denny


September 12, 2011  2:00 PM

Cloud probably isn’t going to be a very good backup solution for you

Denny Cherry

Back on July 19th, I was pointed to a blog post which talked about tossing your backup solution and using the cloud for your backups instead.  Basically, the argument is that because someone else is now holding your data, you don’t have to worry about DR plans, keeping multiple copies, etc.; someone else is worrying about all of that now.

On paper this all sounds great, but I work in reality.  In reality, as the admin I can’t just trust that someone else is going to manage my DR solution.  When things break and we lose the site and have to restore to DR, as the admin I’m the one on the hook with management to get the company up and running again, not whoever I’ve outsourced the backups to.

When it comes to my backups (and pretty much any other data at all), I trust no one with it.  If I send it out to some cloud provider, how do I know that no one is going to look at it, change it, sell it, etc.?  If I don’t control everything from end to end, I can’t be sure that my data is secure.  I can encrypt it before I send it up to the cloud, but that’s only giving me so much protection.  Encryption can be broken; it just requires having enough machines working on the problem.

There’s also another little problem with using the cloud for backups.  Large companies (and even small companies) have lots of data, and I mean lots of data.  These days it isn’t crazy for a 10-20 person company to have a couple of terabytes of data.  If you are backing all that data up to the cloud on a regular basis, you need a lot of bandwidth to get your backup uploaded in a timely fashion.  Bandwidth sure isn’t free, not even here in the US, much less in other countries.  Many other countries have bandwidth caps in place where you pay by the meg to upload data.  If you have to upload 100 gigs of data a week (a 10% total data change rate is pretty standard), that could take 10-12 hours to upload on a fast connection, and could cost hundreds or thousands in bandwidth charges if you are bandwidth capped.
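
To put rough numbers on that, here’s a quick back-of-the-envelope check; the 100 GB weekly change and the 20 Mbps uplink are just illustrative assumptions, so plug in your own figures:

-- Back-of-the-envelope upload time for a weekly backup delta. The 100 GB
-- delta and the 20 Mbps sustained uplink are illustrative assumptions.
DECLARE @DeltaGB decimal(10,2) = 100;    -- changed data per week, in GB
DECLARE @UplinkMbps decimal(10,2) = 20;  -- sustained upload speed, in megabits per second

-- GB -> gigabits (*8) -> megabits (*1000); divide by speed for seconds, then by 3600 for hours.
SELECT (@DeltaGB * 8 * 1000) / @UplinkMbps / 3600.0 AS HoursToUpload;  -- roughly 11 hours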

Running your app in the cloud is a totally different thing.  When you do that you have control of the setup, and you can control how many sites your data is located in.  With the cloud backup solutions that I’ve looked at so far, you don’t have this sort of control.  You just have to trust that the company you are paying is doing the right thing.  After all, if they store your data close to you for quicker access, what happens when your site loses power because of a natural disaster and they are down for the same reason?  Who do you call?  You can’t fire anyone, because the plan was to let them handle it.  You can’t get your site back up, because you have to wait for them to get their site back up.

In my world, that’s just not a reasonable solution.

Denny


September 8, 2011  7:53 PM

24HOP: BI Workload Follow-Up

Denny Cherry

This is a total repost of Stacia’s blog post from this morning so that hopefully everyone will see it.  So pretend that Stacia wrote this and that I didn’t.

Yesterday, Denny Cherry (blog|twitter) and I co-presented a 24HOP session for the Fall 2011 lineup, “So How Does the BI Workload Impact the Database Engine?” 24HOP stands for 24 Hours of PASS and is a semiannual roundup of speakers from the SQL Server community. Initially, this event consisted of 24 consecutive sessions, each lasting an hour, but later it became a two-day event with 12 consecutive sessions each day. The sessions are free to attend and feature many great topics covering the spectrum of SQL Server things to know. Even if you missed previous 24HOP events, you can always go back and view recordings of sessions that interest you at the 24HOP site for Spring 2011 and Fall 2010.

And if you missed Denny and me yesterday, a recording will be available in a couple of weeks and I’ll update this post with a link. Our hour-long session for 24HOP was a sneak preview of our upcoming half-day session of the same name that we’ll be presenting at the PASS Summit in Seattle on Thursday, October 13, 2011 from 1:30 PM to 4:30 PM. In our half-day session, we’ll dig into the details and spend more time on database engine analysis, whereas in our 24HOP session, we focused on reviewing the architecture and highlighting the connection between BI components and the database engine.

We were able to answer a few questions at the end, but one question in particular could not be answered easily in a sentence or two in the time allotted: How much RAM do I need to plan for Integration Services (SSIS)? Andy Leonard (blog|twitter) did manage a succinct response: All of it! I, on the other hand, am not known for being succinct, so I deferred the question for this post.

Andy is right that SSIS wants as much memory as you can give it, which can be problematic if you’re executing an SSIS package on the same box as SQL Server. On the other hand, there are benefits to executing the package on the same box as well, so there is no one-size-fits-all solution. And the solution for one data integration scenario might not be the right solution for another data integration scenario. A lot depends on what CPU and RAM resources a given server has and how much data is involved. In order to know how much horsepower you need, you’re going to have to do some benchmark testing with packages. Here are some good resources for SSIS if you’re concerned about memory:

Is there a rule of thumb for deciding how much memory you’ll need for SSIS? Well, no less than 4 GB per CPU core is a good place to start. But if that’s not possible, you certainly want to have memory that’s at least two or three times the size of data that you expect to be processing at a given time. So if you’re processing 1 GB of data, you’ll want at least 2-3 GB of memory and, of course, more memory is even better!
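
If it helps to see those two rules of thumb as numbers, here’s a minimal sketch; the core count and data volume are made-up inputs for illustration, not recommendations:

-- Two rough SSIS memory estimates based on the rules of thumb above.
-- The core count and data volume are made-up inputs for illustration.
DECLARE @CpuCores int = 8;                     -- cores on the box running the packages
DECLARE @DataProcessedGB decimal(10,2) = 1.5;  -- data flowing through the packages at one time

SELECT @CpuCores * 4        AS PreferredMemoryGB,   -- at least 4 GB per CPU core
       @DataProcessedGB * 3 AS FallbackMinimumGB;   -- two to three times the data being processed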


September 8, 2011  2:00 PM

You know what the difference between a skirt and a kilt is? A kilt has a belt.

Denny Cherry

That quote, from the one and only Allen White (blog | @sqlrunr) at the SQL PASS Summit last year, is my inspiration for today’s blog post about the SQL PASS summit.

For many years now the SQL PASS summit has been a place where the database-loving folks of the world come together to learn a thing or two and have a little fun.  For the last couple of years the braver among us have buried our shame and bared our legs for all to see, for our own personal amusement.

So if you are crazy like we are, order away for your kilt, or if you would prefer to see what you are buying, hit up one of the kilt shops in downtown Seattle, and join us on Thursday at the PASS summit.

Just don’t be like Allen and forget your belt at home, because then you are just wearing a skirt and that would just be silly.


September 7, 2011  9:50 PM

The sqlhelp hash tag and sponsored tweets.

Denny Cherry

Please don’t.  It isn’t that we hate marketing, but the sqlhelp hash tag is for Q&A only.  All the other hash tags are totally fair game, but please, pretty please don’t market in the sqlhelp hash tag on Twitter.

The sponsors, vendors, etc. so far have all been really awesome about not marketing to that specific hash tag, and we as the SQL Server community would REALLY like to keep it that way.

Hugs and kisses,

Your customers, friends and drinking buddies.

P.S. There was a slight transgression which is being fixed (and shall be totally forgiven) as I type this, so think of this more as an open letter and friendly reminder.


September 6, 2011  8:25 PM

Recorded version of my #sqlpass first timers session

Denny Cherry

In case you missed my SQL PASS first timers session, it was recorded.  The recording failed several times, so I had to stitch the pieces together in order to get it uploaded.  You can view the video below, or you can just download the slide deck.  I would highly recommend watching the video, as you’ll get a lot of information which just isn’t included in the slide deck.

[kml_flashembed movie="http://www.mrdenny.com/downloads/2011.09.06_sqlpass_firsttimers/SQL PASS 2011 First Timers Session_controller.swf" width="616" height="480" wmode="transparent" /]

Sorry about the beeping/clicking/whatever during the recording. Apparently I forgot to mute TweetDeck, and I think that’s what all the noise is.

Hopefully you find the video useful.  If you do, please drop me a line.  And if there’s information you would have liked to have seen (I can’t talk about the sessions that’ll be presented because I don’t know the content), please let me know that as well so that I can make this session better next year.

Thanks,

Denny


September 5, 2011  2:00 PM

Tomorrow is my #sqlpass First Timers webcast

Denny Cherry

Don’t forget to join me tomorrow for my SQL PASS First Timers webcast (I blogged about this a couple of weeks ago).  This webcast is a must (in my opinion) for all first-time PASS summit attendees.  Even if you have attended PASS in another city (remember when PASS used to move from city to city?), you still may want to catch this session, as I’ll have a lot of info about the city, the convention center, and the local food and drink scene.

The topics for tomorrow’s webcast include:

  • Getting to Seattle from SEA
  • Navigating the Convention Center
  • Tourist Stuff To Do
  • Food
  • Drinks
  • Fun

There will be plenty of time for Q&A during and at the end of the session, so bring your questions about Seattle, PASS, etc., and we will see if we can’t get all your questions answered before you even arrive at the summit.

And if you’ve been to the PASS summit before feel free to attend tomorrow’s session as well.  There is always something to learn about Seattle and the summit.

Don’t forget to grab the calendar entry and import it into your calendar so that you get a reminder tomorrow.

Denny


September 2, 2011  2:00 PM

VMworld Day 4

Denny Cherry

Thursday was the final day of VMworld.  As with all conferences, I’m sad that it’s over, but I’m also damn glad that it’s over.  These things are just exhausting to go to.

Today was an interesting (and short) day.  On the final day of VMworld, VMware has decided that they won’t give a product-related keynote.  Instead they have a keynote that is pretty much unrelated to VMware and their technologies.  So today’s keynote was all about the human brain.  There were three PhDs speaking at the keynote, for about 20 minutes each.  The first (and the best) was Dr. David Eagleman (website | @davideagleman), a neuroscientist from Baylor College of Medicine.  He gave a very interesting presentation on why people think that time slows down during traumatic events, such as falling from a tree or bike.  He and his team came up with an experiment where they basically threw a person off a building (it was actually a big scaffolding-looking thing) into a net so they could test whether the brain actually thought time was slowing down, or whether it just felt like it after the fact.

The second and third speakers, Dr. V.S. Ramachandran (website) and Dr. Deb Roy (website | @dkroy), while good speakers, simply weren’t as good as Dr. Eagleman; he was a very hard act to follow.  Unfortunately I don’t actually remember what Dr. Ramachandran spoke about.  Dr. Roy talked to us about the interactions between people and the way those interactions can be tracked at the micro level (within the home) and at the macro level (worldwide).

At the micro level, he installed cameras and microphones in his home and recorded everything for 36 months, starting shortly after his first child was born.  His research team then developed software that tracked movement through the house and matched it to his child’s learning to speak, and they were able to map visually, on a diagram of the house, the parts of the house where different words were learned.  For example, the word “water” was learned in and near the kitchen and bathroom, while the word “bye” was learned near the front door.

At the macro level, he founded a company which tracks just about every TV show on TV and analyzes Twitter, Google+, Facebook, etc. traffic to see what people are talking about online, so that studios and marketing companies can see how people are reacting to specific items when they see them on TV.  It was interesting (albeit a little creepy) to see.

As far as sessions went today, there were only three slots, and I only had two sessions scheduled.  The first that I attended was a resource management deep dive into vSphere 4.1 and 5.0.  During this session they really went into how vSphere allocates resources (CPU, memory and I/O) to the various VMs on the host, depending on how reservations, guarantees, resource pools, etc. are all configured.  I’m not going to try to talk too much about this at the moment; it’s going to take me a couple of passes through the recording online to catch everything.

One thing that I did want to share that I didn’t know was how much data DPM (Distributed Power Management) uses when it’s deciding to power down hosts at night and bring them back up in the morning.  When vCenter is deciding to power down a host, it looks at the last 40 minutes of data to decide if there is little enough load to bring the host down.  When bringing a host back online, it only looks at the last 5 minutes.  vCenter will never power a host down if doing so would lead to a performance problem.  When deciding to power hosts down, performance is considered first, with the power savings being the after-effect.  Power will always be used to get performance.

The second session was on performance tuning of the vCenter database itself, which I figured would be pretty interesting.  It was interesting, but also frustrating, as the speaker didn’t really know much about SQL Server (the default database engine for hosting the vCenter database).  Some of the information presented was pretty interesting about how the tables are laid out and what is stored in which table, and I’ve now got a much better understanding of how the performance data gets loaded into the database and how the rollups are done.  I also now know that I need to put together some scripts to jump-start the process if it gets backed up, as well as put together a best practices document for DBAs (and VMware folks that don’t know SQL at all) so that they can get better performance out of their vCenter databases.

If you need to find the historical performance data within your vCenter database, look into the tables which start with vpx_hist_stat.  There are four of these tables: vpx_hist_stat1, vpx_hist_stat2, vpx_hist_stat3 and vpx_hist_stat4.  The historical data is rolled up daily, weekly, monthly and annually into those four tables respectively.  You’ll also want to look into the vpx_sample_time tables, of which there are also four: vpx_sample_time1, vpx_sample_time2, vpx_sample_time3 and vpx_sample_time4.
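
As a starting point, a quick sanity check is just to see how many rows each rollup table is holding.  This is a minimal sketch which assumes the tables live in the default dbo schema, so verify the names against your own vCenter database first:

-- Row counts for the vCenter rollup tables mentioned above (daily, weekly,
-- monthly, yearly). Assumes the default dbo schema; verify against your
-- own vCenter database before relying on this.
SELECT 'vpx_hist_stat1' AS table_name, COUNT_BIG(*) AS row_count FROM dbo.vpx_hist_stat1
UNION ALL
SELECT 'vpx_hist_stat2', COUNT_BIG(*) FROM dbo.vpx_hist_stat2
UNION ALL
SELECT 'vpx_hist_stat3', COUNT_BIG(*) FROM dbo.vpx_hist_stat3
UNION ALL
SELECT 'vpx_hist_stat4', COUNT_BIG(*) FROM dbo.vpx_hist_stat4;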

Apparently vCenter 4.0 and below has a habit of causing deadlocks when loading data, especially in larger environments.  The fixes that were provided are pretty much all hit or miss when it comes to whether they will work, and the speaker’s description of the cause of the problem was pretty vague.  The gist of what I got was that the data loading is deadlocking with the code which handles the rollups and causing problems.  Something which could be tried to fix this would be to enable snapshot isolation for the database.  Personally I think this would have a better chance of fixing the problem than the crappy workaround which he provided (which I’m not listing here on purpose).
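
For reference, turning that on is a couple of one-liners.  This is just a sketch: “vCenterDB” is a placeholder for whatever your vCenter database is actually named, and you’d want to try it on a non-production copy first.

-- Sketch of enabling row versioning on the vCenter database; 'vCenterDB'
-- is a placeholder name, so adjust it and test on a non-production copy first.

-- Lets sessions that explicitly request SNAPSHOT isolation read the last
-- committed version of a row instead of blocking on (or deadlocking with) writers.
ALTER DATABASE [vCenterDB] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally makes ordinary READ COMMITTED reads use row versioning by default;
-- this statement needs the database to be free of other active connections to complete.
ALTER DATABASE [vCenterDB] SET READ_COMMITTED_SNAPSHOT ON;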

The workaround which VMware came up with for this problem, introduced in vCenter 4.0, can have its own problems in large environments: data goes missing for servers at random intervals.  VMware’s idea was to create three staging tables and use each staging table for 5 minutes, then process the data from that staging table into the vpx_hist_stat and vpx_sample_time tables while moving on to the next staging table.  However, if it takes too long to process the data from the first table and the third table has already been used, it is now time to move back to the first table, and data is lost because it can’t be written into the first table.  VMware needs to do some major redesign of this entire process for the next release to come up with a better solution that won’t allow for data loss.  There are plenty of ways to do it that won’t cause problems.  Don’t you love it when developers that don’t know databases very well try and come up with screwy solutions to problems?

Based on what was talked about in this session, there are a lot of SQL scripts that need to be written to help people improve the performance of their vCenter databases.  Guess I’ve got a lot of script writing to do.

Denny

