SQL Server with Mr. Denny


September 8, 2011  7:53 PM

24HOP: BI Workload Follow-Up

Denny Cherry

This is a total repost of Stacia’s blog post from this morning so that hopefully everyone will see it.  So pretend that Stacia wrote this and that I didn’t.

Yesterday, Denny Cherry (blog|twitter) and I co-presented a 24HOP session for the Fall 2011 lineup, “So How Does the BI Workload Impact the Database Engine?” 24HOP stands for 24 Hours of PASS and is a semiannual roundup of speakers from the SQL Server community. Initially, this event consisted of 24 consecutive sessions, each lasting an hour, but later it became a two-day event with 12 consecutive sessions each day. The sessions are free to attend and feature many great topics covering the spectrum of SQL Server things to know. Even if you missed previous 24HOP events, you can always go back and view recordings of sessions that interest you at the 24HOP site for Spring 2011 and Fall 2010.

And if you missed Denny and me yesterday, a recording will be available in a couple of weeks and I’ll update this post with a link. Our hour-long session for 24HOP was a sneak preview of our upcoming half-day session of the same name that we’ll be presenting at the PASS Summit in Seattle on Thursday, October 13, 2011 from 1:30 PM to 4:30 PM. In our half-day session, we’ll dig into the details and spend more time on database engine analysis, whereas in our 24HOP session, we focused on reviewing the architecture and highlighting the connection between BI components and the database engine.

We were able to answer a few questions at the end, but one question in particular couldn’t be answered in a sentence or two in the time allotted: How much RAM do I need to plan for Integration Services (SSIS)? Andy Leonard (blog|twitter) did manage a succinct response: All of it! I, on the other hand, am not known for being succinct, so I deferred the question to this post.

Andy is right that SSIS wants as much memory as you can give it, which can be problematic if you’re executing an SSIS package on the same box as SQL Server. On the other hand, there are also benefits to executing the package on the same box, so there is no one-size-fits-all solution. And the solution for one data integration scenario might not be the right solution for another. A lot depends on what CPU and RAM resources a given server has and how much data is involved. In order to know how much horsepower you need, you’re going to have to do some benchmark testing with your packages. There are some good resources out there covering SSIS memory usage if you’re concerned about it.

Is there a rule of thumb for deciding how much memory you’ll need for SSIS? Well, no less than 4 GB per CPU core is a good place to start. If that’s not possible, you certainly want to have memory that’s at least two or three times the size of the data that you expect to be processing at a given time. So if you’re processing 1 GB of data, you’ll want at least 2-3 GB of memory, and, of course, more memory is even better!
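If you want a quick look at what a box actually has to work with before you start that benchmarking, a simple query like this one works (a minimal sketch; it uses the sys.dm_os_sys_memory DMV, which requires SQL Server 2008 or later and the VIEW SERVER STATE permission):

-- How much physical memory the server has, and how much is free,
-- before SSIS starts asking for all of it.
SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb,
       available_physical_memory_kb / 1024 AS available_physical_memory_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;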

September 8, 2011  2:00 PM

You know what the difference between a skirt and a kilt is? A kilt has a belt.

Denny Cherry

That quote, stated by the one and only Allen White (blog | @sqlrunr) at the SQL PASS Summit last year, is my inspiration for today’s blog post about the SQL PASS summit.

For many years now the SQL PASS summit has been a place where the database loving folks of the world come together to learn a thing or two and have a little fun.  For the last couple of years the braver among us have buried our shame and bared our legs for all to see for our own personal amusement.

So if you are crazy like we are, order away for your kilt, or if you would prefer to see what you are buying, hit up one of the kilt shops in downtown Seattle, and join us on Thursday at the PASS summit.

Just don’t be like Allen and forget your belt at home, because then you are just wearing a skirt and that would just be silly.


September 7, 2011  9:50 PM

The sqlhelp hash tag and sponsored tweets.

Denny Cherry

Please don’t.  It isn’t that we hate marketing, but the sqlhelp hash tag is for Q&A only.  All the other hash tags are totally fair game, but please, pretty please don’t market in the sqlhelp hash tag on Twitter.

The sponsors, vendors, etc. so far have all been really awesome about not marketing to that specific hash tag, and we as the SQL Server community would REALLY like to keep it that way.

Hugs and kisses,

Your customers, friends and drinking buddies.

P.S. There was a slight transgression which is being fixed (and shall be totally forgiven) as I type this, so think of this more as an open letter and friendly reminder.


September 6, 2011  8:25 PM

Recorded version of my #sqlpass first timers session

Denny Cherry

In case you missed my SQL PASS first timers session, it was recorded. The recording failed several times, so I had to stitch the pieces together in order to get it uploaded.  You can view the video below, or you can just download the slide deck.  I would highly recommend watching the video as you’ll get a lot of information which just isn’t included in the slide deck.

[Video: SQL PASS 2011 First Timers Session - http://www.mrdenny.com/downloads/2011.09.06_sqlpass_firsttimers/SQL PASS 2011 First Timers Session_controller.swf]

Sorry about the peeping/clicking/whatever during the recording. Apparently I forgot to mute TweetDeck and I think that’s what all the noise is.

Hopefully you find the video useful.  If you do, please drop me a line.  And if there’s information you would have liked to have seen (I can’t talk about the sessions that’ll be presented because I don’t know the content), please let me know that as well so that I can make this session better next year.

Thanks,

Denny


September 5, 2011  2:00 PM

Tomorrow is my #sqlpass First Timers webcast

Denny Cherry

Don’t forget to join me tomorrow at my SQL PASS First Timers webcast (I blogged about this a couple of weeks ago).  This webcast is a must (in my opinion) for all first time PASS summit attendees.  Even if you have attended PASS in another city (remember when PASS used to move from city to city?), you still may want to catch this session as I’ll have a lot of info about the city, the convention center, and the local food and drink scene.

The topics for tomorrow’s webcast include:

  • Getting to Seattle from SEA
  • Navigating the Convention Center
  • Tourist Stuff To Do
  • Food
  • Drinks
  • Fun

There will be plenty of time for Q&A during and at the end of the session, so bring your questions about Seattle, PASS, etc., and we will see if we can’t get all your questions answered before you even arrive at the summit.

And if you’ve been to the PASS summit before feel free to attend tomorrow’s session as well.  There is always something to learn about Seattle and the summit.

Don’t forget to grab the calendar entry and import it into your calendar so that you get a reminder tomorrow.

Denny


September 2, 2011  2:00 PM

VMworld Day 4

Denny Cherry

Thursday was the final day of VMworld.  Like with all conferences, I’m sad that it’s over, but I’m damn glad that it’s over.  These things are just exhausting to go to.

Today was an interesting (and short) day.  On the final day of VMworld, VMware decided not to give a product related keynote.  Instead they had a keynote that was pretty much unrelated to VMware and their technologies.  So today’s keynote was all about the human brain.  There were three PhDs speaking at the keynote, about 20 minutes each.  The first (and the best) was Dr. David Eagleman (website | @davideagleman), a neuroscientist from Baylor College of Medicine.  He gave a very interesting presentation on why people think that time slows down during traumatic events such as falling from a tree or bike.  He and his team came up with an experiment where they basically threw a person off a building (it was actually a big scaffolding looking thing) into a net so they could test whether the brain actually thought time was slowing down, or whether it just felt like it after the fact.

The second and third speakers, Dr. V.S. Ramachandran (website) and Dr. Deb Roy (website | @dkroy), while good speakers, simply weren’t as good as Dr. Eagleman, as he was a very hard act to follow.  Unfortunately I don’t actually remember what Dr. Ramachandran spoke about.  Dr. Roy talked to us about the interactions between people and the way that those interactions can be tracked at the micro level (within the home) and at the macro level (worldwide).

At the micro level, he installed cameras and microphones in his home and recorded everything for 36 months, starting shortly after his first child was born.  His research team then developed some software that tracked movement through the house and matched it to his child’s learning to speak, and they were able to visually map, on a diagram of the house, the parts of the house where different words were learned.  For example, the word “water” was learned in and near the kitchen and bathroom, while the word “bye” was learned near the front door.

At the macro level, he founded a company which tracks just about every TV show on TV and analyzes Twitter, Google+, Facebook, etc. traffic to see what people are talking about online, so that studios and marketing companies can see how people are reacting to specific items when they see them on TV.  It was interesting (albeit a little creepy) to see.

As far as sessions went today, there were only three slots, and I only had two sessions scheduled.  The first that I attended was a Resource Management Deep Dive into vSphere 4.1 and 5.0.  During this session they really went into how vSphere allocates resources (CPU, memory and IO) to the various VMs on a host depending on how reservations, guarantees, resource pools, etc. are all configured.  I’m not going to try to talk too much about this at the moment; it’s going to take me a couple of times listening to the recording online to catch everything.

One thing that I did want to share that I didn’t know was how much data DPM (Distributed Power Management) uses when it’s deciding to power down hosts at night and bring them back up in the morning.  When vCenter is deciding to power down a host, it looks at the last 40 minutes of data to decide if there is little enough load to bring down a host.  When deciding to bring a host back online, it only looks at the last 5 minutes.  vCenter will never power a host down if doing so would lead to a performance problem.  When deciding to power hosts down, performance is considered first, with power savings as the side effect.  Power will always be used to get performance.

The second session was one on performance tuning of the vCenter database itself, which I figured would be pretty interesting.  It was interesting, but also frustrating, as the speaker didn’t really know much about SQL Server (the default database engine for hosting the vCenter database).  Some of the information presented about how the tables are laid out and what is stored in which table was pretty interesting, and I’ve now got a much better understanding of how the performance data gets loaded into the database and how the rollups are done.  I also now know that I need to put together some scripts to jump start the process if it gets backed up, as well as a best practices document for DBAs (and VMware folks that don’t know SQL at all) so that they can get better performance out of their vCenter databases.

If you need to find the historical performance data within your vCenter database, look into the tables which start with vpx_hist_stat.  There are four of these tables: vpx_hist_stat1, vpx_hist_stat2, vpx_hist_stat3 and vpx_hist_stat4.  The historical data is rolled up daily, weekly, monthly and annually into those four tables respectively.  You’ll also want to look into the vpx_sample_time tables, of which there are also four: vpx_sample_time1, vpx_sample_time2, vpx_sample_time3 and vpx_sample_time4.
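If you want to eyeball how much data has piled up at each rollup level, something like this works (a minimal sketch; the table names come from the session, and I’m assuming they live in the dbo schema of your vCenter database):

-- Row counts per rollup level (daily, weekly, monthly, annual).
SELECT 'vpx_hist_stat1' AS rollup_table, COUNT(*) AS row_count FROM dbo.vpx_hist_stat1
UNION ALL
SELECT 'vpx_hist_stat2', COUNT(*) FROM dbo.vpx_hist_stat2
UNION ALL
SELECT 'vpx_hist_stat3', COUNT(*) FROM dbo.vpx_hist_stat3
UNION ALL
SELECT 'vpx_hist_stat4', COUNT(*) FROM dbo.vpx_hist_stat4;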

Apparently vCenter 4.0 and below has a habit of causing deadlocks when loading data, especially in larger environments.  The fixes that were provided are pretty much all hit or miss as to whether they will work, and the speaker’s description of the cause of the problem was pretty vague.  The gist of what I got was that the data loading is deadlocking with the code which handles the rollups and causing problems.  Something which could be tried to fix this would be to enable snapshot isolation mode for the database.  Personally I think this would have a better chance of fixing the problem than the crappy workaround which he provided (which I’m not listing here on purpose).
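If you want to try the snapshot isolation route, it’s just a couple of database options (vCenterDB is a placeholder for your own vCenter database name, and as always, test in a lab before touching production):

-- Allow transactions to request SNAPSHOT isolation.
ALTER DATABASE vCenterDB SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Optionally make read committed use row versioning as well; note that
-- this statement needs to be the only open connection to the database.
ALTER DATABASE vCenterDB SET READ_COMMITTED_SNAPSHOT ON;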

The workaround which VMware came up with for this problem, introduced in vCenter 4.0, can have its own problem in large environments: data goes missing for servers at random intervals.  VMware’s idea was to create three staging tables and use each staging table for 5 minutes, then process the data from that staging table into the vpx_hist_stat and vpx_sample_time tables while moving on to the next staging table.  However, if it takes too long to process the data from the first table and the third table has already been used, it is now time to move back to the first table, and data is lost because new data can’t be written into the first table.  VMware needs to do some major redesign of this entire process in the next release to come up with a better solution that won’t allow for data loss.  There are plenty of ways to do it that won’t cause problems.  Don’t you love it when developers that don’t know databases very well try and come up with screwy solutions to problems?
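Just to make that rotation concrete, here’s a trivial illustration of the timing (my sketch of the behavior as described in the session, not VMware’s actual code):

-- Which of the three staging tables a given 5 minute window maps to
-- (1, 2 or 3). If processing a table takes longer than the two windows
-- before the rotation wraps back around, that table's data is lost.
SELECT (DATEDIFF(MINUTE, '20110101', GETUTCDATE()) / 5) % 3 + 1
       AS current_staging_table;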

Based on what was talked about in this session, there are a lot of SQL scripts that need to be written to help people improve the performance of their vCenter databases.  Guess I’ve got a lot of script writing to do.

Denny


September 1, 2011  2:00 PM

VMworld Day 3

Denny Cherry

Today was day 3 of VMworld.  All the sessions that I attended today were pretty much a recap of the things which I covered earlier in the week.  I went to these more in-depth sessions because the information learned today will help me with my day-to-day deployments of VMware, as well as helping me learn more about the specific items within VMware that I need to look at to ensure that VMware is running smoothly day to day.

The big event of tonight was the attendee party, with a performance by The Killers opened by Recycled Percussion.  It was a great concert, especially as I’m not a mega fan of The Killers and hadn’t heard of Recycled Percussion before tonight.  After the show and party VMware had after parties at the hotel pools, which were a blast.  I met and had some great conversations with some great guys from a variety of places and companies.

While today doesn’t make a great blog post, it was another amazing day at VMworld 2011 and I can’t wait for tomorrow.

Denny


August 31, 2011  2:00 PM

VMworld Day 2 – Lots of product announcements today

Denny Cherry

So today was day 2 of VMworld 2011, and it was a great day at the conference.  We had a great keynote with some demos which were pretty funny (I really hope that they were supposed to be funny).  Granted, I was a little late to the keynote so I missed the first few minutes, but I overslept, damn it.  Breakfast is the most important meal of the day.

The first thing I was shown was a project called Project Octopus.  This allows your users to access the same files via Windows, Mac or Linux PCs, phones, tablets, etc.  It also allows users to edit any files which they have access to on any device.  This is done via HTML 5, so as long as the device supports HTML 5 (which most everything new does) you can access full Windows applications on the machine.  In the demo the user was sent an Excel file via IM, which they then opened on an iPad, and they were able to edit it in a fully functional copy of Excel 2010.  There was a small application installed on the iPad which connected to the server via the web browser, uploaded the file to the server (or opened the file from the server, not really sure here, but either way), and then the user was able to edit the Excel sheet and save it back to the server.

The next product which we were shown was called VMware Go.  Go is a software as a service offering where the user signs into the site and is then able, via the webpage, to scan an IP subnet looking for servers which are capable of running vSphere 5.0.  The user can then select which Windows servers they would like to deploy vSphere 5.0 to, and vSphere 5.0 is then deployed to those servers.  I’m not sure what happens to the Windows OS and services which are already installed on the servers, so this could be very dangerous if pushed to the wrong server by accident.

A new product which I’m really excited about is aimed directly at the small / medium business (SMB) market and will allow you to take two servers with only local storage and configure them in a highly available vSphere 5.0 cluster.  This new product is called the Virtual Storage Appliance (VSA).  The VSA is a virtual appliance which is installed on all the hosts (it supports two and three node clusters only).  When installed and configured, it takes the local storage and presents it to the cluster as shared storage.  Redundancy for this solution is done by using software based replication and setting up each VM to be replicated to another host in the cluster.  This way the cluster can always survive a single node failure without losing the ability to run any guest on the cluster.

There are some big changes coming to vSphere Site Recovery Manager (SRM) 5.0, which is no longer called VMware vSphere SRM.  One of the biggest is the ability to automatically fail back after a site has failed and been restored.  In prior versions of SRM, failover was a one way operation; in order to fail back to the first site you would have to totally reconfigure SRM and then trigger the failover.  With the new 5.0 version of SRM you simply configure the failback as part of the policies, and when the failed site comes back online SRM will fail back as configured.

Another cool thing you can do with SRM 5.0 is DR your site to a cloud provider instead of to your own backup data center.  This allows you to run your primary site on your hardware, but rent your DR systems from a cloud service provider that is certified as an SRM site.  Currently there are only a couple of options, but as time goes on there will be more options available.

I went to a couple of sessions today, the most informative of which was about the new features of vSphere 5.0.  VMware is upgrading the VMFS version from 3 to 5, but this time it is a non-disruptive upgrade, unlike the upgrade from VMFS 2 to 3.  The new version of ESXi is much thinner than the prior 4.1 version, leaving more resources available for the guest machines.

vSphere still only officially supports 32 hosts in a cluster; clusters with over 100 nodes have been tested, but only 32 are supported.  Something which will make a lot of Linux shops happy is that vCenter no longer requires Windows as the OS for the vCenter server.  It can now be installed on a Linux OS (they didn’t specify which Linux flavor).  There is an embedded database which supports up to 5 hosts and 50 VMs; for installs which are larger than this you’ll need to install an instance of Oracle.  Currently only Oracle is supported, with other databases to be supported eventually.  Another limitation of running vCenter on Linux is that you can’t run vCenter in linked mode.  Linked mode is where you have a vCenter server at each site, and they are linked so that you have redundancy at the vCenter level.

There is a new web based client which will be included with vSphere 5.0.  This won’t be a fully featured UI, but it will support most of the features.  The nice thing about this new web client is that it will work on Windows, Mac, and Linux.  Eventually the web client will become the default client for vSphere and vCenter, but this isn’t the case yet.

The last change I want to talk about today is the fact that vMotion now supports slower links.  In vSphere 4.1 and below, using vMotion required a network with 5 ms or lower latency.  In vSphere 5.0 this limit is increased to 10 ms of latency, which allows you to vMotion over city-wide networks.

See you tomorrow for VMworld Day 3.

Denny


August 30, 2011  2:00 PM

VMworld Day 1

Denny Cherry

Today was day 1 of VMworld and I had a blast, even though I was only able to attend for part of the day.  I flew into Vegas this morning instead of spending the night last night.  I didn’t hit any sessions today, but I did catch the keynote, which was given by Paul Maritz, the CEO of VMware.

The keynote was interesting, but didn’t provide a whole lot of new information.  Paul officially announced that vSphere 5.0 was released along with VMware View 5.0.

vSphere 5.0 is the third major annual release of the vSphere product: 2009 had vSphere 4.0, 2010 had vSphere 4.1, and 2011 has vSphere 5.0.  vSphere 5.0 has 200 new features (which weren’t listed).  VMware has put 1 million man-hours into building the new vSphere 5.0 platform and another 2 million man-hours into testing the new version.

There were a few pieces of information about new features, which basically boiled down to a few key points.  The first is probably the most important: with vSphere 5.0 VMware expects that they will be able to run almost every production workload.  Virtual machines running under vSphere 5.0 can now have up to 32 vCPUs and 1 TB of RAM each.  VMware has also added some storage load balancing features that I’m really hoping I can learn more about as the week continues, as well as automatic storage tiering, which looks very interesting.

There were some interesting stats which Paul talked about as well.  There were 19,000 attendees who actually made it to the conference; over 20,000 people registered, but some got stuck on the east coast thanks to the weather.  Analysts currently estimate that worldwide 50% of production workloads are running under a hypervisor today.  This means that every 6 seconds a new VM is built (which is faster than people are being born).  It is estimated that there are over 20 million virtual machines running under VMware’s hypervisor platform; if these hosts were put end to end they would be twice the length of the Great Wall of China.  More machines are being moved from host to host via vMotion than there are airplanes in the sky.

Needless to say there is a lot of great information which I’m hoping to learn and share with you.

Denny


August 25, 2011  11:00 AM

Join Thomas LaRock and me for a little Afternoon Ignite

Denny Cherry

Tom LaRock is starting up a video series for Confio called Afternoon Ignite.  He has asked me to be his first victim guest on the show.  We will be talking about pretty much whatever comes to mind, which will probably involve performance tuning, VMs, PASS, SQL Excursions, and Bacon.

Feel free to join us via GoTo Meeting at 11am Pacific / 2pm Eastern and check out the excitement.

Denny

