So this year for the first time ever we have some sponsors for SQL Karaoke at the SQL PASS summit. A few fine companies have offered to open up the bar for our enjoyment. There will be a limited number of wristbands available for drinks, so be sure to find me Wednesday during the summit to get yours (more details about when and where I’ll be handing these out to follow as we get closer to the summit). Many thanks to our sponsors NEC and Genesis Hosting Solutions. Their support of our event is nothing short of awesome.
NEC is a regular at the SQL PASS summit, so be sure to go to their booth and give them some love at the summit. NEC offers a wide range of products, including great storage arrays and some of the most highly available servers around, which are great for running SQL Server on.
Genesis sadly won’t be at the SQL PASS summit this year, but they were kind enough to sponsor us anyway. If you see Genesis Hosting at another conference, be sure to show them the love in the booth. Genesis Hosting Solutions offers a wide variety of hosting and virtualization solutions designed to meet the needs of any company, from those needing just a few VMs to those hosting an entire virtual production or DR facility with hundreds or thousands of virtual machines.
Thanks again to both sponsors for helping us out.
See you at the summit,
Back on July 19th, I was pointed to a blog post which talked about tossing your backup solution and using the cloud for your backups instead. Basically the argument is that because someone else is now holding your data, you don’t have to worry about DR plans, keeping multiple copies, etc., because someone else is worrying about all that for you.
On paper this all sounds great, but I work in reality. In reality, as the admin I can’t just trust that someone else is going to manage my DR solution. When things break and we lose the site and have to restore to DR, as the admin I’m the one on the hook with management to get the company up and running again, not whoever I’ve outsourced the backups to.
When it comes to my backups (and pretty much any other data at all), I trust no one with it. If I send it out to some cloud provider, how do I know that no one is going to look at it, change it, sell it, etc.? If I don’t control everything from end to end, I can’t be sure that my data is secure. I can encrypt it before I send it up to the cloud, but that only gives me so much protection. Encryption can be broken; it just requires having enough machines working on the problem.
There’s also another little problem with using the cloud for backups. Large companies (and even small companies) have lots of data, and I mean lots of data. These days it isn’t crazy for a 10-20 person company to have a couple of terabytes of data. If you are backing all that data up to the cloud on a regular basis, you need a lot of bandwidth to get your backup uploaded to the cloud in a timely fashion. Bandwidth sure isn’t free, not even here in the US, much less in other countries. Many other countries have bandwidth caps in place where you may pay by the megabyte to upload data. If you have to upload 100 GB of data a week (a 10% total data change rate is pretty standard), that could take 10-12 hours to upload on a fast connection, and could cost hundreds or thousands in bandwidth charges if you are bandwidth capped.
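The back-of-the-envelope math here is easy to sketch out. The link speed and per-GB price below are illustrative assumptions of mine, not any provider's real numbers:

```python
# Rough sketch of the upload-time and cost math above.
# The 20 Mbps uplink and $0.10/GB metered rate are illustrative assumptions.

def upload_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to push data_gb gigabytes over a link_mbps uplink at full rate."""
    bits = data_gb * 8 * 1000**3            # decimal GB -> bits
    seconds = bits / (link_mbps * 1000**2)  # Mbps -> bits per second
    return seconds / 3600

def metered_cost(data_gb: float, dollars_per_gb: float) -> float:
    """Cost of uploading on a metered (capped) connection charging per GB."""
    return data_gb * dollars_per_gb

# 100 GB of weekly changed data over a 20 Mbps uplink takes about 11 hours,
# and real-world overhead only pushes that number higher.
hours = upload_hours(100, 20)
```

And that assumes the uplink runs flat out the whole time with nothing else competing for it.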
Running your app in the cloud is a totally different thing. When you do this you have control of the setup, and can control how many sites your data is located in. With the cloud backup solutions that I’ve looked at so far, you don’t have this sort of control. You just have to trust that the company that you are paying is doing the right thing. After all, what happens if they store your data close to you for quicker access, and then your site loses power because of a natural disaster and they are down for the same reason? Who do you call? You can’t fire anyone, because the plan was to let them handle it. You can’t get your site back up, because you have to wait for them to get their site back up.
In my world, that’s just not a reasonable solution.
This is a total repost of Stacia’s blog post from this morning so that hopefully everyone will see it. So pretend that Stacia wrote this and that I didn’t.
Yesterday, Denny Cherry (blog|twitter) and I co-presented a 24HOP session for the Fall 2011 lineup, “So How Does the BI Workload Impact the Database Engine?” 24HOP stands for 24 Hours of PASS and is a semiannual roundup of speakers from the SQL Server community. Initially, this event consisted of 24 consecutive sessions, each lasting an hour, but later it became a two-day event with 12 consecutive sessions each day. The sessions are free to attend and feature many great topics covering the spectrum of SQL Server things to know. Even if you missed previous 24HOP events, you can always go back and view recordings of sessions that interest you at the 24HOP site for Spring 2011 and Fall 2010.
And if you missed Denny and me yesterday, a recording will be available in a couple of weeks and I’ll update this post with a link. Our hour-long session for 24HOP was a sneak preview of our upcoming half-day session of the same name that we’ll be presenting at the PASS Summit in Seattle on Thursday, October 13, 2011 from 1:30 PM to 4:30 PM. In our half-day session, we’ll dig into the details and spend more time on database engine analysis, whereas in our 24HOP session, we focused on reviewing the architecture and highlighting the connection between BI components and the database engine.
We were able to answer a few questions at the end, but one question in particular could not be answered in a sentence or two in the time allotted: How much RAM do I need to plan for Integration Services (SSIS)? Andy Leonard (blog|twitter) did manage a succinct response: All of it! I, on the other hand, am not known for being succinct, so deferred the question for this post.
Andy is right that SSIS wants as much memory as you can give it, which can be problematic if you’re executing an SSIS package on the same box as SQL Server. On the other hand, there are benefits to executing the package on the same box as well, so there is no one-size-fits-all solution. And the solution for one data integration scenario might not be the right solution for another data integration scenario. A lot depends on what CPU and RAM resources a given server has and how much data is involved. In order to know how much horsepower you need, you’re going to have to do some benchmark testing with packages. Here are some good resources for SSIS if you’re concerned about memory:
- Top 10 SQL Server Integration Services Best Practices from the SQL Customer Advisory Team (blog | twitter): This article provides an overview of best practices (as the name implies!) and includes links to information about using performance counters to monitor resource usage and about optimizing the Lookup transformation, which is one of the big memory consumers in SSIS.
- SQL Server 2005 Integration Services: A Strategy for Performance, a whitepaper by my friend, former colleague, and co-author of my first book, Elizabeth Vitt. Although it was written for SSIS 2005, the principles related to tuning packages and how to benchmark still apply. The significant changes between SSIS 2005 and SSIS 2008 with regard to performance were improvements in thread management and in the Lookup transformation.
Is there a rule of thumb for deciding how much memory you’ll need for SSIS? Well, no less than 4 GB per CPU core is a good place to start. But if that’s not possible, you certainly want to have memory that’s at least two or three times the size of data that you expect to be processing at a given time. So if you’re processing 1 GB of data, you’ll want at least 2-3 GB of memory and, of course, more memory is even better!
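Those two rules of thumb are easy to put into code. The function below is my own framing of them as a starting-point estimate, not an official formula:

```python
# A minimal sketch of the SSIS memory sizing heuristics above:
# start from 4 GB per CPU core, and never go below 2-3x the data
# you expect to be processing at once. The max() framing is my own.

def ssis_memory_estimate_gb(cpu_cores: int, working_set_gb: float) -> float:
    """Return a starting-point memory estimate in GB for an SSIS box."""
    per_core_rule = 4.0 * cpu_cores        # 4 GB per core
    per_data_rule = 3.0 * working_set_gb   # ~3x the in-flight data
    return max(per_core_rule, per_data_rule)

# 8 cores processing 5 GB at a time: the per-core rule wins (32 GB vs 15 GB).
# 1 core processing 10 GB at a time: the data rule wins (30 GB vs 4 GB).
```

Either way, treat the result as where benchmarking starts, not where it ends — as noted above, only testing your actual packages will tell you what you really need.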
For many years now the SQL PASS summit has been a place where the database loving folks of the world come together to learn a thing or two and have a little fun. For the last couple of years the braver among us have buried our shame and bared our legs for all to see, for our own personal amusement.
So if you are crazy like we are order away for your kilt, or if you would prefer to see what you are buying hit up one of the kilt shops in downtown Seattle, and join us on Thursday at the PASS summit.
Just don’t be like Allen and forget your belt at home, because then you are just wearing a skirt and that would just be silly.
Please don’t. It isn’t that we hate marketing, but the sqlhelp hash tag is for Q&A only. All the other hash tags are totally fair game, but please, pretty please don’t market in the sqlhelp hash tag on Twitter.
The sponsors, vendors, etc. so far have all been really awesome about not marketing to that specific hash tag, and we as the SQL Server community would REALLY like to keep it that way.
Hugs and kisses,
Your customers, friends and drinking buddies.
P.S. There was a slight transgression which is being fixed (and shall be totally forgiven) as I type this, so think of this more as an open letter and friendly reminder.
In case you missed my SQL PASS first timers session, it was recorded, though the recording failed several times and I had to stitch the pieces together to get it uploaded. You can view the video below, or you can just download the slide deck. I would highly recommend watching the video, as you’ll get a lot of information which just isn’t included in the slide deck.
[kml_flashembed movie="http://www.mrdenny.com/downloads/2011.09.06_sqlpass_firsttimers/SQL PASS 2011 First Timers Session_controller.swf" width="616" height="480" wmode="transparent" /]
Sorry about the beeping/clicking/whatever during the recording. Apparently I forgot to mute TweetDeck, and I think that’s what all the noise is.
Hopefully you find the video useful. If you do, please drop me a line. And if there’s information you would have liked to have seen (I can’t talk about the sessions that’ll be presented because I don’t know the content), please let me know that as well so that I can make this session better next year.
Don’t forget to join me tomorrow at my SQL PASS First Timers webcast (I blogged about this a couple of weeks ago). This webcast is a must (in my opinion) for all first time PASS summit attendees. Even if you have attended PASS in another city (remember when PASS used to move from city to city?), you may still want to catch this session, as I’ll have a lot of info about the city, the convention center, and the local food and drink scene.
The topics for tomorrow’s webcast include:
- Getting to Seattle from SEA
- Navigating the Convention Center
- Tourist Stuff To Do
There will be plenty of time for Q&A during and at the end of the session, so bring your questions about Seattle, PASS, etc. and we will see if we can’t get all your questions answered before you even arrive at the summit.
And if you’ve been to the PASS summit before feel free to attend tomorrow’s session as well. There is always something to learn about Seattle and the summit.
Don’t forget to grab the calendar entry and import it into your calendar so that you get a reminder tomorrow.
Thursday was the final day of VMworld. Like with all conferences I’m sad that it’s over, but I’m damn glad that it’s over. These things are just exhausting to go to.
Today was an interesting (and short) day. On the final day of VMworld, VMware has decided that they won’t give a product related keynote. Instead they have a keynote that is pretty much unrelated to VMware and their technologies. So today’s keynote was all about the human brain. There were three PhDs speaking at the keynote, about 20 minutes each. The first (and the best) was Dr. David Eagleman (website | @davideagleman), a neuroscientist from Baylor College of Medicine. He gave a very interesting presentation on why people think that time slows down during traumatic events such as falling from a tree or bike. He and his team came up with an experiment where they basically threw a person off a building (it was actually a big scaffolding looking thing) into a net so they could test whether the brain actually thought time was slowing down, or if it just felt like it after the fact.
The second and third speakers, Dr. V.S. Ramachandran (website) and Dr. Deb Roy (website | @dkroy), while good speakers, simply weren’t as good as Dr. Eagleman, as he was a very hard act to follow. Unfortunately I don’t actually remember what Dr. Ramachandran spoke about. Dr. Roy talked to us about the interactions between people and the way that those interactions can be tracked at the micro level (within the home) and at the macro level (worldwide).
At the micro level, he installed cameras and microphones in his home and recorded everything for 36 months, starting shortly after his first child was born. His research team then developed software that tracked movement through the house and matched it to his child’s learning to speak, and they were able to visually map, on a diagram of the house, in what parts of the house different words were learned. For example, the word “water” was learned in and near the kitchen and bathroom, while the word “bye” was learned near the front door.
At the macro level, he founded a company which tracks just about every TV show on TV and analyzes Twitter, Google+, Facebook, etc. traffic to see what people are talking about online, so that studios and marketing companies can see how people are reacting to specific items online when they see them on TV. It was interesting (albeit a little creepy) to see.
As far as sessions went today, there were only three slots, and I only had two sessions scheduled. The first that I attended was a Resource Management Deep Dive into vSphere 4.1 and 5.0. During this session they really went into how vSphere allocates resources (CPU, Memory and IO) to various VMs on the host depending on how reservations, guarantees, resource pools, etc. are all configured. I’m not going to try to talk too much about this at the moment. It’s going to take me a couple of times listening to the recording online to catch everything.
One thing that I did want to share that I didn’t know was how much data DPM (Distributed Power Management) uses when it’s deciding to power down hosts at night and bring them back up in the morning. When vCenter is deciding to power down a host, it looks at the last 40 minutes of data to decide if there is little enough load to bring down a host. For bringing a host back online, it only looks at the last 5 minutes. vCenter will never power a host down if doing so would lead to a performance problem. When deciding to power hosts down, performance is considered first, with the side effect being that power is saved. Power will always be used to get performance.
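That asymmetric look-back (slow to power down, quick to power back up) can be sketched like this. VMware doesn't publish the exact algorithm, so the thresholds and the whole-window test below are invented for illustration; only the 40-minute and 5-minute windows come from the session:

```python
# Illustrative sketch of DPM's asymmetric look-back windows.
# The 40-minute and 5-minute windows are what the session described;
# the load thresholds and decision rules are made up for illustration.

from statistics import mean

POWER_DOWN_WINDOW_MIN = 40   # minutes of history checked before powering down
POWER_UP_WINDOW_MIN = 5      # minutes of history checked before powering up

def should_power_down(load_per_min, low_threshold=0.30):
    """Conservative: only power a host down if load stayed low
    for the entire 40-minute window (performance comes first)."""
    window = load_per_min[-POWER_DOWN_WINDOW_MIN:]
    return len(window) == POWER_DOWN_WINDOW_MIN and max(window) < low_threshold

def should_power_up(load_per_min, high_threshold=0.80):
    """Reactive: bring a host back after just 5 minutes of high load."""
    window = load_per_min[-POWER_UP_WINDOW_MIN:]
    return mean(window) > high_threshold
```

The design point is the asymmetry itself: powering down needs a long run of quiet data, while powering back up reacts to a short burst, so performance always wins over power savings.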
The second session was one on performance tuning of the vCenter database itself, which I figured would be pretty interesting. It was interesting, but also frustrating, as the speaker didn’t really know much about SQL Server (the default platform for hosting the vCenter database). Some of the information presented about how the tables are laid out and what is stored in which table was pretty interesting, and I’ve now got a much better understanding of how the performance data gets loaded into the database and how the rollups are done. I also now know that I need to put together some scripts to jump start the process if it gets backed up, as well as put together a best practices document for DBAs (and VMware folks that don’t know SQL at all) so that they can get better performance on their vCenter databases.
If you need to find the historical performance data within your vCenter database, look into the tables which start with vpx_hist_stat. There are 4 of these tables: vpx_hist_stat1, vpx_hist_stat2, vpx_hist_stat3 and vpx_hist_stat4. The historical data is rolled up daily, weekly, monthly and annually into those four tables respectively. You’ll also want to look into the vpx_sample_time tables, of which there are also 4: vpx_sample_time1, vpx_sample_time2, vpx_sample_time3 and vpx_sample_time4.
Apparently vCenter 4.0 and below has a habit of causing deadlocks when loading data, especially in larger environments. The fixes that were provided are pretty much all hit or miss as to whether they will work, and his description of the cause of the problem was pretty vague. The gist of what I got was that the data loading is deadlocking with the code which handles the rollups and causing problems. Something which could be tried to fix this would be to enable snapshot isolation mode for the database. Personally I think this would have a better chance of fixing the problem than the crappy workaround which he provided (which I’m not listing here on purpose).
The workaround which VMware came up with for this problem, introduced in vCenter 4.0, can have its own problem in large environments: data goes missing for servers at random intervals. VMware’s idea was to create three staging tables and use each staging table for 5 minutes, then process the data from that staging table into the vpx_hist_stat and vpx_sample_time tables while moving on to the next staging table. However, if it takes too long to process the data from the first table and the third table has already been used, it is now time to move back to the first table, and data is lost because it can’t be written into the still-busy first table. VMware needs to do some major redesign of this entire process for the next release to come up with a better solution that won’t allow for data loss. There are plenty of ways to do it that won’t cause problems. Don’t you love it when developers that don’t know databases very well try to come up with screwy solutions to problems?
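A toy simulation of that three-staging-table rotation makes the failure mode easy to see. The processing model below is my own simplification of what was described in the session, not VMware's actual code:

```python
# Toy model of the three-staging-table rotation described above.
# One staging table is written per 5-minute slot; after its slot ends,
# processing that table keeps it "busy" for some number of slots.
# If the writer rotates back to a table that is still busy, that
# slot's data has nowhere to go and is lost.

def simulate_rotation(slots: int, process_time_slots: int) -> int:
    """Return how many 5-minute intervals of data are lost over `slots`
    rotations when processing a table takes `process_time_slots` slots."""
    busy_until = [0, 0, 0]   # slot index at which each staging table frees up
    lost = 0
    for slot in range(slots):
        table = slot % 3     # round-robin across the three staging tables
        if busy_until[table] > slot:
            lost += 1        # table still being processed -> data dropped
        else:
            # write this slot, then processing holds the table afterwards
            busy_until[table] = slot + 1 + process_time_slots
    return lost

# If processing finishes within a table's own slot, nothing is lost.
# Once processing takes longer than the ~10 minutes the other two tables
# buy you, losses start, which matches the random gaps people see.
```

This is exactly why the gaps show up only in large environments: the bigger the environment, the longer each staging table takes to process, and the more often the writer laps the processor.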
Based on what was talked about in this session there are a lot of SQL Scripts that need to be written to help people improve performance of their vCenter databases. Guess I’ve got a lot of script writing to do.
Today was day 3 of VMworld. All the sessions that I attended today were pretty much deeper dives into things which I covered earlier in the week. I went to these more in-depth sessions because the information will help me with my day-to-day deployments of VMware, as well as helping me learn more about the specific items within VMware that I need to look at to ensure that VMware is running smoothly day to day.
The big event of tonight was the attendee party with a performance by The Killers, who were opened by Recycled Percussion. It was a great concert, even though I’m not a mega fan of The Killers and hadn’t heard of Recycled Percussion before tonight. After the show and party, VMware had after parties at the hotel pools, which were a blast. I met and had some great conversations with some great guys from a variety of places and companies.
While today doesn’t make a great blog post, it was another amazing day at VMworld 2011 and I can’t wait for tomorrow.
So today was day 2 of VMworld 2011, and it was a great day at the conference. We had a great keynote with some demos which were pretty funny (I really hope that they were supposed to be funny). Granted, I was a little late to the keynote so I missed the first few minutes, but I overslept; damn it, breakfast is the most important meal of the day.
The first thing I saw was a project called Project Octopus. This allows your users to access the same files via Windows, Mac or Linux PCs, phones, tablets, etc. It also allows users to edit any files which they have access to on any device. This is done via HTML 5, so as long as the device supports HTML 5 (which most everything new does) you can access full Windows applications on the machine. In the demo, the user was sent an Excel file via IM, which they then opened on an iPad and were able to edit in a fully functional copy of Excel 2010. There was a small application installed on the iPad which connected to the server via the web browser and uploaded the file to the server (or opened the file from the server, not really sure here, but either way), then the user was able to edit the Excel sheet and save it back to the server.
The next product which we were shown was called VMware Go. Go is a software as a service offering where the user signs into the site and then, via the webpage, is able to scan an IP subnet looking for servers which are capable of running vSphere 5.0. The user can then select which Windows servers they would like to deploy vSphere 5.0 to, and vSphere 5.0 is deployed to those servers. I’m not sure what happens to the Windows OS and services which are already installed on the servers, so this could be very dangerous if pushed to the wrong server by accident.
A new product which I’m really excited about is aimed directly at the small / medium business (SMB) market and will allow you to take two servers with only local storage and configure them into a highly available vSphere 5.0 cluster. This new product is called the Virtual Storage Appliance (VSA). The VSA is a virtual appliance which is installed on all the hosts (it supports two- and three-node clusters only). When installed and configured, it takes the local storage and presents it to the cluster as shared storage. Redundancy is provided by software-based replication, with each VM set up to be replicated to another host in the cluster. This way the cluster can always survive a single node failure without losing the ability to run any guest on the cluster.
There are some big changes coming to vSphere Site Recovery Manager (SRM) 5.0, which is no longer called VMware vSphere SRM. One of the biggest is the ability to automatically fail back after a site has failed and been restored. In prior versions of SRM, failover was a one-way operation; in order to fail back to the first site, you would have to totally reconfigure SRM and then trigger a failover. With the new 5.0 version of SRM, you simply configure the failback as part of the policies, and when the second site comes back online SRM will fail back as configured.
Another cool thing you can do with SRM 5.0 now is the ability to DR your site to a cloud provider instead of to your own backup data center. This allows you to run your primary site on your hardware, but rent your DR systems from a cloud service provider that is certified as a SRM site. Currently there are only a couple of options, but as time goes on there will be more options available.
I went to a couple of sessions today, the most informative of which was about the new features of vSphere 5.0. VMware is upgrading the VMFS version from 3 to 5, but this time it is a non-disruptive upgrade, unlike the upgrade from VMFS 2 to 3. The new version of ESXi is much thinner than the prior 4.1 version, leaving more resources available for the guest machines.
vSphere officially supports only 32 hosts in a cluster; clusters with over 100 nodes have been tested, but still only 32 are supported. Something which will make a lot of Linux shops happy is that vCenter no longer requires Windows as the OS for the vCenter server. It can now be installed on a Linux OS (they didn’t specify which Linux flavor). There is an embedded database which supports up to 5 hosts and 50 VMs. For installs which are larger than this, you’ll need to install an instance of Oracle; currently only Oracle is supported, though eventually other databases will be. Another limitation of running vCenter on Linux is that you can’t run vCenter in linked mode. Linked mode is where you have a vCenter server at each site and they are linked so that you have redundancy at the vCenter level.
There is a new web based client which will be included with vSphere 5.0. This won’t be a fully featured UI, but it will support most of the features. The nice thing about this new web client is that it will work on Windows, Mac, and Linux. Eventually the web client will become the default client for vSphere and vCenter, but this isn’t the case yet.
The last change I want to talk about today is that vMotion now supports slower links. In vSphere 4.1 and below, using vMotion required a network with 5ms or lower latency. In vSphere 5.0 this limit is increased to 10ms, which allows you to vMotion across city-wide networks.
See you tomorrow for VMware Day 3.