So today was day 2 of VMworld 2011 and it was a great day at the conference. We had a great keynote with some demos which were pretty funny (I really hope they were supposed to be funny). Granted, I was a little late to the keynote so I missed the first few minutes, but I overslept. Damn it, breakfast is the most important meal of the day.
The first thing I saw was a project called Project Octopus. It allows your users to access the same files from Windows, Mac, or Linux PCs, phones, tablets, etc., and to edit any files they have access to on any device. This is done via HTML 5, so as long as the device supports HTML 5 (which most everything new does) you can access full Windows applications on the machine. In the demo the user was sent an Excel file via IM, which they opened on an iPad and edited in a fully functional copy of Excel 2010. A small application installed on the iPad connected to the server via the web browser, uploaded the file to the server (or opened the file from the server, I'm not really sure which, but either way), and the user was then able to edit the Excel sheet and save it back to the server.
The next product we were shown was called VMware Go. Go is a software-as-a-service offering: the user signs into the site and can then, via the web page, scan an IP subnet looking for servers which are capable of running vSphere 5.0. The user then selects which Windows servers they would like to deploy vSphere 5.0 to, and vSphere 5.0 is deployed to those servers. I'm not sure what happens to the Windows OS and services which are already installed on the servers, so this could be very dangerous if pushed to the wrong server by accident.
A new product which I'm really excited about is aimed directly at the small/medium business (SMB) market and will allow you to take two servers with only local storage and configure them into a highly available vSphere 5.0 cluster. This new product is called the Virtual Storage Appliance (VSA). The VSA is a virtual appliance which is installed on every host in the cluster (it supports two- and three-node clusters only). Once installed and configured, it takes the local storage and presents it to the cluster as shared storage. Redundancy is provided by software-based replication, with each VM replicated to another host in the cluster. That way the cluster can always survive a single node failure without losing the ability to run any guest on the cluster.
There are some big changes coming in vSphere Site Recovery Manager (SRM) 5.0, which is no longer called VMware vSphere SRM. One of the biggest is the ability to automatically fail back after a failed site has been restored. In prior versions of SRM, failover was a one-way operation; in order to fail back to the first site you had to totally reconfigure SRM and then trigger another failover. With the new 5.0 version you simply configure the failback as part of the policies, and when the second site comes back online SRM will fail back as configured.
Another cool thing you can do with SRM 5.0 is DR your site to a cloud provider instead of to your own backup data center. This allows you to run your primary site on your own hardware but rent your DR systems from a cloud service provider that is certified as an SRM site. Currently there are only a couple of options, but as time goes on more will become available.
I went to a couple of sessions today, the most informative of which was about the new features of vSphere 5.0. VMware is upgrading the VMFS version from 3 to 5, but this time it is a non-disruptive upgrade, unlike the upgrade from VMFS 2 to 3. The new version of ESXi is much thinner than the prior 4.1 version, leaving more resources available for the guest machines.
vSphere still only officially supports 32 hosts in a cluster; clusters with over 100 nodes have been tested, but only 32 are supported. Something which will make a lot of Linux shops happy is that vCenter no longer requires Windows as the OS for the vCenter server. It can now be installed on a Linux OS (they didn't specify which Linux flavor). There is an embedded database which supports up to 5 hosts and 50 VMs; for installs larger than this you'll need to install an instance of Oracle. Currently only Oracle is supported, though eventually other databases will be. Another limitation of running vCenter on Linux is that you can't run vCenter in linked mode. Linked mode is where you have a vCenter server at each site and they are linked so that you have redundancy at the vCenter level.
There is a new web-based client which will be included with vSphere 5.0. This won't be a fully featured UI, but it will support most of the features. The nice thing about this new web client is that it will work on Windows, Mac, and Linux. Eventually the web client will become the default client for vSphere and vCenter, but that isn't the case yet.
The last change I want to talk about today is that vMotion now supports slower links. In vSphere 4.1 and below, vMotion required a network with 5ms or lower latency. In vSphere 5.0 this limit is increased to 10ms, which allows you to vMotion over city-wide networks.
See you tomorrow for VMworld Day 3.
Today was day 1 of VMworld and I had a blast, even though I was only able to attend for part of the day. I flew into Vegas this morning instead of spending the night last night. I didn't hit any sessions today, but I did catch the keynote, which was given by Paul Maritz, the CEO of VMware.
The keynote was interesting, but didn’t provide a whole lot of new information. Paul officially announced that vSphere 5.0 was released along with VMware View 5.0.
vSphere 5.0 is the third major annual release of the vSphere product: 2009 had vSphere 4.0, 2010 had vSphere 4.1, and 2011 gives us vSphere 5.0. vSphere 5.0 has 200 new features (which weren't listed). VMware has put 1 million man-hours into building the new vSphere 5.0 platform and another 2 million man-hours into testing the new version.
A few new features were talked about, which basically boiled down to a few key points. The first is probably the most important: with vSphere 5.0 VMware expects that they will be able to run almost every production workload. Virtual machines running under vSphere 5.0 can now have up to 32 vCPUs and 1 TB of RAM each. VMware has also added some storage load balancing features, as well as automatic storage tiering, both of which I'm really hoping to learn more about as the week continues.
There were some interesting stats which Paul talked about as well. 19,000 attendees actually made it to the conference; over 20,000 people were registered, but some got stuck on the east coast thanks to the weather. Some additional stats: analysts currently estimate that worldwide about 50% of production workloads are running under a hypervisor today. This means that every 6 seconds a new VM is built (which is faster than people are being born). It is estimated that there are over 20 million virtual machines running under VMware's hypervisor platform; if these machines were put end to end they would be twice the length of the Great Wall of China. More machines are being moved from host to host via vMotion than there are airplanes in the sky.
Needless to say there is a lot of great information which I’m hoping to learn and share with you.
Tom LaRock is starting up a video series for Confio called Afternoon Ignite. He has asked me to be his first victim guest on the show. We will be talking about pretty much whatever comes to mind, which will probably involve performance tuning, VMs, PASS, SQL Excursions, and Bacon.
Feel free to join us via GoToMeeting at 11am Pacific / 2pm Eastern and check out the excitement.
So the other day I had to restore a SQL Server replication publisher. When I restored it I made sure to use the KEEP_REPLICATION option on the restore (also available in the SSMS UI) so replication should have come back online. However, when I restarted the log reader I got the following error message.
The log scan number (6367:10747:6) passed to log scan in database ‘%d’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup. (Source: MSSQLServer, Error number: 9003)
Needless to say this error looks pretty damn scary. In reality it isn't actually that bad. What this error is basically saying is that the LSN returned from the database is older than the one logged in the replication database. The best part is that the fix is pretty easy: simply run the stored procedure "sp_replrestart" in the published database.
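If you want to see the whole thing end to end, it looks something like this (the database name and backup path are placeholders for your own environment):

```sql
-- Restore the publication database, keeping the replication settings intact.
RESTORE DATABASE PubDB
    FROM DISK = N'D:\Backups\PubDB.bak'
    WITH KEEP_REPLICATION, RECOVERY;
GO

-- If the Log Reader Agent then fails with error 9003, run sp_replrestart
-- in the published database to move the replicated LSN forward so the
-- log reader can pick up where the restored log actually is.
USE PubDB;
GO
EXEC sp_replrestart;
```

Note that sp_replrestart takes no parameters; it just needs to be run in the context of the published database.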
As the SQL PASS summit is a big, confusing event for people attending the summit for the first time, I'm taking it upon myself to do something about this.
On Tuesday September 6th, 2011 at 10am Pacific Time (1pm Eastern, 5pm GMT) I will be putting on a webcast to give first timers (and people that have attended before) some critical information about Seattle and the summit that they should probably have before actually getting to Seattle for the summit.
No registration is needed; just sign into the Live Meeting when the webcast is supposed to start. You can also go to the meeting lobby that day and I should be able to approve you that way, but I think signing into the Live Meeting directly is the best bet. I have made a calendar invite that you can download and import into your calendar. The invite is in iCal format.
I will be recording the session and I'll make it available for viewing afterward, but attending live is your first chance to ask questions about the summit during the Q&A at the end of the session.
All I ask is that you pass the information about this session along to others who are attending the PASS summit, as they will hopefully get some useful information about Seattle and the PASS summit as well.
I look forward to you attending my session on September 6th, and I'll see you at the PASS summit.
When working with SQL Server in a cluster, the account rights on both nodes of the cluster need to be the same.
Recently I was working with a client's SQL Server cluster. The managed service provider had installed some Windows patches, causing the SQL cluster to fail over to the other node. No big deal; everything appeared to be working as normal.
After a couple of days we noticed something a little strange: one wait type was showing a LOT of wait time. This wait type was PREEMPTIVE_OS_GETPROCADDRESS, which means that SQL Server is waiting on something outside of the database engine to respond. When I looked into the spid which was doing the waiting I saw that it was running the extended stored procedure xp_delete_file. What this procedure does, in case you aren't aware, is remove old SQL Server backups from the hard drive of the server based on the parameters you specify.
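If you want to track down which session is stuck on this wait yourself, a quick sketch of a query against the DMVs looks something like this:

```sql
-- Find sessions currently waiting on PREEMPTIVE_OS_GETPROCADDRESS,
-- along with the statement each one is running.
SELECT r.session_id,
       r.wait_type,
       r.wait_time,              -- milliseconds spent in the current wait
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = N'PREEMPTIVE_OS_GETPROCADDRESS';
```

In our case the running_sql column is what pointed the finger at xp_delete_file.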
The first thing I did was look at the permissions on the files, and they appeared to be set up correctly: the local admin group had full control, users had no rights, and the owner had full control. The SQL Server account should be a member of the administrators group on these servers (I didn't set the machine up, so don't get me started on minimum permissions). However, when I looked in the admin group for this node of the cluster, the SQL account wasn't a member. I jumped onto the other node and it was in that machine's admin group.
The reason this was a problem comes down to how NTFS handles permissions on new files when the creating user owns the folder and has full control rights. The folder was owned by the local admin group, and the SQL Server account was a member of the local admin group when the files were created, so the files inherited the folder's rights: admins had full control, users had no rights, and the owner had full control. The owner of the folder, and therefore of the files, was BUILTIN\Administrators. So when the SQL account came through on the second machine looking to delete files, it didn't have the rights, because on that node it wasn't in the BUILTIN\Administrators group any more.
Fortunately fixing this problem was pretty easy. I simply put the SQL account in the local admin group on the misconfigured node and scheduled a short outage to restart SQL on that node so it could pick up the new permissions. The long waits then went away and the older backups were deleted as they should be.
If you'd like to read more about why you don't normally want SQL Server running with admin rights, and what the minimum required rights are, might I recommend you check out my security book Securing SQL Server (paperback | kindle | website), available on Amazon.com and other online retailers.
I'm very pleased to tell you all about my new storage blog on sqlmag.com titled "Troubleshooting SQL Server Storage Problems". On this new blog (there's just the one post for now, but that will change shortly) I'll be talking all about SQL Server and storage and how the two should work together.
My sqlmag.com blog is all about helping you solve your storage problems, so the blog will work best with your questions, issues, and problems. Please post your questions on the blog, post them here, or email them to me and I'll get them answered (I will only include your name and/or company if you ask me to). That way not only do we get your questions answered and your problems fixed, but we help get other people's problems solved in the process.
Last night the NETDA User Group in Redmond was nice enough to ask me to present to their group while I was up here this week. It was great talking to the group, and it was great giving a presentation at the Microsoft corporate office.
As promised, here's the slide deck and sample code which was shown. Everything is included except for the LINQ to SQL code that I captured from my production environment.
I had a great time talking to the user group, hopefully I’ll be able to present there again in the future.
The 2011 conference season is starting to come to a close. There are only a few large conferences left such as VM World, SQL PASS, and Dev Connections. Many people are currently working with their bosses to get sent to these conferences.
However, while going through these conversations, here's something else to keep in mind: odds are your company has just started going through the 2012 budget process. Now is the time to get some requests in to attend conferences next year. When talking to your bosses about conferences for 2012, don't just request one conference; request them all. The way the budget process works is that your boss starts with a big number and slowly hacks it down to a number that his/her boss can approve. If you request a single conference with a budget of, say, $4,000 for ticket, hotel, and flight, and it's time to reduce that line item, there's only one place to go: $0. Boom, no more conferences for the year. However, if you have a team of 3 people who all want to attend conferences, request a budget line item of $96,000 (3 people * 8 conferences * $4,000 each). When asked for a list of the conferences, be prepared to provide one.
- SQL PASS
- VM World
- EMC World
- Dev Connections (There’s a couple of those)
- Build (This is similar to what PDC used to be)
- Tech Ed
- Oracle Open World
There are also other events that you can attend which may require some budget, some a little less, some a little more depending on the event.
- SQL PASS Rally
- SQL Excursions
- SQL Cruise
- Tech Cruise
- Oracle Cruise
- SQLskills Immersion Events
I'm sure there are plenty of other events which you could find and attend. If you start with a nice high budget, as you go through the process you'll probably end up with enough left for a couple of people to hit a conference or two throughout the year.
Good luck getting through the budget process, and hopefully I’ll see you at some of these conferences.
Have you been using SQL Server "Denali" and want to get your voice heard? Now's your chance. TechTarget is looking for SQL Server "Denali" users (either CTP 1 or CTP 3) to interview for an article they are working on about SQL Server "Denali".
To get your voice heard contact Jason Sparapani (jsparapani AT techtarget DOT com) and he’ll take care of you.