A while back we at Denny Cherry & Associates Consulting became Microsoft Gold partners for the Microsoft Cloud Platform. Well, we’ve had so much fun being a Gold Partner for Cloud Platform that we decided that we needed to become a Gold Partner for Data Platform as well.
We were able to achieve this status with Microsoft by having a great dedicated team working at DC&AC and by having great customers that have a ton of faith in us as their SQL Server consultants.
Many thanks to our customers who have helped us get to this point. We plan on continuing to do great work for you in the future.
SAN snapshots, and I don’t care who your vendor is, by definition depend on the production LUN. Well, that’s the production data.
That’s it. That’s all I’ve got. If that production LUN fails for some reason, or becomes corrupt (which sort of happens a lot) then the snapshot is also corrupt. And if the snapshot is corrupt, then your backup is corrupt. Then it’s game over.
Second rule of backups: Backups must be moved to another device.
With SAN snapshots the snapshot lives on the same device as the production data. If the production array fails (which happens), or gets decommissioned by accident (it’s happened), or tips over because the raised floor collapsed (it’s happened), or someone pulls the wrong disk from the array (it’s happened), or someone is showing off how good the RAID protection in the array is and pulls the wrong two disks (it’s happened), or two disks in the same RAID set fail at the same time (or close enough to each other that the volume rebuild doesn’t finish between them failing) (yep, that’s happened as well), and so on. If any of these happen to you, it’s game over. You’ve just lost the production system, and the backups.
I’ve seen two of those happen in my career. The others that I’ve listed are all things which I’ve heard about happening at sites. Anything can happen. If it can happen it will (see item above about the GOD DAMN RAISED FLOOR collapsing under the array), so we hope for the best, but we plan for the worst.
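The fix here is boring and old fashioned: take native SQL Server backups and write them somewhere that isn’t the production array. A minimal sketch (the backup server and share names here are hypothetical; point it at whatever separate device you actually have):

```sql
-- Back up to a UNC path on a separate backup server, not to the production array.
-- \\backupserver\sqlbackups is a hypothetical share name.
BACKUP DATABASE [ProductionDB]
TO DISK = N'\\backupserver\sqlbackups\ProductionDB_Full.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;

-- Verify that the backup file is readable without actually restoring it.
RESTORE VERIFYONLY
FROM DISK = N'\\backupserver\sqlbackups\ProductionDB_Full.bak'
WITH CHECKSUM;
```

Even if you keep a local copy for fast restores, a second copy on separate hardware is what keeps an array failure from taking the backups down with the data.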
Third rule of backups: OLTP systems need to be able to be restored to any point in time.
See my entire post on the SAN vendor’s version of “point in time” vs. the DBA’s version of “point in time”.
If I can’t restore the database to whatever point in time I need to, and my SLA with the business says that I need to, then it’s game over.
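This is exactly what native full plus transaction log backups give you. A hedged sketch of a point-in-time restore (the database name, file names, and timestamp are all hypothetical):

```sql
-- Take regular log backups during the day; these are what make
-- point-in-time restore possible.
BACKUP LOG [ProductionDB]
TO DISK = N'\\backupserver\sqlbackups\ProductionDB_Log.trn';

-- Say a bad transaction ran at 3:00 PM. Restore the last full backup
-- without recovering, then roll the log forward to 2:59:00 PM.
RESTORE DATABASE [ProductionDB]
FROM DISK = N'\\backupserver\sqlbackups\ProductionDB_Full.bak'
WITH NORECOVERY, REPLACE;

RESTORE LOG [ProductionDB]
FROM DISK = N'\\backupserver\sqlbackups\ProductionDB_Log.trn'
WITH STOPAT = N'2016-03-08 14:59:00', RECOVERY;
```

A snapshot can only get you back to whenever the snapshot was taken; STOPAT gets you back to the second you actually need.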
Fourth rule of backups: Whoever’s butt is on the line when the backups can’t be restored gets to decide how the data is backed up.
If you’re the systems team and you’ve sold management on this great snapshot based backup solution so that the DBAs don’t need to worry about it, guess what conversation I’m having with management? It’s going to be the “I’m no longer responsible for data being restored in the event of a failure” conversation. If you handle the backups and restores, then you are responsible for doing them, and it’s your butt on the line when your process isn’t up to the job. I don’t want to hear about it being my database all of a sudden.
Just make sure that you keep in mind that when you can’t restore the database to the correct point in time, it’s probably game over for the company. You just lost a day’s worth of production data? Awesome. How are you planning on getting that back into the system? This isn’t a file server or the home directory server where everything that was just lost can be easily replaced or rebuilt. This is the system of record that is used to repopulate all those other systems, and if you break rules number one and two above you’ve just lost all the company’s data. Odds are we just lost all our jobs, as did everyone else at the company. So why don’t we leave the database backups to the database professionals?
Now I don’t care what magic the SAN vendor has told you they have in their array. It isn’t as good as transaction log backups. There’s a reason that we’ve been doing them since long before your SAN vendor was formed, and there’s a reason that we’ll be doing them long after they go out of business.
If you are going to break any of these rules, be prepared for the bad things that happen afterwards. Breaking most of these rules will eventually lead to what I like to call an “RGE”. An RGE is a Resume Generating Event, because when these things happen people get fired (or just laid off, if they are lucky).
So don’t cause an RGE for yourself or anyone else, and use normal SQL backups.
There are some systems out there which have a lot of RAM, but only a few processors, and these machines may need a non-standard NUMA configuration in order to be properly set up. For this example, let’s assume that we have a physical server with 512 Gigs of RAM and two physical NUMA nodes (and two CPU sockets). We have a VM running in that machine which has a low CPU requirement, but a large working set. Because of this we have 4 cores and 360 Gigs of RAM presented to the VM.
Now the default configuration for this would be to have a single NUMA node. However, this isn’t going to be the best configuration for the server, because all the memory which is allocated to the VM can’t fit within a single physical NUMA node. Because of this we need to tell the hypervisor that we want to split the four cores into two separate NUMA nodes, which will allow the hypervisor to split the memory across the two physical NUMA nodes evenly, presenting two NUMA nodes to the guest with 180 Gigs of RAM from each NUMA node. (How you do this depends on the hypervisor that you’re using.)
Once this is done SQL will now know which NUMA node the memory and CPUs are assigned to, and it will correctly place work onto the correct CPU based on the NUMA node which contains the data which the work (query) needs to access.
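You can check what SQL Server actually detected after making the change; the sys.dm_os_nodes DMV returns one row per NUMA node that SQLOS found at startup (the column list here is just illustrative):

```sql
-- One row per NUMA node that SQLOS detected at startup.
-- With the configuration above you'd expect two ONLINE nodes
-- (plus the hidden DAC node).
SELECT node_id,
       node_state_desc,
       memory_node_id,
       online_scheduler_count
FROM sys.dm_os_nodes;
```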
Now, not every machine with a small number of cores should be configured this way. This only becomes an issue when there is less physical RAM per NUMA node than we are presenting to the guest OS.
How do we know how much RAM there is per NUMA node? You’ll probably need to ask your server team. The general rule is RAM/CPU Sockets. In the example above we have 512 Gigs of RAM with two CPU Sockets. In modern servers each CPU socket is usually its own NUMA node, however this may not be the case in the servers you are working with. And the CPU sockets only count if there is a processor physically in the socket.
Hopefully this helps clear up some things on these servers with these odd configurations.
There is still time to register for the fantastic Internals for Query Tuning class with Kalen Delaney. This three day class will teach you what you need to know in order to better understand how SQL Server uses indexes, and how SQL Server uses those indexes to process queries.
After taking this class you’ll be able to better tune your SQL Servers, reducing the need to upgrade the server hardware, allowing you to extend the life of your servers, and reducing the need to purchase additional SQL Server licenses. This can lead to savings of tens of thousands of dollars per year.
All this for a small cost of $2225 for the class. For those looking for a great investment in their career and their business, here it is.
We look forward to seeing you at the class April 4th-6th, 2016 in Orange County, CA. So go get registered today.
In case you missed the announcement today, Microsoft SQL Server is going to be running on Linux soonish (mid 2017). This is some pretty big and interesting news.
The first thing to keep in mind is that this isn’t wholly unprecedented. This will be the second time that Microsoft SQL Server has run on an OS other than Windows. Keep in mind that originally Microsoft SQL Server ran on OS/2 (version 1.0 and 1.1), it didn’t run on Windows until 4.21, and it wasn’t built specifically for Windows until SQL Server 6.0 shipped.
Personally I think this is a really interesting path for the product to take. Obviously Microsoft isn’t going to be dropping Windows support or anything. This is simply a new avenue for Microsoft to explore in order to gain access to the open source application developer. SQL Server running on Linux is actually something that I’ve thought about in the past, and at the same time both wondered why they didn’t do it, and knew exactly why they didn’t do it.
When I first heard that this was something that they were exploring I was hesitant, but I could see the possibilities for the product as well as for the development communities that currently rely on MySQL/PostgreSQL/etc., since they can install those on Linux/Mac OS/etc. as a package when doing their software development. Developers working on software which runs on a single machine, where that machine isn’t running Windows, make up a large market. These folks often just need an RDBMS behind their application, and they want something which they can simply install on their machine and get up and running quickly and easily so that they can deploy their software. Currently packages like MySQL and PostgreSQL make this very easy for them to do. But wouldn’t it be nice if these same software developers could use an RDBMS which is much more fully featured, with things like ColumnStore and Hekaton (In-Memory OLTP), and which comes with an enterprise class support team at the vendor (Microsoft)? The developers could even leverage this level of support when deploying their application to their customers.
So I was on a plane, and a little secret: we had hints that this was coming. I’ve been perusing Twitter and seeing a lot of people thinking this is a big shot across the bow at Oracle, and it really isn’t. I’m not going to discuss Oracle’s recent financials (hint: they haven’t been selling a lot of net new RDBMS licenses). Like Denny mentioned above, a lot of development work is done on MacBooks (I’m writing this post on one right now), and a lot of devs are more used to Linux than Windows. If I were writing this post in 2002, I would think this move was targeted at large enterprises because UNIX > Windows. Frankly, since 2008 Windows has mostly been on par with Linux and is better in several areas. This is all about giving developers a fully robust RDBMS platform with good support. Will it be a full featured enterprise RDBMS tomorrow? Probably not, but the core engine will be there, and features will come over time.
My favorite part of SQL Server on Linux? My Bash scripts for automation work again.
Now there are clearly going to be a lot of questions that need to be answered. But this is going to be a very exciting time for those in the SQL Server community who are willing to learn about a new OS, embrace the command line (there’s almost never a GUI for Linux Servers), and not be afraid of a little change in the ecosystem.
Denny and Joey
Time is running out for your SQL Server 2005 instances. In just a few short weeks SQL Server 2005 support from Microsoft will be ending. This means that it’s time to get those servers upgraded to something newer.
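If you aren’t sure which of your instances are still on 2005, SERVERPROPERTY will tell you; a ProductVersion that starts with 9. is SQL Server 2005 (a quick sketch you can run against each instance):

```sql
-- A ProductVersion beginning with 9. means SQL Server 2005.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,  -- RTM / SP level
       SERVERPROPERTY('Edition')        AS Edition;
```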
If you are working on a PCI compliant system, you’ll need to get this fixed before your next PCI audit. One of the requirements of PCI compliance is that the system be supported by the vendor. Because support from Microsoft has ended for these older systems, they are no longer qualified to host systems which are PCI compliant. Because of this you’ll want to start planning your upgrade project now.
If you need assistance with your upgrade project, reach out to our team. We’d love to talk to you about your upgrade projects and how we can help get your systems onto a supported version of Microsoft SQL Server.
Well, the good news is that there’s no need to develop anything. SQL Server Management Studio can already do this for you. Simply open SSMS, then click on Tools > Options. In the Options window open Query Results > SQL Server and check the check box next to “Play the Windows default beep when a query batch completes”, then click OK.
The next time you run a query it’ll beep when the query is done (you might need to close all your query windows or restart SSMS first, as you usually do with this sort of change in SSMS).
Personally I’ve actually used this as an alarm clock when doing long running overnight deployments so that I’d get woken up when the script was done so I could start the next step. It’s also handy when you want to leave the room / your desk while queries are running.
Don’t forget to turn up the volume when leaving and turn it back down when you are done.
If you were wondering how long this feature has been in SSMS, I was first using it in Enterprise Manager, in SQL 2000.
Have I got a deal for you. I’ll be presenting my precon SQL Performance Tuning and Optimization on Friday March 18th, 2016. For just $249 you can attend this full day training event and learn how to use everything that you get for free with SQL Server to help you tune your system to get more performance from the SQL Servers that you have today.
This is a great session, at a great price. And I hope that I’ll see you there.
So go get registered before time runs out.
Have you been wanting to move some of your services up to Azure, but you don’t have the budget to move them up to the cloud? Microsoft has an excellent deal for you. Microsoft will pay for (at least part of) the consulting time needed to get you migrated into Azure. These funds can be used to help you plan your migration so that all of the “i”s are dotted and the “t”s are crossed before you start to implement your Azure migration.
Planning a migration from on-premises to Azure is key to ensuring that you have a successful migration. An unsuccessful migration is costly and painful, and is something that we want to avoid at all costs.
Getting Microsoft to pay for planning your Azure migration is surprisingly easy; there is only one major requirement: that you currently have Software Assurance on your systems somewhere. As a customer with Software Assurance you have various benefits, one of which is consulting hours to assist you with migrations to the Microsoft Azure platform.
Activating this benefit can even be done without having to talk to anyone from Microsoft. You just need to log into your Microsoft Licensing Service Center and activate the benefit from there. We can even help walk you through the benefit setup and getting your vouchers.
The good news here is that Denny Cherry & Associates Consulting is set up to accept those vouchers, so you can have Microsoft pay for your migrations into Azure.
If you have been thinking about moving into Azure and you are looking for some help getting that done, contact us. We’ll walk you through the process of getting your vouchers and use those to help you have a successful migration from your current environment to the Microsoft Azure Cloud Platform.
I’m thrilled to report that Denny Cherry & Associates Consulting now has not 1, but 2 VMware vExperts in our ranks. No, we didn’t go off and hire another consultant. VMware has decided that this year they will recognize both myself (Denny Cherry) and Joey D’Antoni for all the community work that we’ve done with virtualization by awarding us the VMware vExpert award for 2016.
I know that I can safely speak for Joey when I say that we are both honored to receive this award from VMware. While there are a decent number of vExperts each year, the number of vExperts who are SQL Server professionals is very small. Looking through that list I can see only a couple of people besides myself who have deep knowledge of SQL Server, so it’s pretty exciting that we make up somewhere around half of the SQL Server expertise within the VMware vExpert program.
Thank you to VMware for recognizing myself and Joey. And thank you to the members of the community who attend our sessions and read the content we post about VMware and SQL Server.