When working in the Azure portal, it can be tough to find status information about operations performed against an object that is related to another object. Before I dive into the how, let me demystify what I’m talking about a little more, as that first statement is a little vague. In my example, I wanted to see the status of an operation against an Azure Internal Load Balancer which I was deleting using the Azure portal.
There is no section within the Azure portal where you can view Internal Load Balancers directly; you can only see them from within a Virtual Machine. But if you are deleting one from the portal, you might not be able to see the Internal Load Balancer anymore.
So to see what’s happening, you need to know where to find this related information in the Azure portal.
The trick is to look at the events recorded against the resource group within the Azure portal. The nice thing about these events is that it doesn’t matter whether they were triggered through the Azure portal or Azure PowerShell.
To get to these events, you can either click on Resource Groups from the main Azure panel (you may need to click browse if it isn’t shown as one of your favorites) like you see below.
Or you can go through another object within the Resource Group, such as a Virtual Machine. Personally, I usually end up clicking through from a VM, because I’m usually working on a load balancer, which is done from the settings of the VM, so I already have that panel up. You’ll find the Resource Group at the top of the VM’s main panel, just under the settings button, as shown below.
Either way you get there, you’ll end up on the properties of the Resource Group. With that panel open, scroll down on the panel and you should see a monitoring tile that looks something like this.
While that is a very pretty little tile, it doesn’t give a whole lot of detail. But you can click on the tile to get a lot more useful information. When you click on the tile, the new blade that opens shows the same basic graph, but bigger. If you scroll down in that blade you’ll see details about those events, as shown below.
If you click on the text in the Operation column, another blade will open which gives you the details of that operation.
In the details blade you’ll see all the various steps which were required for this operation to happen. If we click on one of those details, yet another blade opens. If we scroll to the bottom of that blade we’ll see some useful text which looks mostly like English.
Now in my case it’s a pretty easy-to-read error message. The Internal Load Balancer couldn’t be created because one already existed in the resource group, and you can only have one ILB per resource group. Sometimes the error text isn’t displayed at all, or it isn’t the clearest information in the world. It all depends on what you are trying to do and why it failed.
If you run PowerShell commands that throw a generic error message, you’ll probably get more detail here than you were able to get in PowerShell. How much more depends on where in the Azure stack the error occurred, and whether that component was configured to bubble the actual error message up or just log it and throw a generic error to the calling application (your PowerShell window, in this case).
In any case, this should help you find the information that you are looking for in Azure when things don’t quite go according to plan.
The short answer is, you don’t.
The entire point of Always Encrypted is that the DBA can’t view or modify the data using SQL Server Management Studio. The only way to view and change the data is through the front-end application that created it. This means that if someone submits a ticket to your team to change some data, you won’t be able to. You’ll have to refer that ticket back to someone in the business and have them make the change through the application, just like a normal user would, because you can’t touch the data.
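To make that concrete, here’s a minimal sketch of what the DBA runs into. The table, column, and key names here are hypothetical, and this assumes a column encryption key (CEK_Auto1) has already been provisioned in the database:

```sql
-- Hypothetical Always Encrypted column; CEK_Auto1 is an assumed
-- column encryption key that would already exist in the database.
CREATE TABLE dbo.Patients
(
    PatientID int IDENTITY(1,1) PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
        NOT NULL
);

-- From SSMS, without a connection that has Column Encryption
-- Setting=Enabled AND access to the column master key, the DBA
-- only sees varbinary ciphertext, not the actual SSN values.
SELECT SSN FROM dbo.Patients;
```

Decryption happens in the client driver, not in the engine, which is exactly why server-side access alone isn’t enough to read or change the data.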
Recently there has been some talk online comparing how SQL Server processes data type conversions to how Oracle handles them. I figured that I would clear up some of the confusion that’s out there.
When concatenating a number (like an INT) to a string (like a VARCHAR) you get an error message in SQL Server, but not in Oracle. The reason for this difference is the data type precedence rules of the two platforms. In SQL Server the database engine tries to convert the VARCHAR value to an INT value (in this example) because INT is higher in the data type precedence order. Oracle, on the other hand, does not throw an error when concatenating a number (NUMBER) to a string (like VARCHAR2) because it converts the number to a string, then concatenates the string values.
If you create a table with a numeric column and use a string value in the WHERE clause, both platforms will throw an error because the data types don’t match.
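A quick sketch shows the difference. In SQL Server the concatenation fails because of data type precedence, while the equivalent Oracle statement succeeds because the number is converted to a string first:

```sql
-- SQL Server: INT outranks VARCHAR in data type precedence, so the
-- engine tries to convert the string to a number, and fails:
SELECT 'Order number: ' + 42;
-- Msg 245: Conversion failed when converting the varchar value
-- 'Order number: ' to data type int.

-- Explicitly converting the number to a string works fine:
SELECT 'Order number: ' + CAST(42 AS varchar(10));

-- Oracle: the || operator converts the number to a string before
-- concatenating, so this succeeds:
-- SELECT 'Order number: ' || 42 FROM dual;
```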
I’ve also seen questions about why SQL Server has so many numeric data types when Oracle only has one. Having the different data types in SQL Server helps to optimize the storage of numbers, and makes it easier to predict how much storage your table will need (I’m assuming no compression in either case). Oracle’s single NUMBER type is variable length, so the storage required depends on the number of significant digits in the value; values which are mostly zeros (such as 100000 or 1000000000) only require a couple of bytes of storage. In SQL Server, the space needed depends on the declared data type, with TINYINT taking 1 byte, SMALLINT taking 2 bytes, etc.
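You can see the fixed sizes of the SQL Server integer types for yourself with DATALENGTH; the same value takes a different number of bytes depending only on the declared type:

```sql
-- Each integer type in SQL Server has a fixed storage size,
-- regardless of the value stored in it.
SELECT DATALENGTH(CAST(100 AS tinyint))  AS tinyint_bytes,   -- 1
       DATALENGTH(CAST(100 AS smallint)) AS smallint_bytes,  -- 2
       DATALENGTH(CAST(100 AS int))      AS int_bytes,       -- 4
       DATALENGTH(CAST(100 AS bigint))   AS bigint_bytes;    -- 8

-- Oracle's equivalent check uses VSIZE, where the size varies
-- with the significant digits in the value:
-- SELECT VSIZE(100000), VSIZE(123456) FROM dual;
```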
With data compression the storage changes for both platforms, allowing numbers to be stored using less space. However, data compression in SQL Server requires Enterprise Edition, and Advanced Compression in Oracle requires Enterprise Edition plus purchase of the Advanced Compression option.
Is your user group’s current website hosting not giving you the features that you need?
Is your user group currently paying too much (or anything) for your website?
Would your user group like to be hosted for free, on a WordPress platform where you can have basically any theme and any plugin available to you?
Have I got a deal for you.
Several people that I know run user groups, and they haven’t been happy with their hosting solutions for one reason or another. So I decided to do something about it. I’ve put together the website SQLUGS.com (SQL User Groups), which will happily let you run your own website as something.sqlugs.com for free (we’ll be happy to host other user groups as well, if you’d like to be hosted here). The platform is a WordPress-powered solution with each subdomain as a network website, so you get all the power of the MySQL cluster which we have already built and configured, and the web tier runs on an Azure Web App so it’ll scale beautifully as load comes and goes throughout the day.
Currently we’ve got a few plugins and themes configured. If you want different themes and plugins, just let us know via the support page and we’ll get them added to the site (it’s a requirement of WordPress that only the site owner installs plugins and themes, as it’s a shared codebase). We’ll be happy to install anything on the environment as long as it doesn’t adversely impact the performance of the rest of the network. As new plugins are installed, you’ll have control over which ones you enable on your site. We have a few turned on which are mandatory; these are mostly security or infrastructure related (backups, forcing SSL when you log in, etc.).
If you would like to host your user group’s website in our network, just fill out the request form and we’ll get you set up as quickly as we can (we don’t have auto signup enabled because of spammers, scammers, etc.).
Does your user group currently have its own domain name that you want to use? No problem, we support that as well; it just takes a little more work to set up on our side. If you want SSL for your domain we can do that too, but we’ll require that you provide the SSL certificate (we provide one for your something.sqlugs.com site). As long as there’s no charge for us to host it in Azure, there’s no charge to the user group (based on last week’s pricing changes, SSL certs should be free unless the pricing changes again).
We hope that you’ll take us up on our free hosting for your user group’s website.
As of today (April 12, 2016) Microsoft no longer supports SQL Server 2005. If something goes wrong with your SQL 2005 database server at this point, there’s not going to be anyone at Microsoft who can help you.
However, we will be happy to help you. Do we love working on 10+ year old versions? No, not really. But we are realists. You have SQL 2005 systems which haven’t been upgraded because you haven’t had the time to upgrade them, or because your vendor application doesn’t support anything newer than SQL 2005.
These systems still need performance tuning.
These systems still need upgrading.
And unlike a lot of other consulting companies out there, we will be happy to help you performance tune these systems, and to help you upgrade them from SQL 2005 to newer versions. Just because Microsoft doesn’t support the older versions anymore doesn’t mean that your older systems have magically been upgraded or are no longer needed by your enterprise.
If you need someone to do some performance tuning on your SQL 2005 server, or you need some help upgrading your SQL 2005 systems to a newer and Microsoft supported version of SQL Server, drop us a note.
A while back we at Denny Cherry & Associates Consulting became Microsoft Gold partners for the Microsoft Cloud Platform. Well, we’ve had so much fun being a Gold Partner for Cloud Platform that we decided that we needed to become a Gold Partner for Data Platform as well.
We were able to achieve this status with Microsoft by having a great dedicated team working at DC&AC and by having great customers that have a ton of faith in us as their SQL Server consultants.
Many thanks to our customers who have helped us get to this point. We plan on continuing to do great work for you in the future.
SAN snapshots, and I don’t care who your vendor is, by definition depend on the production LUN. Well, that’s the production data.
That’s it. That’s all I’ve got. If that production LUN fails for some reason, or becomes corrupt (which sort of happens a lot) then the snapshot is also corrupt. And if the snapshot is corrupt, then your backup is corrupt. Then it’s game over.
Second rule of backups: Backups must be moved to another device.
With SAN snapshots, the snapshot lives on the same device as the production data. If the production array fails (which happens), or gets decommissioned by accident (it’s happened), or tips over because the raised floor collapsed (it’s happened), or someone pulls the wrong disk from the array (it’s happened), or someone is showing off how good the RAID protection in the array is and pulls the wrong two disks (it’s happened), or two disks in the same RAID set fail at the same time (or close enough to each other that the volume rebuild doesn’t finish between the failures) (yep, that’s happened as well), etc. If any of these happen to you, it’s game over. You’ve just lost the production system, and the backups.
I’ve seen two of those happen in my career. The others that I’ve listed are all things which I’ve heard about happening at sites. Anything can happen. If it can happen it will (see item above about the GOD DAMN RAISED FLOOR collapsing under the array), so we hope for the best, but we plan for the worst.
Third rule of backups: OLTP systems need to be able to be restored to any point in time.
See my entire post on the SAN vendor’s version of “point in time” vs. the DBA’s version of “point in time”.
If I can’t restore the database to whatever point in time I need to, and my SLA with the business says that I need to, then it’s game over.
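Native SQL Server backups give you that point-in-time ability through transaction log backups and the STOPAT option. A sketch, with hypothetical database and file names:

```sql
-- Ongoing: full backups plus frequent transaction log backups.
BACKUP DATABASE Sales TO DISK = N'D:\Backup\Sales_Full.bak';
BACKUP LOG Sales TO DISK = N'D:\Backup\Sales_Log1.trn';

-- Recovery: restore the full backup without recovering, then roll
-- the log forward and stop just before the bad transaction ran.
RESTORE DATABASE Sales FROM DISK = N'D:\Backup\Sales_Full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG Sales FROM DISK = N'D:\Backup\Sales_Log1.trn'
    WITH STOPAT = '2016-04-12 14:35:00', RECOVERY;
```

An array-level snapshot taken every hour can only put you back at the top of the hour; the log chain lets you stop at any second in between, which is what the SLA usually actually requires.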
Fourth rule of backups: Whoever’s butt is on the line when the backups can’t be restored gets to decide how the data is backed up.
If you’re the systems team and you’ve sold management on this great snapshot-based backup solution so that the DBAs don’t need to worry about it, guess what conversation I’m having with management? It’s going to be the “I’m no longer responsible for data being restored in the event of a failure” conversation. If you handle the backups and restores, then you are responsible for doing them, and it’s your butt on the line when your process isn’t up to the job. I don’t want to hear about it being my database all of a sudden.
Just make sure that you keep in mind that when you can’t restore the database to the correct point in time, it’s probably game over for the company. You just lost a day’s worth of production data? Awesome. How are you planning on getting that back into the system? This isn’t a file server or the home directory server where everything that was just lost can be easily replaced or rebuilt. This is the system of record that is used to repopulate all those other systems, and if you break rules number one and two above you’ve just lost all the company’s data. Odds are we just lost all our jobs, as did everyone else at the company. So why don’t we leave the database backups to the database professionals?
Now I don’t care what magic the SAN vendor has told you they have in their array. It isn’t as good as transaction log backups. There’s a reason that we’ve been doing them since long before your SAN vendor was formed, and there’s a reason that we’ll be doing them long after they go out of business.
If you are going to break any of these rules, be prepared for the bad things that happen afterwards. Breaking most of these rules will eventually lead to what I like to call an “RGE”. An RGE is a Resume Generating Event, because when these things happen people get fired (or just laid off, if they are lucky).
So don’t cause an RGE for yourself or anyone else, and use normal SQL backups.
There are some systems out there which have a lot of RAM but only a few processors, and these machines may need a non-standard NUMA configuration in order to be properly set up. For this example, let’s assume that we have a physical server with 512 Gigs of RAM and two physical NUMA nodes (and two CPU sockets). We have a VM running on that machine which has a low CPU requirement but a large working set. Because of this, we have 4 cores and 360 Gigs of RAM presented to the VM.
Now the default configuration for this would be a single NUMA node. However, this isn’t going to be the best configuration for the server, because all the memory which is allocated to the VM can’t fit within a single physical NUMA node. Because of this, we need to tell the hypervisor to split the four cores into two separate NUMA nodes, which allows the hypervisor to split the memory across the two physical NUMA nodes evenly, presenting two NUMA nodes to the guest with 180 Gigs of RAM in each. (How you do this depends on the hypervisor that you’re using.)
Once this is done SQL will now know which NUMA node the memory and CPUs are assigned to, and it will correctly place work onto the correct CPU based on the NUMA node which contains the data which the work (query) needs to access.
Now, not every machine with a small number of cores should be configured this way. This only becomes an issue when there is less physical RAM per NUMA node than we are presenting to the guest OS.
How do we know how much RAM there is per NUMA node? You’ll probably need to ask your server team. The general rule is RAM divided by CPU sockets. In the example above we have 512 Gigs of RAM and two CPU sockets. In modern servers each CPU socket is usually its own NUMA node, although this may not be the case in the servers you are working with. And CPU sockets only count if there is a processor physically in the socket.
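From inside the guest you can check what NUMA layout SQL Server actually detected, which is a quick way to confirm the hypervisor change took effect:

```sql
-- How many NUMA nodes does SQL Server see, and how are the
-- schedulers (CPUs) spread across them? The DAC has its own
-- hidden node, which we filter out here.
SELECT node_id,
       node_state_desc,
       online_scheduler_count,
       memory_node_id
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';
```

In the example above you would expect two rows back, each with two online schedulers, rather than one node holding all four.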
Hopefully this helps clear up some things on these servers with these odd configurations.
There is still time to register for the fantastic Internals for Query Tuning class with Kalen Delaney. This fantastic three-day class will teach you what you need to know in order to better understand how SQL Server uses indexes to process queries.
After taking this class you’ll be able to better tune your SQL Servers, reducing the need to upgrade the server hardware, allowing you to extend the life of your servers and reduce the need to purchase additional SQL Server licenses. This can lead to savings of tens of thousands of dollars per year.
All this for the small cost of $2225. For those looking for a great investment in their career and their business, here it is.
We look forward to seeing you at the class April 4th-6th, 2016 in Orange County, CA. So go get registered today.
In case you missed the announcement today, Microsoft SQL Server is going to be running on Linux soonish (mid 2017). This is some pretty big and interesting news.
The first thing to keep in mind is that this isn’t wholly unprecedented; this will be the second time that Microsoft SQL Server has run on an OS other than Windows. Originally Microsoft SQL Server ran on OS/2 (versions 1.0 and 1.1); it didn’t run on Windows until 4.21, and it wasn’t built specifically for Windows until SQL Server 6.0 shipped.
Personally I think this is a really interesting path for the product to take. Obviously Microsoft isn’t going to be dropping Windows support or anything. This is simply a new avenue for Microsoft to explore in order to gain access to the open source application developer. SQL Server running on Linux is actually something that I’ve thought about in the past, and at the same time both wondered why they didn’t do it, and knew exactly why they didn’t do it.
When I first heard that this was something that they were exploring I was hesitant, but I could see the possibilities for the product, as well as for the development communities that currently rely on MySQL, Postgres, etc., which they can install on Linux, Mac OS, etc. as a package when doing their software development. Developers working on software which runs on a single machine, where that machine isn’t running Windows, are a large market. These folks often just need an RDBMS behind their application, and they want something which they can simply install on their machine and get up and running quickly and easily so that they can deploy their software. Currently packages like MySQL and Postgres make this very easy for them to do. But wouldn’t it be nice if these same software developers could use an RDBMS which is much more fully featured, with things like ColumnStore and Hekaton (In-Memory OLTP), and which comes with an enterprise-class support team at the vendor (Microsoft)? The developers could even leverage this level of support when deploying their application to their customers.
So I was on a plane, and, a little secret, we had hints that this was coming. I’ve been perusing Twitter and seeing a lot of people thinking this is a big shot across the bow at Oracle, and it really isn’t. I’m not going to discuss Oracle’s recent financials (hint: they haven’t been selling a lot of net new RDBMS licenses). Like Denny mentioned above, a lot of development work happens on MacBooks (I’m writing this post on one right now), and a lot of devs are more used to Linux than Windows. If I were writing this post in 2002, I would think this move was targeted at large enterprises because UNIX > Windows. Frankly, since 2008 Windows has mostly been on par with Linux and is better in several areas. This is all about giving developers a fully robust RDBMS platform with good support. Will it be a full-featured enterprise RDBMS tomorrow? Probably not, but the core engine will be there, and features will come over time.
My favorite part of SQL Server on Linux? My Bash scripts for automation work again.
Now there are clearly going to be a lot of questions that need to be answered. But this is going to be a very exciting time for those in the SQL Server community who are willing to learn about a new OS, embrace the command line (there’s almost never a GUI for Linux Servers), and not be afraid of a little change in the ecosystem.
Denny and Joey