SQL Server with Mr. Denny


May 18, 2016  2:00 PM

Move on Brogrammers, your comments aren’t welcome anywhere.

Denny Cherry
Behavior, Communication, community, SQL Server

Recently Jen McCown received a speaker evaluation form from a SQL Saturday attendee.  Normally this wouldn't be a blog-worthy event, since speakers get these several times a year and there are easily 100 SQL Saturdays a year.  The reason this particular evaluation form is blog worthy is the comment which was left on it.

The question on the form was "What could the speaker do differently to improve?".  Acceptable answers are typically something like "speak slower", "more demos", "fewer demos", etc.  However, on this specific evaluation form the attendee wrote "wear négligée".  Jen was kind enough to post the evaluation form for us all to see.

In case you weren't aware, and apparently at least one person isn't, this is completely unacceptable.  This would barely be acceptable if her husband, Sean, had written it (he didn't; she checked).  From a stranger it's unwanted, immoral, and completely unacceptable in either a professional or social setting.

Clearly someone needs to be reminded how we act in civil society.  We don't tell total strangers that we want to see them standing around in their underwear.  Just because you've seen someone give a presentation doesn't mean you are in a relationship with them.  You don't get to post these sorts of things.

Speakers in our community are not your playthings to do with as you please.  They are giving their own time and money to put together presentations and attend the events where they give their sessions.

If you think that these sorts of comments are acceptable, let me tell you now, once and for all, they aren't.  Not to a male presenter and not to a female presenter.  All the presenter is looking for, after spending their time and money, is a round of applause and, if you liked the session, a nice "thank you".

If you want to write this sort of thing on a speaker evaluation form, stay home. Your attendance isn’t required.

Denny

 

May 13, 2016  10:00 AM

Recommended reading from mrdenny for May 13, 2016

Michael Tidmarsh
SQL Server


This week I've found some great things for you to read; these are a few of my favorites.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


May 11, 2016  4:00 PM

Viewing task status for related objects in the Azure Portal

Denny Cherry
Azure, Windows Azure

When working in the Azure portal, it can be tough to find task status information for operations performed against an object which is related to another object.  Before I dive into the how, let me demystify what I'm talking about a little more, as that first statement is a little vague.  In the case of my example, I want to see the status of building an Azure Internal Load Balancer which I had been deleting using the Azure portal.

There is no section within the Azure portal that you can use to see Internal Load Balancers directly.  You can only see them from within the Virtual Machine.  But if you are deleting one from the portal, you might not be able to see the Internal Load Balancer anymore.

So to see what's happening, you need to know where to find this related information in the Azure Portal.

The trick to doing this is looking at the events which were performed against the resource group within the Azure Portal.  The nice thing about these events is that it doesn't matter whether they were triggered through the Azure Portal or Azure PowerShell.

To get to these events, you can either click on Resource Groups from the main Azure panel (you may need to click browse if it isn’t shown as one of your favorites) like you see below.

panel

Or you can go through another object within the Resource Group, such as a Virtual Machine.  Personally I usually end up clicking through from a VM, because I'm usually working on a load balancer, which is done from the settings of the VM, so I already have that panel up.  You'll find the Resource Group at the top of the main panel of the VM, just under the settings button, as shown below.

 

panel1

Either way you get there, you’ll end up on the properties of the Resource Group.  With that panel open, scroll down on the panel and you should see a monitoring tile that looks something like this.

panel2

While that is a very pretty little tile it doesn’t give a whole lot of details.  But you can click on the tile and get a lot more useful information.  When you click on the tile the new blade that opens shows the same basic graph, but bigger.  If you scroll down in that blade you’ll see details about those events as shown below.

panel3

If you click on the text in the Operation column, another blade will open which gives you the details of that operation.

panel4

In the details blade you’ll see all the various steps which were required for this operation to happen.  If we click on one of those details, yet another blade opens.  If we scroll to the bottom of that blade we’ll see some useful text which looks mostly like English.

panel5

Now in my case it's a pretty easy-to-read error message.  The Internal Load Balancer couldn't be created because one already existed in the resource group, and you can only have one ILB per resource group.  Sometimes the error text isn't displayed at all, or it isn't the clearest information in the world.  It all depends on what you are trying to do and why it failed.

If you run commands in PowerShell that throw a generic error message, you'll probably get more details here than you were able to get in PowerShell.  It depends on where in the Azure stack the error occurred, and whether that component was configured to bubble the actual error message up or to just log it and throw a generic error to the calling application (your PowerShell window in this case).

In any case, this should help you find the information that you are looking for in Azure when things don’t quite go according to plan.

Denny


May 4, 2016  4:00 PM

When using Always Encrypted in SQL Server 2016, how do I as the DBA view the data?

Denny Cherry
Data Encryption, SQL Server

The short answer is, you don’t.

The entire point of Always Encrypted is that the DBA can't view or modify the data using SQL Server Management Studio. The only way to view and change the data is by using the front-end application that created it. This means that if someone submits a ticket to your team to change some data, you won't be able to. You'll have to refer that ticket back to someone in the business and have them make the change using the application, just like a normal user would, since you can't touch the data.
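
To make that concrete, here's a minimal sketch of what an Always Encrypted column looks like and what a DBA sees when querying it. The table, column, and key names are hypothetical, and this assumes a column master key and column encryption key have already been provisioned from a client machine that holds the certificate.

```sql
-- Hypothetical table with an Always Encrypted column (key names are examples only).
CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH
        (
            COLUMN_ENCRYPTION_KEY = CEK_Customers,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        ) NOT NULL
);

-- A DBA running this from Management Studio, without access to the column
-- master key and without Column Encryption Setting=Enabled on the connection,
-- gets back varbinary ciphertext for SSN rather than the actual values.
SELECT CustomerID, SSN FROM dbo.Customers;
```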

Denny


April 27, 2016  4:00 PM

Differences between data types between SQL Server and Oracle

Denny Cherry
Data Types, Oracle, SQL Server

Recently there has been some talk online about how SQL Server processes data type conversions compared to how Oracle does it. I figured that I would clear up some of the confusion that's out there.

When concatenating a number (like an INT) to a string (like a VARCHAR) you get an error message in SQL Server, but no error message in Oracle. The reason for this difference is the data type precedence rules of the two platforms. In SQL Server the database engine tries to convert the VARCHAR value to an INT value (in this example) because INT is higher in the data type precedence order. Oracle, on the other hand, when concatenating a number (NUMBER) to a string (like VARCHAR2), does not throw an error because it converts the number to a string and then concatenates the string values.
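
To illustrate the SQL Server side of that, here's a quick sketch (the literal values are made up):

```sql
-- Fails with Msg 245: SQL Server tries to convert the string to INT because
-- INT is higher in the data type precedence order, and the string isn't a valid INT.
SELECT 'Order number: ' + 42;

-- Works: explicitly convert the number to a string so both operands are character data.
SELECT 'Order number: ' + CAST(42 AS VARCHAR(10));

-- Also works in SQL Server 2012 and later: CONCAT() converts its arguments to strings.
SELECT CONCAT('Order number: ', 42);
```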

If you create a table with a numeric data type and use a string value in the WHERE clause which can't be converted to a number, both platforms will throw an error because the data types don't match.
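
For example, with a hypothetical table on the SQL Server side (Oracle raises its own equivalent error, ORA-01722 invalid number, for the same kind of mismatch):

```sql
CREATE TABLE dbo.Orders (OrderID INT NOT NULL);

-- Fails: 'pending' can't be converted to INT, so SQL Server raises a conversion error.
SELECT OrderID FROM dbo.Orders WHERE OrderID = 'pending';

-- Succeeds on both platforms: the string '42' is implicitly convertible to a number.
SELECT OrderID FROM dbo.Orders WHERE OrderID = '42';
```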

I've also seen questions about why SQL Server has so many numeric data types when Oracle only has one. Having the different data types in SQL Server helps to optimize the storage of numbers, and makes it easier to predict how much storage your table will need (I'm assuming no compression in either case). Oracle's NUMBER type is variable length, so the storage used depends on the value being stored, roughly one byte for every two significant digits plus some overhead (a value that is mostly zeros, such as 100000 or 1000000000, only requires a couple of bytes of storage). In SQL Server, the space needed depends on the size of the data type, with TINYINT taking 1 byte, SMALLINT taking 2 bytes, etc.
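
A quick way to see those fixed sizes on the SQL Server side is with DATALENGTH; the same value takes more or less space purely based on the type you chose:

```sql
-- Returns 1, 2, 4, and 8: the storage size in bytes of the same value in each integer type.
SELECT
    DATALENGTH(CAST(1 AS TINYINT))  AS tinyint_bytes,
    DATALENGTH(CAST(1 AS SMALLINT)) AS smallint_bytes,
    DATALENGTH(CAST(1 AS INT))      AS int_bytes,
    DATALENGTH(CAST(1 AS BIGINT))   AS bigint_bytes;
```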

With data compression the storage changes for both platforms, allowing numbers to be stored using less space. However, Data Compression in SQL Server requires Enterprise Edition, and compression in Oracle requires Enterprise Edition plus the separately purchased Advanced Compression option.

Denny


April 20, 2016  4:00 PM

Does your User Group need a free WordPress hosting platform?

Denny Cherry
community, Community service, SQL Server, User groups

Is your user group's current website hosting not giving you the features that you need?

Is your user group currently paying too much (or anything at all) for your website?

Would your user group like to be hosted for free, on a WordPress platform where you can have basically any theme and any plugin available to you?

Have I got a deal for you.

Several people that I know run user groups, and they haven't been happy with their hosting solutions for one reason or another, so I decided to do something about it. I've put together the website SQLUGS.com (SQL User Groups), which will happily let you run your own website as something.sqlugs.com for free (we'll be happy to host other user groups as well, if you'd like to be hosted here).  This platform is a WordPress-powered solution with each subdomain set up as a network website, so you get all the power of the MySQL cluster we have already built and configured, and the web tier is built on an Azure Web App so it will scale beautifully as load comes and goes throughout the day.

Currently we've got a few plugins and themes configured.  If you want different themes and plugins, just let us know via the support page and we'll get them added to the site (WordPress multisite requires that only the network administrator installs plugins and themes, since it's a shared codebase).  But we'll be happy to install anything on the environment as long as it doesn't adversely impact performance for the rest of the network.  As new plugins are installed you'll have control over which ones you enable on your sites.  We have a few turned on which are mandatory; these are mostly security or infrastructure related (backups, forcing SSL when you log in, etc.).

If you would like to host your user group's website in our network, just fill out the request form and we'll get you set up as quickly as we can (we don't have auto signup enabled because of spammers, scammers, etc.).

Does your user group currently have its own domain name that you want to use? No problem, we support that as well. It just takes a little more work to set up on our side. If you want SSL for your domain we can do that as well, but we'll require that you provide the SSL certificate (we provide one for your something.sqlugs.com site). As long as there's no charge for us to host it in Azure, there's no charge to the user group (based on the SSL pricing changes last week, SSL certs should be free unless the pricing changes again).

We hope that you'll take us up on our free hosting for your user group's website.

Denny


April 12, 2016  4:00 PM

And with that, SQL 2005 is out of support

Denny Cherry
performance tuning, SQL Server, SQL Server 2005

As of today (April 12, 2016) Microsoft no longer supports SQL Server 2005. If something goes wrong with your SQL 2005 database server at this point, there’s not going to be anyone at Microsoft who can help you.

However, we will be happy to help you. Do we love working on 10+ year old versions? No, not really. But we are realists. You have SQL 2005 systems which haven't been upgraded because you haven't had the time to upgrade them, or because your vendor application doesn't support anything newer than SQL 2005.

These systems still need performance tuning.

These systems still need upgrading.

And unlike a lot of other consulting companies out there we will be happy to help you performance tune these systems, and we are also happy to help you upgrade your systems from SQL 2005 to newer versions. Just because Microsoft doesn’t support the older versions anymore doesn’t mean that your older systems have magically been upgraded or are no longer needed by your enterprise.

If you need someone to do some performance tuning on your SQL 2005 server, or you need some help upgrading your SQL 2005 systems to a newer and Microsoft supported version of SQL Server, drop us a note.

Denny


April 1, 2016  6:15 PM

Microsoft Gold Partner was so much fun, we did it again.

Denny Cherry
Microsoft Partner Network, Microsoft partners, Partner programs, SQL Server

A while back we at Denny Cherry & Associates Consulting became Microsoft Gold partners for the Microsoft Cloud Platform.  Well, we've had so much fun being a Gold Partner for Cloud Platform that we decided that we needed to become a Gold Partner for Data Platform as well.

We were able to achieve this status with Microsoft by having a great dedicated team working at DC&AC and by having great customers that have a ton of faith in us as their SQL Server consultants.

Many thanks to our customers who have helped us get to this point.  We plan on continuing to do great work for you in the future.

Denny


March 30, 2016  4:00 PM

Why aren’t SAN snapshots a good backup solution for SQL Server?

Denny Cherry
SAN, Snapshot, SQL Server, Storage arrays

First rule of backups: Backups can't depend on the production data.

SAN snapshots, and I don't care who your vendor is, by definition depend on the production LUN. Well, that's the production data.

That’s it. That’s all I’ve got. If that production LUN fails for some reason, or becomes corrupt (which sort of happens a lot) then the snapshot is also corrupt. And if the snapshot is corrupt, then your backup is corrupt. Then it’s game over.

Second rule of backups: Backups must be moved to another device.

With SAN snapshots the snapshot lives on the same device as the production data. If the production array fails (which happens), or gets decommissioned by accident (it's happened), or tips over because the raised floor collapsed (it's happened), or someone pulls the wrong disk from the array (it's happened), or someone is showing off how good the RAID protection in the array is and pulls the wrong two disks (it's happened), or two disks in the same RAID set fail at the same time (or close enough to each other that the volume rebuild doesn't finish between the failures) (yep, that's happened as well), etc. If any of these happen to you, it's game over. You've just lost the production system, and the backups.

I’ve seen two of those happen in my career. The others that I’ve listed are all things which I’ve heard about happening at sites. Anything can happen. If it can happen it will (see item above about the GOD DAMN RAISED FLOOR collapsing under the array), so we hope for the best, but we plan for the worst.

Third rule of backups: OLTP systems need to be able to be restored to any point in time.

See my entire post on the SAN vendors' version of "point in time" vs. the DBA's version of "point in time".

If I can’t restore the database to whatever point in time I need to, and my SLA with the business says that I need to, then it’s game over.
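
For reference, this is what point in time means to a DBA: regular native log backups plus a restore sequence that can stop at an exact moment. A minimal sketch follows; the database name, file paths, and timestamp are hypothetical.

```sql
-- Transaction log backups taken on a schedule (often every few minutes).
BACKUP LOG SalesDB TO DISK = N'\\backupserver\sqlbackups\SalesDB_log_1432.trn';

-- Restore the last full backup without recovering, then roll the log forward
-- and stop just before the bad transaction ran.
RESTORE DATABASE SalesDB FROM DISK = N'\\backupserver\sqlbackups\SalesDB_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG SalesDB FROM DISK = N'\\backupserver\sqlbackups\SalesDB_log_1432.trn'
    WITH STOPAT = '2016-03-30 14:27:00', RECOVERY;
```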

Fourth rule of backups: Whoever’s butt is on the line when the backups can’t be restored gets to decide how the data is backed up.

If you're the systems team and you've sold management on this great snapshot-based backup solution so that the DBAs don't need to worry about it, guess what conversation I'm having with management? It's going to be the "I'm no longer responsible for data being restored in the event of a failure" conversation. If you handle the backups and restores, then you are responsible for doing them, and it's your butt on the line when your process isn't up to the job. I don't want to hear about it being my database all of a sudden.

Just make sure that you keep in mind that when you can't restore the database to the correct point in time, it's probably game over for the company. You just lost a day's worth of production data? Awesome. How are you planning on getting that back into the system? This isn't a file server or the home directory server where everything that was just lost can be easily replaced or rebuilt. This is the system of record that is used to repopulate all those other systems, and if you break rules number one and two above you've just lost all of the company's data. Odds are we just lost our jobs, as did everyone else at the company. So why don't we leave the database backups to the database professionals?

Now I don’t care what magic the SAN vendor has told you they have in their array. It isn’t as good as transaction log backups. There’s a reason that we’ve been doing them since long before your SAN vendor was formed, and there’s a reason that we’ll be doing them long after they go out of business.

If you are going to break any of these rules, be prepared for the bad things that happen afterwards. Breaking most of these rules will eventually lead to what I like to call an “RGE”. An RGE is a Resume Generating Event, because when these things happen people get fired (or just laid off, if they are lucky).

So don’t cause an RGE for yourself or anyone else, and use normal SQL backups.

Denny


March 23, 2016  4:00 PM

How many NUMA nodes should I have if I have lots of RAM and just a few cores?

Denny Cherry
Database performance, NUMA, SQL Server

There are some systems out there which have a lot of RAM but only a few processors, and these machines may need a non-standard NUMA configuration in order to be set up properly. For this example, let's assume that we have a physical server with 512 Gigs of RAM and two physical NUMA nodes (and two CPU sockets). We have a VM running on that machine which has a low CPU requirement but a large working set. Because of this we have 4 cores and 360 Gigs of RAM presented to the VM.
Now the default configuration for this would be to have a single NUMA node. However this isn’t going to be the best configuration for the server. The reason that this isn’t the best possible configuration is because all the memory which is allocated to the VM can’t fit within a single NUMA node. Because of this fact we need to tell the hypervisor that we want to split the four cores into two separate NUMA nodes which will allow the hypervisor to split the memory across the two physical NUMA nodes evenly, presenting two NUMA nodes to the guest with 180 Gigs of RAM from each NUMA node. (How you do this depends on the hypervisor that you’re using.)

Once this is done SQL will know which NUMA node the memory and CPUs are assigned to, and it will place work onto the correct CPU based on the NUMA node which contains the data that the work (query) needs to access.
Not every machine with a small number of cores should be configured this way. The only time this becomes an issue is when there is less physical RAM per NUMA node than we are presenting to the guest OS.
How do we know how much RAM there is per NUMA node? You'll probably need to ask your server team. The general rule is total RAM divided by the number of CPU sockets. In the example above we have 512 Gigs of RAM with two CPU sockets, so roughly 256 Gigs per NUMA node, which is why the 360 Gig VM can't fit within a single node. In modern servers each CPU socket is usually its own NUMA node, however this may not be the case in the servers you are working with. And CPU sockets only count if there is a processor physically installed in the socket.
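
Once the guest is configured, you can check what SQL Server actually sees by querying the NUMA-related DMVs. A minimal sketch (the column list is trimmed to the basics):

```sql
-- One row per node SQL Server has detected (includes the hidden DAC node).
SELECT node_id, memory_node_id, online_scheduler_count, node_state_desc
FROM sys.dm_os_nodes;

-- Memory reserved and committed per memory node.
SELECT memory_node_id,
       virtual_address_space_reserved_kb,
       virtual_address_space_committed_kb
FROM sys.dm_os_memory_nodes;
```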

Hopefully this helps clear up some things on these servers with these odd configurations.

Denny

