SQL Server with Mr. Denny


July 15, 2019  9:04 PM

Azure Bastion

Denny Cherry

The recently announced Azure Bastion service looks like a pretty slick service. It provides a secure way into your VMs without the need to VPN in. It gives you the same authentication that you’d expect from the Azure Portal (MFA, AAD credentials, etc.) while giving you a pretty easy-to-manage way to get into VMs in Azure. Now, this bastion service isn’t going to fit every situation, so it shouldn’t be treated as a one-size-fits-all answer. But if you need a secure, logged way to connect to VMs in your Azure environment, it looks like a pretty good solution.

What the bastion service does is allow users to log in to the Azure portal, then select the VM that they want to connect to. From there they get an RDP session within their browser that lets them log into the VM that’s running in Azure.  From a security perspective, the cool thing about this is that you don’t have to give your VMs public IPs. Because the Azure Bastion service is the bridge between the public internet and your internal VMs, nothing needs a public IP address as nothing is going directly to the Internet.

If you’re in an environment where you need a way to give users RDP access to servers, this is going to give you a nice, secure way of doing so.

As I mentioned earlier, this isn’t going to solve every problem. If you work from home and you need SQL access to VMs, then Azure Bastion isn’t going to help you, as it doesn’t pass through other kinds of traffic, like SQL traffic. You’d need to RDP into a machine, then run the SQL tools from there. So if you want to run something locally that can log into SQL Server, you’ll still need a VPN in that case. But for situations where you need to RDP into machines (remote users logging into a terminal server, for example) and you don’t want to require that they install VPN software, this could be a good solution.

Currently, the Azure Bastion service is in Preview, so you’ll need to sign up for it which you can do from the Microsoft Docs. That doc will also tell you how to use the Azure Bastion service, as you can’t access it from the normal portal URL (yet).

There are a couple of items to know about Azure Bastion.

  1. It isn’t available in all regions yet. Because it’s a preview service, it’s only in a few Azure regions. That will change, but while it’s in preview, it’s going to be a limited release. The regions the service is in today are:
  • West US
  • East US
  • West Europe
  • South Central US
  • Australia East
  • Japan East
  2. Today Azure Bastion can’t span vNets. So if you have VMs in two different vNets, you’ll need two bastion services, one in each vNet. Hopefully, this will change by release.

Denny

 

July 8, 2019  4:00 PM

For the Love of God, Stop Exposing Company Information

Denny Cherry

Companies (and the employees at them) need to stop posting private company information on the Internet. And they really need to stop posting private information in public spots with no password. Just last week yet another company was found to be doing something stupid. In this case, they had tons of information posted to an S3 bucket, and there was no password on it. In that bucket, they had backups from systems, OneDrive backups from employees, credentials for customer environments, keys for their production environments, etc.

“System credentials can be found in a number of places in the Attunity data set and serve as a useful reminder of how that information might be stored in many places across an organization’s digital assets,” UpGuard researchers said in a report published yesterday.

This information should never have been posted to a publicly accessible location, much less one without a password. There’s no good reason why things like system credentials would be posted online.

As IT workers, we have to do better than this. We just have to. There are too many people out there who would do bad things with this information if they got their hands on it.

Do I have a solution? No, I don’t. But this really isn’t a problem that needs a technical solution. Whoever did this simply shouldn’t have done it. There is no excuse for exposing anything, much less this much information.

Denny


July 1, 2019  4:55 PM

Today is MVP Day, and it was a good day for some, not so much for others

Denny Cherry

Today is “MVP Day,” the day Microsoft MVPs find out whether they’ve been awarded for another year. For some people, it was not that great of a day, as they weren’t renewed. For others, there was a really good email in their inbox today. I’m happy to say that all 5 of the folks at DCAC who were Microsoft MVPs yesterday are still Microsoft MVPs today.

Congrats to Joey, Monica, John and Meagan (and myself to make 5) on another year of being Microsoft MVPs.

Denny


June 25, 2019  2:44 PM

Thanks for the invite Data Grillen, See You Next Year

Denny Cherry

To everyone who attended Data Grillen 2019, thanks so much for the invite this year. I had a great time visiting with everyone, both those that I knew from prior years and those that I met for the first time this year.

Hopefully I’ll get the chance to see everyone again next year.

Look for an email from Ben and William with the link to all the presentations. The slide deck that Joey and I presented will be up there.

If we can be of any assistance in your journey to Azure please let us know how we can help.

Denny


June 17, 2019  12:37 PM

Why I Travel So Much

Denny Cherry

I travel a lot. But occasionally I get asked “Why”. The answer to that is pretty simple and comes down to three simple things.

1. When I was just out of high school (18 years old, ish) I had no clue what I’d be doing for a living. My first real job out of high school was working tech support at an Internet Service Provider (ISP). I liked working with computers, and I’ve clearly stuck with it. Like most Americans, I had no idea that I’d have a chance to see much, if any, of the world outside of the US. I wasn’t sure what I’d do, or where I’d go. And I’ve been very fortunate that I’ve been able to see as much of the world as I have so far. What this rambling reason really means is that because I can, I do. The world is a very cool place full of some exciting things, and I’ve been fortunate to be able to see as many of them as I can.

2. I’ve made a ton of friends around the world and many of them I wouldn’t get to see very often if I didn’t travel so much. Sometimes these folks see me in San Diego, sometimes they see me in other cities like Seattle, it just depends. Some perfect examples come to mind (I’m sitting in a hotel in Amsterdam as I’m writing this); tonight I’m having dinner with a friend from Holland since I happen to be in town; or when I saw a friend from Slovenia every month for about a year because we happened to go to the same conferences every month. It’s so cool to be able to see friends that live all over the world instead of just catching up with them in letters or online.

3. By going to some of the places that I’ve been, I’ve been able to meet people that I would never have met and see things that I would never have seen otherwise. A perfect example of that is when I went to Saudi Arabia to speak at a conference there. We had an extra day between arriving and the conference, so a bunch of us went to visit the National Museum that was a few blocks away. We were able to tour the museum, which most people wouldn’t be able to do simply because of the cost of getting there and the fact that Saudi Arabia doesn’t have any way for tourists to visit. You either go there on business, or you don’t go there.

So why I travel so much really boils down to these three things. I definitely understand that I’m fortunate to be able to travel, especially as much as I do, and especially considering my health issues from just a couple of years ago. So because of all of this, I travel as much as I can, as often as I can (yes, I work when I travel), so I can see places and meet people, because there’s a lot to see out there and I want to see as much of it as I can.

Denny


June 10, 2019  4:00 PM

Azure Database for MySQL server has a slow query log now

Denny Cherry

I’m not sure when this feature got introduced, but I just saw it for the first time recently (and I asked for it when Azure Database for MySQL server was in private preview): Azure Database for MySQL server can show you the slow queries that are running against the MySQL database. In fact, it’ll log more than just the slow queries; queries that don’t use an index can be logged as well as long-running queries.

If you open the properties of a database in the Azure portal, there are a couple of different places you can set these settings. The first is on the “Server parameters” blade, where you’ll see all the various parameters that you can set for the MySQL deployment. The second is if you select the “Server Logs” option towards the bottom (it’s in the Monitoring section), where you’ll see the log files that are being created (if there are any). At the top of this page, you’ll see a blue bar which says, “Click here to enable logs and configure log parameters.” If you click that, it’ll show you a shorter list of the parameters which only includes the settings that are related to the log file (shown in the screenshot).

Once you enable the settings that you need, wait a few minutes for there to be some load on the system, then go into the “Server Logs” section of the Azure portal, and you’ll see a log file that you can download. Just open it in Notepad (or your favorite text editor), and you’ll see the various log records that you enabled.
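If you want to double-check from the database side which of the logging parameters are actually in effect, a quick query of the server variables will tell you. This is just a minimal sketch in Python using pymysql; the server name, user, and CA bundle path are placeholders, not the actual server from this post.

```python
import pymysql

# Placeholder connection details; Azure Database for MySQL expects the
# user in the "user@servername" format and SSL connections by default.
conn = pymysql.connect(
    host="myserver.mysql.database.azure.com",
    user="mysqladmin@myserver",
    password="...",
    ssl={"ca": "/path/to/BaltimoreCyberTrustRoot.crt.pem"},
)

try:
    with conn.cursor() as cur:
        # Check which logging-related parameters are currently active.
        cur.execute(
            "SHOW GLOBAL VARIABLES WHERE Variable_name IN "
            "('slow_query_log', 'long_query_time', 'log_queries_not_using_indexes')"
        )
        for name, value in cur.fetchall():
            print(f"{name} = {value}")
finally:
    conn.close()
```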

While I was playing around with this, I turned on the slow queries option, and I got a bunch of records in the log about queries that WordPress needs to do some tuning on. Since I’m guessing WordPress won’t be doing any database tuning, I’ll have to do it myself. But now that I have some data to work off of, I at least have a starting point.

Denny


June 3, 2019  4:00 PM

Free Performance Metrics from Traffic Manager

Denny Cherry

A couple of weeks ago DCAC moved our website from being hosted in a single data center to being a globally distributed web application with multiple Azure WebApps, one hosted in the US and one in Europe. As part of having Traffic Manager configured and enabled for our site, we get some free reporting on the performance of our WebApp that’s included as part of the package. As you can see from the performance chart on the left, we can see where in the world our readers are coming from and, when the system is able to capture it, the performance that those users are getting (all the colors in the chart).

For some users, we aren’t able to get performance data back, and those users are shown in white. For the users who are shown in a color (blue/green is good, red is bad), the system was able to get performance data.

There is a small change to the website which helps gather real-time measurements. Within the settings for Traffic Manager, there’s a setting for “Real Time measurements” which gives you a little bit of JavaScript to include in your website (I’ve removed our key from the picture). You’ll want this JavaScript on every page of your website. Since we’re using WordPress, we were able to get it on every page by putting it in the default footer that’s part of our theme. I copied the code that’s in the Measurement JavaScript box in the Azure portal and put it in our theme’s footer between the <?php wp_footer(); ?> and the </body></html>. This puts the code at the very bottom of the page, so even if there’s a delay in loading it, it won’t affect our website’s performance.

By using this basic reporting, we can see some pretty valuable information.

  1. We can see where our website viewers are coming from
  2. We can see what the performance for our viewers is

By showing us where in the world the folks reading our website are coming from, this gives us more insight into what kinds of things we should continue to write about. It also gives us insight into where we might need to deploy additional WebApps so that users in those areas get better performance.

By showing us the performance that users are getting around the world, we can see roughly what kind of performance improvement users would see if we deployed another copy of our website in their region. As we can see from the graph, users from India, South East Asia, Australia, and Africa are seeing 200ms or higher response times when browsing our website. Based on this, it might make sense to deploy another copy of the website into Singapore (which would improve performance in South East Asia, India, and Australia) and another copy into the South African data center to improve performance for users in Africa.

Creating additional WebApps in these regions isn’t a given. 200ms isn’t a guarantee that there’s a problem. In our case we can talk to people in those various regions of the world, have them browse to the website, and see whether there’s a problem viewing it or not. The data that’s available from Azure is just a piece of the puzzle to see if there is a performance problem or not; the information that’s available in the portal isn’t everything.
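As another rough data point, you can time a page load from wherever you (or a colleague in another region) happen to be sitting and compare it against what the portal is showing. A minimal sketch in Python, assuming the requests library is available:

```python
import requests

# Time a single page load (time until response headers arrive) from this client's location.
resp = requests.get("https://www.dcac.com/", timeout=30)
print(f"status {resp.status_code}, elapsed {resp.elapsed.total_seconds() * 1000:.0f} ms")
```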

Denny


May 27, 2019  4:00 PM

Cost Threshold for Parallelism

Denny Cherry

Cost Threshold for Parallelism is a setting in SQL Server that’s been around for as far back in the product as I can think of. It decides how expensive an operator needs to be before SQL Server will use multiple threads for that operator instead of a single thread. The Cost Threshold for Parallelism setting has a default of 5 (which is a horrible setting, but we’ll get into that later), which means that any operators which have a cost below 5 will only use a single thread, and any operators which have a cost of 5 or more will use multiple threads.

SQL Server has a default of 5 because it has to have something as the default. But that setting should be changed for most systems. For OLTP systems, I typically recommend changing this setting to a value of 50. The reason that I like to start with a setting of 50 for OLTP applications is that any operators that have a cost of less than 50 typically aren’t going to see any improvement by using multiple threads. They will usually get slower because of the overhead of using parallelism. So we want to keep the quicker queries from using parallelism so that it’s only being used by the more expensive queries that actually need it.
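For reference, the change itself is a one-line sp_configure call. Here’s a minimal sketch that makes the change from Python via pyodbc; the connection string is a placeholder, and you could just as easily run the same two sp_configure statements from Management Studio.

```python
import pyodbc

# Placeholder connection string; autocommit is needed because RECONFIGURE
# can't run inside a user transaction.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;UID=dba_user;PWD=...",
    autocommit=True,
)
cur = conn.cursor()

# 'cost threshold for parallelism' is an advanced option, so expose it first.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'cost threshold for parallelism', 50; RECONFIGURE;")
conn.close()
```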

Now, 50 isn’t a perfect setting. After making a change to the setting, the server needs to be monitored to make sure that too many queries aren’t going parallel. The Cost Threshold for Parallelism might need to be adjusted again after the initial change, so keep in mind that this isn’t a one-size-fits-all recommendation.

Another thing to keep in mind is that changing this setting will cause your plan cache to be expired and all the plans will need to be recompiled, so you should see CPU load on the server go up right after you change the setting.

Denny


May 20, 2019  6:00 AM

Making WordPress a globally distributed application on Azure WebApps

Denny Cherry

Before I start, I’m just going to say that if I can do this with WordPress, then it can be done with anything.

The goal that I was going for was to have a WordPress website running on multiple continents (I only used two regions, but this can be scaled out to other regions as needed) so that response time on the other side of the planet would be similar to what users on the same side of the planet as me see. I also wanted the failover for all of this, if an Azure region goes offline, to be as quick and painless as possible.

Here’s what I had to get set up. I set up two Azure WebApps, both of which were running Windows. One was set up in US Central while the other was set up in North Europe.

Website Failover

I tried doing this with Linux WebApps, but some of the underlying functionality that I needed to make this all work wasn’t available. Specifically, I needed Site Extensions, which are only available on Windows WebApps today.

After setting up the two web apps, I uploaded the website content to the Azure WebApp. Once the site was uploaded and the custom DNS was working on the WebApp, it was time to start making the replication of the web traffic work.

In a new web browser I opened the .scm.azurewebsites.net version of my site. To get to the SCM site, it’s https://{Azure WebApp}.scm.azurewebsites.net. Since my Azure WebApp is named www-dcac-com-central, my SCM site is https://www-dcac-com-central.scm.azurewebsites.net/. (You’re going to have to go to this site a few times, so keep the URL handy.)

Be very careful of the order in which you set up the replication. If you set up the replication from a blank website to your website, then it’ll replicate the blank website. So before doing this, make sure that you have a proper backup of your WebApp BEFORE you configure the mirroring.

Once you have the website open, click on Site Extensions on the right of the menu at the top.

You’ll see a Gallery option. Select the Gallery option and find the “Site Replicator” extension. Enable this extension by clicking the plus sign. A window will pop up to complete the install; click Install.

Once the installation is complete, go back to the Azure portal. From the Azure Portal stop the WebApp and then Start it again (clicking restart will not work).

Again in the Azure Portal, select the second site (in my case the second region is the North Europe region). From here you need to download the publish profile for the WebApp. To do this, from the Overview tab select the “Get publish profile” option at the top right.

Just download the file, we’re going to need it in a second.

Go back to the first sites SCM website (https://www-dcac-com-central.scm.azurewebsites.net/ in my case) and click the play button on the Site Replicator extension.

This is going to give you the configuration screen for the Site Replicator (it may take a minute to open). The settings screen is pretty simple. There’s a browse button on the bottom left of the screen; click that and navigate to the publish profile file that you downloaded earlier.

Give the site some time to replicate all the changes to the other region.  When it says “Succeeded” it should be just about done. The larger the website, the longer this will take.  I FTPed into both WebApps and watched the files appear until they were all there.  On a standard WordPress install, this took about 10 minutes.

Once this was all finished, I repeated the process in the other direction. I downloaded the publish profile from the US Central version, then configured the Site Replicator on the North Europe region by uploading that publish profile to it. I then gave the process about 10-15 minutes to settle down and do any replication that needed to be completed.

Once this was all finished, I was able to upload files, pictures, WordPress updates, etc. from either site and the change would be replicated to the other region within a second or two.

Once the website replication was handled, it was time to set up Traffic Manager. This would allow people to connect to their local version of our website depending on where in the world they are connecting from. Getting the endpoints set up was pretty simple. I used basic geographic load balancing and pointed North/Central/South America to the US Central version, and Asia/Europe/Middle East/Africa/Antarctica to the North Europe version.

The only hard part was that because WordPress does a ton of redirects, you can’t do normal HTTP monitoring. Normally you could have Traffic Manager pointing to “/” as the path to monitor, but WordPress didn’t like this. I changed the monitoring path to “/license.txt” instead, as this lets the Traffic Manager endpoints come online correctly. It isn’t a perfect situation, but it works well enough.
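You can see the difference by probing the two paths yourself and noting that the health probe wants a plain 200 rather than a redirect. A minimal sketch in Python with the requests library; the hostname below is just one of the regional WebApps as an example:

```python
import requests

# Compare what the Traffic Manager health probe would see for each path.
for path in ("/", "/license.txt"):
    resp = requests.get(
        "https://www-dcac-com-central.azurewebsites.net" + path,
        allow_redirects=False,  # we want the raw status, not the redirect target
        timeout=10,
    )
    print(path, resp.status_code)
```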

Once everything is setup and traffic manager is happy and working, we can point public DNS to the site.  In our DNS configuration for www.dcac.com we added a CNAME record to DNS. A CNAME record in DNS redirects the request to another record.  In our case we pointed www.dcac.com to www-dcac-com.trafficmanager.net. This allows the Traffic Manager service to resolve www.dcac.com to the correct version of the website.

We can test that this is working as expected by looking at the output of the nslookup command.

By running nslookup on my laptop (which is currently sitting at my house in San Diego, CA), we can see that I’m resolving www.dcac.com to www-dcac-com-central.azurewebsites.net.

If we run the same nslookup command from an Azure VM that’s running in Singapore, www.dcac.com resolves to www-dcac-com-northeurope.azurewebsites.net.

From these outputs, I can assume that I’m viewing the version of the website that’s closer to the user.
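If you’d rather script that check than run nslookup by hand, the Python standard library can show you the canonical name the CNAME chain lands on. A minimal sketch:

```python
import socket

# Resolve www.dcac.com and print where the Traffic Manager CNAME chain ends up.
canonical, aliases, addresses = socket.gethostbyname_ex("www.dcac.com")
print("canonical name:", canonical)   # e.g. the regional azurewebsites.net host
print("aliases:", aliases)
print("addresses:", addresses)
```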

I’ve now got two duplicate copies of the website running in two different Azure Regions.

Database Failover

On the database side of things, we need to set up some replication as well. “Azure Database for MySQL servers” now supports multi-region replicas, but there’s no auto-failover available yet (hopefully it’ll be coming at some point soon). For the database copy I did basically the same thing as I did for the websites (and that I’ve done tons of times for Azure SQL DB).

For the database side of things I set up WordPress to use the dcac-mysqlcentral copy. From there I clicked the Add Replica button, and that made a copy of the data to a new server called dcac-mysqlnortheurope that I set up in North Europe.

Since there’s no automatic failover at the database level today, if I need to do a failover I need to edit the wp-config.php file on the webserver, and that’ll kick the connection over to the other server. I also still need to set up the PowerShell to do the failover. My next step in this process is going to be to set up some Azure Automation to handle all the database-level failover, but so far this is a pretty good setup, as the website-level failover is already handled.
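For illustration, the manual step is just swapping the database host name in wp-config.php. Here’s a minimal sketch in Python of that edit, with a placeholder file path and the two server names from above; keep in mind that the read replica also has to be promoted (replication stopped) before it will accept writes.

```python
from pathlib import Path

# Placeholder path; on an Azure WebApp you'd make this edit over FTP or the Kudu console.
config = Path("wwwroot/wp-config.php")

# Point WordPress at the replica; the replica must be promoted to read/write
# (stop replication) before this failover will actually work.
text = config.read_text()
text = text.replace(
    "dcac-mysqlcentral.mysql.database.azure.com",
    "dcac-mysqlnortheurope.mysql.database.azure.com",
)
config.write_text(text)
```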

The End Result

The end result of all of this is that our website is set up and running in two different places for better availability and better performance of our application.

Denny


May 13, 2019  4:00 PM

SQL Server Performance Monitor Objects

Denny Cherry

Monitoring SQL Server Performance Monitor objects (Perf Mon for those in the know) can be an important part of monitoring your SQL Server instance. Finding information about the performance monitor objects that SQL Server exposes can be tricky, even though the SQL Server product team has documented what these objects all mean.

You can find the documents about the SQL Server Perf Mon objects online on the Microsoft docs website.

If you haven’t had a chance to check it out, I’d very much recommend it. If you are looking for which objects you should be monitoring, this can answer a lot of questions for you.
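Keep in mind that the same counters are also exposed inside SQL Server itself through the sys.dm_os_performance_counters DMV, so you can pull them with a plain query. A minimal sketch in Python via pyodbc; the connection details are placeholders, and the two counter names are just examples.

```python
import pyodbc

# Placeholder connection details.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;UID=monitor_user;PWD=...",
)
cur = conn.cursor()

# Pull a couple of commonly watched counters from the DMV.
cur.execute(
    "SELECT object_name, counter_name, cntr_value "
    "FROM sys.dm_os_performance_counters "
    "WHERE counter_name IN ('Batch Requests/sec', 'Page life expectancy')"
)
for object_name, counter_name, value in cur.fetchall():
    print(object_name.strip(), counter_name.strip(), value)
```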

If you need more help past that, let us know, we can help you out.

Denny

 

