VMware recently announced their vExpert list for 2018, and I’m proud to say that both Joey and myself made the list. The VMware vExpert program is designed to recognize those who help out the VMware community at large. While the vExpert list looks rather large, it’s a big world, and there are a lot of people working around the globe helping VMware users set up various VMware products. Both Joey and I are thrilled to have received this recognition for 2018.
Having VMs in Azure which are members of your on-premises domain is a pretty important part of your Cloud implementation. There’s a couple of ways to do this, but I’ll be covering what we at DCAC think is the best option. In basic terms, you’ll set up a VPN from the Azure vNet to your on-premises network. From there you’ll want to add domain controllers to Azure which are members of the on-premises domain (not just another domain with the same name). Then change your vNet to use those DCs as your DNS servers, reboot your other VMs, and you’ll be able to add the other VMs to your Active Directory domain. Here are some details:
The first step is to put some domain controllers in Azure. To do this, you’ll need a site to site VPN between Azure and your on-premises environment. If you have multiple on-premises sites, then you’ll want to create a VPN between Azure and all of your on-premises environments. If your Azure environment is hosted in multiple regions, then you’ll want to create a mesh network where each on-premises site is VPNed into all of your vNets. You’ll probably also want your vNets VPNed to each other (peering your networks between sites may be an option as well, depending on how you’ve set things up). If you have an extremely large number of users at your site, then ExpressRoute might be worth looking into instead of a site to site VPN.
Once the site to site VPN (or ExpressRoute) is in place, you can focus on putting some domain controllers in Azure. Each site within your Azure environment should have at least two DCs, and they should be created within an Availability Set or an Availability Zone (depending on what your standard is going to be for setting these up). You can now set the vNet to use the office DCs as DNS servers. Once that’s done, reboot the Azure VMs that you want to make domain controllers and promote them to DCs. When making them DCs, you’ll probably want them to be a fairly large VM size so that the promotion process doesn’t take too long; you can resize them later. Once the VMs in Azure have been promoted to DCs, you’ll want to give those VMs static IP addresses (whatever IPs they already have are fine). Make a note of these IPs, as you’ll need to enter them in a second.
Once the DCs are set up, go into the vNet configuration and change the vNet to use the new Azure DCs as your DNS servers (the IPs you wrote down at the end of the prior paragraph). Then reboot any VMs that you’ve already created in the vNet.
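If you’d rather script the DNS change than click through the portal, it can be done with the Azure CLI’s `az network vnet update --dns-servers` option. Here’s a minimal sketch in Python that assembles the command; the resource group, vNet name, and DC IPs are placeholders for your own values:

```python
# Sketch: build the Azure CLI command that points a vNet at the new
# Azure domain controllers for DNS. All names and IPs are placeholders.
def vnet_dns_command(resource_group, vnet_name, dc_ips):
    """Return an `az network vnet update` command setting custom DNS servers."""
    return (
        f"az network vnet update --resource-group {resource_group} "
        f"--name {vnet_name} --dns-servers {' '.join(dc_ips)}"
    )

cmd = vnet_dns_command("prod-rg", "prod-vnet", ["10.0.1.4", "10.0.1.5"])
print(cmd)
```

Remember that VMs only pick up the new DNS servers after a reboot (or a DHCP lease renewal), which is why the reboot step above matters.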
At this point, any VMs that you’ve already created can be added to the domain without issue, just like any other machine in your environment.
A lot of the examples you’ll see for MAXDOP on the web assume a large server with multiple physical sockets and multiple NUMA nodes. But if you have a smaller server, like a lot of us do these days, what should you set the MAXDOP to?
The basic rule still applies: set MAXDOP to half the size of your NUMA node. Just because you’re running a smaller server doesn’t mean that you don’t have NUMA configured; it just means that you have a single NUMA node. So, for example, if you have a server with one socket and six cores, then your MAXDOP should probably be set to 3, since three is half the number of cores in the server.
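The rule of thumb is simple enough to express in code. A quick sketch (the function name is mine, not anything built into SQL Server), with a floor of 1 so a one- or two-core box doesn’t end up with MAXDOP 0, which would mean “unlimited”:

```python
# Rule of thumb from above: MAXDOP = half the cores in a NUMA node.
# This is only a starting point; workload testing may push it lower.
def suggested_maxdop(cores_per_numa_node: int) -> int:
    return max(1, cores_per_numa_node // 2)

print(suggested_maxdop(6))  # one socket, six cores -> 3
```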
Now there’s a lot of it-depends that goes with this, but this gives you a starting point. You might need to decrease MAXDOP to 2 and see how this affects the server.
Keep in mind that making changes to MAXDOP during the business day isn’t recommended as making changes to MAXDOP will flush the plan cache and cause all the queries to be recompiled.
The short answer here is no. The threat detection features that you see in Azure are not available in the on-premises product. This includes SQL Server instances running on VMs in Azure. The only way to get the threat detection features that Azure offers is to use Azure SQL Database.
I ran into an interesting issue last week. I had VMs in the US West region of Azure which were unable to talk to the MySQL database service (PaaS), also in US West. The problem, it turned out, was due to a new feature that needed some adjusting to make it work (there’s already a bug open to get this fixed).
The issue was that I couldn’t log into the MySQL PaaS service from VMs in my vNet. The error said that the PaaS service couldn’t be accessed from VMs within a vNet.
The underlying issue was that the subnet within the vNet had the Microsoft.Sql service endpoint configured. Once that endpoint was removed from the subnet, I was able to connect from the VMs to the Azure Database for MySQL service.
Microsoft is thankfully already aware of this problem. If you are using any combination of the PaaS services SQL Server, MySQL, and Postgres, and you want to use the Microsoft.Sql service endpoint, for now you need to put any virtual machines connecting to them in different subnets. Those subnets can be within the same vNet; they just need to be different subnets within your vNet configuration.
In my case, my servers are only using MySQL, and nothing within the subnet is trying to connect to SQL DB or SQL DW, so removing the service endpoint was the easiest solution. Once I did this, I was able to access my web servers again without issue.
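If you want to confirm whether a subnet has the problematic endpoint before removing it, you can inspect the JSON that `az network vnet subnet show` returns. A sketch, using an illustrative sample document rather than real output:

```python
import json

# Sample of the shape `az network vnet subnet show` returns (abbreviated,
# illustrative only): the serviceEndpoints list is what we care about.
subnet_json = '''
{
  "name": "web-subnet",
  "serviceEndpoints": [{"service": "Microsoft.Sql", "locations": ["westus"]}]
}
'''

def has_sql_endpoint(subnet: dict) -> bool:
    """True if the subnet has the Microsoft.Sql service endpoint configured."""
    return any(ep.get("service") == "Microsoft.Sql"
               for ep in subnet.get("serviceEndpoints") or [])

subnet = json.loads(subnet_json)
print(has_sql_endpoint(subnet))  # True
```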
Yep. This is actually by design, and it’s because listeners can be tricky little fellas. When using a SQL Server Availability Group listener, you can see any database on the server that is hosting the listener. The reason for this is that each Availability Group listener is merely a connection to the instance that’s hosting that Availability Group.
You can think of Availability Group listeners kind of like a DNS entry. Whatever server the listener is pointed at is the server that all users connect to. If, for example, you had a configuration with two Availability Groups and two listeners, and those two listeners were hosted on the same node, then a user connecting to either listener would be able to access any of the databases (assuming they have access to all of them).
This is by design, and everything is working exactly as it should be.
This same rule applies to databases which aren’t protected by the Availability Group. If you have several databases protected by an Availability Group, plus several databases hosted on the active server which aren’t members of the Availability Group, a user connecting to the listener will be able to see all the databases on the server. This also means that as the listener moves from replica to replica, databases will come and go as the Availability Group is moved.
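A toy model of the behavior described above makes it concrete (server and database names here are hypothetical): the listener simply resolves to whichever replica currently hosts the Availability Group, and a connection through it sees every database on that instance, AG member or not.

```python
# Hypothetical instances and the databases each one hosts. Only SalesAG_db
# is in the Availability Group; the others live only on SQL01.
replicas = {
    "SQL01": ["SalesAG_db", "Reporting_db", "Scratch_db"],
    "SQL02": ["SalesAG_db"],
}

def visible_databases(listener_target: str) -> list:
    """Everything on the instance the listener points at is visible."""
    return replicas[listener_target]

listener_host = "SQL01"  # the listener currently resolves here
print(visible_databases(listener_host))
```

Fail the AG over to SQL02 and `Reporting_db` and `Scratch_db` disappear from view, which is exactly the come-and-go behavior above.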
Hopefully, this explains why you might be seeing things you aren’t expecting.
The GDPR, or General Data Protection Regulation as it’s actually known, is a European law that takes effect in May 2018. There are a lot of misconceptions out there that need to be talked about, especially by non-EU websites.
Before we get started, let me state that I’m not a lawyer, and I don’t even play one on TV. Everything that I’m talking about should be verified with an actual lawyer.
The fines for violations of the GDPR are pretty steep: 20 million Euros or 4% of your company’s revenue, whichever is HIGHER. Beyond this, websites could be blocked from being accessed within the EU. In other words, this law is severe, and it doesn’t apply just to companies that operate in the EU.
One of the bigger misconceptions is that the GDPR is related to sales. It isn’t. The GDPR relates to personal data of the people that view your website whether they buy anything or not. This includes your comments, feedback, collecting emails, newsletter subscribers, etc.
Most people are using WordPress for their websites. As of late January 2018, there are minimal plugins available, and the ones that are there have minimal installs. Jetpack, which is one of the biggest plugins available, is working on GDPR compliance. They aren’t there yet, but they are working on it.
The biggest thing with the GDPR (as far as I’m concerned) is how to honor a user’s request to be deleted. For DCAC, the only place we have to worry about this is with comments. Our vendors (MailChimp and Microsoft’s O365) either have or are working on GDPR compliance and will have something put together in time.
Another piece of the GDPR is that users from the EU need to be able to request an export of their data from your systems. For DCAC this is pretty simple; we just need to be able to export comments and whatever event data we collected. Our email sending doesn’t contain anything other than which newsletters the user received from us (and we can get this from Office 365). However you export the data, you need to be able to deliver it to the user who requests it, and the user needs a way to download it. Today WordPress doesn’t have a solution for this, but hopefully it will by the time the law kicks in.
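An export routine doesn’t have to be fancy; the point is gathering everything you hold about one person into a file they can download. A hypothetical sketch with an invented data layout, just to show the shape of the task:

```python
import json

# Invented sample data standing in for whatever your site actually stores.
comments = [
    {"email": "reader@example.com", "text": "Great post!"},
    {"email": "other@example.com", "text": "Thanks."},
]

def export_user_data(email: str, comments: list) -> str:
    """Collect everything stored for one email address into a JSON document."""
    record = {
        "email": email,
        "comments": [c["text"] for c in comments if c["email"] == email],
    }
    return json.dumps(record, indent=2)

print(export_user_data("reader@example.com", comments))
```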
The scary part of the GDPR is having to tell people if the website is breached; you have 72 hours from when you discover the breach. There’s no real definition of “user,” so our assumption is that it covers commenters, notification form submitters, and so on. Our hope, of course, is that a breach doesn’t happen, but we have to prepare for the worst.
As you can see, there’s a lot to this GDPR stuff that needs to be worried about, but it isn’t all scary. Done correctly, you can lean on vendors like WordPress’s Jetpack plugin, Microsoft’s Dynamics 365 (part of the Office 365 suite), and MailChimp.
Hopefully, this has demystified the GDPR a little bit and made it a little less scary.
I’m pleased to say that Denny Cherry & Associates Consulting was named by Technology Headlines as the top consulting firm in 2017. Technology Headlines did a nice writeup on Denny Cherry & Associates Consulting to boot. I’m really proud of the group that we’ve put together at DCAC, and we’re starting 2018 off with a bang.
DCAC was selected as the consulting firm of the year for our work with our clients in both Microsoft Azure and Microsoft SQL Server. Our clients have been pleased with our work, and that’s the number one thing for us, making sure that our clients are satisfied with the work that we do for them. That’s the sort of thing that has separated us from some of the other firms in this space and elevated us to this award-winning position.
Be sure to check out our website www.dcac.co as well.
If you were at the PASS Summit, Live360 in November, or SQL Saturday in Slovenia in December, you may have noticed that I couldn’t make it to these events. It turns out that I had a medical issue that I needed to deal with as soon as possible.
Over the six months or so before the PASS Summit, I had a headache that I couldn’t shake. Between trips, I made a doctor’s appointment with a general practitioner that I used to see, who sent me out for an MRI. The insurance company didn’t want to pay for the MRI, but the doctor was able to get them to pay for a CT scan instead.
Do what the doctor says
I went in for an outpatient CT scan at about 7 pm and figured that I’d get the results from my doctor in a few days. Eight minutes after I left the outpatient CT scan, the doctor who reviewed it called my cell phone and told me that he usually doesn’t call patients directly, but that he needed me to turn around and go to my nearest ER. Between the two of us, we decided that the ER attached to his facility was the easiest to get to, and he could call them and tell them I was coming. So I turned around and drove back to the hospital.
That facility checked me in and transferred me to the neurology unit, which was at another one of their hospitals. All of this was about a week and a half before the PASS Summit started.
The trip to the emergency room and the CT scan were the start of 35+ days in the hospital, with two surgeries and one procedure. It was not a fun month. After the first of many MRIs, the doctors determined that we were dealing with a 4.3cm (about 2 inches for Americans) tumor on my brain stem. To give a visual reference, that’s bigger than a golf ball. It took about two days before we were able to get into the OR and get the tumor removed.
After the initial surgery I did a lot of physical therapy, first at the hospital, then at an acute physical therapy facility. Sadly, while there I picked up a nasty infection which required a transfer back to the hospital and another surgery to resolve. Thankfully, I’ve been home since the day after US Thanksgiving. The bad news is that I was on IV antibiotics from before I left the hospital until December 29th.
The tumor was not cancer; that was the most important part of all this. That piece of information from the doctors alone made this much easier to deal with and get through. It also meant not needing chemo to finish my recovery.
Getting Back To Normal
Things are slowly getting back to normal. This was an issue affecting the brain stem, not the brain; this means that it affected my fine motor skills, not my memory or ability to work with SQL Server. The most significant skill that had been impacted is walking. I moved to a walker while I was at the hospital which was pretty slow going. I’ve since moved to mostly a cane, and I’m improving every day.
Speech is where I’m going to need some therapy as well. Right now, when I speak at my regular speed (which is pretty quick), I have a noticeable speech impediment which I need to improve before I can give presentations again.
In short, I’ll be back doing presentations; it’ll just take some time before I’m back on stage.
With the announcement of the CPU “issues” in the last week or so, this week has quickly become Security Week at DCAC with our blogging. The week will all be capped off with our security webcast this Friday. If you follow the DCAC blog, you’ll see different security topics from everyone this week, with a new one coming out each day.
I wanted to take this time to talk about our old, and poorly named, friend SQL Injection. To date, this is still the most common way for data to be exposed. As applications get older and more corporate applications get abandoned, the risk they pose gets worse and worse. As I’ve written about time and time and time again, SQL Injection is a problem that needs to be solved on the application side. However, with enterprise applications that get abandoned, this becomes hard for a business to deal with, as some business unit needs to pay for the changes.
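The application-side fix is parameterized queries. Here’s a sketch using Python’s built-in sqlite3 module to stand in for any database driver; the pattern is the same with SQL Server drivers. The table and the attacker’s input are invented for illustration:

```python
import sqlite3

# Minimal demo table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes always-true and returns every row.
unsafe_sql = f"SELECT * FROM users WHERE name = '{malicious}'"
print(len(conn.execute(unsafe_sql).fetchall()))  # 1 -- the whole table leaks

# Safe: the driver treats the input strictly as a value, never as SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(len(safe_rows))  # 0 -- no user literally named "alice' OR '1'='1"
```

The fix costs one line per query, which is exactly why it’s so frustrating when abandoned applications never get it.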
And that need to pay for development time to fix security issues is why SQL Injection issues come up. For old applications, business units don’t see value in fixing them (or at least verifying that there’s no issue), so the applications just sit there until a problem surfaces. And by then, those problems aren’t going to go away; they’re just going to get worse, as you now have customer data (or employee, or vendor, etc.) out there in the wild. Now you have a public relations issue on top of your security issue.
Issues like the ones we saw this month get pretty logos and flashy names, but for the most part these kinds of issues require some sort of server access (yes, I know there are proofs of concept out there). But with SQL Injection, as long as the application is exposed to users, you have the potential for a problem.
We’re not just talking about external users here, but internal as well. Most breaches that companies have where data is taken are internal. In other words, you let the person into your office, gave them credentials to your network and let them go nuts on your network. I couldn’t tell you the number of unauthorized routers, Wi-Fi access points, or applications that scan the network I’ve found over the last 20 years.
So to recap, your biggest threats are employees that are inside your firewall, attacking old applications that haven’t been updated in years but still have access to information worth stealing.
It’s time to secure all those old applications.