I ran into an interesting issue last week. I had VMs in the US West region of Azure which were unable to talk to the MySQL database service (PaaS), also in US West. The problem, it turned out, was due to a new feature that needed some adjusting to make work (there’s already a bug open to get this fixed).
The issue was that I couldn’t log into the MySQL PaaS service using VMs in my vNet; the error I was getting simply said that the PaaS service couldn’t be accessed from VMs within a vNet.
The underlying issue was that the subnet within the vNet had the Microsoft.Sql service endpoint configured. Once that was removed from the subnet, I was able to connect from the VMs to the Azure Database for MySQL service.
Microsoft is thankfully already aware of this problem. If you are using any combination of the PaaS services (SQL Database, MySQL, PostgreSQL) and you want to use the Microsoft.Sql service endpoint, for now you need to put any virtual machines connecting to them in different subnets. Those subnets can be within the same vNet; they just need to be separate subnets within your vNet configuration.
In my case, my servers are only using MySQL, and nothing within the subnet is trying to connect to SQL DB or SQL DW, so removing the service endpoint was the easiest solution. Once I did this, I was able to access my web servers again without issue.
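For reference, the fix can be done from the Azure CLI. This is just a sketch; the resource group, vNet, and subnet names below are placeholders, and you should check what endpoints the subnet has before removing anything:

```shell
# Show the service endpoints currently configured on the subnet
# (MyResourceGroup, MyVNet, and MySubnet are placeholder names).
az network vnet subnet show \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name MySubnet \
  --query "serviceEndpoints"

# Remove the serviceEndpoints collection from the subnet entirely.
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --name MySubnet \
  --remove serviceEndpoints
```

If other services in the subnet still need their endpoints, use `--service-endpoints` to set the list you want to keep rather than removing the whole collection.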
Yep. This is actually by design, and it’s because listeners can be tricky little fellas. When using a SQL Server Availability Group Listener, you can see any databases on the server that is hosting the Availability Group Listener. The reason for this is that each Availability Group Listener is merely a connection to the instance that’s hosting that Availability Group.
You can think of Availability Group Listeners kind of like a DNS entry. Whatever server the listener points at is the server that all users connect to. If, for example, you had a configuration with two Availability Groups and two Availability Group Listeners, and both listeners were hosted on the same node, then a user connecting to either listener would be able to access any of the databases on that node (assuming they have access to all of them).
This is by design, and everything is working exactly as it should be.
This same rule applies to databases which aren’t protected by the Availability Group. If you have multiple databases protected by an Availability Group, plus several databases hosted on the active server which aren’t members of the Availability Group, a user connecting to the Availability Group Listener will be able to see all the databases on the server. This also means that as the Availability Group Listener moves from replica to replica, databases will come and go along with it.
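The behavior above can be sketched as a toy model (this is just an illustration, not the actual SQL Server API — the instance, listener, and database names are made up):

```python
# Toy model of why a listener exposes every database on whichever
# replica currently hosts it.  Each instance (replica) holds a set of
# databases; some are in an availability group, some are just local.
instances = {
    "ServerA": ["AG1_DB", "AG2_DB", "LocalReportingDB"],
    "ServerB": ["AG1_DB", "AG2_DB"],
}

# A listener is little more than a name pointing at an instance,
# much like a DNS entry.
listeners = {"AG1_Listener": "ServerA", "AG2_Listener": "ServerA"}

def connect(listener_name):
    """Connecting through a listener lands you on the hosting instance,
    so you see every database that instance holds."""
    host = listeners[listener_name]
    return sorted(instances[host])

# Both listeners currently live on ServerA, so either one exposes all
# of ServerA's databases -- including the one not in any AG.
print(connect("AG1_Listener"))

# After AG2 fails over to ServerB, its listener follows the AG, and the
# databases visible through it change accordingly.
listeners["AG2_Listener"] = "ServerB"
print(connect("AG2_Listener"))
```

The point of the sketch is that visibility is a property of the hosting instance, not of the listener or the availability group itself.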
Hopefully, this explains why you might be seeing things you aren’t expecting.
The GDPR, or General Data Protection Regulation as it’s actually known, is a European law that takes effect in May 2018. There are a lot of misconceptions out there that need to be cleared up, especially for non-EU websites.
Before we get started, let me state that I’m not a lawyer, and I don’t even play one on TV. Everything that I’m talking about should be verified with an actual lawyer.
The fines for violations of the GDPR are pretty steep: 20 million euros or 4% of your company’s annual revenue, whichever is HIGHER. Beyond this, websites could be blocked from being accessed within the EU. In other words, this law is severe, and it doesn’t apply just to companies that operate in the EU.
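The "whichever is higher" part is worth spelling out, because it means small companies don’t get a proportionally small ceiling (the revenue figures below are made up for illustration):

```python
# GDPR maximum fine: 20 million euros or 4% of annual revenue,
# whichever is HIGHER.
def max_gdpr_fine(annual_revenue_eur):
    return max(20_000_000, 0.04 * annual_revenue_eur)

# A small company still faces the full 20M EUR ceiling...
print(max_gdpr_fine(5_000_000))
# ...while for a large company the 4% figure takes over.
print(max_gdpr_fine(1_000_000_000))
```

For a company with 5 million euros in revenue, 4% is only 200,000 euros, so the 20 million euro floor applies; at a billion euros in revenue, the 4% figure (40 million) is what counts.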
One of the bigger misconceptions is that the GDPR is related to sales. It isn’t. The GDPR relates to personal data of the people that view your website whether they buy anything or not. This includes your comments, feedback, collecting emails, newsletter subscribers, etc.
Most people are using WordPress for their websites. As of late January 2018, there are very few plugins available for GDPR compliance, and the ones that exist have minimal installs. Jetpack, which is one of the biggest plugins available, is working on GDPR compliance. They aren’t there yet, but they are working on it.
The biggest thing with the GDPR (as far as I’m concerned) is how to deliver on a user’s request to be deleted. For DCAC, the only place we have to worry about this is with comments. Our vendors (MailChimp and Microsoft’s O365) both have or are working on GDPR compliance and will have something put together in time.
Another piece of the GDPR is that users from the EU need to be able to request an export of their data from your systems. For DCAC this is pretty simple; we just need to be able to export comments and whatever event data we collected. Our email sending doesn’t contain anything other than which newsletters the user received from us (and we can get this from Office 365). However you do it, you need to be able to export the data and deliver it to the user who requests it, and the user needs a way to download it. Today WordPress doesn’t have a solution for this, but hopefully it will by the time the law kicks in.
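An export routine doesn’t have to be fancy. Here’s a hedged sketch of the idea; the hard-coded list stands in for whatever store actually holds comments and event data (WordPress tables, Office 365 exports, etc.), and the names are all made up:

```python
import json

# Placeholder data source -- in reality this would be a query against
# the comment store.
comments = [
    {"email": "alice@example.com", "comment": "Great post!"},
    {"email": "bob@example.com", "comment": "Thanks for sharing."},
]

def export_user_data(email):
    """Gather everything held for one user into a portable format that
    can be delivered to them (JSON is a reasonable choice)."""
    records = [c for c in comments if c["email"] == email]
    return json.dumps({"email": email, "comments": records}, indent=2)

print(export_user_data("alice@example.com"))
```

The key requirement is that whatever you produce is complete and machine-readable, so the user can actually take their data elsewhere.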
The scary part of the GDPR is the requirement to tell users if the website is breached, and you have 72 hours from when you discover the breach. There’s no real definition of “user,” so our assumption is that it covers commenters, contact-form submitters, and so on. Our hope, of course, is that a breach doesn’t happen, but we have to prepare for the worst.
It isn’t scary, but there’s a lot to it.
As you can see, there’s a lot to the GDPR that needs to be worried about. Done correctly, you can lean on vendors like WordPress’s Jetpack plugin, Microsoft’s Dynamics 365, and MailChimp.
Hopefully, this has demystified the GDPR a little bit and made it a little less scary.
I’m pleased to say that Denny Cherry & Associates Consulting was named by Technology Headlines as the top consulting firm in 2017. Technology Headlines did a nice writeup on Denny Cherry & Associates Consulting to boot. I’m really proud of the group that we’ve put together at DCAC, and we’re starting 2018 off with a bang.
DCAC was selected as the consulting firm of the year for our work with our clients in both Microsoft Azure and Microsoft SQL Server. Our clients have been pleased with our work, and that’s the number one thing for us, making sure that our clients are satisfied with the work that we do for them. That’s the sort of thing that has separated us from some of the other firms in this space and elevated us to this award-winning position.
Be sure to check out our website www.dcac.co as well.
If you were at the PASS Summit, Live360 in November, or SQL Saturday in Slovenia in December, you may have noticed that I couldn’t make it to these events. It turns out that I had a medical issue that I needed to deal with as soon as possible.
Over the six months or so before the PASS Summit, I had a headache that I couldn’t shake. Between trips, I made an appointment with a general practitioner I had used before, who sent me out for an MRI. The insurance company didn’t want to pay for the MRI, but the doctor was able to get them to pay for a CT scan instead.
Do what the doctor says
I went in for an outpatient CT scan at about 7 pm and figured that I’d get the results from my doctor in a few days. Eight minutes after I left the outpatient CT scan, the doctor who reviewed it called my cell phone and told me that he usually doesn’t call patients directly, but that he needed me to turn around and go to my nearest ER. Between the two of us, we decided that the ER attached to his facility was the easiest to get to, and he could call ahead and tell them I was coming. So I turned around and drove back to the hospital.
That facility checked me in and transferred me to the neurology unit, which was at another one of their hospitals. All of this was about a week and a half before the PASS Summit started.
The trip to the emergency room and the CT scan were the start of 35+ days in the hospital, with two surgeries and one procedure. It was not a fun month. After the first of many MRIs, the doctors determined that we were dealing with a 4.3 cm (about 1.7 inches for Americans) tumor on my brain stem. To give a visual reference, that’s bigger than a golf ball. It took about two days before we were able to get into the OR and have the tumor removed.
After the initial surgery I did a lot of physical therapy, first at the hospital, then at an acute physical therapy facility. Sadly, there I picked up a nasty infection, which required a transfer back to the hospital and another surgery to resolve. Thankfully, I’ve been home since the day after US Thanksgiving. The bad news is that I was on IV antibiotics from before I left the hospital until December 29th.
The tumor was not cancerous; that was the most important part of all this. That piece of information from the doctors alone made this much easier to deal with and get through. It also meant not needing chemo to finish my recovery.
Getting Back To Normal
Things are slowly getting back to normal. This was an issue affecting the brain stem, not the brain, which means it affected my fine motor skills, not my memory or my ability to work with SQL Server. The most significant skill that was impacted is walking. I moved to a walker while I was at the hospital, which was pretty slow going. I’ve since moved to mostly a cane, and I’m improving every day.
Speech is where I’m going to need some therapy as well. Right now, when I speak at my regular speed (which is pretty quick), I have a noticeable speech impediment which I need to improve before I can give presentations again.
In short, I’ll be back doing presentations; it’ll just take some time before I’m back on stage.
With the announcement of the CPU “issues” in the last week or so, this week has quickly become Security Week at DCAC on our blogs. The week will be capped off with our security webcast this Friday. If you follow the DCAC blog, you’ll see different security topics from everyone this week, with a new one coming out each day.
I wanted to take this time to talk about our old, and poorly named, friend SQL Injection. To date, this is still the most common way for data to be exposed. As applications get older and more corporate applications get abandoned, the risk only gets worse and worse. As I’ve written about time and time and time again, SQL Injection is a problem that needs to be solved on the application side. However, with enterprise applications that get abandoned, this becomes hard for a business to deal with, as some business unit needs to pay for these changes.
And that need to pay for development time to fix security issues is why SQL Injection issues keep coming up. For old applications, business units don’t see the value in fixing them (or at least verifying that there’s no issue), so the applications just sit there until an issue comes up. And by the time it does, those problems aren’t going to go away; they’re just going to get worse, as you now have customer data (or employee, or vendor, etc.) out there in the wild. Now you have a public relations issue on top of your security issue.
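The application-side fix is parameterized queries. Here’s a minimal sketch; sqlite3 stands in for whatever driver the application actually uses, and the table and column names are made up for illustration:

```python
import sqlite3

# In-memory database with a made-up users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(username):
    # VULNERABLE: user input is concatenated straight into the SQL
    # text, so crafted input can change the query's meaning.
    sql = "SELECT username FROM users WHERE username = '" + username + "'"
    return conn.execute(sql).fetchall()

def login_safe(username):
    # SAFE: the driver sends the value separately from the SQL text,
    # so it can never be interpreted as SQL.
    sql = "SELECT username FROM users WHERE username = ?"
    return conn.execute(sql, (username,)).fetchall()

payload = "' OR '1'='1"
print(login_unsafe(payload))  # injection succeeds: returns every row
print(login_safe(payload))    # payload treated as data: returns nothing
```

The same pattern applies to any database driver worth using; the fix is cheap per query, which is exactly why it’s frustrating when old applications never get it.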
Issues like the ones we saw this month get pretty logos and flashy names, but for the most part these kinds of issues require some sort of server access (yes, I know there are proofs of concept out there). But with SQL Injection, as long as the application is exposed to users, you have the potential for a problem.
We’re not just talking about external users here, but internal as well. Most breaches that companies have where data is taken are internal. In other words, you let the person into your office, gave them credentials to your network and let them go nuts on your network. I couldn’t tell you the number of unauthorized routers, Wi-Fi access points, or applications that scan the network I’ve found over the last 20 years.
So to recap, your biggest threats are employees that are inside your firewall, attacking old applications that haven’t been updated in years but still have access to information worth stealing.
It’s time to secure all those old applications.
On January 19th meet with the crew from Denny Cherry and Associates Consulting at 11 am Pacific Time / 2 pm Eastern Time. During this webcast, we’ll talk about database security in general and specifically how Spectre and Meltdown impact database workloads within the Enterprise.
With Spectre and Meltdown taking over the IT news this month now is the time to review applications and databases to ensure that those applications are properly secured, and the data within those applications is kept safe from prying eyes.
For me, that means no brain tumor this year. Hopefully that’s an easy bar to hit.
There are a lot of versions of SQL Server available today. I’ve seen clients deploying new services on SQL Server 2014, SQL Server 2016, SQL Server 2017 (yes, we have a client on SQL Server 2017 already), and Azure SQL DB. But if you’re deploying a new SQL Server, what’s the right version to deploy?
I’d love to tell you that the answer is always to use the newest version, but it isn’t that simple, and no one should tell you it is.
The first thing to look at is what features of the database platform you need. Do they require SQL Server 2017, or do they work with older versions of SQL Server? The next decision point is what versions the DBA team is ready to support. Our customer that is running SQL Server 2017 is willing to be on the bleeding edge of technology and take risks with new versions of software within days of their release. Not everyone is willing to take those risks; some feel more comfortable on SQL Server 2014 or SQL Server 2016. While I don’t always agree with running older versions for this reason, I do understand it. I may not agree with it, but I do understand it.
After that, it becomes a political decision within your company as to what version of the database to run. I can’t help much with political problems.
If possible, I’d vote for a newer version, but my vote isn’t usually the important one.
Out of the box, SQL Server encrypts some things by default to protect you and your data. Specifically, it encrypts the passwords which are sent from the client to the SQL Server when logging in. This keeps the password from being sniffed on the network when connecting to the SQL Server instance.
SQL Server does this encryption using a self-signed certificate, to ensure that a certificate is always there. If you have configured a different certificate for encryption, then SQL Server will use that certificate to encrypt the login data instead.
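If you want the whole session encrypted, not just the login packet, you typically request it in the connection string. Here’s a hedged sketch of building such a string (the server name is a placeholder, and the driver name is the common ODBC driver at the time of writing; check what your environment actually has installed):

```python
# Build an ODBC-style connection string that requests encryption for
# the entire session.  TrustServerCertificate=no means the client will
# only accept a certificate it can validate -- which the default
# self-signed certificate is not.
def build_connection_string(server, database, encrypt=True):
    parts = {
        "Driver": "{ODBC Driver 17 for SQL Server}",
        "Server": server,
        "Database": database,
        "Encrypt": "yes" if encrypt else "no",
        "TrustServerCertificate": "no",
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

print(build_connection_string("sql01.example.com", "AppDB"))
```

With `Encrypt=yes` and `TrustServerCertificate=no`, you’ll want a proper certificate (one the client trusts) configured on the SQL Server, rather than relying on the self-signed default.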