A really great feature in Azure SQL DB went GA today. The feature gives Azure SQL DB the ability to automatically fail databases over to a secondary replica, without you having to configure your application to handle the failover. You point your application at a VIP, and that VIP automatically handles failover of the resource.
Say, for example, you have a database in US West named db1-west.database.windows.net and the DR copy of it in US East named db1-east.database.windows.net. This feature lets you create the VIP db1-vip.database.windows.net, which automatically points to whichever database is currently active. In the event of a failover of US West, the VIP fails over to the database in US East, the database in US East becomes writable, and when US West is back up, the data syncs back.
Another cool thing this feature does is something most features won't do: it'll trigger a failover that allows for data loss. Normally this would be a very dangerous thing, but the Azure team has come up with a safe way of doing it. When you configure the service to do the failover, you decide how long you want to wait for there to be no data loss. If you want the system back up as soon as possible, select the smallest number; otherwise, select a larger number. This allows you, and the business unit that you support, to decide what level of protection you want built into the system.
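As a rough sketch, here's how such a failover group might be created with the Azure CLI; the resource group, server, and database names are all placeholders, and the grace period (in hours) is the data-loss window discussed above:

```shell
# Sketch only - all resource names below are placeholders.
# Creates a failover group whose listener endpoint
# (db1-vip.database.windows.net) always points at the current primary.
az sql failover-group create \
  --name db1-vip \
  --resource-group my-rg \
  --server db1-west \
  --partner-server db1-east \
  --add-db db1 \
  --failover-policy Automatic \
  --grace-period 1
```

The smaller the grace period, the sooner an automatic failover that allows data loss can fire; a larger value waits longer for a zero-data-loss failover.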
If you are thinking about moving to PaaS, not having a DR option may have been stopping you. That is no longer a blocking point: you now have an easy-to-configure DR solution that you can fail over manually if need be. If you're thinking of moving to Azure, DCAC can help plan and execute that migration. Contact us today to schedule a meeting to discuss whether the cloud is right for you.
Setting up Transparent Data Encryption, Backup Encryption, or basically anything else where you need to restore a certificate to another SQL Server instance can get tricky as soon as the instances run under different service accounts. The problem people usually run into is that all the example code assumes the instances are running under the same service account, but that isn't always the case in the real world. So let's review the code that Nic posted to Stack Exchange earlier today.
/* Server 1 */
/* Create the master key */
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SomeRandomSecureString';

/* Create the certificate to be used for backups */
CREATE CERTIFICATE BackupCert
WITH SUBJECT = 'Backup Encryption Certificate';

/* Backup the master key */
BACKUP MASTER KEY TO FILE = '\\FileShare\DatabaseMasterKey_Master.key'
ENCRYPTION BY PASSWORD = 'SomeRandomPwd';

/* Backup the certificate and its private key */
BACKUP CERTIFICATE BackupCert TO FILE = '\\FileShare\BackupCert.cer'
WITH PRIVATE KEY (FILE = '\\FileShare\BackupCert.pvk',
ENCRYPTION BY PASSWORD = 'RandomEncryptionPwd');

/* Server 2 */
/* Create master key */
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'SomeRandomSecureString';

/* Restore the cert */
CREATE CERTIFICATE BackupCert FROM FILE = '\\FileShare\BackupCert.cer'
WITH PRIVATE KEY (FILE = '\\FileShare\BackupCert.pvk',
DECRYPTION BY PASSWORD = 'RandomEncryptionPwd');
-- Msg 15208, Level 16, State 6, Line 32
-- The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

/* Try restoring the master key instead */
/* (note: the decryption password must match the one used for BACKUP MASTER KEY) */
DROP MASTER KEY;
RESTORE MASTER KEY FROM FILE = '\\FileShare\DatabaseMasterKey_Master.key'
DECRYPTION BY PASSWORD = 'SomeRandomPwd'
ENCRYPTION BY PASSWORD = 'RandomEncryptionPwd';
-- Msg 15317, Level 16, State 2, Line 39
-- The master key file does not exist or has invalid format.
Now, as we can see, the creation of the certificate failed, but only when running each server under its own service account. The question becomes: why?
The answer is depressingly simple: NTFS permissions. By default, SQL Server makes the backup of the certificate available only to the account running the instance that created the certificate backup. So if you need to restore the certificate using the CREATE CERTIFICATE syntax shown in the example, you'll get an error. If you change the permissions so that the second instance's service account can read the certificate backup file you created, the restore will work.
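Assuming, for the sake of illustration, that the second instance runs under a hypothetical account named DOMAIN\sqlsvc2, granting it read access to the backup files is one command per file:

```bat
REM Hypothetical account name - substitute your instance's service account.
icacls \\FileShare\BackupCert.cer /grant "DOMAIN\sqlsvc2:(R)"
icacls \\FileShare\BackupCert.pvk /grant "DOMAIN\sqlsvc2:(R)"
```

Once the second service account can read both files, the CREATE CERTIFICATE statement from the example completes without the 15208 error.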
The abstract from the PASS website:
With increased focus on data breaches, security issues, and privacy laws, organizations are feeling greater pressure to provide sufficient data protection. GDPR brings substantial fines and potentially criminal prosecution. Are you ready? Are your database systems ready?
During this full day Precon, we will review a variety of ways to secure your SQL Server databases and data from attack. In this session, we will review proper network designs, recommended firewall configurations, and physical security options. We will also review data encryption options, password protection options, using contained databases, and Always On Availability Groups security.
There will also be discussions about additional measures which should be taken when working with Internet-facing applications. From there we will move to the most common attack vector: SQL Injection. We will include all the ways that attackers can use SQL Injection to get into your system and how to protect against it. The security options for database backups is the next topic on the list followed by proper SAN security designs. We will then finish up by reviewing the auditing options which are available against a database and how they can be used to monitor everything else which we’ve just talked about during the day.
There are no prerequisites to attend the precon, so anyone can attend, no matter their experience level with SQL Server (though some experience with SQL Server would be helpful).
There are already 100 seats sold, but the room will hold plenty more people, so there are still seats available. However, don't wait too long, or you could find the pre-con completely sold out. So book now, while there are still seats available.
I’ll see you at the PASS Summit,
I’m thrilled to report that the folks at the PASS Summit have asked me to put together the speaker idol competition again.
If you haven't seen Speaker Idol before, it's kind of like the TV show of a similar name. But instead of winning a recording contract that'll never be fulfilled, you win a speaking spot at the next PASS Summit. One of the parts I love about Speaker Idol is that by the end of the PASS Summit we know who the first speaker for the next year is. It also gives 12 speakers the chance to speak at the PASS Summit, lightning-talk style, with immediate feedback from our panel of judges, who are all top PASS Summit speakers.
If you've always wanted to speak at the PASS Summit but haven't been selected, this is your chance to get in front of the community and make yourself known. Several past Speaker Idol contestants have gone on to speak at the PASS Summit even without winning the competition. One of the benefits of competing in Speaker Idol is the visibility it gives you with the program committee, the judges, and the community at large, so that more people are aware of you and your speaking style. That alone will increase your odds of being selected to speak at the PASS Summit in the future (even if you don't win the competition).
If you competed in a prior speaker idol (and you didn’t win a speaking slot at the PASS Summit), you’re welcome (and encouraged) to submit again.
Here are a few blog posts that some of our prior contestants have posted about the experience.
Now for the rules (or you can just sign up here):
You must have spoken at one of the following events (or a large international event, in a format other than a lightning talk, where your session had at least 30 minutes of content delivered by you):
- SQL Saturday
- SQL Bits
- SQL Server Days
- Microsoft Ignite
- Dev Connections
- Another large event with a national or international reach
You have not spoken at the PASS Summit or PASS Business Conference for a breakout, spotlight, or pre-con session. (If you gave a lightning talk and no other session, you are eligible.)
You are not scheduled to speak at the 2018 PASS Summit.
You will be in Seattle during the PASS Summit (attending PASS isn’t needed, and if you aren’t attending PASS we will get you to the session room for Speaker Idol).
You are available to attend whichever speaker idol session(s) you are assigned to (including the final session on Friday afternoon).
You have a 5-minute technical presentation which you can give. (Time management is your responsibility.)
You sign up on the registration form before September 8th, 2018 at 11:59:59 pm (23:59:59) according to Google's time function in their spreadsheet application. Late applications can't be considered, so just don't be late. There's plenty of time.
You are not a member of the PASS Board of Directors or an employee of PASS.
You are not employed by a sponsor of Speaker Idol.
If a contestant drops out of the contest before the PASS Summit starts, the organizer (Denny Cherry) will reach out to the next person on his list that wasn't accepted and offer them a position in the contest. People will be offered spots in the competition until someone accepts. The organizer will NOT announce how many people, if any, turned down a spot in the competition.
If a contestant drops out of the contest during the PASS Summit (for purposes of this clause “during the PASS Summit” is defined as from the time that Denny gets to Seattle, which will probably be Saturday or Sunday before the conference, until Friday afternoon), no changes to the lineup will be made.
If a contestant wins their round and cannot continue to the finals for any reason, the runner-up from their round will be declared the winner of their round and will be moved to the finals.
If a judge is not able to complete the competition, they will not be replaced.
Other Idol Rules
The wild card winner will be selected by the judges at the end of the Friday morning round from the runners-up from each round. If a winner is not able to compete in the finals, and the runner-up is already speaking in the finals, the judges may consider all contestants from that round as eligible for the wildcard slot.
Judges do not have to recuse themselves because a friend is in the competition. Judges are expected to be fair (We’ve proven that over the years we can act like adults.)
You are responsible for your travel to the PASS Summit to compete in this competition.
You may present your session in any format that you'd like. You can view the recordings from 2015 via PASS TV (the 2016 recordings currently require that you purchased the recordings, which I'm working on fixing, and I couldn't find the 2017 sessions when I looked).
While you can use any format you'd like when giving your presentation, it is recommended that you use a format which will show the judges how your presentation skills will translate from a 5-minute session to a 75-minute session, as that's how long PASS Summit sessions are.
Contestants are allowed to watch the other sessions during speaker idol and make changes to their presentation as they see fit.
In case you missed it above, the Sign Up Form is <- there.
Stay tuned for announcements on the judges and the contestants. (Judges won’t be announced until a little later.)
Shortly after the sign-up period closes, the applicants will be sifted through, and the 12 contestants will be selected and emailed. They'll have a short period of a couple of days to accept the invitation to compete. Once we have 12 "yeses", we'll all get on the phone to talk through what you should expect, talk about what kinds of sessions you want to give, and answer any questions from this year's 12.
That’ll give a few weeks for the 12 contestants to put together their slides for the speaker idol and prepare to give the best 5-minute sessions that they can.
See you at the Summit,
If you do any work in Azure, you've probably done deployments using JSON scripts. Originally, JSON templates in Azure didn't support much logic: if you put an object in the JSON script, it would be created, no matter what.
Thankfully that isn't the case anymore. We now have conditions in Azure's JSON templates, which, when used correctly, can be very powerful. I've only had a couple of opportunities to use this new functionality, but it's incredibly powerful. It's all done by adding a single line of code within the object in question:
"condition": "[equals(parameters('newDeployment'), 'true')]",
As long as the condition evaluates to true, the object with this line in its definition will be created. Let's look at an example to see how this can be used.

"condition": "[equals(parameters('newDeployment'), 'true')]",
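To give a fuller picture, here is a minimal sketch of how the condition sits inside a resource definition; the availability set below is hypothetical, and the apiVersion and parameter names are illustrative only:

```json
{
  "condition": "[equals(parameters('newDeployment'), 'true')]",
  "type": "Microsoft.Compute/availabilitySets",
  "apiVersion": "2017-12-01",
  "name": "[parameters('availabilitySetName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "platformFaultDomainCount": 2,
    "platformUpdateDomainCount": 5
  }
}
```

When the condition evaluates to false, the deployment simply skips this resource rather than failing.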
As we can see in the sample code, as long as the value of the parameter "newDeployment" is "true", this availability set in Azure will be created. The parameter is set to $true or $false in PowerShell before it's passed into the JSON, so normal handling can happen in PowerShell beforehand.
Hopefully, others will find this as useful as I did when creating JSON deployment scripts in Azure.
Back in the old days of SQL Server 2012, we’d always tell people that creating indexes was half art and half science. And that’s true, but we’d create some non-clustered indexes (designed correctly of course), and the system would magically go faster. These days, in modern versions of SQL Server (or SQL Database if you’re using Azure) it can be a lot more art than science.
You have to know the data, at least somewhat, to create truly useful indexes. You need to know what the application is going to do with the data (you don't need to know what the application looks like, just what queries it runs against the database). If you have a general idea of the queries hitting the server, that'll point you to the kinds of indexes that should be created.
If your application is doing summary reports on large amounts of data, sales by month over the last ten years (as an example) then a Clustered ColumnStore index will probably be your best option for the table.
If the application is reporting on small amounts of data, more of a data dump than a summary report, then a ColumnStore index probably isn't the best option. Instead, a non-clustered index would probably serve the application, and therefore performance, better.
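As a sketch of the two approaches (the table and column names here are made up for illustration):

```sql
-- Large summary reports (e.g. sales by month over ten years):
-- a clustered columnstore index over the whole table.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales ON dbo.Sales;

-- Narrow lookups and small data dumps:
-- a targeted non-clustered index instead.
CREATE NONCLUSTERED INDEX IX_Sales_OrderDate
    ON dbo.Sales (OrderDate)
    INCLUDE (CustomerID, TotalDue);
```

Which of the two serves you better depends entirely on the query patterns the application sends at the table.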
It just depends on what the application needs and what it does. As a consultant, I have to drop into the company, review the application and quickly figure out what kind of application it is, and what kind of indexes need to be added (as well as how to define the indexes). It’s a hard job, but it’s a fun job, and I’m thrilled that I get the chance to do it.
But when push comes to shove, indexes are still an art form to some degree; and I think they always will be to some extent. We have some great tools like pssdiag and the query store that can help with this, but when you come down to it, it’s still people looking at a server and making it faster.
In just a couple of short weeks, I'll be presenting an all-day session on Microsoft Azure. The session is titled "Designing Azure Infrastructure for Data Platform Projects" and will be held on the 21st of May, 2018. This session will cover all the components of the Azure platform, including touching on the database platforms.
Hopefully everyone who's interested in Azure can come out and see this great session on the various components of Azure. The session will be in Lingen, Germany on the 21st, with the SQL Grillen conference the next day on the 22nd. I'll see everyone there in Lingen.
This question actually comes up a lot. With the new managed storage it's less of an issue, as managed storage takes care of the high availability for you when you are building your storage accounts. However, if you are still manually building storage accounts, or you have older systems in Azure that use either classic storage accounts or manually created storage accounts in Resource Manager (basically anything that isn't managed storage), then you have to take care of this yourself. Doing this is fairly easy, but it requires a little checking and manual work.
Setting this up requires an understanding of how the Azure backend works. You've probably heard by now that Azure keeps three copies of all data uploaded to it. That's true, but it isn't the HA I'm talking about in this post. What I'm talking about here is when there are two members of an availability group, web farm, etc. Azure storage accounts are created on what are known as stamps. A stamp is basically the physical hardware that sits under VMs and storage accounts. The trick is to make sure that the storage accounts are on different stamps. There's no way to do this in the Azure portal or with PowerShell. The upside is that it's all still pretty easy to do. All it takes to see if your storage accounts are on different stamps is a DOS window.
Each storage account has a public name, and an IP address to go with it. Each stamp has an IP address. Looking at the IP of the storage account will tell you if storage accounts are on different stamps or the same one. If the storage accounts are on different stamps, then the VMs that use the storage accounts will be highly available. Looking at the storage accounts behind the two MySQL servers that run the www.dcac.co website (see the image) we can see that the IP addresses for the storage accounts are different, which means that they are on different stamps.
If, when you check the storage accounts, the IP addresses match, fixing the problem is pretty straightforward: delete the storage account and recreate it. Odds are it'll be created on a new stamp; if it isn't, delete the storage account, wait a few minutes, and create the storage account again. That should cause the storage account to be created on another stamp and give you the high availability you're looking for.
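The check itself can be scripted; the account names and IP addresses below are hypothetical, and the only real logic is comparing the resolved addresses:

```shell
# Resolve each storage account's public name to an IP, e.g.:
#   nslookup mystore1.blob.core.windows.net
#   nslookup mystore2.blob.core.windows.net
# (account names are hypothetical - substitute your own)

# Two accounts are on the same stamp when their IPs match.
same_stamp() {
  [ "$1" = "$2" ]
}

# Hypothetical resolved addresses:
if same_stamp "104.40.1.10" "104.40.2.20"; then
  echo "same stamp - delete and recreate one account"
else
  echo "different stamps - highly available"
fi
```

Run the two nslookups, plug the resulting IPs into the comparison, and you know immediately whether your VMs' storage is actually spread across stamps.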
There have been some questions recently about how people should pass certification exams.
Taking the exams
The answer is pretty straightforward: use the product that the exam is on. Certification exams (I've only taken the Microsoft ones, but I'm sure other vendors' are similar) are built on actual product use. Now, you may not use every feature of the product in your day-to-day life, but the exams aren't going to be based on what you do. They'll be based on what the product can do.
Let’s look at one of the SQL Server exams for example, specifically “Administering a SQL Database Infrastructure.” The notes on the exam say what it will cover, and in what percentages. For example, 35-40% of the exam will be on managing and monitoring SQL Server instances. And if you aren’t sure what’s considered “Manage and monitor SQL Server instances” the website spells this out as “Monitor database activity, queries, indexes, statistics, and instances.”
Now, you may be thinking that you use Redgate's SQL Monitor or SolarWinds' Database Performance Analyzer (or another tool) to monitor your instances. These exams won't be testing on those. Assume that only the native tools exist, because that's what Microsoft will be testing you on. After all, Microsoft can't test on another company's product.
There may be times when you don't agree with the percentages on the exams. I know a big argument exists over how much XML should be on the exam. I refer you to the notes on the exam on the Microsoft site. If it says the exam may be 40% something, then it probably will be.
What’s the best way to PASS the exams?
First, let's talk about brain dumps. They're not always right, and there's no way to know when they're wrong. And when they are wrong, who are you going to complain to, the person or company that violated their NDA with Microsoft to make the brain dump?
When you take Microsoft Exams (or any exam), you sign an NDA that you won’t talk about the content of the exam, and you won’t cheat on the exam. That means that if you use a brain dump, or tell other people to use a brain dump, or help build a brain dump, you’re cheating and potentially in violation of your NDA with Microsoft. What that NDA says, in short, is that Microsoft can revoke your certifications and prevent you from passing any future certification exams. All for talking about their exams. Now you may feel like that’s a bit strong, but Microsoft takes their anti-piracy actions against the certification exams very seriously.
The good news is that there's an easy way to take the exams more than once without paying twice: the Microsoft Exam Replay option lets you retake the exam if you don't pass. What you can do with this is buy the Replay and take the exam. If you don't PASS, that's OK; you'll now know what you didn't know, so you can study up in those areas and take the exam again for free.
Also, keep in mind, the exam documents online will say how much usage of the product is expected to PASS the exam. Most of the exams assume 3-5 years experience. If you’re just getting into technology, then you probably aren’t ready for the exams.
Practice tests are written by different people than the exams, so they'll have different questions. This is on purpose; if the questions were the same, that would be cheating. While practice tests are good, they aren't the actual exam.
Now, I know this hasn't all been good news, but it wasn't supposed to be. It was supposed to be an honest breakdown of what the certification exams look like and how to study for them. The first step is to have a couple of years' experience using the product. The second step is to use the Exam Replay option that Microsoft offers, or whatever they move toward in the future.
Not all cloud providers are the same. Some are flexible and will grow and shrink with your needs and business (Azure, Amazon, Google Cloud), and some will not (IBM, Rackspace, other bare metal clouds). The whole goal of using a cloud provider is to be able to scale up and scale down as needed, preferably without having to do anything past the initial setup.
For example, let's say that we have an IoT application that typically gets 100,000 new messages per second uploaded to it. At that rate, any cloud (or "cloud") provider will do. Now say that a feature in Time magazine is written about our product, and our IoT product sales shoot through the roof, so instead of 100,000 new messages per second, we are now getting between 10,000,000 and 100,000,000 messages per second. How will your cloud handle this? If you're in one of the public clouds like Amazon, Azure, or Google Cloud, then your resources should magically scale. Boxes on a bare metal cloud will stop responding until someone in IT realizes that they need to scale up, provisions a bunch of machines, and configures those machines. And this assumes that your cloud provider even has some IoT framework in place to handle these messages.
Now, odds are you don't have a wildly successful IoT application. But you've probably got a website that customers hit to access your company in some way. Maybe they place orders on your website. What would happen if a massive amount of web traffic started coming in with no notice, and IT didn't hear about it until the site crashed? Would you want your IT department deploying and configuring new servers (bare metal), or would you want the cloud to handle this by automatically scaling the web tier wider so that you can handle the additional requests?
I can tell you what I want for our customers: I want the web tier scaling automatically so that we can keep taking orders, rather than the website being unavailable for hours (or days) depending on how quickly the bare metal provider can respond to the need for new hardware, and on the IT department's ability to spin up new services on those new resources.
If you're using some bare metal cloud provider and thinking that you're getting the uptime you were promised, you probably aren't, unless you have an architect in place to make sure that you've got HA and DR built into your platform. That's the biggest thing you aren't getting with bare metal cloud providers (beyond auto-scale): any HA/DR. If you think you're getting some HA/DR, you probably aren't (at least not what I'd call HA/DR) without paying for a lot of extras. (The same applies if you're doing IaaS in the cloud; I'm talking about PaaS services in Azure, Amazon, or Google Cloud.)
What this all boils down to is that "cloud" has become a marketing word, not a word that means what it says anymore. Companies will use "cloud" for anything from "VMs on someone else's hardware" all the way through "auto-scaling Platform as a Service." And the closer you are to "VMs on someone else's hardware," the further you are from the true nature of cloud platforms.