SQL Server with Mr. Denny


April 1, 2019  4:00 PM

Azure NVMe Storage and Redundancy

Denny Cherry

With Microsoft Azure now supporting Virtual Machines with NVMe storage, things get a little different when it comes to handling recoverability. Recoverability becomes very important because NVMe storage in Azure isn't durable through reboots. This means that if you shut down the server, or there is a host problem, or the VM host has to be patched and rebooted, then anything on the NVMe drive will be gone when the server comes back up.

This means that to keep data on the VM past a shutdown you need to think about high availability and disaster recovery.

High Availability

You need to have High Availability built into the solution (with Availability Sets or Availability Zones), which probably means Always On Availability Groups to protect the data. The reason you need Availability Groups is that you need to be able to keep the data in place after a failover of the VM. When the VM comes back up, you'll see the server is up, but it may not have any data. So what needs to be done at this point? You need to create a job on every node that automatically checks whether the databases are missing and, if they are, removes them from the AG, drops them, and reseeds them from the production server.
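A minimal sketch of what such a job might run on a secondary replica is below. The availability group setup and the assumption that reseeding happens from the primary (via automatic seeding or a fresh backup/restore) are mine, so treat this as a starting point rather than a finished job:

-- Hypothetical check for AG databases that came back missing or unrecoverable
-- after the NVMe drive was wiped.
DECLARE @db sysname, @sql nvarchar(max);

DECLARE missing CURSOR FOR
    SELECT adc.database_name
    FROM sys.availability_databases_cluster adc
    LEFT JOIN sys.databases d ON d.name = adc.database_name
    WHERE d.name IS NULL OR d.state_desc <> 'ONLINE';

OPEN missing;
FETCH NEXT FROM missing INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Pull the broken copy out of the AG on this replica and drop what's left.
    IF DB_ID(@db) IS NOT NULL
    BEGIN
        SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET HADR OFF; '
                 + N'DROP DATABASE ' + QUOTENAME(@db) + N';';
        EXEC (@sql);
    END;
    -- Reseeding then has to be kicked off from the primary, either through
    -- automatic seeding or a backup/restore and rejoin.
    FETCH NEXT FROM missing INTO @db;
END;
CLOSE missing;
DEALLOCATE missing;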

Because of the risk of losing the data that you are protecting, you probably want at least three servers in your production site so that if one server goes down, you still have redundancy in your system.

Disaster Recovery

You need to have Disaster Recovery built into your solution as well as high availability. Because of the risk of losing data if a set of VMs fails, you need to plan for a failure of your production site. The servers that you have in DR may or may not need to have NVMe drives in them; it all depends on why you need NVMe drives. If you need the NVMe for reads, then you probably don't need NVMe in DR; if you need NVMe for writes, then you probably do need NVMe in DR.

While a full failure of your production Azure site is improbable, it is possible, and you need to plan for it correctly.

If you have NVMe in DR, then you'll want the same sort of scripts to reseed your databases in the event of a SQL Server restart.

But this is expensive

Yes, it is.

If the system is important enough to your business that you need the speed of NVMe drives, then you can probably afford the extra boxes required to run the system. Not having HA and DR, then complaining that there was an emergency and the system wasn't able to survive, won't get a whole lot of sympathy from me. By not having HA and DR, you made the decision to have the data go away in the event of a failure. If these solutions are too expensive, then you need to decide that you don't actually need this solution and find something else to run the system on.

Sorry to be brutal, but that’s the way it is.

Denny

March 25, 2019  4:00 PM

Another Year Gone, Another Year as a VMware vExpert

Denny Cherry

Another year has passed, and VMware has decided to make me a VMware vExpert again. I believe that this is the 5th time I've been a VMware vExpert (the 4th time in a row; there was a gap year because I forgot to fill out the form, it was a thing).

I'm thrilled that VMware has decided to give me this award for the 4th time in a row. It's a great honor to be selected for the VMware vExpert award, more so because I'm not a sysadmin by trade, yet I'm able to talk to sysadmins about databases and the best options for hosting them within a VMware environment.

Thank you, VMware, for recognizing all the work that I've been doing and that I plan to keep doing throughout the next year.

Denny


March 18, 2019  4:00 PM

I thought my days of Linux were over

Denny Cherry

Oh, how wrong I was. Back in the day, all I worked on was Microsoft SQL Server. These days I'm doing some Microsoft SQL Server and a decent amount of Microsoft Azure and Amazon AWS cloud work. With all three of those, there's a lot of Linux in play. Microsoft SQL Server has supported Linux since the release of SQL Server 2017 at Ignite 2017. Microsoft Azure and Amazon AWS have both supported Linux since (I believe) they first supported VMs in their cloud platforms (forever in the world of computers).

Back when I had just a few years of experience with SQL Server (and IT in general), I also owned and managed a large (at the time) Oracle database which ran on Unix. Once that was no longer my baby to manage, I assumed my *nix career was over. And it was, for a while, but now Linux is back, and this time in the SQL Server world.

Looking at the servers that DCAC has in our Azure environment, we have more Linux boxes than Windows. Our website runs off of PHP running on a pair of Linux servers. Our database is MySQL running on a couple of Linux servers (eventually we'll move all this over to Azure PaaS, but it will still be running on Linux). The only production servers in Azure that we have running Windows are the Active Directory domain controllers, one of which also syncs from Active Directory to Azure Active Directory to handle our sign-in, Office 365, etc. That's it. Everything else is Linux.

Our lab environment in our CoLo is also a mix of Windows and Linux. We have a few tools that were built by Microsoft that run on Windows, but we've also got a decent amount of Linux in the data center as well. By the time this is published (I'm writing this on the flight to the PASS Summit in November 2018) we'll have a Docker cluster up and running as well (unless I get lazy and don't get up to the CoLo to rack the servers for it). This Docker cluster is Linux-based and will let us run a bunch more Linux servers.

Your point is?

The point that I'm trying to get to in all of this is that if you are a database administrator who thought they were going to stay in the Windows world forever, think again. You don't have to be an expert in Linux to manage these systems, but you'll need to understand the differences between Windows and Linux. SQL Server has a few differences between the platforms, and those differences are significant. As a Windows DBA you'll want to be able to navigate the Linux operating system and tell your systems teams where SQL Server is storing the database files (they are in /var/opt/mssql/data if anyone asks) so that they know which mount points need to be made bigger.
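If you'd rather answer the mount point question from inside SQL Server than from a shell prompt, a query along these lines (using the standard file and volume DMVs) shows where every database file lives and how full each volume is:

-- List every database file, the volume or mount point it sits on,
-- and how much free space that volume has left.
SELECT DB_NAME(mf.database_id)        AS database_name,
       mf.physical_name,              -- e.g. /var/opt/mssql/data/MyDb.mdf on Linux
       vs.volume_mount_point,         -- the mount point your sysadmins may need to grow
       vs.total_bytes / 1048576       AS total_mb,
       vs.available_bytes / 1048576   AS free_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
ORDER BY vs.volume_mount_point, database_name;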

You don't need to know everything, but the basics of Linux are going to take you a long way.

Denny


March 11, 2019  4:00 PM

Webcast: Power BI: Where Should My Data Live?

Denny Cherry

At 11 am Pacific / 2 pm Eastern on April 5th, 2019, DCAC will proudly be presenting the webcast "Power BI: Where Should My Data Live?", presented by our own Meagan Longoria and Kerry Tyler.

Power BI provides many options for acquiring and creating datasets. Do you know the difference between an Imported Model, a Live Connection, and DirectQuery? Have you tried Composite Models? Are you aware of the newest features related to Power BI Dataflows and XMLA endpoints? This webcast will review your options for where to store data and explain the factors that should be used in determining what option is right for you. Obvious requirements such as data size, license costs and management, and desired data latency will be discussed. We’ll also talk about other factors such as the desire for self-service BI and avoiding data model sprawl. Join us for this webcast to learn how to determine the most appropriate type of Power BI dataset for your use case.

Click through and sign up for the webcast today. You’ll be able to download a calendar entry to remind yourself about the webcast. See you on April 5th.

If you register for the webcast and you can't make it, fear not: the session will be recorded and available for viewing after the fact as well.

Denny


March 5, 2019  9:19 PM

Recovering SQL Server Databases from a Failed SAN Drive

Denny Cherry

I was recently cleaning the house when I got a phone call from a client that I hadn’t heard from in a while.  Their SAN had decided that it was going to corrupt one of the LUNs. This LUN happened to be the Log drive on one of their SQL Servers.

Needless to say, SQL was down.

We created the LUN, created the folders on the drive, and started SQL. All the databases on the server showed "Recovery Pending" as their state in Management Studio.

Manually bringing the databases online wasn't going to work, as that caused error 5181, which reads:

Could not restart database MyDatabase. Reverting to the previous status.

Apparently, that wasn’t going to work. I was going to need a bigger hammer, in this case, a much bigger hammer. That bigger hammer was emergency mode and checkdb.

The users would be back on the system in about 8 hours, and the users, in this case, were lawyers, some of whom might need to be in court in the morning; "the system was down" wasn't an excuse that was going to fly in front of a judge.

Since I had 169 databases to bring online, a script would be needed to handle this. That script was pretty rough, but it got the job done. Thankfully no users would be using the databases, so there "should" be no transactions in process. We wouldn't be able to recover them anyway.

-- Loop over every database stuck in Recovery Pending (state = 3), put it in
-- emergency and single-user mode, run CHECKDB with REPAIR_ALLOW_DATA_LOSS to
-- rebuild the missing log, then bring it back online in multi-user mode.
declare @name sysname
declare @sql nvarchar(max)
declare cur CURSOR for select name from sys.databases where state = 3 -- RECOVERY_PENDING
open cur
fetch next from cur into @name
while @@FETCH_STATUS = 0
begin
set @sql = 'alter database ' + QUOTENAME(@name) + ' set emergency, single_user
dbcc checkdb (' + QUOTENAME(@name, '''') + ', repair_allow_data_loss)
alter database ' + QUOTENAME(@name) + ' set online, multi_user'
exec (@sql)
fetch next from cur into @name
end
close cur
deallocate cur

The script ran, all the databases came online (it took a while to run; the databases on the server total about 9 TB), and all was well.

Denny


February 25, 2019  4:00 PM

Employers shouldn't be selling employee data

Denny Cherry

Employers and employees have, by their nature, a very trusting relationship. The employee trusts that the company will keep paying them, and the employer trusts that the employee will do the job that they are being asked to do. The employee also trusts that the employer will act professionally. This trust includes NOT selling the employee's personal data to the highest bidder (or any bidder, for that matter). Companies out there do this to their employees, and they shouldn't be doing it. The small amount of revenue is not going to offset the loss of trust that employees have in the company they work for.

Selling customer data is a horrible enough practice, but companies can stand behind their EULA, which probably says something about how the person using the service is the product being sold and the actual customer is the company buying the data (you didn't think Facebook wasn't selling every bit of data they gather, did you?). Selling employee data is a whole other mess to deal with, as employees have no way to opt out of it and no way to stop it. Even leaving the company isn't going to help, because once the data has been sold, it's gone, and you have no idea what the purchasing company will be doing with it (here's a hint: they're probably going to sell it).

If you're someone who's in a position to approve or deny these sorts of sales of employee data, the answer is "no", every time, with no exceptions. Your employee data shouldn't be for sale, and your employee base is probably going to be pretty low-value data to purchasers anyway; once people find out how and why the purchasing company got their data, they won't be purchasing anything any time soon.

Denny


February 18, 2019  4:00 PM

NVMe Storage in the Cloud using the Lsv2-series Azure VMs

Denny Cherry

There are those cases when VMs need super fast storage in the cloud. Now Azure has an offering to get you that storage speed. With the introduction of the Lsv2-series of VMs, you can now get NVMe storage on your Azure VMs. Like with the other VM sizes, the more CPU that you purchase, the more storage you can attach to the VM. The NVMe storage is assigned in roughly 1.9TB chunks, from a single 1.9TB volume on the smallest machine (the L8s v2 VM) up to about 19TB (ten 1.9TB volumes) on the largest machine (the L80s v2 VM).

One big difference that you'll see with these VMs is that this storage isn't network attached. Because the storage isn't network attached, the NVMe disks are local to the host, so if the VM moves to another host in the Azure environment, the data will be gone. The same applies if you power down the VM and deallocate it (basically, if you stop paying for it). As soon as the VM is deallocated, any data stored on the NVMe disks is gone.

The big question for DBAs is: how do you use these for SQL Server?

Using these Lsv2-series VMs is going to require some High Availability and Disaster Recovery planning. You'll want to use Availability Zones (if they are in your region) so that you can keep two copies of the data in two different buildings within one region. You'll also want to set up your database to span regions so that if there's a site failure, you don't lose all of the data within the database (remember, on a failure of the VM, the data is gone). Setting this up will require either a Failover Cluster (using SIOS DataKeeper) or an Availability Group so that your data is stored in multiple data centers. Designing a solution for these new machines is going to require a decent understanding of the hardware that's behind them so that you've got a good understanding of what's happening under the covers (like the fact that your storage is deleted when you deallocate the machine). Thankfully the experts at Denny Cherry & Associates Consulting can help with designing these environments. Even though these VMs are brand new, we have already evaluated them with our customers' workloads and have put them in place where it makes sense.
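As a rough illustration of the Availability Group route, something like the following (with placeholder server names, endpoint URLs, and database) creates a two-replica AG with automatic seeding, so a replica that loses its NVMe data can be reseeded over the network once the broken copy is cleaned up:

-- Hypothetical two-replica Availability Group with automatic seeding.
-- Server names, endpoint URLs, and the database name are placeholders.
CREATE AVAILABILITY GROUP [NvmeAG]
WITH (DB_FAILOVER = ON)
FOR DATABASE [MyDatabase]
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode1.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        SEEDING_MODE      = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL      = N'TCP://sqlnode2.contoso.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        SEEDING_MODE      = AUTOMATIC);
GO
-- Run on each secondary: join the AG, then allow automatic seeding to create the databases.
ALTER AVAILABILITY GROUP [NvmeAG] JOIN;
ALTER AVAILABILITY GROUP [NvmeAG] GRANT CREATE ANY DATABASE;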

Azure has a feature called Constrained Cores where you can limit the number of cores presented to a VM. This feature gives you all the memory, storage, network bandwidth, etc. of a machine without the SQL Server licensing cost of the CPUs that you don't need. Constrained Cores is not available for the Lsv2-series VMs (yet), so you'll need to do some right-sizing to make sure that you're using the right size VM for your workload. But even without Constrained Cores, these VMs are going to be a game changer in the speed of storage for VMs.

Denny


February 11, 2019  4:00 PM

Should SQL Servers be rebooted weekly?

Denny Cherry

There are people out there that reboot SQL Servers (or all servers) weekly. Usually the argument that I hear for doing this is that SQL Server has memory leaks which can only be resolved by rebooting Windows.

My argument against this is that no, SQL Server doesn't have memory leaks that cause it to need to be rebooted weekly. SQL Server is designed to use all the RAM that's allocated to it. By default SQL Server is configured to use 2147483647 MB of RAM, which is effectively all of the RAM in the server. You can and should change this setting to a lower number so that there is memory left for Windows and anything else that's installed on the box (anti-virus, SSIS packages, SSAS, etc.). The people that I typically see doing these server reboots have been doing them for 15-20 years, typically because back then rebooting a server would fix a problem. It won't anymore.
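For reference, the setting in question is "max server memory"; lowering it is a couple of lines of T-SQL (the 112 GB value below is just an example for a 128 GB box, not a recommendation):

-- Cap SQL Server's memory so Windows, anti-virus, SSIS packages, etc. have room.
-- 114688 MB (112 GB) is only an example value; size it for your own server.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 114688;
RECONFIGURE;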

There are servers running SQL Server that have been up for months, or years, without issue (I'm ignoring the fact that you aren't patching SQL Server and Windows here). I've personally seen boxes with pretty heavy workloads on them that have run for years without having to be rebooted.

SQL Server doesn't have any memory leaks. It's a world-class product from Microsoft that runs millions of databases (if not more) and makes Microsoft billions of dollars in revenue each year. Whatever memory leak you think there is in SQL Server, there probably isn't one. If you think there is, then contact CSS at Microsoft to report the issue, as Microsoft needs to patch the problem.

The biggest problem that people will see is that the buffer pool is flushed when SQL Server restarts, which forces SQL Server to read all the data it needs from disk because there's no data in RAM. That causes performance issues right after the restart of SQL Server.
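If you want to see how much data a restart would throw out of cache, a quick look at sys.dm_os_buffer_descriptors shows what each database currently has sitting in the buffer pool:

-- Roughly how much of each database is currently cached in the buffer pool.
-- Pages are 8KB, so the page count * 8 / 1024 gives MB.
SELECT DB_NAME(database_id)  AS database_name,
       COUNT(*) * 8 / 1024   AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;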

If you're restarting SQL Server because there's some performance issue that "goes away" after a SQL Server restart, you're going to be better off dealing with the root cause of whatever is making the problem come up to begin with. It's probably going to be indexing or statistics; that's just a guess, as every server is different, but it's probably a safe one. Looking at the server will tell us what the problem is for sure, so we can solve it (and hopefully make the server faster in the process).

Denny


February 5, 2019  3:49 PM

DCAC Grows Again, Adding Meagan Longoria to our team of Excellent Consultants

Denny Cherry

Today we are proud to announce that Denny Cherry & Associates Consulting is growing our team of consultants again.  Our newest team member is Meagan Longoria from Denver, CO.

Meagan is a Microsoft Data Platform MVP living with her adorable dog Buster in Denver, Colorado. She is an experienced consultant who has worked in business intelligence, data warehousing, and database development for over ten years. She enjoys creating solutions in Azure and SQL Server that make data useful for decision makers. Her areas of expertise include data visualization, dimensional modeling, and data integration design patterns.

Meagan enjoys sharing her knowledge with the technical community by speaking at conferences, blogging at DataSavvy.me, and sharing tips and helpful links on Twitter (@mmarie).

With the addition of Meagan, our team of consultants now includes 5 Microsoft Data Platform MVPs, which is more MVPs than most large consulting companies have.

We’re thrilled to have Meagan joining our team.

Denny


January 28, 2019  4:00 PM

Connecting to Office365 with PowerShell

Denny Cherry

When looking at support documents for Office365, you’ll often see instructions like “connect PowerShell to Office365,” but there are no instructions on how to do that.  I was going through this, and thought, this is a problem I can fix.  It only takes a few lines of PowerShell to connect your PowerShell to an Office365 environment.  Basically what you are doing is PowerShell Remoting.  The underlying code that you’ll want to run is this:

# Allow locally created scripts and the imported remote cmdlets to run.
Set-ExecutionPolicy RemoteSigned
# Prompt for the credentials of an account that can manage Office365.
$UserCredential = Get-Credential
# Open a remote PowerShell session against Exchange Online.
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
# Pull the Exchange Online cmdlets into your local session.
Import-PSSession $Session -DisableNameChecking

When you run this code, it'll prompt you for a username and password (that's what the Get-Credential line does). These are the credentials that you use to manage the Office365 environment. After that, you can fire off whatever commands you need to, and they'll be sent to the Office365 farm, and the response will be sent back to your PowerShell window.

It should be noted that you'll get an error on the line that starts with "$Session" if you have MFA on your account. You'll either need to disable MFA or put in an exception for your IP address so that you don't get an MFA prompt. Once you do that, you can authenticate without MFA, and the rest of the code will work, as connections to Office 365 via PowerShell appear not to support MFA currently.

Denny

