SQL Server with Mr. Denny


November 11, 2019  4:00 PM

How to test for query regressions when upgrading

Denny Cherry

One of the things that you want to test for when doing an upgrade of SQL Server is query regressions. Regressions happen when the new version of SQL Server makes a poor plan choice when running the same code that you ran before. Testing an application against the new version should always be done, even though everything is supposed to work perfectly. You never know until you test, and since Microsoft hasn't tested your specific workload, that leaves it up to you.

Prepping for the test

This testing does require a test machine to run your workload against, but since we have the cloud, doing this kind of testing is easy, and you can simply destroy the VM when you're done.

To do this testing, we're going to need a few things. The first is a machine to run our test databases on. In a perfect world, it'll be the same size as the server that will run the new production system. It doesn't have to be, but the closer, the better. We also need a backup of the production database that we're going to restore on the test machine.

We also need a SQL Profiler capture from the production system. If possible, you want the capture to include every query that the application can run. You can get there by capturing data for a day or two, or by having someone click through every option in the app while the trace is running. You need a Profiler trace because you need to play it back, and the easiest way to play back a capture is SQL Profiler. It takes a lengthy script to create the trace that's needed, but you can see the script at the bottom of this post. You will want to change the path that the trace is writing to. The script will output a number when it runs; you'll need this number to stop the trace when you're ready. The script will create a series of trace files that are 1 GB in size each, so make sure you've got plenty of drive space for the capture.
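If you lose track of the number that the script returns, a quick look at sys.traces will show every user-defined trace on the instance along with its file path and status. A minimal sketch; nothing here is specific to my script:

-- List running traces other than the default trace
SELECT id, path, max_size, status
FROM sys.traces
WHERE is_default = 0;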

Once you've captured the data, stop the trace (the stop script is at the bottom of this post as well) and move the capture to the test server. On the test server, install the version of SQL Server that you're moving to (hopefully SQL 2017 or newer). Once SQL is installed, restore your production database, then turn on the query store for that database (in the database properties). You'll also want to increase the size of the query store; 1,000 MB should be enough, but if it's not, go ahead and make it bigger.
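If you'd rather enable the query store with T-SQL than through the database properties dialog, something like this should do it. A sketch; swap in the name of your restored database:

-- Enable the Query Store with a 1,000 MB cap
ALTER DATABASE [YourDatabase]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 1000);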

Once the query store is enabled (do NOT change the compatibility level yet), we’re ready for the first run of our profiler trace.

Checking for regressions

Open SQL Profiler on the test server and open the first file of the trace. Once that's done, click the "Replay" drop-down menu, and then click "Start." It'll then prompt you to connect to a server; enter the name of the test server. It's going to give you the option to use multiple threads, and you'll probably want to switch to that option.

Once you are happy with these settings, click "OK" to start the replay. Depending on how big the trace is, this might take a while. When I did this recently at a client, we captured data for about 15 minutes, and it took about 7 hours to replay.

Once the replay is done, change the compatibility level to whatever version you plan to change the database to (usually the newest compatibility level).
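Changing the compatibility level is a one-line change. A sketch; 140 is SQL Server 2017's compatibility level, so adjust for the version you're moving to:

-- Move the restored database to the new compatibility level
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 140;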

At this point, we need to rerun the trace. Once the trace is done, we can go into the query store and see if things regressed. Microsoft has made looking for regressed queries very easy; there's a query store report you can open up. If you open the "Regressed Queries" report, it'll show you all the queries that had plans that regressed due to changes in the SQL Server engine. You can then investigate these queries and hint them or rewrite them to fix them. Once those queries are all fixed, the system can be upgraded.
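Besides hinting or rewriting, the query store also lets you force the plan a query used before the compatibility level change. A minimal sketch; the query_id and plan_id values are placeholders you'd take from the Regressed Queries report (or from sys.query_store_query and sys.query_store_plan):

-- Pin the last known good plan for a regressed query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 17;

-- If you later fix the query properly, release the forced plan
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 17;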

In the case of a client system that I was working on recently, we had to hint a few queries. Those hints weren't available in the old version of SQL Server that we were moving off of, so they couldn't be applied until after the upgrade was done. We simply built that into the upgrade plan: there would be a couple more minutes of outage while we added hints to a bunch of procedures, and then we opened the gates and let users onto the system.
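As a hypothetical example of the kind of hint involved (the table and variable below are made up, not from the client system), USE HINT only exists in SQL Server 2016 SP1 and later, which is why a hint like this couldn't go in until after the upgrade:

DECLARE @CustomerID int = 42;

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o   -- dbo.Orders is an illustrative table
WHERE o.CustomerID = @CustomerID
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));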

Depending on the regressions that are found, you might need some development time to see what's going on (hint: we can help you figure out the regressions, what needs to be hinted, and what needs to be rewritten).

Denny

 

To start the trace
declare @p1 int
declare @maxfilesize bigint = 1024 -- in MB, so the trace rolls over to a new file every 1 GB
-- Option 2 = TRACE_FILE_ROLLOVER; change the path to a drive with plenty of free space
exec sp_trace_create @p1 output,2,N'c:\temp\Replay',@maxfilesize,NULL
-- Note the trace ID this returns; you'll need it to stop the trace
select @p1
-- The sp_trace_setevent calls below add each event/column pair (event ID, column ID, ON) needed for replay
exec sp_trace_setevent @p1,14,1,1
exec sp_trace_setevent @p1,14,9,1
exec sp_trace_setevent @p1,14,10,1
exec sp_trace_setevent @p1,14,11,1
exec sp_trace_setevent @p1,14,6,1
exec sp_trace_setevent @p1,14,12,1
exec sp_trace_setevent @p1,14,14,1
exec sp_trace_setevent @p1,14,3,1
exec sp_trace_setevent @p1,14,35,1
exec sp_trace_setevent @p1,15,3,1
exec sp_trace_setevent @p1,15,35,1
exec sp_trace_setevent @p1,17,3,1
exec sp_trace_setevent @p1,17,35,1
exec sp_trace_setevent @p1,10,3,1
exec sp_trace_setevent @p1,10,35,1
exec sp_trace_setevent @p1,12,3,1
exec sp_trace_setevent @p1,12,35,1
exec sp_trace_setevent @p1,15,11,1
exec sp_trace_setevent @p1,15,6,1
exec sp_trace_setevent @p1,15,9,1
exec sp_trace_setevent @p1,15,10,1
exec sp_trace_setevent @p1,15,12,1
exec sp_trace_setevent @p1,15,13,1
exec sp_trace_setevent @p1,15,14,1
exec sp_trace_setevent @p1,15,15,1
exec sp_trace_setevent @p1,15,16,1
exec sp_trace_setevent @p1,15,17,1
exec sp_trace_setevent @p1,15,18,1
exec sp_trace_setevent @p1,17,1,1
exec sp_trace_setevent @p1,17,9,1
exec sp_trace_setevent @p1,17,10,1
exec sp_trace_setevent @p1,17,11,1
exec sp_trace_setevent @p1,17,6,1
exec sp_trace_setevent @p1,17,12,1
exec sp_trace_setevent @p1,17,14,1
exec sp_trace_setevent @p1,10,9,1
exec sp_trace_setevent @p1,10,2,1
exec sp_trace_setevent @p1,10,10,1
exec sp_trace_setevent @p1,10,6,1
exec sp_trace_setevent @p1,10,11,1
exec sp_trace_setevent @p1,10,12,1
exec sp_trace_setevent @p1,10,13,1
exec sp_trace_setevent @p1,10,14,1
exec sp_trace_setevent @p1,10,15,1
exec sp_trace_setevent @p1,10,16,1
exec sp_trace_setevent @p1,10,17,1
exec sp_trace_setevent @p1,10,18,1
exec sp_trace_setevent @p1,12,1,1
exec sp_trace_setevent @p1,12,9,1
exec sp_trace_setevent @p1,12,11,1
exec sp_trace_setevent @p1,12,6,1
exec sp_trace_setevent @p1,12,10,1
exec sp_trace_setevent @p1,12,12,1
exec sp_trace_setevent @p1,12,13,1
exec sp_trace_setevent @p1,12,14,1
exec sp_trace_setevent @p1,12,15,1
exec sp_trace_setevent @p1,12,16,1
exec sp_trace_setevent @p1,12,17,1
exec sp_trace_setevent @p1,12,18,1
exec sp_trace_setevent @p1,13,1,1
exec sp_trace_setevent @p1,13,9,1
exec sp_trace_setevent @p1,13,11,1
exec sp_trace_setevent @p1,13,6,1
exec sp_trace_setevent @p1,13,10,1
exec sp_trace_setevent @p1,13,12,1
exec sp_trace_setevent @p1,13,14,1
exec sp_trace_setevent @p1,13,3,1
exec sp_trace_setevent @p1,13,35,1
exec sp_trace_setevent @p1,70,1,1
exec sp_trace_setevent @p1,70,9,1
exec sp_trace_setevent @p1,70,11,1
exec sp_trace_setevent @p1,70,6,1
exec sp_trace_setevent @p1,70,10,1
exec sp_trace_setevent @p1,70,12,1
exec sp_trace_setevent @p1,70,14,1
exec sp_trace_setevent @p1,70,3,1
exec sp_trace_setevent @p1,70,35,1
exec sp_trace_setevent @p1,53,1,1
exec sp_trace_setevent @p1,53,9,1
exec sp_trace_setevent @p1,53,11,1
exec sp_trace_setevent @p1,53,6,1
exec sp_trace_setevent @p1,53,10,1
exec sp_trace_setevent @p1,53,12,1
exec sp_trace_setevent @p1,53,14,1
exec sp_trace_setevent @p1,53,3,1
exec sp_trace_setevent @p1,53,35,1
exec sp_trace_setevent @p1,11,1,1
exec sp_trace_setevent @p1,11,9,1
exec sp_trace_setevent @p1,11,11,1
exec sp_trace_setevent @p1,11,6,1
exec sp_trace_setevent @p1,11,10,1
exec sp_trace_setevent @p1,11,12,1
exec sp_trace_setevent @p1,11,14,1
exec sp_trace_setevent @p1,11,3,1
exec sp_trace_setevent @p1,11,35,1
exec sp_trace_setevent @p1,74,1,1
exec sp_trace_setevent @p1,74,9,1
exec sp_trace_setevent @p1,74,11,1
exec sp_trace_setevent @p1,74,6,1
exec sp_trace_setevent @p1,74,10,1
exec sp_trace_setevent @p1,74,12,1
exec sp_trace_setevent @p1,74,14,1
exec sp_trace_setevent @p1,74,3,1
exec sp_trace_setevent @p1,74,35,1
exec sp_trace_setevent @p1,71,1,1
exec sp_trace_setevent @p1,71,9,1
exec sp_trace_setevent @p1,71,11,1
exec sp_trace_setevent @p1,71,6,1
exec sp_trace_setevent @p1,71,10,1
exec sp_trace_setevent @p1,71,12,1
exec sp_trace_setevent @p1,71,14,1
exec sp_trace_setevent @p1,71,3,1
exec sp_trace_setevent @p1,71,35,1
exec sp_trace_setevent @p1,72,1,1
exec sp_trace_setevent @p1,72,9,1
exec sp_trace_setevent @p1,72,11,1
exec sp_trace_setevent @p1,72,6,1
exec sp_trace_setevent @p1,72,10,1
exec sp_trace_setevent @p1,72,12,1
exec sp_trace_setevent @p1,72,14,1
exec sp_trace_setevent @p1,72,3,1
exec sp_trace_setevent @p1,72,35,1
exec sp_trace_setevent @p1,100,1,1
exec sp_trace_setevent @p1,100,9,1
exec sp_trace_setevent @p1,100,11,1
exec sp_trace_setevent @p1,100,6,1
exec sp_trace_setevent @p1,100,10,1
exec sp_trace_setevent @p1,100,12,1
exec sp_trace_setevent @p1,100,14,1
exec sp_trace_setevent @p1,100,3,1
exec sp_trace_setevent @p1,100,35,1
-- Status 1 starts the trace
exec sp_trace_setstatus @p1,1

To stop the trace
declare @p1 int = 2 --{Whatever value was output by the first script}
exec sp_trace_setstatus @p1,0 -- status 0 stops the trace
exec sp_trace_setstatus @p1,2 -- status 2 closes the trace and deletes its definition

November 4, 2019  4:00 PM

When to use SQL DB vs SQL DW

Denny Cherry

There's a lot of confusion around when to use SQL DB versus SQL DW, with a lot of people assuming that the two are interchangeable, so I wanted to try to clear that up.

SQL DW

Let's talk about SQL DW first. There are some pretty hard and fast rules around SQL DW and when it should be used. SQL DW should only be used for reporting workloads where you have a proper data warehousing design for your database. It's also only going to be effective when you have a data warehouse that is at least 2 TB in size.

SQL DW came from APS (which used to be called PDW). One of the things that made APS (and PDW) so successful was that you needed consulting hours from an APS consultant to implement it successfully. With SQL DW you can just buy a SQL DW from the Azure Portal, and off you go. There's no need to work with a consultant; you can just build your tables and start reporting. However, if you don't fully grasp data replication, data partitioning, and query processing (among other things) in SQL DW, then the odds of a successful SQL DW implementation are going to be slim to none.

SQL DB

SQL DB is going to be the best fit for a few different workloads. Any OLTP workload is going to be a good fit for SQL DB. If you're coming from an on-prem solution (SQL Server, Oracle, MySQL, etc.), then you may need to make some architecture changes in your application to get the best performance at scale. Not all applications are going to require changes to the database architecture, but some will. Going into a cloud migration project assuming there will be some database architecture work is a good thing: if there is, you're ready for it, and if there isn't, then great.

SQL DB by itself supports databases up to 4 TB in size. There is now a feature called Hyperscale which will let your databases in SQL DB grow as large as they need to be (there are some additional costs to use Hyperscale).
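If you want to try it, Hyperscale is just a service-objective choice when the database is created. A sketch run against the logical server's master database; the database name and service objective below are placeholders:

-- Create an Azure SQL DB Hyperscale database
CREATE DATABASE [MyHyperscaleDB]
    (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen5_2');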

Data warehouses and other reporting workloads can go into SQL DB as well. Smaller data warehouses that simply aren’t big enough for SQL DW (that will be under 2 TB in size) are great to put in SQL DB.  They’ll perform well, and you can use things like PowerBI to report off of them, and the cost for SQL DB will be much more attractive compared to SQL DW.

Some Migration Numbers

Doing some architecture work can potentially save you a ton of money as well. I was doing a POC recently for a client that was looking at an Azure migration. They have ~80,000 databases that they were looking to move. Phase 1 of the project was going to be to move the existing SQL Servers to VMs in the cloud. The cost for this was ~$80k a month. By moving these databases into SQL DB (as Phase 2), they would be able to reduce their monthly cost to ~$37k a month, a saving of roughly $43k a month. Assuming that they need to spend $20k a month on developers to make this change, and it takes the developers six months to do the work ($120k in development costs), that's roughly a 3-month ROI before the company starts saving money.

What to use when

What it comes down to is that SQL DW is specialized, with specific requirements (data warehousing with a well-defined schema and at least 2 TB of data), while SQL DB is more of a catch-all for everything else.

Hopefully, that helps explain when to use each one.  If you’ve still got more questions, then the team at DCAC would love to help you out.

Denny

 


October 28, 2019  4:00 PM

Is it time for an SSIS checkup?

Denny Cherry

On November 15th, 2019 Meagan Longoria and Kerry Type will present their webcast “Is it time for an SSIS checkup?”.

Database health checks are a familiar concept to many people. We check database and server settings, ensure security is appropriately implemented, and validate HADR plans to meet business needs. But do you ever assess the health of your SSIS projects? Do you know what things you should look for? Join us as we talk through what we think should be included in a data integration health check specific to SSIS, whether you run SSIS on your own machine or as an Integration Runtime within Data Factory.

We’ll discuss server configurations such as memory allocation and Kerberos, development processes and source control, and creating and populating non-production environments. Next, we’ll show you some design patterns and anti-patterns that can affect the quality and speed of your data integration processes. Then we’ll talk about appropriate settings and use of the SSIS catalog. We will also, of course, address data security and sensitivity. And we’ll finish with maintenance and support topics.

Organizations running SSIS should be able to easily answer questions such as “What is my longest-running package?”, “Who should be contacted when a package fails, and who is their backup?”, “Is the SSISDB being backed up?”, “What is our HADR plan?”, and “What are our SLAs for data being available to consumers?”.

Don’t let your SSIS environment become neglected. Learn the concepts you need to know to keep it healthy.

Go over to the webcast’s webpage to sign up for the webcast.

Denny


October 21, 2019  4:00 PM

Heading to the #sqlpass Summit? Have Questions? We Have Answers.

Denny Cherry

Are you planning on heading to the PASS Summit starting on November 5th? Do you have questions about the PASS Summit or Seattle in general? We've got the answers for you, and we covered many of them when we presented our Attendee Orientation for the PASS Summit last week. If you missed the live broadcast, the recording of the webcast has just been posted online, so you can still watch it.

So click over, watch the recording, get some tips about Seattle, and we'll see you in Seattle at the Summit.

Denny


October 21, 2019  3:05 PM

What is skip-2? What does it mean to you? And what is compromised to make this happen?

Denny Cherry

Skip-2 is the name of a software package that lets what appear to be state-sponsored hackers attack your SQL Server data and download, change, or delete data without triggering the auditing that you have configured. I won't go into the details of how skip-2 works, as the folks at ESET have done a great job with that writeup.

What I want to talk about is how they are able to make this happen.

The first thing to note is that this does not exploit a bug in SQL Server 2012 or SQL Server 2014 (the two versions that this targets).

In this exploit, a new DLL is injected into SQL Server, much in the same way an anti-virus product can inject its DLL into SQL Server. This injected DLL intercepts calls to functions like CPwdPolicyManager::ValidatePwdForLogin. It first checks to see if the attacker's hard-coded password has been used. If it has, it sets a flag that the other intercepted function calls will check for, and it exits without sending the authentication request to the normal function that processes authentication requests. If you aren't using their hard-coded password, then it sends your authentication request over to the authentication function just like normal.

The other functions being intercepted are the ones that handle things like reporting authentication success (they wouldn't want that to trigger, as you'd then know when they were logging into your server) as well as the functions around auditing, transaction success reporting, etc. (they are all in the ESET writeup).

How they do this is pretty ingenious: they have written their own DLL, attached it to SQL Server, and loaded that DLL higher in the process hierarchy than the DLL that normally hosts those functions (it's more complex than this, but that's basically what's being done). This allows them to intercept the needed functions and inject their own code that causes those functions to be skipped when needed (their code runs in their functions before calling the real functions).

A question that has come up already today is why this only targets SQL 2012 and SQL 2014. My assumption (and keep in mind I haven't seen the attackers' code, and I sure don't have access to the SQL Server code) is that this was originally a targeted attack that the attackers decided could be used to get other useful data as well. Why does it only work against SQL 2012 and SQL 2014? I assume that either the functions are in different places, or there are other parameters that are needed for SQL 2016+ (or both), which makes this sort of attack not work.

Is this something that you have to worry about? Hopefully not. It only affects older versions of SQL Server, and in order to deploy it the attacker would need to successfully breach your operating system so that the code can be loaded onto the SQL Server. This means that they would already have to have breached your network and gotten a foothold on a server somehow. If they've gotten far enough into your network to install this, you've got some big problems that you need to address, quickly.

The question becomes: why isn't this a SQL Server bug? Because they aren't exploiting a bug in SQL Server to make this happen. Injecting a DLL into SQL Server is a supported, documented process. Attaching a DLL to another DLL, that's a problem in Windows, and the attacker is getting into the Windows OS to deploy this.

Upgrading to SQL 2016+ is going to be a great way to get around this, as SQL 2016+ isn’t impacted by Skip-2. <sales hat>We can help with that</sales hat>.

Denny


October 14, 2019  4:00 PM

What is Page Life Expectancy?

Denny Cherry

If you've been a DBA for a while, you've heard people talking about PLE. PLE stands for Page Life Expectancy; most DBAs (and really anyone that works with SQL Server for more than 5 seconds) call it PLE. I think this is for two reasons: 1) Expectancy is hard for a lot of people to say; 2) we're lazy, er, efficient.

PLE

PLE is reported by SQL Server as a number, somewhere between 0 and ~2 billion. The number that it's showing you is a number of seconds. That number of seconds is the amount of time that SQL Server estimates it will be able to keep a page loaded in memory before that page is swapped out. So if a page of data was loaded into memory right now, and PLE was 100 seconds, then in ~100 seconds we could expect that page to be removed from the buffer pool in order to make room for another page.

I want to be really clear here, 100 is not a target number (neither is 300). It’s just a number that I am using for my explanation here.

When you look at PLE on a larger server, you may see a few numbers. That's because each NUMA node has a PLE value, and those numbers are calculated together to get the server's PLE. The actual calculation isn't an average or anything quite that simple. You take the PLE value of the node and multiply it by 1000. You do this for all the NUMA nodes. Then you sum up these values, divide the total by the number of NUMA nodes, and then divide by 1000.

This means that PLE for one NUMA node can drop, while PLE for another NUMA node can be stable, and the server-wide PLE number may not change (or change much).
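If you want to see both the per-node values and the server-wide value, they're exposed as performance counters. A quick sketch against sys.dm_os_performance_counters; the Buffer Manager object carries the server-wide number and each Buffer Node object carries one NUMA node's number:

-- Server-wide PLE (Buffer Manager) and per-NUMA-node PLE (Buffer Node)
SELECT [object_name], instance_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE N'Page life expectancy%'
  AND ([object_name] LIKE N'%Buffer Manager%' OR [object_name] LIKE N'%Buffer Node%');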

The reason that each NUMA node has its own PLE value is that SQL Server manages each NUMA node independently.  Understanding why PLE for each node can vary requires an understanding of how SQL Server works.  So, let’s dive a little deeper.  (For this article we’re going to assume a 2 CPU server, which has 2 NUMA nodes. It keeps things easier, but all this applies if you have 4 or 8 nodes as well.)

So let’s calculate PLE for our sample server. If both NUMA nodes have a PLE of 100, then the math looks like this.

((100 * 1000) + (100 * 1000)) / 2 / 1000 = 100

If one node has a PLE of 100 and the other has a PLE of 60, then the math looks like this.

((100 * 1000) + (60 * 1000)) / 2 / 1000 = 80

If you have a four-node box, then the value gets even odder looking. Let’s assume we have three NUMA nodes with a PLE of 100 and one node with a PLE of 60 and see how the math looks.

((100 * 1000) + (100 * 1000) + (100 * 1000) + (60 * 1000)) / 4 / 1000 = 90

NUMA assignments

When a table is loaded into memory, that table is assigned to a NUMA node; and that’s the only NUMA node that the table will be assigned to.  The reason for this is that SQL Server only wants to assign CPU processors from that NUMA node to work on that table.  Because of this, if there’s a report against that table (or a bad query that causes a scan of the table), that’ll impact the PLE of that NUMA node because the table needs to be loaded into memory so that the CPUs can process that table. (All sorts of things can cause PLE issues to crop up, this is just one example.)

So if we've got one really large table in our database, and lots of small tables (and everything in SQL Server and the OS is set correctly), we should expect to see two very different numbers for PLE: one for the NUMA node that is working on the large table, and one for the NUMA node that's hosting all the smaller tables.
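If you want a rough picture of how the buffer pool is spread across NUMA nodes and databases, sys.dm_os_buffer_descriptors can give you one. A sketch; each buffered page is 8 KB, and this view can be slow to query on servers with a lot of memory:

-- Buffered data per NUMA node and database, in MB
SELECT numa_node,
       DB_NAME(database_id) AS database_name,
       COUNT_BIG(*) * 8 / 1024 AS buffered_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY numa_node, database_id
ORDER BY numa_node, buffered_mb DESC;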

You may be asking yourself how you can assign tables to a specific NUMA node, and the answer is that you can't. Even if you could, there's pretty much no reason to. SQL Server does a really good job of assigning tables to NUMA nodes all by itself. If you think you're having a problem with SQL Server because it's assigning too many tables to a NUMA node, you've probably got other problems that need to be dealt with (hint: we can help you with that).

PLE Recommendations

There are lots of PLE recommendations out there. Most of them are built on the number 300, which you'll find in Books Online. Any advice about PLE that has a specific number as a target is worth the amount of money you paid to read the blog post (and if you did pay to read that number, I'm sorry, you got screwed). The target PLE is very system-specific, and it should be treated as such. Some systems should have a really low PLE because they churn data so often. Other systems need to have a really high PLE so that they never have to hit the disk.

One of our clients has a PLE of somewhere around 35,000. If it gets below 28,800 (8 hours), we start to worry. When we were designing this system, we decided that when customers ran the end-of-day reports for their office (the customers are only open for 8 hours a day), we didn't want to hit the hard drive. That means that all the data from the last 8 hours needs to be loaded into memory, and it needs to stay there until the end of the day. Is this a guarantee that no queries will hit the disk at the end of the client's day? No, of course not, but we've done everything we can to make this a reality for the client. And given that this client has thousands of customers and tens of thousands of unique users per day (if not more), and the system has been stable for years, we've done a pretty good job getting to that point.

So what number should you shoot for when looking at PLE?  As high a value as you can afford to get. Increasing PLE is a matter of index tuning and buying more RAM.  If you can afford to get more RAM, get it.  The more data that SQL Server can cache, the higher PLE is going to go, and the happier SQL Server will be.

Wrap-Up

While you probably don't need to know what PLE is in the long run, it's one of those things that you'll want to keep an eye on, because if it drops, you want to know why. PLE dropping isn't a problem or even a symptom of a problem; unless it is. It's a potential warning that some investigation on the server needs to be done, and from that investigation you might need to do more, or you might not.

Denny


October 7, 2019  4:00 PM

Doing something in a loop in Microsoft flow, and something else if there’s no match

Denny Cherry

Recently I wanted to make a Microsoft Flow process to create channels in Microsoft Teams once a project is ready to start. If the channel was already there, then I wanted a message to be posted to the Teams channel saying that the project was ready, but that it didn't need to create the channel. So in programming terms, I wanted a loop with an IF statement inside the loop. Then, if the loop got to the end of the downloaded set and there was no match, it should do something else.

Now Microsoft Flow doesn’t include anything to do this, so I had to get a little creative.

As I was looking through Microsoft Flow, I found that you could use variables within a Microsoft Flow application, and this made things suddenly click. At the beginning of the Flow I initialized a variable, in this case called "FoundClientInTeams".

From here I set up a pull from Teams that would bring down all the channels for a team. That was passed to an "Apply to Each" object, which is basically a "FOR" loop.

Within that “Apply to Each” object there’s a Condition that looks to see if the client name that we’re looking for exists or not.  If the channel exists (according to the condition) then the flow sends a message and changes the variable from false to true.  (There’s a “No” block of this condition that doesn’t do anything.)

After the "Apply to Each" there's another Condition block, which checks whether the variable was changed. If the value is still false, that means that there was no match found, so we need to create the channel. It also posts a message saying that it created the channel, so there's some debugging going on as well. The "No" side of the condition is left blank, so if the channel was found and the variable was changed, this condition fails and nothing is done at this point.

As I was working on this, I was shocked to find that Microsoft Flow is as powerful as it is. What we're doing in this case is that when a document is updated on our SharePoint O365 site (in our case the SOW status), Microsoft Flow sees the change and then either creates a new channel for the client if they are a new client or just posts a message if they are an existing client. Because we were able to automate this task, it's one less thing that we need to do when we sign an SOW for a client.

Denny

 


September 30, 2019  4:00 PM

How big can you make a SQL Server transaction log file?

Denny Cherry

While the GUI (SSMS) shows UNLIMITED as an option, that's actually not the right answer here. If you set the transaction log file's maximum size to UNLIMITED and click Save, it'll actually set the maximum file size to 2 TB, since that's the maximum supported size of a transaction log file.

So how do you support a single transaction that needs more than 2 TB of log space? You add multiple transaction log files. Also, don't create 2 TB transactions; if one of those fails, the rollback is going to take a LONG time.
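Adding a second log file is a one-statement change. A minimal sketch; the database name, logical file name, path, and sizes below are all placeholders:

-- Add a second transaction log file to get past the 2 TB per-file limit
ALTER DATABASE [YourDatabase]
ADD LOG FILE
(
    NAME = YourDatabase_log2,
    FILENAME = N'L:\Logs\YourDatabase_log2.ldf',
    SIZE = 64GB,
    MAXSIZE = 2097152MB,   -- the 2 TB per-file maximum
    FILEGROWTH = 8GB
);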

Denny


September 28, 2019  4:00 PM

Can I setup replication with an availability group as the subscriber?

Denny Cherry

Yes, absolutely. In order to do this, you need to configure the replication to use the Availability Group listener as the subscriber.  That way, the database that you’re using as the subscriber will follow the AG.  After replication creates the database, you will need to add it to the AG, as replication won’t do that for you.  After that, it’s just normal SQL Server Replication.
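A sketch of what that looks like when you add the subscription at the publisher; the publication, listener, and database names are placeholders, and the key point is that @subscriber is the AG listener name rather than an individual replica:

-- Run at the publisher, in the publication database
EXEC sp_addsubscription
     @publication = N'MyPublication',
     @subscriber = N'AGListenerName',   -- the Availability Group listener
     @destination_db = N'SubscriberDB',
     @subscription_type = N'Push',
     @sync_type = N'automatic';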

If you want to, you can replicate from the same availability group to another database on that same availability group.

Denny


September 23, 2019  4:00 PM

PowerShell for SQL Server Administration

Denny Cherry

If you're a DBA, especially if you've got multiple SQL Servers to manage (or you work in Azure, or Exchange, or VMware, or a bunch of other things), then you'll want to learn some PowerShell. I recently released a course on LinkedIn Lynda where you learn about PowerShell and how to manage multiple servers all at once.

So click on through and check out the course.

Denny

