I was working with a client recently who had an interesting problem. They had just moved their application from a physical machine to a VM, and when they did, performance took a nosedive. We’ve all heard this story before. The client has a crack team, and they had already checked all the usual suspects, including adding more RAM and more CPUs (even though CPU usage was already low).
When I started looking at the system I saw minimal IO, in fact basically no IO at all. So we got a user on the phone, and I ran a profiler trace filtered on their username. I saw a huge number of commands go streaming by, all with a runtime of 0 milliseconds. Nothing really worth noting there, except that SQL Server was only receiving 6-8 commands per second. The client application in this case runs on Citrix within the same VMware farm, so the client and the database are physically very close to each other; unless the application is doing a lot of work on each row, it should be throwing far more commands at SQL Server than that.
Now in a perfect world I’d have the vendor fix their horrible code so that it wasn’t doing the RBAR (Row By Agonizing Row) processing, but the vendor couldn’t be convinced that this was a problem because none of their other clients were having it. In fact, the other installations at this client weren’t having a problem either, but they were all much smaller installations.
What we ended up doing was turning a few things off, both in Windows and in VMware.
In Windows we turned off Receive Side Scaling, TCP Chimney Offload, network auto-tuning, and task offloading. The thinking behind all of these features is that they try to optimize the network, but doing that optimization takes time. Granted, that time is just nanoseconds, but when you’re trying to throw thousands of RBAR queries at a server at a time, a few hundred nanoseconds each is going to add up to seconds. When we profiled the server and let the trace run, we saw more than 16,000 commands being sent to the database just to open a single window in the application.
These features are all disabled using the netsh command, as shown below.
netsh int tcp set global chimney=disabled
netsh int tcp set global rss=disabled
netsh int tcp set global autotuninglevel=disabled
netsh int ip set global taskoffload=disabled
The vendor also recommended turning off IPv6, but I wouldn’t recommend that, especially if you are in a clustered SQL Server environment, as modern versions of Windows clustering require IPv6.
On the VMware side we also turned off interrupt coalescing on the virtual NICs. The goal of having it on, to quote Jim Hannon from House of Brick, is “to benefit the entire cluster by reducing the CPU overhead of TCP traffic”. In this case we had CPU to burn and a very latency-sensitive application, so turning it off became an option. This is done by changing the VMware advanced setting ethernetX.coalescingScheme (if the setting isn’t there you have to add it), then setting it to disabled like so.
ethernetX.coalescingScheme = "disabled"
With all these changes in place we fired up the VM, got the application user back on the phone, and had them click the button again. And we were back in business. Response time was right where we expected it to be, and where it had been in the physical world before the migration.
On Monday, October 30th I’ll be presenting an all-day class on Azure Infrastructure at the PASS Summit. During this day-long session, which will kick off the PASS Summit, I’ll be covering everything that you need to know to set up a solid Azure infrastructure, so that the infrastructure you set up for your company today will be able to scale and perform well for many years to come.
Much like designing a data center, designing a cloud hosting solution needs to be done carefully; if done correctly it will scale beautifully and you’ll be able to move most applications of almost any size into it. Conversely, a cloud infrastructure that isn’t designed well won’t scale very well, and moving some applications into it can be challenging.
Learning about a technology from someone with a proven track record in the space is key to success. Our team at DCAC has proven success migrating clients from on-premises to Azure, so much so that we’ve won awards for our work in Azure. Some of the topics we’ll be covering during the day include:
- Azure Storage
- Azure Networking
- VPN Connectivity
- Express Route
- Global Load Balancing
- Azure Data Center Security
- Virtual Machines
- Active Directory Design Considerations
- Azure Active Directory
- SQL DB or SQL Server
- Azure Compute Discussions
- Scale Sets
- Open Source in Azure
- Identity and Access Controls
- Extending Identity to 3rd Party Applications
- Monitoring Azure
Needless to say, this is a very comprehensive day-long class which will cover a large portion of the Azure infrastructure as it exists today.
Coming to PASS and thinking about moving services into Azure? Or just want to learn more about Azure so that when you do move services there you’re ready with the knowledge you need to help your company succeed? This is the session for you. You can’t do better than having the Gold Medal winner for Data Center Innovation for an Azure migration teach you about Azure.
There are a lot of misunderstandings about dynamic SQL out there. I have heard them from clients on occasion, and I see them from other consultants on their blogs. If done correctly, dynamic SQL is not difficult to use or troubleshoot. Let’s look through some of the complaints that I’ve seen.
If you’re using NVARCHAR strings, you need to prefix all of your string concatenations with N, or you could end up silently truncating your final product.
Yes, if you are using NVARCHAR you do need to prefix everything with N so that SQL Server knows that all your strings are Unicode. But there is no risk of truncating your final product. If you forget to put the N in front of a string when concatenating strings together, the only problem you will have is that a question mark will appear where you should have a Unicode character. You won’t suddenly end up with a shorter string than you were expecting.
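A quick sketch of that behavior (the variable names here are made up, and the question-mark substitution assumes a non-Unicode default collation, which covers most installations):

```sql
-- Without the N prefix the literal starts life as VARCHAR, so a character
-- the database's code page can't represent (like Ω) becomes ?.
DECLARE @withN    NVARCHAR(10) = N'Ω';  -- stored as Unicode
DECLARE @withoutN NVARCHAR(10) = 'Ω';   -- converted through VARCHAR first

SELECT @withN AS WithN, @withoutN AS WithoutN;            -- Ω and ?
SELECT LEN(@withN) AS LenWithN, LEN(@withoutN) AS LenNoN; -- both 1: no truncation
```

The LEN check is the point: the string is mangled, not shortened.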
If you use CASE to branch concatenation paths, it can also silently truncate a string.
Yes, CASE can do this if you don’t put an ELSE block in your CASE expression. Without the ELSE, your CASE expression can return NULL, and unless you’ve changed the CONCAT_NULL_YIELDS_NULL setting from its default, concatenating that NULL will return a NULL value, which is the expected result. I am sorry, but you don’t get to complain that SQL Server is doing what you told it to do. Adding an ELSE block that returns a blank string, so that your statement looks like CASE WHEN … ELSE '' END, isn’t all that hard.
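Here’s a minimal illustration (the variable is hypothetical):

```sql
DECLARE @middleName NVARCHAR(50) = NULL;

-- No ELSE: CASE returns NULL, and the NULL swallows the whole concatenation
SELECT N'John '
     + CASE WHEN @middleName IS NOT NULL THEN @middleName + N' ' END
     + N'Smith';  -- NULL

-- With ELSE '': the concatenation survives
SELECT N'John '
     + CASE WHEN @middleName IS NOT NULL THEN @middleName + N' '
            ELSE N'' END
     + N'Smith';  -- John Smith
```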
Figuring out how many single quotes you need to properly quote things is awful, sometimes.
Generally speaking, once you are inside a dynamic string you just need two single quotes for each single quote that you want in the output. Unless you are using dynamic SQL to generate more dynamic SQL, it shouldn’t be that hard. If you are using dynamic SQL to generate more dynamic SQL (which I’ve done), then yes, you’ll want to stab yourself in the eye.
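For the common single-level case, a sketch of the two usual options, doubling the quotes yourself or letting QUOTENAME do it (the value is made up):

```sql
DECLARE @lastName NVARCHAR(50) = N'O''Brien';

-- Double each embedded single quote by hand
SELECT REPLACE(@lastName, N'''', N'''''') AS DoubledByHand;  -- O''Brien

-- Or let QUOTENAME wrap the value in quotes and escape it in one step
SELECT QUOTENAME(@lastName, N'''') AS QuotedAndEscaped;      -- 'O''Brien'
```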
PRINT is limited to 8000 characters, RAISERROR even fewer.
Both of these statements are true. Keep in mind that PRINT shouldn’t be used to return information to a user; it just writes to the messages tab in SSMS. In an application, the output from PRINT isn’t displayed to the user at all (even though it is sent to the client, so it still has to travel over the network). If you need to get a warning or error to the user, RAISERROR is the way to do that, and 2048 characters should be enough space; that’s a lot of text to send back to a user, especially as most users won’t even read the error message. THROW, which was introduced in SQL Server 2012, has the same size limitation as RAISERROR.
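A quick sketch of the PRINT limit, which only affects the displayed output, not the variable itself (note the CONVERT to NVARCHAR(MAX); without it REPLICATE would itself cap the string):

```sql
-- Build a 10,000 character string
DECLARE @sql NVARCHAR(MAX) =
    REPLICATE(CONVERT(NVARCHAR(MAX), N'x'), 10000);

SELECT LEN(@sql);  -- 10000: the full string is intact in the variable
PRINT @sql;        -- only the first 4000 Unicode characters (8000 bytes)
                   -- reach the messages tab
```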
If you’re using sp_executesql, it’s a real chore to get the variables as the query ran without additional logging.
Passing variables into sp_executesql is pretty straightforward. You declare them in the parameter definition list, which is the second parameter of sp_executesql, then pass them in as named parameters and use them within the dynamic SQL as needed. A nice thing about dynamic SQL is that you can pass in parameters that you aren’t even using, so if you remove the part of the code that uses a parameter, you can still pass it in and SQL Server won’t care. As for not being able to log which parameters are being passed in to the query, who cares? Logging the queries that are passed in introduces so many potential security issues, not the least of which is storing PII in an unencrypted format, that I’d never recommend doing it. Not to mention that it adds overhead and risk to the system for no useful benefit. If you need to see how the procedure dealt with the variables before running the dynamic SQL, you can simply PRINT or SELECT the statement and the parameter values while you are debugging.
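A minimal sketch of the pattern (the table and column names are hypothetical):

```sql
DECLARE @CustomerId INT = 42;

-- Parameterized dynamic SQL: the parameter stays a parameter,
-- no string splicing required
DECLARE @sql NVARCHAR(MAX) = N'
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;';

EXEC sys.sp_executesql
    @sql,
    N'@CustomerId INT',         -- second parameter: the parameter definition list
    @CustomerId = @CustomerId;  -- named parameters follow
```

Because the value never becomes part of the statement text, you also get plan reuse and protection from SQL injection for free.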
Concatenating non-string data requires CAST/CONVERT.
Yeah, this is a real pain in a strongly typed language, just like in C, C++, C#, etc. Better yet, stop concatenating non-text values into your strings at all. Simply pass them into the dynamic SQL as parameters and handle them like you would normal variables. There are very few cases where I’d want to handle dates and/or numbers (as an example) any other way.
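For example, rather than converting a date to a string and splicing it into the statement, keep it typed and pass it through (dbo.Orders is again hypothetical):

```sql
DECLARE @StartDate DATE = '2017-01-01';

-- No CONVERT, no quoting, no format ambiguity: the DATE stays a DATE
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT OrderId FROM dbo.Orders WHERE OrderDate >= @StartDate;';

EXEC sys.sp_executesql @sql, N'@StartDate DATE', @StartDate = @StartDate;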
Sometimes surprise NULL concatenations leave you with empty strings.
Yeah, that happens when you concatenate NULL and something together in SQL Server. This is by design. If you don’t want this behavior, check for NULLs using ISNULL, COALESCE, or CASE, or change the CONCAT_NULL_YIELDS_NULL setting.
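A quick sketch of the options (the address values are made up):

```sql
DECLARE @addr2 NVARCHAR(50) = NULL;

SELECT N'123 Main St, ' + @addr2 + N'Anytown';   -- NULL: the NULL wins

-- ISNULL rescues the concatenation
SELECT N'123 Main St, ' + ISNULL(@addr2 + N', ', N'') + N'Anytown';
                                                 -- 123 Main St, Anytown

-- CONCAT (SQL Server 2012+) treats NULL inputs as empty strings
SELECT CONCAT(N'123 Main St, ', @addr2, N'Anytown');
                                                 -- 123 Main St, Anytown
```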
In summary, I’d have to say that dynamic SQL really isn’t all that hard to work with. If you remember the rules of strings and NULLs, then working with dynamic SQL is pretty straightforward. Dynamic SQL makes a lot of sense to use in order to reduce the complexity of SQL statements, and simple SQL statements are easier for SQL Server to execute.
Well, it’s time for another great Data Platform Summit in Bangalore, India. The team there has been kind enough to allow me to present a pre-con this year (the third year in a row). This year I’ll be changing up my pre-con a little bit. Instead of talking about SQL Server, I’ll be focusing more on Azure and everything that the DBA who’s going to be moving into Azure needs to know (or at least as much as I can cram into a 7-hour session; I’ve got 4 days’ worth of material I could present, but only 7 hours to present it in).
During the session this year we’ll be learning all about the components that you need to get to know and set up to successfully run a high-volume SQL Server in a VM in the Azure cloud.
This session is especially important to the Indian market, as Microsoft has opened new Azure data centers in India within the last year, so more and more Indian companies should be expecting to move workloads to Azure. This session will give attendees the knowledge to become the experts within their organizations and help their companies move into the Azure cloud.
So what do you do now? You go sign up for the all-day pre-con session and then come to the session. Nice and easy.
Join me on September 14th, 2017 for my annual PASS Summit Attendee Orientation webcast. Like in prior years, this is the must-see pre-PASS Summit webcast where you’ll get all sorts of information about this year’s PASS Summit. This includes sessions, directions around the convention center, parties, Speaker Idol, and much, much more.
Getting registered for the webcast is simple. Go here and fill this out. That’ll get you the URL for the webcast and the ICS file to put into Outlook so that you get a reminder for the webcast.
There will be reminders posted on Social Media, but even if you haven’t gotten approval to go to PASS yet, get registered for this webcast so you don’t forget. After all, it’s free.
I’m pleased to announce that PASS has made the mistake of letting me run Speaker Idol at the PASS Summit for another year. If you haven’t seen Speaker Idol before, it’s kind of like the TV show of a similar name, but instead of winning a contract that’ll never be fulfilled by some recording company, you win a speaking spot at the next PASS Summit. One of the parts I love about Speaker Idol is that by the end of the PASS Summit we know who our first speaker is for the next year. It also gives 12 speakers the chance to speak at the PASS Summit, lightning-talk style, with immediate feedback from our panel of judges, who are all top PASS Summit speakers.
If you’ve always wanted to speak at the PASS Summit but haven’t been selected, this is your chance to get in front of the community and make yourself known. Several past contestants of Speaker Idol have gone on to speak at the PASS Summit even without winning the competition. One of the benefits of competing is that it gives you visibility with the program committee, the judges, and the community at large, so that more people are aware of you and your speaking style. That alone will increase your odds of being selected to speak at the PASS Summit in the future (even if you don’t win the competition).
If you competed in a prior Speaker Idol (and didn’t win a speaking slot at the PASS Summit), you’re welcome (and encouraged) to submit again.
Here are a few blog posts that some of our prior contestants have written about the experience.
Now for the rules (or you can just sign up here):
You must have spoken at one of the following events (in a format other than a lightning talk, where your session had at least 30 minutes of content delivered by you):
- SQL Saturday
- SQL Bits
- SQL Server Days
- Microsoft Ignite
- Dev Connections
- Other large event with a national or international reach
You have not spoken at the PASS Summit or PASS Business Conference for a breakout, spotlight, or pre-con session. (If you gave a lightning talk and no other session, you are eligible.)
You are not scheduled to speak at the 2017 PASS Summit.
You will be in Seattle during the PASS Summit (attending PASS isn’t needed, and if you aren’t attending PASS we will get you to the session room for Speaker Idol).
You are available to attend whichever Speaker Idol session(s) you are assigned to (including the final session on Friday afternoon).
You have a 5 minute technical presentation which you can give. (Time management is your responsibility.)
You sign up on the registration form before September 8th, 2017 at 11:59:59pm (23:59:59) according to Google’s time function in their spreadsheet application. Because of the tight schedule between this announcement and the PASS Summit, late applications can’t be considered.
You are not a member of the PASS Board of Directors or an employee of PASS.
You are not employed by a sponsor of Speaker Idol
If a contestant drops out of the contest before the PASS Summit starts, the organizer (Denny Cherry) will reach out to the next person on his list that wasn’t accepted and offer them a position in the contest. People will be offered spots in the competition until someone accepts. The organizer will NOT announce how many people, if any, turned down a spot in the competition.
If a contestant drops out of the contest during the PASS Summit (for purposes of this clause “during the PASS Summit” is defined as from the time that Denny gets to Seattle, which will probably be Saturday or Sunday before the conference, until Friday afternoon) no changes to the lineup will be made.
If a contestant wins their round and cannot continue to the finals for any reason, the runner-up from their round will be declared the winner of that round and will move on to the finals.
If a judge is not able to complete the competition they will not be replaced.
Other Idol Rules
The wild card winner will be selected by the judges at the end of the Friday morning round from the runners up from each round. If a winner is not able to compete in the finals, and the runner up is already speaking in the finals, the judges may consider all contestants from that round as eligible for the wildcard slot.
Judges do not have to recuse themselves because a friend is in the competition. Judges are expected to be fair.
You are responsible for your own travel to the PASS Summit to compete in this competition.
You may present your session in any format that you’d like. You can view the recordings from 2015 via PASS TV (the 2016 recordings currently require that you purchased the recordings; I’m working on fixing that).
While you can use any format you’d like when giving your presentation, it is recommended that you use a format which will show the judges how your presentation skills will translate from a 5-minute session to a 75-minute session, as that’s how long PASS Summit sessions are.
Contestants are allowed to watch the other Speaker Idol sessions and make changes to their presentations as they see fit.
In case you missed it above, the Sign Up Form is <- there.
Stay tuned for announcements on the judges and the contestants. (Judges won’t be announced until after the PASS schedule is finalized as I need to see who’s available to be a judge.)
Shortly after the sign-up period closes, the applications will be sifted through and the 12 contestants will be selected and emailed. They’ll have a short period of a couple of days to accept the invitation to compete. Once we have 12 “yeses” we’ll all get on the phone to talk through what you should expect, talk about what kinds of sessions you want to give, and answer any questions from this year’s 12.
That’ll give the 12 contestants a few weeks to put together their slides for Speaker Idol and prepare to give the best 5-minute sessions that they can.
One of our customers ran across some interesting data within their Azure bill. Specifically, they were looking at the billing for their SQL DW (SQL Data Warehouse), but the same issue can show up in the billing statement for other services as well.
This customer has a single SQL DW set up with a service tier of DW300. Looking at their monthly bill last month, something looked kind of odd. A snippet of the bill is shown below.
Some things stand out to me when looking at this bill. First is the service that is being billed: it doesn’t say how many DWUs we’re being billed for (more on that later).
Something else odd is that the SQL DW shows being billed for 1,220 hours. In theory, the maximum number of hours any single service can be billed for in a single month is 744, as that’s 24 (hours) × 31 (days), yet here we’re being billed for 1,220 hours. We know for a fact that this SQL DW is powered off at times, so how are we using this resource for more than 31 days in a month?
The last oddity is the rate: $1.46 an hour. This points us to the reason for all of these strange numbers; that’s the rate to run a SQL DW at DW100 for one hour.
The entire SQL DW service is built around blocks of 100 DWUs, with cost and scale incrementing from there. If you look at the pricing page for SQL DW you’ll notice that the price increase is linear. This is for a reason: it makes the billing MUCH easier.
So even though we’re being billed for 1,220 hours at the DW100 rate, the DW300 instance actually only ran for about 406 hours and change.
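The conversion is simple arithmetic, sketched here in T-SQL:

```sql
-- The 1,220 billed hours are expressed at the DW100 rate; a DW300
-- warehouse consumes 3 of those DWU-100 units per clock hour it runs.
SELECT 1220.0 / (300 / 100) AS ActualHoursRunning;  -- about 406.67 hours
```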
When reviewing your Azure bill, especially for PaaS services, keep in mind that the billing may not look like you expect, because Microsoft may have configured the billing at a lower tier and then used a multiplier to get to the correct pricing.
Let me start by saying to every event organizer: thank you for running the events you run. This post may come off a little bad-tempered, but please know that it comes from me wanting to make your life easier, and to make your events even better than they already are.
Please, please communicate with potential speakers WELL in advance of your event. If you’ve got a list of speakers that you know, based on their abstracts, you’ll be accepting (even if you don’t know which specific session you’ll be picking), then let those speakers know that you’d love to have them speak at your event. If you don’t know yet which session you want from them, say so. This gives them the chance to start booking travel as quickly as possible if they are coming from a remote location. If international airfare is required, booking early can save the speaker hundreds of dollars in flight costs. If the speaker is paying for the flights out of their own pocket, this can be a big benefit.
Doing It Right
A perfect example of this is the Data Platform Geeks Summit in India. The team there has been great about working with the international speakers several months in advance to let them know that they’ve been accepted, so that flights could be purchased. In addition to being able to book flights in advance, some of the speakers were able to book other speaking engagements in Southeast Asia, maximizing the ROI on the expense of flying to the other side of the planet.
The more communication that you can have with your speakers and sponsors the better as it keeps everyone advised on what’s going on and what needs to get booked and shipped well in advance. This holds true even after you’ve announced your schedule—you won’t annoy speakers by over communicating.
The Trump Administration has put together an Election Integrity commission to investigate voter fraud here in the United States. They are requesting that all state governments hand over a LOT of personal information including (but not limited to) your name, address, birthday, political party (if recorded), the last four digits of your Social Security number, and which elections you’ve participated in since 2006. This data is known as Personally Identifiable Information (PII).
This is not a political post, that’s for political folks to talk about. I’m an information security professional: I want to talk about the massive risk to all our personal data if this information is sent over.
Here’s the letter that was sent to the states. First let’s review how the data should be sent in: one option is via email. We all know email isn’t secure, witness the email hacking and data drops from the last election. Transmission of the information alone puts every individual at risk of a hack. Alternatively, they offer the SAFE system as a backup. Based on the URL, the system is run by the US Army, not the commission, which will require a secondary transfer of our PII, under the assumption it’ll end up in the correct place, won’t be lost, released, etc. Moreover, it’s entirely possible it could arrive at the U.S. Army and still be transferred via email, nullifying any attempt made in the name of personal security.
Now let’s dig into that second to last sentence there. “Please be aware that any documents that are submitted to the full Commission will also be made available to the public.” So the panel is going to take everyone’s personal data, the exact data needed to steal someone’s identity, and release it to the public? Where anyone can download it?
Finally there’s the request for the last four digits of your Social Security number, which is the most pernicious and misguided request in the letter, because it is the key to your identity. The four digits you would be handing over are the most random ones in the number, as they are assigned in sequential order as cards are requested. The first five digits, if you applied for a Social Security number before 2011 (which is almost all of us), are sequenced based on location: the first three digits were assigned based on the ZIP code of your mailing address at the time of your application. You may still live in the same ZIP code, or your previous addresses can easily be tracked down online. The next two digits are a group number, again based on location, and again possible to work out with a little effort.
As a final consideration, there are also people who need to stay hidden for their own protection: spouses who do not want their abusers to locate them, witnesses from court cases, adoption issues, even celebrities who don’t want to subject themselves to stalkers or superfans constantly knocking on their front door.
And there is no “opt out” for an individual; the record is historical in nature, going back to 2006, and is provided by the election board, not the individual.
This whole project seems to have been conceived without consulting anyone in the information security sector, and it shows. Even if the panel could overcome the transfer and privacy issues, we have no way of knowing how this information will be stored (assuming it isn’t released to the public as the letter suggests) so that confidential data can actually remain confidential.
Thankfully many of the Attorneys General from across the country have said that this won’t be happening because of state privacy laws. Encourage them, and your state election boards, to keep fighting this. Hopefully the commission will figure out that this is a horrible idea.
On Saturday a lot of winners of the Microsoft MVP award got emails that they’ve been waiting for telling them that they were renewed as MVPs for another year. Some people didn’t get renewed for one reason or another, and only Microsoft knows why.
You may have expected to see me tweet that I was renewed as a Microsoft MVP on Saturday, but I didn’t. That’s because I wasn’t. Now this doesn’t mean that I’m not an MVP, because I am. The thing is that unlike most of the other MVPs I wasn’t up for renewal as a Microsoft MVP July 1st. The reason why is a little lengthy, so bear with me.
Back in the olden days of the MVP program (2016) there were 4 renewal schedules, at the top of each quarter. I received my award in October the first time around, so I got renewed every October. When they announced the change to the program around the MVP Summit last year they had just renewed the October awardees so technically we’d all still be MVPs when renewal came up in July. Because of that Microsoft decided that we’d keep our October award expiration, then they’d extend that until the next July (our awards should have expired October 2017, so they now expire July 2018). Because of this we all get to keep our awards for an extra 9 months (January and April award winners got extended until July 2017, the July award winners didn’t get any extra time).
So yes, I’m still an MVP. You can tell who’s an MVP by looking at the MVP webpage. This isn’t a perfect solution, as some MVPs opt to hide their profiles for one reason or another, but it’s a pretty good bet that if someone isn’t listed there they aren’t an MVP anymore, and if they are listed they are. As soon as someone is no longer an MVP their profile is removed, so there’s no risk of finding people who aren’t MVPs in the MVP directory.
Congrats to the MVP Class of 2017, you’ve all earned it. See you at the next award cycle.