Anything that uses a database needs to have its database settings configured correctly, and that includes all the VMware tools. I’ve had a couple of customers contact me recently about VMware’s AirWatch system because the transaction log on the database keeps growing. The reason in both cases so far is that the database is using the default settings and the transaction log was never backed up. I talked about this at Microsoft’s TechEd conference a couple of years ago during my talk titled “Introduction to SQL Server Management for the Non-DBA”. If you prefer a VMware-themed slide deck, I’ve given basically the same presentation at VMworld as well (registration required).
Fixing this is actually quite simple. You need to do one of two things:
- Change the recovery model from FULL to SIMPLE.
- Start taking transaction log backups of the database.
Now, how do you pick which one of these to do? That’s pretty easy. Do you need point-in-time recoverability of the database? If the answer to that question is yes, then select option 2. If the answer is no, then select option 1.
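If you do go with option 2, a log backup looks something like this (the database name and backup path here are placeholders; in practice you’d schedule this via a SQL Server Agent job and generate a unique file name for each backup):

```sql
-- Back up the transaction log so SQL Server can reuse the log space.
-- Run this on a schedule (e.g. every 15-30 minutes); each backup
-- should go to its own file so you keep the full log chain.
BACKUP LOG [YourDatabaseName]
TO DISK = N'D:\Backups\YourDatabaseName_log.trn'
WITH CHECKSUM;
```

Once log backups are running regularly, the log space is reused after each backup and the file stops growing.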
With most AirWatch systems you don’t really care whether the database is restored to what it looked like yesterday or 5 minutes ago, so you usually don’t need point-in-time recovery. So just change the database from FULL to SIMPLE and you’re good to go. Doing this is actually really easy. Connect to the SQL Server in SQL Server Management Studio. Right-click on the database and select Properties. Change the recovery model from FULL to SIMPLE, then click OK. That’s it; the log won’t grow any more.
If you need to reclaim the space because the disk is full, then you need to shrink the log file. To do this, click the “New Query” button at the top of Management Studio and run the following command (file ID 2 is usually the transaction log; you can confirm with sys.database_files).
DBCC SHRINKFILE (2, 1024)
That’ll shrink the file down to 1 Gig and it “shouldn’t” ever grow any larger than that. There’s no outage to doing this, and no risk to the system. Because of the way that shrinkfile works you may need to run it a couple of times over a couple of days in order to actually get all the space back.
If you don’t have SQL Server Management Studio, or you can’t find the properties, here’s a script that’ll do everything that I’ve described. Just run this in the AirWatch database (it could be named anything so you have to change the database name in the USE statement from YourDatabaseName to whatever the name of your AirWatch database is).
USE YourDatabaseName
GO
DECLARE @dbname sysname
SET @dbname = db_name()
DECLARE @sql nvarchar(1000)
SET @sql = 'ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE'
EXEC (@sql)
GO
CHECKPOINT
GO
CHECKPOINT
GO
DBCC SHRINKFILE (2, 1024)
GO
Nesting views in SQL Server by itself probably won’t introduce performance issues. However, what it will do is make it MUCH harder to find performance problems. When you have the top view calling 3 views, each of which calls 3 more views, you suddenly have a nightmare of views to unwind to resolve the performance issues.
There’s another dirty little secret of views. Even if you aren’t accessing columns from all the tables that the views reference, SQL Server still typically needs to query those tables. This increases IO for your queries, for basically no reason. For this reason alone it often makes sense not to use views, and especially not nested views.
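As a contrived sketch of the problem (the table and view names here are made up for illustration): a view that joins two tables forces queries against it to touch both tables, even when you only select columns from one of them.

```sql
-- Two tables joined inside a view.
CREATE TABLE dbo.Orders (OrderId int PRIMARY KEY, CustomerId int, Total money);
CREATE TABLE dbo.Customers (CustomerId int PRIMARY KEY, Name nvarchar(100));
GO
CREATE VIEW dbo.vOrderDetails
AS
SELECT o.OrderId, o.Total, c.Name
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;
GO
-- This query only needs columns from Orders, but the execution plan
-- still reads Customers, because without a trusted foreign key SQL
-- Server can't prove the inner join is safe to eliminate.
SELECT OrderId, Total
FROM dbo.vOrderDetails;
```

Now imagine that same pattern repeated five or six levels deep, and you can see where the extra IO comes from.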
Now there are plenty of reasons to use views in applications; however, views shouldn’t be the default way of building applications, because they do have these potential problems.
While working with a client the other week we had to unwind some massive nested views. Several of these views were nested 5 and 6 levels deep, with multiple views being referenced by each view. Queries would take minutes to execute instead of the milliseconds they should have been running in. The problems that needed to be fixed were all index-based, but because of the massive number of views that needed to be reviewed, it took almost a day to tune the single query.
Eventually the query was tuned, and the query was able to be run in about 200ms, but the cost to the customer was quite high.
Use views when they make sense, but don’t use them every time. Like everything else in SQL Server, there is no one right answer for every problem. Views are one solution for a subset of problems, not the be-all and end-all solution.
Today Microsoft announced some MAJOR changes to the SQL Server 2016 product. As of SQL Server 2016 SP1, which ships today, from a programmability perspective there is no difference between the editions anymore. All programmability features will be available in all editions, from LocalDB to Enterprise Edition (there are a couple of exceptions for various reasons, which I’ve tried to note below).
This includes features such as table partitioning, Hekaton, ColumnStore, distributed partitioned views, Change Data Capture, Change Tracking, etc., all of which will work everywhere. Basically the goal of this change was to make it so that you can deploy your database via a deployment script on any edition of SQL Server and it will just work, no matter whether you are deploying on Express or Enterprise Edition.
There are some realistic exceptions to this. Change Data Capture won’t work in Express Edition or LocalDB, for example, because those editions don’t include the SQL Server Agent.
This is a huge improvement for software vendors as they can now use all the features of SQL Server 2016 SP1 and above without needing to require that their customers buy Enterprise Edition.
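One way to see whether a database already depends on features that used to be edition-restricted is the sys.dm_db_persisted_sku_features DMV, which lists Enterprise-only features persisted in the database (on pre-SP1 builds these would block a restore to a lower edition):

```sql
-- Lists edition-specific features (e.g. Compression, Partitioning)
-- persisted in the current database. An empty result means the
-- database could be restored to any edition even before this change.
SELECT feature_name, feature_id
FROM sys.dm_db_persisted_sku_features;
```

After SP1, ISVs can use these features freely and this check matters far less, but it’s still handy when supporting older builds.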
You may be asking yourself, “Why would I buy Enterprise Edition after this announcement?” The reason to buy Enterprise Edition hasn’t really changed. Most people didn’t buy Enterprise Edition to get any of these features. Customers typically buy Enterprise Edition to get access to more than 16 cores and more than 128 Gigs of RAM. This will still be the case moving forward. If you need to run SQL Server on massive hardware at massive scale, if you need scale-out HA using Always On Availability Groups, or virtualization installation rights (also known as host-based licensing), then you will still need to purchase SQL Server Enterprise Edition.
The next question is “why is Microsoft doing this?” The reason is that Microsoft wants to make life simpler for people that develop software. They want to make it so that ISVs can install their software using a single codebase. They can start their customers on Express Edition, and as the customer grows they simply upgrade their instance to Standard Edition, then later to Enterprise Edition, without having to have the customer make ANY changes to the application database. Just buy the new SQL Server license and do a SKU upgrade.
Personally I don’t see this taking any Enterprise Edition license sales away from Microsoft (remember that Microsoft is in business to sell software). The primary reasons for buying Enterprise Edition are still there. There’s probably a couple of people that won’t need to buy Enterprise Edition, which is fine. Now they’ll be happier customers because they only need to buy the right sized license for their workload.
All this will be available in SQL Server 2016 SP1 as well as CTP1 of SQL Server v.Next (it doesn’t have a name yet) which also ships today.
How Much RAM?
When it comes to In-Memory OLTP (Hekaton) and ColumnStore, there are probably some questions about how much memory you can use in editions other than Enterprise Edition. The rule is that Hekaton and ColumnStore will each be allowed an additional amount of memory equal to 25% of the edition’s memory cap, on top of that cap. So if you are using SQL Server Express Edition, which has a 1 Gig cap for SQL Server pool memory today, you’ll be able to use an additional 256 Megs of RAM for Hekaton and an additional 256 Megs of RAM for ColumnStore. For Standard Edition you can use 128 Gigs of RAM for pools, plus an additional 32 Gigs of RAM for Hekaton and an additional 32 Gigs of RAM for ColumnStore. When using these features, this effectively gives you a 50% increase in the amount of RAM you can use with these down-level editions.
With PolyBase there’s an additional note. If you scale PolyBase out to a scale-out farm, the head node of the farm needs to be Enterprise Edition; however, the worker nodes of the farm can be Standard Edition. This is a massive cost savings for those using PolyBase for huge amounts of data processing. If you don’t need the scale-out feature and just need normal PolyBase support within SQL Server, that will be available in all editions other than LocalDB.
Just wow, I can’t imagine the work that went into getting this change in place. And I’m not talking about the technical work to get this far, but the work of getting all the various people in management to approve such a radical change of taking what we have traditionally called “Enterprise Edition” features and moving them into not just Standard Edition, but into every edition of SQL Server 2016 SP1. This is just huge.
Now there are a few things which aren’t quite done yet because they’re going to take some more work, so we’ll hopefully see these changes rolling into SQL Server 2016 in later CUs or SPs.
In the meantime, enjoy the new features once you patch your SQL Server 2016 servers to SP1.
If you are running an older version of SQL Server don’t look for a new service pack that’ll back port these features to your older version. The only way that you’re getting that to happen is with an upgrade to SQL Server 2016. If you are looking for a consulting partner to help you with those upgrades, might I recommend Denny Cherry & Associates Consulting. We are available to assist you with those projects right away.
This week is the MVP summit and it seemed an appropriate time to talk about why I love being an MVP. (This post is going to be full of not so humble brags.)
The MVP summit is probably one of the best things about being an MVP. We as MVPs get direct access to members of the product team (in person at the Microsoft campus) where they show us outsiders all sorts of stuff that they are working on, or thinking about working on (different groups do this to different amounts, with the SQL Server team being one of the most open AFAIK).
Because of this openness I’ve been able to build close personal relationships with various members of the product team, and I’d even call several of them friends, as we have talked about non-work things when we see each other on work trips to conferences and other events. Without being a Microsoft MVP I may have met some of the folks on the product team, but we wouldn’t have been stuck in the same room for a week, allowing us the opportunity to get to know each other. As this is my 9th year as an MVP, and my 9th MVP summit, I’ve had a good amount of time to get to know various members of the product team at this point, and hopefully I get to know more in future years (if I’m lucky enough to be renewed).
There are other great benefits of being an MVP, such as the MVP email lists, where you can interact with MVPs from all over the world, in just about every country, in just about any product group that you can think of. There are the Product Group Interactions (PGIs), which are webcasts where the product group talks to the MVPs about some topic for an hour. And of course there’s the MSDN subscription. All of these are great things which the MVP program and the product group give to the MVPs. But by far the most important to me are the relationships which I’ve built up over the years, which give me some amazing insight into Microsoft and how the products are made (for good or bad), and the level of trust that the members of the product group have in me with information about the future of products (what little they like to share, even with MVPs).
This week I’ve found some great things for you to read. These are a few of my favorites that I’ve found this week.
- Driving Cultural Change? You Are What You Measure
- Vblock and VxBlock use Cisco UCS. Got it?
- Nomenclature Matters
- Dell EMC Announces Isilon All-Flash
- vSphere on AWS – A Deal with the Devil?
Hopefully you find these articles as useful as I did. Don’t forget to follow me on Twitter where my username is @mrdenny.
Microsoft has had the registry keys for Run and RunOnce in the registry since the registry was introduced in Windows 95 and Windows NT 4. But in the 20+ years that those keys have been there (and I’ve used them for a variety of things) I’ve never known that there was a limit on the length of the commands that you could put into those keys.
I found this while working on a client project when I needed to kick off some PowerShell automatically when the server restarted, to get it added to the domain as well as do a few other things. But for some reason the key just wasn’t running.
I did a LOT of digging around the Internet and stumbled across an old Stack Overflow post (from 2012) whose comments mention that there’s a limit of 256 characters. I threw the command into Microsoft Word and, lo and behold, my command was 350+ characters long.
The next step was to change the PowerShell script that created the command to put it into a batch file and then set up the RunOnce key to run the batch file instead, and it worked on the next try.
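The workaround looks something like this (the paths, value name, and script name here are made up for illustration): keep the long command in a batch file, and have the RunOnce value point at the batch file so the registry value itself stays well under 256 characters.

```
:: C:\Setup\join-domain.cmd -- the long command lives here, not in the registry
powershell.exe -ExecutionPolicy Bypass -File C:\Setup\JoinDomain.ps1

:: Then the RunOnce value only needs to hold the short path to the batch file:
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce" /v JoinDomain /t REG_SZ /d "C:\Setup\join-domain.cmd" /f
```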
So here it is, documented for all the world to see: sometime in the mid-1990s, someone at Microsoft capped the length of a RunOnce command at 256 characters, probably to save memory and because “why would anyone need a command longer than 256 characters?” And today, in 2016, it bit me.
Moving to the cloud isn’t for everyone. When the cloud providers first launched their cloud platforms their approach, and marketing message, was for the most part that everything should be running in the cloud. And they were pushing that message hard.
Today, however, we have a much friendlier, cuddlier cloud, and the cloud providers understand that not everything will be running in the cloud. Hybrid configurations are not only allowed, they are encouraged (don’t get me wrong: Microsoft, Amazon, VMware, Rackspace, etc. will all be thrilled if you’ll put your entire platform in their cloud environments).
The reality is that not all workloads are the right fit for the cloud. If you are running highly sustained workloads, then the cloud probably isn’t the right solution for your environment. The systems which work best in the cloud are the ones which can be converted to use the various PaaS services, not just the IaaS services. The PaaS services will usually provide a more cost-effective hosting solution, especially for workloads which vary over time; for example, ramping up during the day and becoming almost idle overnight.
Even if running in a PaaS environment isn’t an option, it may still be cost effective to run in an IaaS environment. It all depends on how bursty the workload is that you plan on moving to the cloud.
The other metric that you need to consider before moving to the cloud is the ROI timeline. If you just purchased new hardware for hosting your production environment in a CoLo (or your own data center), then moving to the cloud probably isn’t the best option for you, at least not right now. You need to get your money’s worth from the hardware that you just invested in before you replace it with a cloud offering. If, however, your hardware has reached the end of its life cycle, then now is the perfect time to consider moving to the cloud, as one of the benefits is that the cloud provider takes on the hardware replacement costs every few years instead of you having to take on those costs.
When looking at cloud provider costs, remember that the cloud providers are in a race to the bottom on cost. Spinning up services in the various cloud platforms is cheaper than ever, especially if you can take advantage of long-term contracts and pre-paid services, as these can bring the cost of moving to the cloud down even lower.
One big key to taking advantage of the cloud is to scale down when you don’t need the capacity, and scale up when you do. By scaling down resources when they aren’t needed, you could end up running services at a lower price point about 50% of the time, depending on your application’s workload: 16 hours of peak load 5 days a week, 8 hours of low load 5 days a week, and 24 hours of low load on the weekend breaks down to 80 peak-load hours and 88 low-load hours.
With these numbers in mind, moving services from on-premises to the cloud (we do recommend Microsoft Azure, as we are a Gold Cloud Platform partner with Microsoft) suddenly looks a lot more attractive.
We’ve all heard the argument that using a different TCP port number for SQL Server, other than 1433, is more secure. Here’s the truth: it isn’t. Andy said it best in a conversation on Twitter that happened in the #sqlhelp hashtag. If you think that hiding your SQL Servers on a different TCP port number is going to make them harder to break into, you are sadly mistaken.
The two most common vectors for a data breach are employees and SQL injection. Employees already know what port the SQL Server is listening on, as they already know how to connect to it. If not, they can easily download a port scanner and scan the SQL Server for open ports, then connect to the ones that are open. A decent port scanner will tell them which port is SQL Server and what all the other open ports are, so they’ll know exactly which one to connect to. You’ve slowed the attacker down by about 5-10 seconds, maybe.
If the attacker is using SQL injection then they don’t care what port the SQL Server is running on. They are connecting to the web/app tier and just using whatever SQL Server connection information that the application has to get access to your database. By changing the TCP port number you haven’t slowed them down at all.
Proper security requires understanding user and business requirements and building a solution which will meet those requirements, while building proper security protections into the system so that if something does go wrong the exposure is limited.
Hiding servers to make them harder to find doesn’t make them more secure. Spend time doing things which will make the environment actually secure, so that when the system is breached the data isn’t all lost. And if possible, do this without making the environment harder to manage, which is all you are doing by changing the TCP port numbers.
I travel, a lot. A lot of my travel is to go speak at events somewhere in the world. I’ve had the privilege of being able to travel to 4 continents and 12 countries (that I can think of) giving presentations on Microsoft SQL Server and Microsoft Azure.
Why do I do it? To give back. I learned a lot of what I learned from speakers at conferences as small as a SQL Saturday, code camp, user group, etc. and as large as Microsoft TechEd, Microsoft Ignite, EMC World, VMworld, etc. As I learned a lot from these events, I think that it’s only fair that I continue to go to these events and pass what I’ve learned on to the next group of people looking to learn about IT.
With the IT field changing annually, if not more often, we have to keep learning, because if we don’t then we become stagnant in our careers and we become less valuable to our employers, clients, etc.
The other advantage that I have when I go to these events is that I get to meet new people and see new places. As I’ve traveled to all these various conferences and classes I’ve met some amazing people that without speaking at these events I never would have had the opportunity to meet.
Back in high school, and even when I started working, I was a massive introvert. Way, way more than you could have even imagined. The idea of talking to people at all was just something that wasn’t going to happen, much less talking to people that I hadn’t known my entire life. But now I’m so privileged to be able to visit places that I never thought I’d ever get to visit, and meet people that have the same career as me but have had totally different lives than I’ve had. To get to visit with them, even if just for a day or a weekend while at a conference, is such a privilege that I’m happy I have the chance to fly around and meet all these amazing people.
I think my favorite story to date of meeting people in random locations happened this last June (in 2016) when Karen Lopez (@datachick) and I were in Bangalore India teaching a class. She was there for one class and I was there for two classes with a week in between. Karen was kind enough to stay in India for a few days extra to keep me company and to enjoy the awesomeness of the hotel and city with me. On Tuesday we decided that we’d do a little sightseeing and go visit a couple of science museums. At the Science and Technology museum in Bangalore, India we had two different groups of people ask to take pictures with us. We were so shocked that we of course said yes.
When we were talking about it later we were really hoping that they worked in the IT field and knew who one or both of us were, and that they didn’t think we were someone actually famous. I know that I’m really hoping that the photos get uploaded to Facebook and Facebook’s facial recognition tags us (or that they tag us) so that Karen and I can see the pictures and see how they know us.
Anyway, enough of my rambling blog post. The plane is landing soon, on my last leg of my longest trip to date, exactly one month long (I left home on June 2nd and I’m arriving home July 1st).
Hopefully I’ll see you at a class, or conference somewhere in the world. Or maybe at an online event (but those aren’t anywhere near as much fun as it’s a little hard to chat with people over a webinar).
You moved your critical infrastructure to the cloud. You’re now relying on the cloud for 100% of your customer facing systems. Did the sales person from your cloud provider tell you that you didn’t need DR any more because you are relying on their rock solid infrastructure? Did you believe them?
I hope you didn’t believe them, because they were wrong.
Now, you’re probably thinking that this isn’t the first time that I’ve talked about this, and you’d be right. Very right. There was even a webinar. And yet people still think that they don’t need a DR plan when running services in the cloud. When I logged into the Azure portal over the weekend I was greeted with an unhappy-looking map. This outage in Azure isn’t just a small little outage: an entire storage cluster is offline this weekend, which means decent chunks of the East Asia region are offline at the moment. This is exactly why you need to have a DR plan available and executable, even when working in the cloud.
The cloud isn’t some magical platform for running services in thin air. It’s servers, in racks, in data centers, owned by someone else. If you aren’t prepared when those servers fail, and they will fail, then you are going to be in a world of hurt when they do fail.
I wish I could say that the cloud will just take care of this automatically, but it won’t. So, as the technology people who are building solutions in the cloud, we need to be able to handle these sorts of failures, just like we would on-premises, by having our systems designed and built with DR in mind. We need to have DR plans at the ready so that we can survive these sorts of failures.
Now, this isn’t the only outage that Azure has had this year, but Azure also isn’t the only cloud provider to have an outage this year. According to the latest numbers (January through August) which I’ve seen, Azure has been available 99.9982% of the time while Amazon AWS has been available 99.9975% of the time. These aren’t self-reported numbers; they come from Gartner’s CloudHarmony report.
Every cloud provider has outages. How we plan for them, and how we execute those plans, determines how well we handle the outages that come up.