Today Amazon’s AWS S3 service in US East suffered an outage. Along with the outage we saw just how many companies didn’t build DR plans into their cloud deployments. The most depressing example was that AWS apparently couldn’t update its own status board because it’s hosted in S3 in US East. This is what I call a massive design failure of their application.
Along with the issue there were some great graphics that people were creating, probably because they had nothing else they could do at work.
It’s kind of scary just how many services reported as being offline due to the outage; a sampling can be found below. Even the mighty isitdownrightnow.com was offline.
On top of this there are people who just aren’t able to work at the moment, because they’re either building something against AWS, or the apps they use are hosted only in a single AWS data center.
Building your systems so that they can survive a total outage of your primary data center is key when putting services into the cloud. You have to plan for the site that’s hosting your services to fail. This isn’t an AWS issue, or an Azure issue. All the cloud providers will have a failure somewhere along the line that’ll take an entire site offline. How you’ve configured your systems to handle these failures will determine whether you can keep working during the outage, or whether your staff is sitting around tweeting about how they can’t work because your cloud provider is offline.
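The core of that design idea can be sketched in a few lines: keep a copy of your data in a second region and make the application try the secondary when the primary fails. This is a minimal Python sketch, not any provider's SDK; the region names and fetch functions are hypothetical stand-ins for real storage calls.

```python
# Minimal failover sketch: try the primary region first, fall back to the
# secondary. The regions and fetch functions are hypothetical placeholders.

def fetch_with_failover(fetchers):
    """Try each region's fetch function in order; return the first success."""
    errors = []
    for region, fetch in fetchers:
        try:
            return region, fetch()
        except Exception as exc:  # in real code, catch the SDK's specific errors
            errors.append((region, exc))
    raise RuntimeError(f"all regions failed: {errors}")

def fetch_us_east():
    raise ConnectionError("us-east-1 is down")  # simulate the outage

def fetch_us_west():
    return "object contents"  # healthy secondary copy

region, data = fetch_with_failover([("us-east-1", fetch_us_east),
                                    ("us-west-2", fetch_us_west)])
print(region, data)  # the request survives the us-east-1 outage
```

The point isn't the ten lines of Python; it's that the second copy of the data has to exist, and the fallback path has to be built, *before* the outage happens.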
Are you using the SQL Server 2016 SSMS? Do you HATE the help experience when you press F1 and it opens the GOD DAMN webpage?
Did you know that you can change this horrible annoying behavior?
YES YOU CAN!!!!! Under SSMS’s Help menu, change Set Help Preference from “Launch in Browser” to “Launch in Help Viewer”.
Now, when I did this I had to restart SSMS to get the change to actually take effect. Half the time help hangs when I open it, but I’m on the fast ring for Windows 10 so that might be why.
Best blog post ever!
If you’re hosted in Azure in the West US region you may be getting a free hardware upgrade later this month or early next month. Part of the Azure infrastructure is being upgraded, and VMs that are running in the part which is being upgraded will be automatically moved to the new hardware.
Now if you’re going to be impacted, whoever is the service administrator for your company will be getting (or has gotten) an email about this; the upgrade will include a reboot of all the impacted VMs. If your VMs are within an Availability Set then you’re guaranteed to have only one VM in the Availability Set reboot at a time. This is why we have Availability Sets.
In fact, this is one of the reasons that Azure is a great platform. We’ve got systems that are getting moved to new, (hopefully) better hardware and there’s basically no impact to the systems in question. Just a reboot of the impacted systems, that’s it. All in all, that’s a pretty minimal impact.
Now the email does include instructions if you’d like to schedule the upgrade yourself. Instead of letting Azure move your services automatically, just power down all the VMs in the Availability Set, then power them back on; when they power back on they’ll automatically be moved to the new hardware. Yes, this does mean that you have to take an outage, but you’ve got two weeks’ notice to schedule the outage and complete it before Microsoft will force the issue.
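The one-at-a-time guarantee is the whole value of an Availability Set, and it's easy to model. This is a toy Python sketch (the VM names are made up, and real Azure actually groups VMs by update domain, which I've simplified here to one VM at a time):

```python
# Toy model of an Availability Set rolling upgrade: VMs reboot one at a
# time, so the set as a whole stays up. VM names are hypothetical.

def rolling_reboot(vms):
    """Reboot each VM in turn; return the most VMs ever down at once."""
    status = {vm: "running" for vm in vms}
    max_down = 0
    for vm in vms:
        status[vm] = "rebooting"  # this VM moves to the new hardware
        down = sum(1 for state in status.values() if state == "rebooting")
        max_down = max(max_down, down)
        status[vm] = "running"    # back online before the next one reboots
    return max_down

availability_set = ["sql-vm-1", "sql-vm-2", "sql-vm-3"]
print(rolling_reboot(availability_set))  # 1 -> never more than one VM down
```

If your VMs aren’t in an Availability Set, you get no such guarantee, which is exactly why the email matters.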
All in all, that’s not bad.
If you’re getting the upgrade, enjoy that new server smell.
A few weeks ago Grant Fritchey and I had the chance to speak at five user group meetings in five days, in five cities all over Florida.
Clearly we’re both insane as we agreed to do this. Everything was scheduled by Karla Landrum and she got some big shout outs at the User Group meetings.
But one person who didn’t get enough credit was Rodney. He was kind enough to take a week off of work and drive up to Nashville SQL Saturday to pick me up, then drove Grant and me all over Florida to all these user group meetings. Driving for 6-8 hours a day while Grant and I worked away on our laptops couldn’t have been an easy task, and I just wanted to throw out a shout out to Rodney for being willing to put up with me, Grant, and Karla for 5 days of driving around the state.
So thanks Rodney.
Back when I started Denny Cherry & Associates Consulting I needed a way to track my time. I used Excel and it worked well enough. But then I decided to start expanding the company, so I needed to find a better solution for tracking time, as that’s how consulting companies get paid.
I found lots of options that all cost a small fortune, and all they did was track time. They didn’t do invoicing, and if they did do invoicing they cost even more. Some were up to $40 per user per month. Needless to say, as a small company there was no way I was going to start paying hundreds of dollars a month just so a few people could log time sheets. Especially as we occasionally have outside people working for us that we subcontract client work out to. So even if those people weren’t working that month we’d have to pay for them to have access to the system, just in case. That wasn’t going to fly.
So I ended up building our own time sheet system in WordPress as a plugin (we use WordPress as our website so it made it an easy place to work). It’s taken a lot of additional work adding in features that others (and us) would want, but we’ve gotten the plugin ready for public release and use by other companies. It’s now available in the WordPress plugin list as “Time Sheets“. There’s also some information available about it on our website. If you’re running a small (or even larger than small) company that needs a free application to track time, then I’d recommend checking out our time sheet system.
The system has all the workflows that you’d expect from a large timesheet system (approvals by supervisors, an invoicing queue, and a payroll queue). You can turn off features as needed such as the notes for each days work, or the expenses section if those aren’t going to be needed by various teams (or anyone).
We’ve found it to be pretty handy and easy to use for our small team, and hopefully you find it useful as well.
Recently I was interviewed for ReachForce.com’s Expert Interview Series on the impact of the Cloud and how it’s been impacting IT. Check out the interview and see my thoughts on the cloud and IT.
Working in Azure means mastering Private Browsing (aka. Porn Mode). This is especially true if you have multiple Azure Active Directory accounts (or at least several Microsoft Live accounts) that you need to log into. On any standard day I usually have 2-3 different browsers open, each running in both normal and porn mode, so that I can be signed into several clients at once.
After working in Azure for a while you’ll find that you are a master of Ctrl+Shift+P (Internet Explorer and Firefox) and Ctrl+Shift+N (Chrome and Safari).
I wrote this blog post because this phrase came up during a client meeting, and the client bet me that I couldn’t write a blog post about it. So there. :p
P.S. Never bet me I can’t write a blog post about something stupid.
At SQL Bits this year I’ll be presenting my all day session Database Administration for the Non-DBA. This all day training day is a great session for those who work in shops where they have to function as the DBA, but their job isn’t to be the DBA (or someone who is brand new to being a DBA). We’re going to talk about all the great things that you have to worry about as the DBA including backups, restores, corruption, performance tuning, indexing, virtualization, High Availability, Disaster Recovery, and much more.
If you are an accidental DBA, or are just getting into the DBA field then this is the training day event that you want to sign up for. So stop waiting, and get registered (you select the training day during the normal registration process for SQL Bits).
Well, that really depends. When your database VM fails for some reason (either someone does something bad like deleting the VM by accident, or a Windows patch doesn’t install correctly and the VM won’t start, etc.), do you care about having your logins, your jobs, any SSIS packages deployed to msdb, any operators, etc. restored to the system? Or do you want to have to redeploy all those things after you rebuild the server and get SQL set back up? If you aren’t OK with redeploying those (or the risk of those deployments is too high) and all the risks that go with that, then restoring those system databases is going to be the way to go. (Hint: a restore is always going to be more reliable than a redeployment.)
Now I know with AlwaysOn Availability Groups you can just fail over to another node, but at some point you need to rebuild that failed node.
What if you don’t have Availability Groups in place? Maybe it’s a tier 2 server that doesn’t have HA beyond what Azure offers, or maybe it’s a failover cluster, so you don’t have multiple copies of the system databases.
Now I get that restoring master isn’t as easy as restoring a user database, but it’s actually pretty easy. It just requires restarting SQL Server in single-user mode with a couple of switches (SQLServer.exe -c -f -m) from the command line; then you log in from another command line window using sqlcmd and run the RESTORE DATABASE command to actually restore master. Restoring msdb is easier: all you have to do is stop the SQL Server Agent (and anything else using msdb), then restore the msdb database like normal.
If I can easily describe how to restore the two databases in a single paragraph then it shouldn’t be all that hard for someone to do.
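To make that paragraph concrete, here’s the procedure as an annotated command outline. This is a sketch, not a runnable script: the backup paths, the default instance service names, and the trusted-connection flag are placeholders you’d swap for your own environment.

```
REM -- Command outline (not a script): restoring master, then msdb --
REM Paths and service names below are placeholders for your environment.

REM 1. Stop SQL Server, then start it in single-user mode from the Binn folder:
net stop MSSQLSERVER
sqlservr.exe -c -f -m

REM 2. From a second command window, restore master via sqlcmd;
REM    the instance shuts itself down when the restore completes:
sqlcmd -E -Q "RESTORE DATABASE master FROM DISK = N'D:\Backups\master.bak' WITH REPLACE"

REM 3. Restart the service normally:
net start MSSQLSERVER

REM 4. For msdb, just stop the SQL Server Agent (and anything else using
REM    msdb), then restore it like a user database:
net stop SQLSERVERAGENT
sqlcmd -E -Q "RESTORE DATABASE msdb FROM DISK = N'D:\Backups\msdb.bak'"
net start SQLSERVERAGENT
```

Four steps for master, three for msdb. That’s it.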
Now the cost. Your system databases should be small. If you’ve got pruning set up correctly in your msdb database then the backups (which you should be writing to RA-GRS, Read-Access Geo-Redundant Storage, using the Backup to URL feature of SQL Server) should be a few megs per day. Assuming you keep them for 10 days (which is probably more than most people do), and we’ll assume 1 gig of space needed for backups to make the math easier, you’re talking about $1.20 per month to store the backups. If you kept the backups for 31 days then the cost is $3.72 per month for the system backups. (Assuming you don’t have an EA and you are paying full retail price, which basically no one should be.)
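For anyone who wants to plug in their own numbers, the math above works out to roughly 0.1 GB of backup per day at an effective $1.20 per GB-month (those two figures are what the post’s examples imply, not the actual Azure price sheet):

```python
# Back-of-the-envelope backup storage cost. The per-day backup size and
# per-GB rate are assumptions implied by the examples above, not quotes
# from any price sheet.

def monthly_cost(retention_days, daily_backup_gb=0.1, rate_per_gb_month=1.20):
    """Dollars per month to store `retention_days` worth of daily backups."""
    stored_gb = retention_days * daily_backup_gb
    return round(stored_gb * rate_per_gb_month, 2)

print(monthly_cost(10))  # 1.2  -> $1.20/month for 10 days of backups
print(monthly_cost(31))  # 3.72 -> $3.72/month for 31 days of backups
```

Swap in your real daily backup size and your real storage rate and the number will still be pocket change.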
I’ve shown that system databases are easy to restore.
I’ve shown that they don’t cost much to backup (if you aren’t paying attention, the cost to store these backups is a rounding error in the cost of the SQL VM).
So what’s your excuse now for not backing up the system databases?
I wanted to throw out a reminder that I’ll be giving a pre-con at Nashville SQL Saturday 2017. I know that the announcement for it was right before the holidays and things can get lost around the holidays pretty easily. So with that…
I’ll be presenting a pre-con named SQL Server for the New or Non-DBA on January 13th, 2017 at 8am CST.
In this all day session on Microsoft SQL Server we will learn how Microsoft SQL Server works and what needs to be done to keep it up and running smoothly when you don’t have a full-time database administrator on staff to help you keep it running.
In this session we will cover a variety of topics including backups, upgrade paths, indexing, database maintenance, database corruption, patching, virtualization, disk configurations, high availability, database security, database mail, anti-viruses, scheduled jobs, and much, much more.
After taking this full day session on SQL Server you’ll be prepared to take the information that we go over back to the office, and get the SQL Servers patched and properly configured so that they run without giving you problems for years to come.
Be sure to go register for the pre-con, as registration is required.
See you at SQL Saturday Nashville.