Tweets, Facebook posts, and blog posts can be powerful things. They have the ability to sway people's opinions of others, to drive people to buy software, to sell stock, and to make bad decisions.
Posting cranky posts just to get clicks, views, and retweets does nothing useful; it just shows that all you care about is stirring the pot.
There are lots of ways of being constructive without fanning the flames. In the above tweet the author just craps all over someone, I assume the people who made the service pack, with no context or any follow-up at all. I get that it's only a tweet with 140 characters, but there are ways to give context. In our next example we see exactly how. We have a thank-you to Microsoft for the lovely lapel pin/magnet, but a warning to people who aren't used to handling rare earth magnets that they need to be kept away from kids. As it's a longer post (from Instagram) there's a link through to the original, where the rest of the post finishes with "These are dangerous." The warning is still given, but without crapping all over the fact that someone went through the trouble of sending these out to the MVPs.
I think my message here is: think before you post. Think about how it's going to impact others. Not just those you want to have read it, but those who did the thing that you're writing about. Maybe rephrase that snarky post before you publish it and it'll have more of the desired impact. I can almost guarantee that the first tweet had no useful impact on the SQL Server product team, whereas the second post would have had much more impact on the MVP team when designing the next round of awards.
With the 8TB SSD drives that Azure has, which makes more sense: multiple 1TB SSDs or a single 8TB SSD? Well, that depends. The 8TB SSD gives you 7,500 IOPS and 250 MB/sec, but if I take eight 1TB SSD drives I can get 1,600 MB/sec of throughput and 40,000 IOPS in the same amount of space.
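The arithmetic above is easy to check. Here's a quick sketch; the per-1TB-disk numbers are implied by the eight-disk totals quoted above (40,000 / 8 and 1,600 / 8), so treat them as illustrative rather than a spec sheet:

```python
# Comparing one 8TB Azure SSD with a stripe of eight 1TB SSDs, using the
# per-disk figures quoted above. Per-1TB numbers are implied by the
# eight-disk totals (40,000 / 8 IOPS and 1,600 / 8 MB/sec).

single_8tb = {"iops": 7_500, "mb_per_sec": 250}

disks = 8
per_1tb = {"iops": 5_000, "mb_per_sec": 200}

# Striping in Windows aggregates the disks' limits (roughly) linearly.
striped = {metric: disks * value for metric, value in per_1tb.items()}

print(striped)  # {'iops': 40000, 'mb_per_sec': 1600}
print(round(striped["iops"] / single_8tb["iops"], 1))  # ~5.3x the IOPS
```

Same raw capacity, but over five times the IOPS and over six times the throughput of the single large disk.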
Of course I need to stripe the 8 disks together in Windows, but there's no cost for that. The cost of eight 1TB drives is slightly higher than one 8TB drive, by 114 pounds in the case of this screenshot, but given the performance difference it's a cost worth paying.
So why would I want the 8TB drive? Because I have a GS5 that needs 1/2PB of storage, and there's no "easy" way to do that with 1TB drives. If/when we get P70+ drives things will get really interesting.
Everyone takes shortcuts. It's normal, but we shouldn't be doing it, because it comes with disadvantages. Sometimes it doesn't look pretty, sometimes the shortcuts cause performance problems, sometimes they cause bugs in software. Sometimes they cause applications to fail. Our job as IT professionals isn't to do what's easy. It's to do what's in the best interest of the system or company.
Stop putting staples in plants. Stop taking shortcuts.
Today is Day 1 at the PASS Summit, and there are going to be all sorts of blog posts about what's being announced during the keynote today (I assume). I'll leave those announcements for others to blog about.
Denny Cherry & Associates Consulting has a big announcement to make today as well. Starting today, DCAC is expanding by adding another fantastic consultant to our ranks. This time we're adding John Morehouse to our growing family. Like the rest of us, John will be working from home, which means a co-worker in Kentucky (yep, another set of state paperwork to fill out every month, thanks, John).
John flew in this morning to join the rest of the team for his first day at the "office," which we appreciate. Leaving the two little ones for a very early flight the morning after Halloween couldn't have been the most fun thing ever.
John’s has 20 years of IT experience, with over ten years of dedicated SQL Server experience making him an excellent addition to the DCAC organization bringing our in-house team up to about 100 years of IT experience.
When John isn’t traveling to SQL Saturday events, his hobbies include spending him with his kids, reading and vacationing.
We welcome John to DCAC. Come to the exhibit hall and to booth 316 to get some great swag and say hi to John.
I found the above instructions on a blog post I was trying to use to fix an issue in Visual Studio recently. (Ignore the fact that I was in Visual Studio and focus on the screenshot.) This post has 4 steps. Step 1, which you can see above, has two warnings in it, but no follow-up information about what to do if you get the errors it describes. It doesn't give you links out to posts on how to fix these critical errors. In fact, I could go no further in working through this issue in Visual Studio. I ended up simply copying the code manually from one branch to another, as that took me 20 minutes, after I had spent 6 hours trying to figure out how to fix the issue.
When it comes to writing blog posts, writing for the expert is fine, but at the least you need to have links out for the beginner, if not put those details in your post. Not everyone out there is an expert who knows how to use products at the Scott Hanselman level. If they did, we wouldn't need blog posts on how to do things. Some people, like myself, are really good in some areas (SQL Server) and not others (Visual Studio), and posts should cater to everyone.
I know I've been really slacking on getting this year's judges list for Speaker Idol posted. A lot of this is just because of everything else that's been going on leading up to the PASS Summit, specifically my insane travel schedule to SQL Saturdays, Microsoft Ignite, and the SQL Server 2017 launch, and shifting the schedule to slide the judges in where possible so there are no conflicts.
Without further delay, and in no specific order, here are your 2017 Speaker Idol judges.
- Karen Lopez
- Joey D’Antoni
- Kendra Little
- Mark Simms
- Allan Hirt
(Even spelled correctly this year)
I know our judges are going to give all 12 of our contestants some great feedback, and they are going to do a great job picking our first PASS Summit 2018 speaker.
We've had another change to the PASS Summit Speaker Idol lineup. Tzahi has had to withdraw due to work commitments and won't be able to attend the PASS Summit at all this year, which is good news for Jeremy Frye, as he will be taking the open spot. Like all the contestants, we wish Jeremy the best of luck, and we'll see everyone at the PASS Summit.
Your new and improved Speaker Idol lineup now stands as follows:
Your Wednesday lineup for speaker idol is:
- Jim Donahoe
- Brian Carrig
- Jonathan Stewart
- Robert Volk
Your Thursday lineup for Speaker Idol is:
- Javier Villegas
- Eric Peterson
- Ed Watson
- Dennes Torres
Your Friday lineup for Speaker Idol is:
- Daniel de Sousa
- Joseph Barth
- Jeremy Frye
- Simon Whiteley
Building, implementing, and executing a proper DR plan successfully is a challenging undertaking. It is a lot more complicated than most experienced IT professionals and/or consultants think it is, because there are a LOT of moving parts in building a DR platform that's going to fail over and allow the application to keep working. I've been bringing this point up in my sessions a lot recently. Our job in IT isn't to build the cool, slick, sexy solution. Our job is to make it so that the sales guy can sell widgets. Whatever widgets your company sells, the job of IT is to help the sales guy sell them. If the sales guy can't sell widgets (and the shipping department can't fulfill those orders, and everything else that goes with selling widgets), then your company doesn't get paid. If the company doesn't get paid, then you don't get paid. Then you have a pissed-off spouse and a pissed-off mortgage company, and those aren't good things to have. So let's get back to talking about helping the sales team sell widgets.
HA failover is pretty straightforward. There's no data loss, and everything is done using two-phase commit since it's all inside the data center, so I don't want to talk about that. I want to talk about when things really fall apart: the production site fails. That's when things get really interesting. Let's design for this, and just talk about what we need to think about.
- Active Directory
- IP Space
- Connection String Issues
- Remote Access
- End-user application access (web front end)
- Employee access (web / fat client)
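To make the connection-string item concrete: SQL Server client drivers support a `MultiSubnetFailover` keyword, which tells the client to try all of the IPs the Availability Group listener resolves to in parallel rather than timing out against the dead site. A hypothetical sketch follows; the listener, database, and driver names are illustrative, not from this post:

```python
# Hypothetical sketch: a connection string that tolerates a cross-subnet
# failover. MultiSubnetFailover=Yes makes the client race all listener
# IPs instead of waiting on the failed site. All names here are made up.

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:ag-listener.example.com,1433;"
    "Database=WidgetSales;"
    "MultiSubnetFailover=Yes;"
    "Trusted_Connection=Yes;"
)

# In a real app you'd hand this to pyodbc.connect(conn_str); here we just
# confirm the failover-related keyword made it into the string.
assert "MultiSubnetFailover=Yes;" in conn_str
```

The point is that this decision has to be made in the application's configuration before the disaster, not during it.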
That's a lot of things that have to be thought about, and you'll notice that I haven't even talked about the database stuff yet. Once we get into the database side, things start getting more complex.
- Recovery Point Objective (RPO)
- Recovery Time Objective (RTO)
- Am I using features that make Availability Groups not supportable? (probably not an issue on current versions)
- How many replicas do I need?
- Am I correctly licensed?
- Can I do this in the cloud?
- Should I do this in the cloud?
- How many values do I need to skip for sequences and IDENTITY columns?
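On that last item: if both sites can ever take writes, one common trick is to give each site its own IDENTITY seed with a shared increment, so the value streams interleave instead of colliding. A hypothetical two-site sketch of the arithmetic (this is the general technique, not a specific recommendation from this post):

```python
# Hypothetical sketch: interleaving IDENTITY values across two sites so
# inserts on either side of a failover never collide. This mirrors
# IDENTITY(seed, increment) with seed = site_offset + 1, increment = sites.

def identity_value(site_offset: int, sites: int, nth_insert: int) -> int:
    """Value the nth insert (0-based) receives at a given site."""
    return site_offset + 1 + nth_insert * sites

primary = [identity_value(0, 2, n) for n in range(5)]  # [1, 3, 5, 7, 9]
dr_site = [identity_value(1, 2, n) for n in range(5)]  # [2, 4, 6, 8, 10]

# The two streams never overlap, so no values need to be skipped manually
# after a failover.
assert not set(primary) & set(dr_site)
```

The alternative is reserving large non-overlapping ranges per site, which is where the "how many values do I skip" question comes from.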
This is clearly a complex topic. Because of this, we've put together a panel of experts on high availability and disaster recovery for a roundtable discussion about some of the complexities that people stumble on. The webinar will be at 11 am Pacific Time / 2 pm Eastern Time on Tuesday, October 24th.
To register for the webinar so that we can remind you about it, click over to our registration page. Download the Outlook calendar entry and we'll remind you when it's time for the webcast. Can't make it on the 24th? No problem. We'll be recording the webcast and making it available for free viewing afterward.
When it comes to DR, you only get one chance. If you screw it up, the company goes out of business, so you really need to be learning your DR planning from the very best. And that's who we have scheduled for our roundtable.
I'll be honest: when I first learned about SQL Server on Linux, I didn't get it. It took like an hour for it to make sense to me. After a week of talking to customers, many of whom were super interested in deploying SQL Server on Linux or SQL Server in Docker, it makes even more sense to me.
Let’s run down the various use cases, then let’s dive into each one and break them down.
- LAMP stack developers
- Small SQL Instances for 3rd party apps
- Shops which are primarily Linux today and want/need to deploy SQL Server
- Shops who want to replace Oracle, MySQL, and/or PostgreSQL
Let’s explore each of these a little more, shall we?
LAMP stack developers
There are LOTS of developers out there who, when they start a new project, fire up new containers, grab PHP and a database (MySQL, PostgreSQL, etc.), and go to town programming away. Microsoft SQL Server being available for Linux, and being just a quick Docker download (or yum install) away, gives them the option of having a full enterprise-class database behind their application, with a quick, easy two-minute config that's like three questions, for the same cost in development. And when they go to deploy their application to their users (customers), they have the confidence of telling them that the database they've selected is Microsoft SQL Server, supported by a multi-billion-dollar company which they are probably used to dealing with already.
The developer can build their application with confidence knowing that the programming surface is identical between the Linux and Windows versions of the product, so they can expect it to react exactly the same whether the customer installs it on RedHat, Ubuntu, Docker, or Windows.
Small SQL Instances for 3rd Party Apps
An interesting potential workload came up while I was working the booth at the Microsoft Ignite conference: a car manufacturer asked about small database applications, such as 3rd party applications (or even 1st party applications), which are typically single-core VMs with small memory footprints. Today these take up a large percentage of their VMware farm, and they asked what I thought of moving them into a Linux host running Docker and running really small containers (assuming the CPU and memory requirements were fine). My response: that sounds like a great use of the Docker deployment of SQL Server on Linux.
The Docker deployment is going to give you a really thin, really lightweight deployment option, as you don't have to worry about deploying the OS within the VM. In a large host running dozens of containers this could lead to some major resource savings.
Primarily Linux Shops

There are lots of companies which are primarily Linux and don't want to run or manage Windows servers for one reason or another, but they often have to, because they need SQL Server to support either line-of-business applications or back-office applications such as HR or payroll, even if those applications are Java applications running on Linux servers.
Now those companies can toss their Windows servers and run the Linux OSes that they are used to running, and still have the SQL Server database that they need to run their applications. The power to do what you need to do to run your organization, and the flexibility to do it the way that you want to do it, without any vendor lock-in. It's like Microsoft has been listening to the Linux community or something (no, they aren't going to open source SQL Server, stop asking).
Replacing Oracle, MySQL, and/or PostgreSQL

I was shocked at the number of people who came to the booth, knowing that I do not work for Microsoft, telling me that this gives them the ability to replace Oracle, MySQL, or PostgreSQL in their shops. Most of these companies were primarily Linux (or all Linux) server OS shops, so suddenly having the flexibility to get out from under the thumb of their Oracle sales rep while staying on their server OS platform of choice is huge. One customer came up and said that even after the cost of converting their applications from Oracle to SQL Server (which was not going to be cheap), and the cost of paying for the SQL Server licenses, the annual savings was still in the millions because of the Oracle licensing costs. I'm guessing that the Oracle sales guy really isn't going to like that phone call, but his car is nice enough already.
Given that it's only been out for a few days, that's a good start.
At Microsoft Ignite, Microsoft announced that they are changing the patching cycle for SQL Server. Starting with SQL Server 2017 there will be no more Service Packs; instead Microsoft will release only CUs, and release them much more often. CUs will be released monthly for the first 12 months after a new release, then quarterly after that.
Now some people out there in blogger world have gotten all cranky about this. But this is a good thing. The big downside to CUs for years has been that little disclaimer attached to them, that they should only be installed when you are impacted by an issue that the CU specifically fixes. This disclaimer was there because in the dark ages of SQL Server (SQL 2005 or so) CUs weren’t as well tested as Service Packs, they just couldn’t be because the automation wasn’t there to do so.
Now the automation is there to do the testing.
Because of this we don't need Service Packs anymore: every CU is now effectively a Service Pack. The fixes are going to be coming out faster. This doesn't mean that there are more bugs that need to be addressed, or that the software is more unstable, or any of the other conspiracy-theory nonsense that I've read about this. It simply means that along with the faster release cycle that we saw for SQL Server 2017, Microsoft has accelerated the release cycle for patches as well.
This all means that if there is a bug in the software that impacts our systems, we will get the patch faster than we've ever gotten a patch before, and without having to open a support ticket for a hotfix. We get the fix via a fully supported CU through the normal servicing release process. This sounds like a good thing to me.
Does this mean that you need to do more patching of SQL Server? Maybe. If you want every possible patch installed to get every possible fix, then yes. But you probably aren't hitting every possible bug that's fixed by every possible CU that comes out. If you are, you're either pushing the product harder than 99.9% of the users out there, or you are one unlucky shop.
I'd recommend patching SQL Server as often as you can. If you can patch each month for that first 12 months, excellent. If you can't get a monthly maintenance window to get those patches installed, then shoot for every other month. If you aren't currently affected by an issue that's resolved by a patch, waiting an extra month won't kill you; and if you are, then getting the patch faster than you would have under the old servicing model, and getting the maintenance window to deploy it, really shouldn't be that big of a deal.
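For what it's worth, the cadence described above is easy to sketch out as a calendar. Assuming an illustrative GA date (SQL Server 2017 went GA in early October 2017), the servicing schedule for a release would look roughly like this:

```python
# Sketch of the servicing cadence: monthly CUs for the first 12 months
# after GA, then quarterly. The GA date is an illustrative anchor only.
from datetime import date

def add_months(d: date, n: int) -> date:
    """Shift a date forward n months, keeping the day of month."""
    years, month_index = divmod(d.month - 1 + n, 12)
    return date(d.year + years, month_index + 1, d.day)

ga = date(2017, 10, 2)  # illustrative GA date

monthly_cus = [add_months(ga, n) for n in range(1, 13)]            # CU1..CU12
quarterly_cus = [add_months(ga, 12 + 3 * q) for q in range(1, 5)]  # CU13..CU16

print(monthly_cus[0], monthly_cus[-1], quarterly_cus[-1])
# 2017-11-02 2018-10-02 2019-10-02
```

Sixteen fully supported servicing releases in two years, versus the one or two Service Packs we would have seen in the same window under the old model.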