SQL Server with Mr. Denny


March 14, 2014  1:32 AM

The Communication Deficit in the Airline Industry

Denny Cherry
[Airplane image via Flickr / Kuster & Wildhaber]

Something’s been bugging me ever since we lost track of MH370 (Malaysia Airlines Flight 370) last week. And that is, “How the hell can we lose an airplane?”

According to this MSN article (which got the story from Reuters), the airplane pings the maintenance system “about once an hour,” and 5 or 6 of these pings were received from the airplane after Malaysian air traffic control lost contact with it. Apparently there is an option to have the airplane feed real-time data up to whoever reads it, but this airline doesn’t pay for that service.

So my next question becomes, “Why isn’t having this turned on mandatory?”

Gathering this much data really isn’t all that complex, even when you account for every single commercial aircraft in the world. According to FlightRadar24 there are about 6,700 commercial airplanes flying at any one time (I’m looking at 6pm Pacific Time). FlightRadar24 gets its data from the FAA in the US and the ADS-B system in the rest of the world. According to the Daily Mail only about 60% of commercial airplanes have ADS-B, so we can assume that there are roughly 11,200 airplanes in the air at any given time. Getting real-time data feeds from them, with just the basic information needed to locate an aircraft in an emergency (say, when it disappears in the middle of an ocean), really doesn’t require much data to be transmitted, and we don’t need it transmitted all that often.

We’d need the aircraft number (the flight number would be nice but isn’t really needed), the altitude, airspeed, GPS location, heading and the rate of climb (or descent). In other words, a handful of numbers that the onboard computer transmits, say, once a minute. I’d say that in a pinch every 5 minutes would be acceptable. That’s about 40 bytes of information to transmit; call it 1,000 bytes so that we can wrap it in XML when we send it. That data comes in to some centralized system which monitors every airplane worldwide. That’s all of ~16M rows of data a day that would need to be stored and made available.
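Just to make that concrete, here’s a minimal sketch of the kind of table that could hold those pings. The table name, column names and data types are all my own invention for the example, not anybody’s real schema:

CREATE TABLE dbo.AircraftPing
(
    AircraftNumber  varchar(10)  NOT NULL,  -- tail / registration number
    FlightNumber    varchar(10)  NULL,      -- nice to have, not required
    PingTimeUTC     datetime2(0) NOT NULL,
    Latitude        decimal(9,6) NOT NULL,
    Longitude       decimal(9,6) NOT NULL,
    AltitudeFeet    int          NOT NULL,
    AirspeedKnots   smallint     NOT NULL,
    HeadingDegrees  smallint     NOT NULL,
    ClimbRateFpm    smallint     NOT NULL,  -- negative while descending
    CONSTRAINT PK_AircraftPing PRIMARY KEY (AircraftNumber, PingTimeUTC)
);

Everything in that row adds up to well under 100 bytes of actual data before you wrap it in anything.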

According to this article on MSN (again from Reuters), BEA director Remy Jouty has stated that the technology to send continuous data is not yet “mature” enough. What makes them think that? It’s just data being sent over a satellite link. If we can stream data to and from every satellite in orbit and track every piece of space junk up there, I think we can track a few airplanes.

“Sending continuous data for all flights would create a huge amount of data and might get very difficult to monitor and analyse,” said Tony Tyler, director general of the International Air Transport Association…

This is my favorite part (from a data perspective) of the entire MSN article. Based on the numbers above, ~16M rows a day stored at 1,000 bytes per row is all of ~15 Gigs of information per day. I can fit that on a thumb drive, or my phone, or any iPod out there. Say we keep the information for 7 days in the live system (we can archive it off and do all sorts of cool analytics on it on another system); that’s ~105 Gigs of data. Now I’ll need two thumb drives to store it (I’m only counting the thumb drives which are currently sitting in my laptop bag as I write this). Whoever Jouty and Tyler are talking to about putting together this kind of system apparently isn’t very good at handling data.
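If you want to check that math yourself, the whole back-of-the-envelope calculation fits in a few lines of T-SQL; every input below is just one of the estimates from above:

DECLARE @AircraftInAir bigint = 11200;  -- rough count of airplanes in the air at any moment
DECLARE @PingsPerDay   bigint = 1440;   -- one ping per minute
DECLARE @BytesPerRow   bigint = 1000;   -- padded generously for the XML wrapper

SELECT @AircraftInAir * @PingsPerDay AS RowsPerDay,                            -- ~16.1 million
       CAST(@AircraftInAir * @PingsPerDay * @BytesPerRow
            / (1024.0 * 1024 * 1024) AS decimal(10,1)) AS GBPerDay,            -- ~15 GB
       CAST(@AircraftInAir * @PingsPerDay * @BytesPerRow * 7
            / (1024.0 * 1024 * 1024) AS decimal(10,1)) AS GBForSevenDays;      -- ~105 GB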

A system like this doesn’t need to carry every bit of diagnostic data that’s available in the black box. This sort of data should be used to go find the airplane, which has the black box in it. That way the data being transmitted from the airplane to the system is kept small (just in case satellite time really is still expensive).

I really don’t think that a system like this would take all that long to put together. I’m fairly confident that a couple of friends and I could build something in a week. The trick would be getting the airlines to all start feeding data to it in real time from the airplanes. And of course the system needs to be secure, which means it can’t be turned off as long as the airplane is powered up.

Frankly I can’t find one good reason that a system like this shouldn’t be built and used, and I can’t for the life of me figure out why it hasn’t already been built.

Denny

March 12, 2014  5:00 PM

Learn Before You Plan

Denny Cherry

Walking into a new shop every couple of weeks has its challenges. What sort of problems are they going to throw at me? Recently I ran into a perfect example of why you should learn more about the data before you start planning out your attack.

In this case the problem was that, due to some business rule changes, they needed to remove some transactions from their historical data for the last couple of years. I was given a quick summary of the data and let loose on the project. After looking at the sample data which was provided, I came up with a plan to resummarize all the data in the fact table for the last couple of years into new tables; then, on the night of the release, we’d just swap the object names and all would be well with the world.

After a few hours of banging my head on the data and not getting the new summary values to match the old data, I started looking at the transactions which needed to be removed. It turns out they were all $0 transactions, so all that needed to happen was that the transaction counts be reduced; the dollar figures weren’t going to change at all. Suddenly my job got much easier. Instead of doing all sorts of math and figuring out what goes where, all I needed was a little bit of subtraction. Three days of work had just been reduced to a few hours.
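For anyone curious what “a little bit of subtraction” looks like in practice, here’s a rough sketch of the idea. The table and column names are made up for the example; the client’s actual fact and staging tables looked nothing like this:

-- Subtract the removed $0 transactions from the summarized counts.
-- The dollar columns don't need to change, because every removed transaction was $0.
UPDATE f
SET    f.TransactionCount = f.TransactionCount - r.RemovedCount
FROM   dbo.FactDailySummary AS f
       JOIN (SELECT SummaryDate,
                    COUNT(*) AS RemovedCount
             FROM   dbo.TransactionsToRemove  -- the $0 rows flagged by the new business rules
             GROUP  BY SummaryDate) AS r
         ON r.SummaryDate = f.SummaryDate;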

It just goes to show that slowing down and taking a little time to learn more about the data you’ll be working with can go a long way toward making your life much easier.

Denny


March 8, 2014  1:39 AM

Recommended reading from mrdenny for March 07, 2014

Denny Cherry


This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is SQLBob, also known as Bob Pusateri.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 5, 2014  5:00 PM

Risks of Password Changes

Denny Cherry

We all know that we should be changing the passwords for our service accounts every once in a while (every 90-180 days or so), just in case someone who shouldn’t have a password gets ahold of it. However, when it comes to SQL Server you have to be a little careful about changing those passwords, depending on how you’ve set up your password schemes.

If, for example, you’ve got one domain account which runs all your SQL Servers, then changing that password gets pretty scary pretty fast. If you aren’t able to take an outage on all the servers at once, you’ll end up with servers still running under the old password, and anyone who tries to connect to them will get a lovely error message about the SSPI context not being valid.
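Before you schedule a change like that, it helps to know exactly which services on each instance are actually running under the account. On SQL Server 2008 R2 SP1 and later you can check from T-SQL without leaving Management Studio:

-- Shows the account each local SQL Server service runs under,
-- along with its current state and when it last started.
SELECT servicename,
       service_account,
       status_desc,
       last_startup_time
FROM   sys.dm_server_services;

Run that across the environment (or check the Services snap-in on instances too old for the DMV) and at least you know the blast radius before anything changes.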

Let’s take a real-life scenario that one of my clients ran into.

On Tuesday they let an IT worker go (a non-voluntary termination). On Friday they are notified that the password for the domain account for the SQL Servers has changed. Panic mode hits: all the service account passwords are changed, and everyone with a domain admin account changes their password. Saturday hits and the scheduled jobs on a bunch of servers start to fail. Monday comes and no one is able to log into 95% of the SQL Servers.

The problem: when the password was changed it wasn’t changed on all the servers, so the machines which hadn’t been updated still had the old password stored and couldn’t authenticate users against the domain. Until all the servers could be taken down and updated (which was approved pretty quickly, as the systems were already down), users weren’t able to connect and do the work they needed to do.

Denny


March 1, 2014  6:38 AM

Recommended reading from mrdenny for February 28, 2014

Denny Cherry


This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is sqlpass, also known as PASS.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 26, 2014  5:00 PM

TechEd 2014 Here I Come

Denny Cherry

I’m pleased to be able to tell you that I will be speaking at TechEd 2014 North America. This year I’ll be giving a presentation very similar to the one I gave last year, titled “Introduction to SQL Server Management for the non-DBA”. Last year this session was VERY popular, with a very full room, which is impressive as I was giving it in the largest room available at both TechEd 2013 North America and TechEd 2013 Europe. I know that a lot of TechEd attendees are new each year, since many can only go once every 2-3 years, so hopefully this session will be just as popular with attendees who couldn’t make the conference last year, as well as with people who wanted to see the session but ended up in a different one.

Like last year, when I’m not giving my session you’ll probably be able to find me around the SQL Server booth, as well as, of course, at many of the after parties in the evenings.

Hopefully I’ll be seeing you at TechEd in Houston, TX this year.

Denny


February 21, 2014  7:00 PM

Recommended reading from mrdenny for February 21, 2014

Denny Cherry

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is j9soto, also known as Rebecca Herrera.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 19, 2014  2:00 PM

Old vSphere, Large VMs, and Some Really “Funky” Problems

Denny Cherry
[RAM chips image courtesy of http://www.flickr.com/photos/chrissinjo/5368405044/in/]

Recently a client was having some IO performance problems, so they decided that the easiest solution would be to throw an SSD under the log files on the VM. Easy and cheap solution. The server in question has about 50 databases on it. In order to speed up the transaction log file transfer I decided to run multiple file copies at once. However, doing so pushed the CPU on the server to 100%, the network would randomly crap out (causing my RDP connection to drop), and the file transfer was taking forever.

All the settings in vSphere, Windows, the array we were copying from, etc. looked fine. This VM happened to be running on vSphere 4.1 Update 1 and couldn’t be upgraded because the vCenter server hadn’t been upgraded yet. The guest had 8 vCPUs and 96 Gigs of RAM. The host had two physical sockets with 8 physical cores each (hyperthreading turned on) and 128 Gigs of RAM (this VM was the only VM on the host). According to the vSphere 4.1 best practices, a VM will by default be run with all of its vCPUs within a single NUMA node. This means that this VM had its vCPUs in a single NUMA node, while its 96 Gigs of memory had to be split across both nodes, since each node only has 64 Gigs of the host’s RAM behind it.

I wouldn’t expect this to be a problem, but apparently it was. Each of the file copies was running a CPU core up to 100%. When I’m running 10 file copies at once, that’s more CPU than the 8 vCPUs in the guest can provide.

We reduced the amount of RAM assigned to the VM from 96 Gigs to 60 Gigs, small enough to fit within a single NUMA node. Suddenly the guest was able to copy the files very fast, the file copies took next to no CPU power, and the network problems went away.
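If you want to double-check how a guest like this presents its NUMA layout to SQL Server before and after a change like that, sys.dm_os_nodes gives a quick view (this is just a general-purpose query, not something specific to this client):

-- Shows the NUMA nodes SQL Server sees, excluding the dedicated admin connection node.
SELECT node_id,
       node_state_desc,
       memory_node_id,
       online_scheduler_count,
       active_worker_count
FROM   sys.dm_os_nodes
WHERE  node_state_desc <> N'ONLINE DAC';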

Why vSphere and Windows were having these problems I’ve got no idea, but reducing the RAM did fix them.

Denny


February 14, 2014  7:48 PM

Recommended reading from mrdenny for February 14, 2014

Denny Cherry

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is PASS_RM_Canada, also known as PASS RM Canada.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 12, 2014  2:00 PM

How Much Is That Query Costing?

Denny Cherry

When we think about the cost of a query we typically think about the cost which SQL Server assigns to executing it. However, when we look at performance tuning queries there is another cost that we need to think about: the cost to the company while the query is running. As we start looking into these costs we can start to see just how important it is to performance tune every query within our applications.

Say we have a query which takes 0.5 seconds to run. We might think that is pretty good and we don’t need to do any more performance tuning on it. Now let’s assume that this query is the customer lookup query that every one of our customer service agents runs every time a customer calls into the call center. As a large company we have on average 1,500 customer service reps on the phone at any time (working three shifts), and they each take 10-15 calls per hour. That query is now being run about 18,000 times per hour (give or take). At 0.5 seconds per run, that’s ~9,000 seconds of waiting per hour, or roughly 60 hours of staff time per day across the three shifts. Assuming that our customer service reps make $10 per hour, that’s about $600 per day just to run this specific query. Multiply this out for a year and that’s ~$219,000 per year, just to watch the query run. By tuning the query so that it runs within 0.1 seconds we can cut these costs down to ~$43,800 per year. That is a serious amount of financial savings to the company for just one query.
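If you want to play with those numbers yourself, the whole calculation fits in a few lines of T-SQL; every input below is just one of the assumptions from the paragraph above:

DECLARE @QueriesPerHour int          = 18000;  -- 1,500 reps * ~12 calls per hour
DECLARE @CurrentSeconds decimal(4,2) = 0.5;    -- query duration today
DECLARE @TunedSeconds   decimal(4,2) = 0.1;    -- query duration after tuning
DECLARE @HourlyWage     money        = 10.00;  -- cost of a rep per hour
DECLARE @HoursPerDay    int          = 24;     -- three shifts around the clock

SELECT CAST(@QueriesPerHour * @HoursPerDay * @CurrentSeconds / 3600 * @HourlyWage * 365 AS money) AS AnnualCostNow,    -- ~$219,000
       CAST(@QueriesPerHour * @HoursPerDay * @TunedSeconds   / 3600 * @HourlyWage * 365 AS money) AS AnnualCostTuned;  -- ~$43,800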

As we start looking at the rest of the queries in the application and adding up the costs for those queries, suddenly we can easily be looking at $100,000 a year or more in employee costs just to use the application.

Wouldn’t it be nice to be able to tell management that you’ve just saved the company $100,000 a year just by tuning a couple of queries in the database?

This is why we need to do performance tuning.

Denny

