SQL Server with Mr. Denny


March 26, 2014  7:00 PM

Excel 2013 and Analysis Services 2000

Denny Cherry

So I got a strange question from a client the other day.  Can I get Excel 2013 connected to Analysis Services 2000?

Yes, you read that correctly: SSAS 2000.

The answer is “it depends”. If you are using Excel 2013 x32 (the 32-bit version) then yes, you can connect to SQL Server Analysis Services 2000. If however you are using Excel 2013 x64 (the 64-bit version) then no, you cannot connect to SQL Server Analysis Services 2000. At least part of the reason is that there is no 64-bit driver for SQL Server Analysis Services 2000, which is a bit of a problem. So if your users have Excel 2013 x64 and need to connect, they’ll need another machine (or a VM hosted somewhere) where they can run the 32-bit version of Excel 2013 so that they can use it to export data from SSAS. Then they can save the file, open it in Excel x64 on their main machine, and do whatever fancy stuff they need to do.
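For what it’s worth, the connection from 32-bit Excel goes through the old 32-bit OLE DB provider for Analysis Services. A minimal connection string would look something like this sketch (the server and database names are made up for illustration; MSOLAP.2 is the version of the provider that shipped with Analysis Services 2000):

    Provider=MSOLAP.2;Data Source=YourSSAS2000Server;Initial Catalog=YourOlapDatabase

If that provider isn’t registered on the machine, it came with the SQL Server 2000 client tools (the PivotTable Service component), so that’s the place to look.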

Denny

March 21, 2014  6:40 PM

Recommended reading from mrdenny for March 21, 2014

Denny Cherry

This week I’ve found some great things for you to read. These are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: SQLSatExeter, also known as SQL Saturday Exeter

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 19, 2014  5:00 PM

Is Your DR Plan Ready For The Zombie Apocalypse?

Denny Cherry

When setting up DR plans are you covering all your potential options? Are you thinking about the entire location of your production site being unavailable? All too often when I talk to companies about their DR plans it takes me all of about 5 seconds to poke some major holes in the plan. Maybe this is because I’m from California, and when we have a disaster (an earthquake) we really have a disaster.

So think about your DR plans for a minute. They are probably built around the assumption that the office and/or the data center (if the data center isn’t in the office) don’t exist any more. Right? So what kind of disaster have you planned for?

  • Building fire
  • Power Outage
  • Roof Gives Way
  • Earthquake
  • Flood
  • Hurricane

OK, that’s a decent list. But I’m going to say that you’ve probably only planned for 3 of those 6. Can you figure out which ones I think you’ve planned for?

I’ll give you a minute.

Take your time.

If you guessed Building Fire, Power Outage and Roof Gives Way you are correct, but you only get half credit for Power Outage.

The reason that I don’t think you’ve actually planned for an Earthquake, Flood or Hurricane comes down to what sort of planning around people you’ve done. Many companies have the bulk of the IT team at a single site, with maybe just a couple of help desk guys and a sysadmin at whichever office is the secondary site. When it comes time to execute the disaster recovery plan, what staff members are you planning on having available? Are you planning on having the IT staff from your primary site available? If the answer you gave was “yes” then you answered incorrectly.

When a disaster hits that’s big enough to take out your primary site, the odds are that everyone who works at the primary site won’t be available either.

If there’s an earthquake, that’ll affect the entire region. Even if some staff members live far enough away that they aren’t impacted, how are they going to get to the office? The roads probably won’t be passable, the power will probably be out, the airports will be closed, trains won’t be running, etc.

The same applies to a flood. Think back to the flooding in New Orleans a few years ago. How many of the companies there expected their employees to be available to execute the DR plan? The employees were busy evacuating their families, odds are the company was the last thing on their minds. The same rules apply for a hurricane.

When planning for power outages, how large scale of a power outage are you planning for? Here in the US the power grid is very unstable, and it’s pretty easy to crash the entire grid. Hell, it’s happened a couple of times in the last few years. That again means that airports are going to be having problems, trains will be slow or down, roads will be a mess, users won’t have power to VPN into the office, etc. And again, the staff members will be busy making sure their family members are secure, fed and warm, not worrying about getting the company failed over to the DR data center.

While you probably don’t need to plan for every possible disaster (the odds of the zombie apocalypse are pretty small), a lot of thought needs to be put into the plans to account for when the staff will and won’t be available. And the DR tests need to take this into account. Having the staff from the primary site who normally manage the systems run the test isn’t much of a test, as those people won’t be available in a real disaster.

The best plan is to have the DR test done by the people who don’t work at the primary site, and have them do it without contacting the people from the primary site. That’s a proper DR test. And I think you’ll be surprised just how quickly that sort of DR test fails. After all, do the people who don’t work at the primary site even have access to the DR run books without accessing the primary site?

Denny


March 14, 2014  5:14 PM

Recommended reading from mrdenny for March 14, 2014

Denny Cherry

This week I’ve found some great things for you to read. These are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: sqlargentina, also known as SQL Argentina

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 14, 2014  1:32 AM

The Communication Deficit in the Airline Industry

Denny Cherry
[Airplane photo via Flickr / Kuster & Wildhaber]

Something’s been bugging me ever since we lost track of MH370 (Malaysia Airlines Flight 370) last week. And that is, “How the hell can we lose an airplane?”

According to this MSN article (which got the story from Reuters), the airplane pings the maintenance system “about once an hour”, and 5 or 6 of these pings were received from the airplane after Malaysian air traffic control lost contact with the aircraft. Apparently there is an option to have the airplane feed real-time data to whoever reads this data, but this airline didn’t pay for that service.

So my next question becomes, “Why isn’t having this turned on mandatory?”

Gathering this much data really isn’t all that complex, even when you account for every single commercial aircraft in the world. According to FlightRadar24 there are about 6,700 commercial airplanes flying at any one time (I’m looking at 6pm Pacific Time). FlightRadar24 gets its data from the FAA in the US and the ADS-B system in the rest of the world. According to the Daily Mail only about 60% of commercial airplanes have ADS-B, so we can assume that there are really about 11,200 airplanes in the air at any given time (the 6,700 with ADS-B being only about 60% of the total). Getting real-time data feeds from them, with just the basic information needed to locate the aircraft in an emergency (like, say, when an aircraft disappears in the middle of an ocean), really doesn’t require all that much information to be transmitted, and we don’t need it transmitted all that often.

We’d need the aircraft number (the flight number would be nice but isn’t really needed), the altitude, airspeed, GPS location, heading and the rate of climb (or descent). In other words, a bunch of numbers that need to be transmitted by the computer, say, once a minute; I’d say that in a pinch every 5 minutes would be acceptable. That’s about 40 bytes of information to transmit; we’ll call it 1000 bytes so that we can wrap it in XML when we send it. That data comes in to some centralized system which monitors every airplane worldwide. That’s all of ~16M rows of data a day that would need to be stored and made available.
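To put that ~40 bytes in perspective, here’s a sketch of what one of those rows might look like as a table (the column names and types are my own choices, just to show the storage math works out):

    -- One position report per plane per minute; the fixed-width columns
    -- below add up to roughly 36 bytes of actual data per row.
    CREATE TABLE dbo.PositionReport
    (
        AircraftNumber  BIGINT       NOT NULL,  -- 8 bytes
        FlightNumber    INT          NULL,      -- 4 bytes, nice to have
        ReportTimeUtc   DATETIME2(0) NOT NULL,  -- 6 bytes
        Latitude        REAL         NOT NULL,  -- 4 bytes
        Longitude       REAL         NOT NULL,  -- 4 bytes
        AltitudeFeet    INT          NOT NULL,  -- 4 bytes
        AirspeedKnots   SMALLINT     NOT NULL,  -- 2 bytes
        HeadingDegrees  SMALLINT     NOT NULL,  -- 2 bytes
        ClimbRateFpm    SMALLINT     NOT NULL   -- 2 bytes (negative = descending)
    );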

According to this article on MSN (again from Reuters), BEA director Remy Jouty has stated that the technology to send continuous data was not yet “mature” enough. What makes them think that? It’s just data being sent over a satellite link. If we can stream data to and from every satellite in orbit, and track every piece of space junk that’s in orbit, I think we can track a few airplanes.

“Sending continuous data for all flights would create a huge amount of data and might get very difficult to monitor and analyse,” said Tony Tyler, director general of the International Air Transport Association…

This is my favorite part (from a data perspective) of the entire MSN article. Based on the numbers above, ~16M rows a day stored at 1000 bytes per row is all of ~15 gigs of information per day. I can fit that on a thumb drive, or my phone, or any iPod out there. Say we keep the information for 7 days in the live system (we can archive it off and do all sorts of cool analytics on it on another system); that’s ~105 gigs of data. Now I’ll need two thumb drives to store it (and I’m only counting the thumb drives which are currently sitting in my laptop bag as I write this). Whoever Jouty and Tyler are talking to about putting together this kind of system apparently isn’t very good at handling data.
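If you want to double-check my math, here’s the back-of-the-napkin version in T-SQL (the plane count and row size are the estimates from above, not measured data):

    -- ~11,200 planes in the air, one ~1000 byte row per plane per minute.
    DECLARE @Planes      BIGINT = 11200;
    DECLARE @BytesPerRow BIGINT = 1000;

    SELECT @Planes * 60 * 24                                    AS RowsPerDay,   -- ~16.1 million
           @Planes * 60 * 24 * @BytesPerRow / 1073741824.0      AS GBPerDay,     -- ~15 GB
           @Planes * 60 * 24 * @BytesPerRow * 7 / 1073741824.0  AS GBFor7Days;   -- ~105 GB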

A system like this doesn’t need to have every bit of diagnostic data that’s available in the black box. This sort of data should be used to go find the airplane, which has the black box in it. That way the data being transmitted from the airplane to the system is kept small (just in case satellite time really is still expensive).

I really don’t think that a system like this would take all that long to put together. I’m fairly confident that me and a couple of friends could put something together in a week. The trick would be getting the airlines to all start feeding data to it in real time from the airplanes. And of course the system needs to be secure, which means it can’t be turned off as long as the airplane is powered up.

Frankly I can’t find one good reason that a system like this shouldn’t be built and used, and I can’t for the life of me figure out why it hasn’t already been built.

Denny


March 12, 2014  5:00 PM

Learn Before You Plan

Denny Cherry

Walking into a new shop every couple of weeks has its challenges. What sort of problems are they going to throw at me? Recently I had a perfect example of why you should learn more about the data before you start planning out your attack.

In this case the problem was that, due to some business rule changes, they needed to remove some transactions from their historical data for the last couple of years. I was given a quick summary of the data and let loose on the project. After looking at the sample data which was provided, I came up with a plan to resummarize all the data in the fact table for the last couple of years into new tables; then on the night of the release we’d just change the object names and all would be well with the world.

After a few hours of banging my head on the data and not getting the new summary values to match the old data, I started looking at the transactions which needed to be removed. It turns out they were all $0 transactions, so all that needed to happen was for the transaction counts to be reduced; the dollar figures weren’t going to change at all. Suddenly my job got much easier. Instead of having to do all sorts of math and figure out what goes where, all I needed to do was a little bit of subtraction. Three days of work had just been reduced to a few hours.
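As a rough sketch of what that subtraction looked like (the table and column names here are hypothetical, just to show the shape of the fix):

    -- Hypothetical schema: FactDailySales holds the summarized history,
    -- and the $0 transactions being removed live in TransactionsToRemove.
    UPDATE f
    SET    f.TransactionCount = f.TransactionCount - r.RemovedCount
    FROM   dbo.FactDailySales AS f
    INNER JOIN (SELECT DateKey, COUNT(*) AS RemovedCount
                FROM   dbo.TransactionsToRemove
                WHERE  Amount = 0
                GROUP  BY DateKey) AS r
           ON r.DateKey = f.DateKey;
    -- The dollar columns never need to be touched, since every removed
    -- transaction was for $0.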

It just goes to show that slowing down and taking a little time to learn more about the data you’ll be working with can go a long way toward making your life much easier.

Denny


March 8, 2014  1:39 AM

Recommended reading from mrdenny for March 07, 2014

Denny Cherry


This week I’ve found some great things for you to read. These are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: SQLBob, also known as Bob Pusateri

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 5, 2014  5:00 PM

Risks of Password Changes

Denny Cherry

We all know that we should be changing the passwords for our service accounts every once in a while (every 90-180 days or so), just in case someone who shouldn’t have a password gets ahold of it. However, when it comes to SQL Server you have to be a little careful about changing those passwords, depending on how you’ve set up your password schemes.

If, for example, you’ve got one domain account which runs all your SQL Servers, then changing that password gets pretty scary pretty fast. If you aren’t able to take an outage on all the servers at once, you’ll end up with servers still running under the old password, and anyone who tries to connect to those servers will get a lovely error message about the SSPI context not being valid.

Let’s take a real-life scenario that one of my clients ran into.

On Tuesday they let an IT worker go (a non-voluntary termination). On Friday they are notified that the password for the domain account for the SQL Servers has been changed. Panic mode hits, all the service account passwords are changed, and everyone with a domain admin account changes their password. Saturday hits and all the scheduled jobs on a bunch of servers start to fail. Monday comes and no one is able to log into 95% of the SQL Servers.

The problem: when the password was changed, it wasn’t changed on all the servers, so the machines which hadn’t been updated still had the old password stored and couldn’t authenticate users against the domain. Until all the servers could be taken down (which was approved pretty quickly, as the systems were already down), users weren’t able to connect and do the work that they needed to do.
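One thing that makes this kind of change a lot less scary is knowing exactly which servers run under the account before anyone touches the password. On SQL Server 2008 R2 SP1 and up you can ask each instance directly, rather than trusting a spreadsheet:

    -- Which accounts are the SQL Server services on this machine running under?
    SELECT servicename,
           service_account,
           status_desc
    FROM   sys.dm_server_services;

Run that across your servers (a multi-server query or your monitoring tool of choice works fine) and you’ve got an accurate list of what needs updating when the password changes.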

Denny


March 1, 2014  6:38 AM

Recommended reading from mrdenny for February 28, 2014

Denny Cherry


This week I’ve found some great things for you to read. These are a few of my favorites.

This week’s SQL Server person to follow on Twitter is: sqlpass, also known as PASS

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


February 26, 2014  5:00 PM

TechEd 2014 Here I Come

Denny Cherry

I’m pleased to be able to tell you that I will be speaking at TechEd 2014 North America. This year I’ll be giving a very similar presentation to the one I gave last year, titled “Introduction to SQL Server Management for the non-DBA”. Last year this session was VERY popular, with a very full room, which is impressive as I was giving it in the largest room available at both TechEd 2013 North America and TechEd 2013 Europe. I know that a lot of TechEd attendees are new each year, as many can only go once every 2-3 years, so hopefully this session will be just as popular both with attendees who couldn’t attend the conference last year and with people who wanted to see the session but couldn’t make it because they were in a different session at the time.

Like last year, when I’m not giving my session you’ll probably be able to find me around the SQL Server booth, as well as, of course, at many of the after parties in the evenings.

Hopefully I’ll be seeing you at TechEd in Houston, TX this year.

Denny

