SQL Server with Mr. Denny


April 13, 2014  10:17 PM

Recommended reading from mrdenny for April 11, 2014



Posted by: Denny Cherry
SQL Server, Top 10

 

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is paultebraak, also known as Paul te Braak.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny

April 9, 2014  2:00 PM

Viewing Maintenance Plan Logs



Posted by: Denny Cherry
SQL

Many of my customers are set up to use maintenance plans because they are small shops without a DBA. The plans give them something visual which they can easily tweak later on if needed, without having to bring me in to make a small change to their reindexing, backups, etc.

One of the big failings of maintenance plans is the reporting in Management Studio. If you right-click on a maintenance plan you get an option to view reports, and it gives you a whole bunch of really nice info … sometimes. After the plan has been running for a while it just sits there, not giving you any useful information. And that’s because the code behind the maintenance plan report page … SUCKS!

The process does a bunch of row-by-row querying instead of just dumping a single large recordset like any normal application. So how do you get that useful information out of SQL Server? The query below will give you the exact same information that the GUI gives you, but without waiting forever for the GUI to open. Just change the plan and subplan names to the ones that you are looking for and run the query.

SELECT
    ld.line1 AS [Line1],
    ld.line2 AS [Line2],
    ld.line3 AS [Line3],
    ld.line4 AS [Line4],
    ld.line5 AS [Line5],
    ld.server_name AS [ServerName],
    ld.start_time AS [StartTime],
    ld.end_time AS [EndTime],
    ld.error_number AS [ErrorNo],
    ld.error_message AS [ErrorMessage],
    ld.command AS [Command],
    ld.succeeded AS [Succeeded]
FROM msdb.dbo.sysmaintplan_plans AS s
INNER JOIN msdb.dbo.sysmaintplan_subplans AS sp ON sp.plan_id = s.id
INNER JOIN msdb.dbo.sysmaintplan_log AS spl ON spl.subplan_id = sp.subplan_id
INNER JOIN msdb.dbo.sysmaintplan_logdetail AS ld ON ld.task_detail_id = spl.task_detail_id
WHERE sp.subplan_name = 'Full'
  AND s.name = 'Backups'
ORDER BY [StartTime] DESC
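
And if you only want to see the runs that blew up, the same joins work with one extra filter. This is just a quick variation on the query above using the succeeded column that’s already in the log detail table:

-- Same joins as above, but only show the steps that failed.
SELECT
    ld.server_name AS [ServerName],
    ld.start_time AS [StartTime],
    ld.error_number AS [ErrorNo],
    ld.error_message AS [ErrorMessage],
    ld.command AS [Command]
FROM msdb.dbo.sysmaintplan_plans AS s
INNER JOIN msdb.dbo.sysmaintplan_subplans AS sp ON sp.plan_id = s.id
INNER JOIN msdb.dbo.sysmaintplan_log AS spl ON spl.subplan_id = sp.subplan_id
INNER JOIN msdb.dbo.sysmaintplan_logdetail AS ld ON ld.task_detail_id = spl.task_detail_id
WHERE sp.subplan_name = 'Full'
  AND s.name = 'Backups'
  AND ld.succeeded = 0 -- failed steps only
ORDER BY [StartTime] DESC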

Denny


April 4, 2014  2:00 PM

Recommended reading from mrdenny for April 04, 2014



Posted by: Denny Cherry
SQL, SQL Server, Top 10

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is tjaybelt.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


April 2, 2014  5:00 PM

How I Setup TempDB



Posted by: Denny Cherry
Back To Basics, Database Administration, SQL, SQL Server, SQL Server 2000, SQL Server 2005, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, TempDB

How TempDB should be set up is a question that comes up a lot, mostly because there is a lot of bad information out there.  So here’s how I typically set up TempDB for the bulk of my clients, and these settings will probably work for you as well.

Number of Files

The number of files that you need for tempdb, in most cases, is 8.  There are plenty of websites and blogs out there which say that you should have one tempdb data file for each CPU core that you have.  And that was true for SQL 2000, when machines only had 4-8 cores.  For 99.9%+ of SQL Server instances, 8 tempdb database files will do the trick.

You’ll need to add more files to the tempdb database when you see waits on the PFS data pages (the easiest way to see this is by using sp_whoisactive, as it’ll say PFS if the waits are on a PFS page).
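
If you don’t have sp_whoisactive handy, here’s a minimal sketch using the DMVs that shows the same thing (tempdb is always database_id 2):

-- Find sessions waiting on page latches in tempdb.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
-- resource_description is dbid:fileid:pageid; PFS pages are page 1
-- and every 8,088th page after that in each data file.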

File Size

The size of the tempdb database should be whatever size it grows to normally.  If you’ve got one database file today, and it gets up to 15 Gigs, when you break that out to 8 files, make each one about 2 Gigs in size.  This way the space is already allocated when SQL starts up, and you don’t need to worry about the database growing.

 

Auto Growth Settings

Typically I set the auto-growth for the tempdb data files at 1 Gig.  For systems which are a lot larger I might go with a bigger number, but for most clients we set a 1 Gig auto-growth setting.
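
Putting the file count, size, and auto-growth settings together, here’s a rough sketch of the commands (the drive path here is a placeholder, and check sys.master_files for your actual logical file names; tempdev is just the default):

-- Resize the existing tempdb data file, then add more for 8 total.
-- 2 GB each (from the ~15 GB example above) with 1 GB auto-growth.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2048MB, FILEGROWTH = 1024MB);
-- ...and so on through tempdev8.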

Transaction Log

For the transaction log things are a little bit different.  I’ll typically start with an 8000 Meg file.  If I think that it’s going to grow a lot (based on experience with the instance) I’ll set the growth to 8000 Megs.  If I don’t think it’ll grow much, or at all, I’ll set it to 1000 Megs.  These numbers look a little funky because of some auto-growth bugs which have existed in the past.
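
In T-SQL that’s a one-liner (again a sketch; templog is the default logical name for the tempdb log):

ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 8000MB, FILEGROWTH = 8000MB);
-- or FILEGROWTH = 1000MB if you don't expect the log to grow much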

So that’s it.  That’s how I typically set up tempdb for my clients.

Denny


March 29, 2014  5:11 AM

Recommended reading from mrdenny for March 28, 2014



Posted by: Denny Cherry
SQL Server, Top 10


This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is daveberm, also known as David Bermingham.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 26, 2014  7:00 PM

Excel 2013 and Analysis Services 2000



Posted by: Denny Cherry
SQL Server, SQL Server 2000

So I got a strange question from a client the other day.  Can I get Excel 2013 connected to Analysis Services 2000?

Yes, you read that correctly: SSAS 2000.

The answer is “it depends”.  If you are using Excel 2013 x32 (the 32-bit version) then yes, you can connect to SQL Server Analysis Services 2000.  If, however, you are using Excel 2013 x64 (the 64-bit version) then no, you cannot.  At least part of the reason is that there is no 64-bit driver for SQL Server Analysis Services 2000, which is a bit of a problem.  So if your users have Excel 2013 x64 and need to connect, they’ll need another machine (or a VM hosted somewhere) running the 32-bit version of Excel 2013 which they can use to export the data from SSAS.  Then they can save the file, open it in Excel x64 on their main machine, and do whatever fancy stuff they need to do.

Denny


March 21, 2014  6:40 PM

Recommended reading from mrdenny for March 21, 2014



Posted by: Denny Cherry
SQL Server, Top 10

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is SQLSatExeter, also known as SQL Saturday Exeter.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 19, 2014  5:00 PM

Is Your DR Plan Ready For The Zombie Apocalypse?



Posted by: Denny Cherry
Disaster Recovery, SQL, SQL Server

When setting up DR plans, are you covering all your potential options? Are you thinking about the entire location of your production site being unavailable? All too often when I talk to companies about their DR plans, it takes me all of about 5 seconds to poke some major holes in them. Maybe this is because I’m from California, and when we have a disaster (an earthquake) we really have a disaster.

So think about your DR plans for a minute. They are probably built around the assumption that the office and/or the data center (if the data center isn’t in the office) no longer exists. Right? So what kind of disaster have you planned for?

  • Building fire
  • Power Outage
  • Roof Gives Way
  • Earthquake
  • Flood
  • Hurricane

OK, that’s a decent list. But I’m going to say that you’ve probably only planned for 3 of those 6. Can you figure out which ones I think you’ve planned for?

I’ll give you a minute.

Take your time.

If you guessed Building Fire, Power Outage and Roof Gives Way you are correct, but you only get half credit for Power Outage.

The reason I don’t think you’ve actually planned for an Earthquake, Flood or Hurricane is this: what sort of planning around people have you done? Many companies have the bulk of the IT team in a single site, with maybe just a couple of help desk guys and a sysadmin at whichever office is the secondary site. When it comes time to execute the disaster recovery plan, what staff members are you planning on having available? Are you planning on having the IT staff from your primary site available? If the answer you gave was “yes”, then you answered incorrectly.

When a disaster hits that’s big enough to take out your primary site, the odds are that everyone who works at the primary site won’t be available either.

If there’s an Earthquake, it’ll affect the entire region. Even if some staff members live far enough away that they aren’t impacted, how are they going to get to the office? The roads probably won’t be passable, the power will probably be out, the airports will be closed, trains won’t be running, etc.

The same applies to a flood. Think back to the flooding in New Orleans a few years ago. How many of the companies there expected their employees to be available to execute the DR plan? The employees were busy evacuating their families; odds are the company was the last thing on their minds. The same rules apply for a hurricane.

When planning for power outages, how large-scale an outage are you planning for? Here in the US the power grid is very unstable, and it’s pretty easy to crash the entire grid. Hell, it’s happened a couple of times in the last few years. That again means that airports are going to be having problems, trains will be slow or down, roads will be a mess, users won’t have power to VPN into the office, etc. And again, staff members will be busy worrying about their family members being secure, fed and warm, not about getting the company failed over to the DR data center.

While you probably don’t need to plan for every possible disaster (the odds of the zombie apocalypse are pretty small), a lot of thought needs to go into the plans to account for when the staff will and won’t be available. And the DR tests need to take this into account. Doing a DR test with the staff at the primary site who normally manage the systems isn’t the best test, as those people won’t be available in a real disaster.

The best plan is to have the DR test done by the people who don’t work at the primary site, and have them do it without contacting the people from the primary site. That’s a proper DR test. And I think you’ll be surprised just how quickly that sort of DR test fails. After all, do the people who don’t work at the primary site even have access to the DR run books without accessing the primary site?

Denny


March 14, 2014  5:14 PM

Recommended reading from mrdenny for March 14, 2014



Posted by: Denny Cherry
SQL Server, Top 10

This week I’ve found some great things for you to read. Here are a few of my favorites.

This week’s SQL Server person to follow on Twitter is sqlargentina, also known as SQL Argentina.

Hopefully you find these articles as useful as I did.

Don’t forget to follow me on Twitter where my username is @mrdenny.

Denny


March 14, 2014  1:32 AM

The Communication Deficit in the Airline Industry



Posted by: Denny Cherry
SQL, SQL Server
[Airplane photo via Flickr / Kuster & Wildhaber]

Something’s been bugging me ever since we lost track of MH370 (Malaysia Airlines Flight 370) last week. And that is: how the hell can we lose an airplane?

According to this MSN article (which got it from Reuters), the airplane pings the maintenance system “about once an hour”, and 5 or 6 of these pings were received from the airplane after Malaysian air traffic control lost contact with the aircraft. Apparently there is an option to have the airplane feed real-time data up to whoever reads this data, but this airline doesn’t pay for that service.

So my next question becomes, “Why isn’t having this turned on mandatory?”

Gathering this much data really isn’t all that complex, even when you account for every single commercial aircraft in the world. According to FlightRadar24 there are about 6,700 commercial airplanes flying at any one time (I’m looking at 6pm Pacific Time). FlightRadar24 gets its data from the FAA in the US and the ADS-B system in the rest of the world. According to the Daily Mail only about 60% of commercial airplanes have ADS-B, so we can assume that there are about 11,200 airplanes in the air at any given time. Locating an aircraft in the event of an emergency, like say when one disappears in the middle of an ocean, really doesn’t require all that much information to be transmitted, and we don’t need it transmitted all that often.

We’d need the aircraft number (the flight number would be nice but isn’t really needed), the altitude, airspeed, GPS location, heading and the rate of climb (or descent). In other words, a bunch of numbers that need to be transmitted by the computer, say, once a minute; I’d say that in a pinch every 5 minutes would be acceptable. That’s about 40 bytes of information to transmit; we’ll say 1,000 bytes so that we can wrap it in XML when we send it. That data comes in to some centralized system which monitors every airplane worldwide. That’s all of ~16M rows of data a day that would need to be stored and made available.

According to this article on MSN (again from Reuters), BEA director Remy Jouty has stated that the technology to send continuous data is not yet “mature” enough. What makes them think that? It’s just data being sent over a satellite link. If we can stream data to and from every satellite in orbit, and track every piece of space junk that’s in orbit, I think we can track a few airplanes.

“Sending continuous data for all flights would create a huge amount of data and might get very difficult to monitor and analyse,” said Tony Tyler, director general of the International Air Transport Association…

This is my favorite part (from a data perspective) of the entire MSN article. Based on the numbers above, at ~16M rows stored as 1,000 bytes of information per row, that’s all of ~15 Gigs of information per day. I can fit that on a thumb drive, or my phone, or any iPod out there. Say we keep the information for 7 days in the live system (we can archive it off and do all sorts of cool analytics on it on another system); that’s ~105 Gigs of data. Now I’ll need two thumb drives to store it (I’m only counting the thumb drives currently sitting in my laptop bag as I write this). Whoever Jouty and Tyler are talking to about putting together this kind of system apparently isn’t very good at handling data.
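
If you want to check my math, the back-of-the-envelope version fits in one query:

-- ~11,200 planes sending one 1,000-byte row per minute:
SELECT
    11200 * 1440 AS rows_per_day,                                     -- 16,128,000
    11200 * 1440 * 1000.0 / POWER(1024.0, 3) AS gigs_per_day,         -- ~15
    11200 * 1440 * 1000.0 * 7 / POWER(1024.0, 3) AS gigs_for_7_days;  -- ~105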

A system like this doesn’t need to have every bit of diagnostic data that’s available in the black box. This sort of data should be used to go find the airplane, which has the black box in it. That way the data being transmitted from the airplane to the system stays small (just in case satellite time really is still expensive).

I really don’t think that building a system like this would take all that long. I’m fairly confident that a couple of friends and I could put something together in a week. The trick would be getting the airlines to all start feeding data to it in real time from the airplanes. And of course this system needs to be secure, which means it can’t be turned off as long as the airplane is powered up.

Frankly I can’t find one good reason that a system like this shouldn’t be built and used, and I can’t for the life of me figure out why it hasn’t already been built.

Denny

