Posted by: Eric Hansen
I previously touched a bit on why I'm writing my own monitoring solution. This post will cover some of the design aspects I thought of initially and talk some about how things have progressed since.
One of the first things that came to mind was an authentication system. Right now it's still pretty bare (passwords are stored in plain text, for example), but that will be solved soon, as I do have some ideas rolling around in my head right now. Within the authentication system, though, I had to consider how I wanted to structure the database. Everything was (and is) tied to the user ID, and that was all I knew at the time.
This will monitor servers and services, which I've cosmetically dubbed "units". Services are dependent on a server, however, and I wanted to give both their own unique ID, so I tied the ID of the server to a column in the services table (a foreign key).
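The layout above can be sketched as a schema. The table and column names here are my guesses (the post doesn't spell them out), and I'm using sqlite3 purely because it ships with Python, so the sketch runs anywhere; in PostgreSQL the DDL would look essentially the same:

```python
import sqlite3

# Hypothetical schema sketch: users own servers, servers own services.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this; Postgres enforces FKs by default
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE servers (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id),
    hostname TEXT NOT NULL
);
-- services depend on a server, so each row carries the server's id as a FK
CREATE TABLE services (
    id INTEGER PRIMARY KEY,
    server_id INTEGER NOT NULL REFERENCES servers(id),
    name TEXT NOT NULL
);
""")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'eric')")
conn.execute("INSERT INTO servers (id, user_id, hostname) VALUES (1, 1, 'web01')")
conn.execute("INSERT INTO services (server_id, name) VALUES (1, 'nginx')")
```

With the foreign keys in place, the database itself refuses a service row that points at a server that doesn't exist.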
Next, there was the difference maker: how to notify the user. There's still the traditional email and SMS/text, but I took it one step further (and will take it even further in the future). The details are top secret here, but the integrations are all tightly knit together.
Then there was the idea of storing alerts. This is potentially a heavy burden on the database if it's not handled properly (which I'll try to discuss in a different part), or very effective in aiding sysadmins. The idea is simple enough; there are three states: 1 – okay (no issues), 2 – warning (non-critical issue/unable to connect), 3 – alert (respond ASAP). However, you don't want to store an alert every time you check a server and see it's down, just the first time.
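The "store only the first occurrence" idea boils down to comparing against the last stored state. This is a minimal sketch of that logic, not the actual implementation; the state codes come from the post, but the table and column names are hypothetical, and sqlite3 stands in for PostgreSQL:

```python
import sqlite3

# States as described in the post.
OKAY, WARNING, ALERT = 1, 2, 3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE alerts (
    id INTEGER PRIMARY KEY,
    unit_id INTEGER NOT NULL,
    state INTEGER NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)""")

def record_state(unit_id, state):
    """Insert a row only when the state differs from the last one stored."""
    row = conn.execute(
        "SELECT state FROM alerts WHERE unit_id = ? ORDER BY id DESC LIMIT 1",
        (unit_id,),
    ).fetchone()
    if row is None or row[0] != state:
        conn.execute(
            "INSERT INTO alerts (unit_id, state) VALUES (?, ?)",
            (unit_id, state),
        )

# Five consecutive failed checks produce a single alert row...
for _ in range(5):
    record_state(1, ALERT)
# ...and recovery is a state change, so it gets stored too.
record_state(1, OKAY)
```

That keeps the alerts table proportional to the number of state *changes* rather than the number of checks, which is what keeps the storage burden manageable.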
There are some other tables I've created that I'll discuss later, but since the topic is database design, a big thing to cover is which server to use. For me, the choice was between MySQL and PostgreSQL.
I used MySQL from early 2000, when I first started web development, until not too long ago. I've seen it improve tremendously in terms of redundancy and preventing corruption, from the days of forced MyISAM to the near elimination of that engine.
PostgreSQL is one that I haven't had as much experience with. I've used it since early 2012 or late 2011, when I first started development of a backup service I was working on. A few things drew me to it over MySQL:
- Not as heavy on the resources (matters a lot when you’re running on low-end VPSes)
- Has a cascade feature
- Simpler to set up and use in Python
- Has built in “whoopsie” support
Cascade is a nice feature: when you set up a foreign key, if the "master" key of sorts is deleted, then any row whose FK is tied to that master key is deleted as well. For example, say we have a user in the users table with an ID of 4, and in the servers table four servers (IDs 4, 5, 10, 42) that said user added. If servers.user_id is a foreign key to users.id and set to cascade on delete, then when user #4 is deleted, servers 4, 5, 10 and 42 are deleted as well (along with everything that has a FK tied to them). This makes it easier than in MySQL, where you would have to write a trigger and deal with various issues, I'm sure (I've never written a trigger or procedure, so I don't know).
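The example above can be run as-is. I'm demonstrating it with sqlite3 here so it works without a database server (the `ON DELETE CASCADE` clause itself is the same in PostgreSQL, though sqlite needs the pragma to enforce FKs), and the IDs match the ones from the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite-specific; Postgres always enforces FKs
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY);
CREATE TABLE servers (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE
);
""")
conn.execute("INSERT INTO users (id) VALUES (4)")
conn.executemany("INSERT INTO servers (id, user_id) VALUES (?, 4)",
                 [(4,), (5,), (10,), (42,)])

# Deleting the user takes all four of their servers with it.
conn.execute("DELETE FROM users WHERE id = 4")
```

No trigger, no application-side cleanup loop; the one clause on the foreign key does all the work.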
Another thing you learn quickly in Python is to use what is best supported. So, for example, when dealing with a configuration file, it's a lot easier to work with a JSON file than a PHP array file. MySQL support in Python exists, but it is not the greatest from what I've read and seen (it's been a while, so this might be different now, though). PostgreSQL, however, has tremendous support through the psycopg2 module. I ended up just writing a small wrapper around it to make things easier to deal with (I'll post a link to the code once I make sure all the bugs are ironed out), but other than that it's all based on said module.
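This is not the wrapper from the post (that isn't published yet), just a hypothetical sketch of the idea. Because psycopg2 follows the DB-API 2.0 spec (PEP 249), the same shape works against sqlite3, which is what I use here so the example is self-contained; note the drivers differ in placeholder style (psycopg2 uses `%s`, sqlite3 uses `?`):

```python
import sqlite3

class Database:
    """Thin convenience layer over any DB-API 2.0 connection."""

    def __init__(self, conn):
        self.conn = conn

    def query(self, sql, params=()):
        # Read path: run a SELECT and return all rows.
        cur = self.conn.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

    def run(self, sql, params=()):
        # Write path: commit on success, roll back on any error.
        cur = self.conn.cursor()
        try:
            cur.execute(sql, params)
            self.conn.commit()
        except Exception:
            self.conn.rollback()
            raise

# Usage with the stdlib driver; swapping in psycopg2.connect(...) would
# look the same apart from the placeholder style.
db = Database(sqlite3.connect(":memory:"))
db.run("CREATE TABLE notes (body TEXT)")
db.run("INSERT INTO notes (body) VALUES (?)", ("hello",))
```

The point of a wrapper like this is that the commit/rollback bookkeeping lives in one place instead of being repeated around every statement.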
Lastly, "whoopsie" support. What I mean is that if either a) you insert data incorrectly or b) data would be corrupted (power failure or something), nothing is auto-committed or changed. I'm sure this can be changed if you REALLY wanted it to be, but why? PostgreSQL offers a ROLLBACK/COMMIT feature: if an error occurs or the data otherwise can't be modified, you just issue a ROLLBACK command; otherwise, COMMIT it. This was a little difficult for me to grasp at first, because I was so used to MySQL that I couldn't figure out why my INSERT command would return successfully but no data was actually inserted, but it was still amazing once I caught on.
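The "INSERT returns fine but nothing sticks" behaviour can be shown in a few lines. Again I'm using sqlite3 so the snippet runs without a server; psycopg2 behaves the same way, in that nothing is permanent until you commit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.commit()

conn.execute("INSERT INTO t (n) VALUES (1)")  # the statement "succeeds"...
conn.rollback()                               # ...but ROLLBACK discards it

conn.execute("INSERT INTO t (n) VALUES (2)")
conn.commit()                                 # COMMIT is what makes it stick
```

After this runs, the table holds only the committed row with value 2; the first insert never happened as far as the database is concerned.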
PostgreSQL it was, and I really haven't looked back. Truthfully, I've completely ditched MySQL altogether and have been happier without it. I think of it as an ex-girlfriend… you got rid of it/her for a reason, after all.