Dave Taylor wrote an interesting editorial/tutorial in the most recent issue of Linux Journal in which he parses Twitter's HTML to find out how many tweets and followers a user has. This got me wondering: is it worth it? I mean, Twitter has a pretty robust API that already exposes this information. Is there an official Bash library for it (Bash being what Dave's article uses)? No, unfortunately, although that would be pretty interesting. But since most sysadmins use one language or another that does have an official library binding to Twitter, why not use that instead?
I know this sounds weird coming from me, especially since I tend to reinvent the wheel more than I should. Most of the time, though, I do that to get a better understanding of what is happening inside those libraries. Dave teaches us the use of regex, sed, grep and cURL, none of which is really necessary for this task, and doing it all in Bash could well make it slower.
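To make the contrast concrete, here is a minimal sketch of the alternative being argued for: fetch JSON from the API and read it with a real parser instead of scraping HTML with regex, sed and grep. The payload shape and field names below are assumptions for illustration (modeled loosely on Twitter's old users/show response), and in practice the API call itself needs authentication details not covered here.

```python
import json

# Hypothetical API response body; the field names are assumptions
# for illustration, not the guaranteed Twitter payload.
payload = '{"screen_name": "example", "statuses_count": 1234, "followers_count": 56}'

def tweet_count(body: str) -> int:
    """Pull the tweet count out of a JSON response with a real parser,
    rather than grepping patterns out of scraped HTML."""
    data = json.loads(body)
    return data["statuses_count"]

print(tweet_count(payload))  # prints 1234
```

The point is less about the language and more about the interface: a structured JSON field survives site redesigns that would silently break any HTML-scraping regex.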
By now everyone should know I love Bash and its portability. However, I also feel that in cases like this, especially when it's causing problems that are not easy to debug, it might be best to just use a pre-made solution. Such was the case, for example, when I was trying to implement RSA in a PAM module I'm working on. I could have done it myself, but I knew I would not produce an efficient (or safe) implementation, so I used an existing library instead.
My question to the readers, though, is: what do you think? Is a client for a bare-bones API (e.g., Twitter's) worth re-writing in a (let's be honest here) dated language? Or am I just going crazy, attacked by holiday-cheerful penguins that want me to do nothing but work on benchmarking tests?