More than 24 hours after users began reporting that Rackspace-hosted services were unresponsive and the main site went dark, Rackspace has possibly set a new record for transparency and accountability, if not customer satisfaction, by tirelessly tweeting the entire episode.
They also rushed to update the company blog (how droll, so Web 1.0, right?) and blamed power outages at their Dallas data center.
For additional amusement, see the ~~vultures~~ competitors flock to #rackspacefail.
An official statement has not been made, and a request for comment has gone unanswered to date, so the root of the problem is still to be determined. Amazon's recent calamity was exacerbated by lightning unaccountably penetrating a supposedly world-class data center; it'll be interesting to see whether Rackspace's facilities have similar flaws.
UPDATE: Rackspace HAS NOT released its incident report, but it's out in the wild. According to the report (which I won't post, but will summarize, since the content is fair game at this point): a mains breaker flipped, and one line of backup generators suffered an "excitation failure," meaning they didn't start up properly. Three banks of UPS batteries subsequently bled out and slammed a bunch of racks, which means they weren't charging properly or, worse, were underdesigned for the load.
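To see why depleted UPS banks point at charging or sizing problems, here's a back-of-envelope runtime estimate. All the numbers (capacity, load, efficiency) are made up for illustration and have nothing to do with Rackspace's actual plant:

```python
# Hypothetical back-of-envelope UPS runtime math, not figures from the
# leaked report: usable battery energy divided by load draw gives you
# the window the generators have to come online.

def ups_runtime_minutes(capacity_wh: float, load_w: float,
                        efficiency: float = 0.9) -> float:
    """Rough runtime of a UPS bank in minutes at a given load."""
    return (capacity_wh * efficiency) / load_w * 60

# A bank sized for roughly a five-minute bridge at nominal load:
nominal = ups_runtime_minutes(capacity_wh=50_000, load_w=540_000)

# The same bank at 40% charge covers only two minutes; if the
# generators fail to excite, the racks go dark either way:
degraded = ups_runtime_minutes(capacity_wh=50_000 * 0.4, load_w=540_000)
```

The point is just that runtime scales linearly with stored charge: batteries that aren't topped up, or a bank sized for less than the actual rack load, shrinks an already-short bridge window to nothing.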
What this means in the simplest possible terms: heads will roll. Between this and Amazon's air-to-ground static electricity adventure, data center types are wagging their grimy, highly redundant fingers as hard as possible at these incidents.
UPDATE: The incident report is now public.