Posted by: David Schneier
anti-malware, anti-virus, assessment, Audit, hack, HIPAA, regulations, regulatory, Regulatory Compliance, scanning, vulnerability
I read a blog post last week from my friend Ed Moyle in which he discussed a story about how a professor at the University of North Carolina-Chapel Hill was demoted because a server used in her research project was hacked. A committee had concluded that the professor was at fault for the server being improperly configured and should be held accountable. She was knocked down a rank and had her salary cut pretty much in half (this after the committee first recommended she be fired). The assignment of blame and the punishment that was levied is a story by itself. But this story has all kinds of other juicy angles associated with it.
The data on the server included mammogram results from across the state, patient information that was harvested without the patients’ knowledge and included their Social Security numbers (can someone say HIPAA breach?). The vulnerabilities on the server that allowed the breach had existed since 2006. The breach occurred sometime in 2007 but wasn’t discovered until 2009. Although the IT team could determine that a breach had occurred, they had no way of knowing whether any information had actually been stolen.
So UNC didn’t know for at least three years that it had a vulnerable box plugged into the network and was in possession of illegally obtained information. It turns out the only thing UNC did know was who to blame. But in the end they got that wrong too.
There’s no worse precedent to set than to make business owners, regardless of the vertical, responsible for their own technology. They don’t know anything about ports, settings, patches or upgrades; they only know they sign on and use what they use. And because of economies of scale, it doesn’t ever really make sense for an individual department to hire its own resources. It’s why IT became a centralized resource decades ago and why it makes sense still today.
So why didn’t UNC’s IT department do its job? Why didn’t the group responsible for plugging servers into the network configure the machine properly? How did IT let the machine sit out there for not one, not two, but three years without detecting there was a problem? What sort of scanning tools do they use? Don’t they have antivirus or anti-malware software installed? I mean honestly, how did UNC’s IT people let this situation not only come into existence but also persist for so long?
I don’t always go out on a limb like this, but UNC is wrong for blaming anyone other than the IT staff responsible for configuring and securing the network. What UNC has right now is a scapegoat, which just seems silly for so esteemed an institution.
Oh and the university also justified its punitive actions by claiming that the data on the server was obtained improperly. UNC is right; it was. But what it failed to realize is that the HIPAA violation falls mostly on the shoulders of the doctors who provided that information. They’re the ones who assume the obligation of protecting their patients’ data and while the professor should have been more on top of that element, it wasn’t her primary obligation; it was the original caregivers’.
Really in the end what this whole mess boils down to is a great big bowl of wrong. Wrong person blamed, wrong handling of the server, and wrong message sent. Wrong, wrong, wrong!