Regulatory Reality

Sep 10 2009   4:16AM GMT

Test what makes sense, not headlines

David Schneier

The recent news about a social engineering exercise gone awry serves as a lesson in how not to conduct these kinds of tests. An information security firm sent a credit union NCUA-branded media to install, the goal being to test whether employees would react appropriately and first attempt to validate that the request was legitimate. The problem was that no one notified the NCUA of its role in the exercise, so when the credit union contacted the agency, the NCUA assumed the threat was real and issued a warning to all of its member credit unions.

At first blush, this story might appear amusing. Beyond some embarrassed people at the security firm conducting the social engineering exercise and some likely annoyed folks at the NCUA (a federal security alert is not a common or simple occurrence), it would appear you could chalk this up to “no harm, no foul.” From certain vantage points you might even consider this a wildly successful test. After all, the client reacted appropriately by contacting the NCUA, and the NCUA reacted appropriately by identifying the actions as unsanctioned and potentially harmful and alerting its member institutions. But for those of us who practice in the industry, this is far from amusing and actually somewhat disturbing.

When one of our fellow practitioners or firms does something that brings negative attention to the industry, or conducts itself in a way that results in a black eye, the damage extends to a certain degree to all of us. At some point in the process, I would have thought that someone at the offending firm would have reviewed a draft of the plan and flagged the part about involving an uninvolved third party as unnecessary or inappropriate (and quite possibly illegal). Besides, grand and elaborate schemes aren’t really necessary. Most breaches that occur aren’t of the James Bond variety, so subtle tactics work best.

I’ve managed and conducted social engineering tests many times in the past, so I speak from experience. On one project, I had a renegade auditor who wanted to test data center physical security by trying to either force or talk his way into the facility. He was told in no uncertain terms that doing so was neither authorized nor acceptable, and that if he did it and was arrested (a very likely scenario), we would not bail him out and he would be fired immediately. It was simply a bad idea and not even remotely necessary to test the related controls. That reminded me of a story in which a well-known national security firm had its practitioners dress up in firefighting gear and arrive at a bank branch claiming there was a possible fire/smoke condition, to see if they would be allowed into private/protected areas of the bank. Of course they were granted access, and as a result they were written up in an industry magazine and hailed as innovative and imaginative.

Social engineering is intended to examine how the human element reacts to a variety of scenarios designed to gain access to sensitive information or secured areas. There are many, many simpler and less obvious techniques available to poke and prod and test the effectiveness of related controls. So why was this test even necessary?

The short answer is that it wasn’t. It was a bad idea in design and execution.

At a basic level, what I don’t understand is how this test was even conducted. A common element of any security engagement is informing the key stakeholders of the plan so that things like this don’t happen. When the appropriate party was notified about the suspicious material received from the NCUA, they should have known what to do (beyond escalating to the NCUA). We inform the primary security contact of our activities so that they know not to escalate outside of their own institution. We provide specific start and end times, all key details, and status updates along the way.

The wrong messages are sent by these wayward tests. I suspect credit unions will now require all sorts of extra validation before trusting anything from the NCUA. I’m also concerned that the bank involved in the firemen scenario may not properly evacuate the facility in the event of a real fire because employees will wait for confirmation that it isn’t another test. Is that really the desired outcome of these exercises? Last year, I managed an exercise involving a phone-based phishing test. Two days after we concluded the fieldwork, I received a message from our client sponsor asking if we were still executing the test. It turned out they were the target of a legitimate phishing attempt, and because our activities had raised awareness, the situation was escalated appropriately. Doesn’t that make a bit more sense?
