Over the years, I’ve gotten used to the people I “visit” trying really hard not to make faces when I’m introduced. Nobody likes to see an auditor roll in the door. I try to make it as easy as possible, and do whatever I can to fit into the schedules of busy engineers and managers. But I’ve also gotten used to some tell-tale signs that the audit is not going to go well:
Don’t prepare any information in advance and tell me you’re very busy
We send out requests for information a month in advance and offer custom scripts to help get the information easily. It’s usually information you should have at your fingertips – user lists, MBSA scans, router configurations, etc. Database queries take a little more time. I know you’re really busy – what admin isn’t? When I don’t get any information, it doesn’t make you look busy, it makes you look incompetent.
Don’t answer my emails or phone calls
If your manager has told you to do this, route my requests directly to him, and cc me. Then you’re off the hook and your manager can look bad. That’s what they’re there for. If you’re just avoiding me, well, see the note above.
Be condescending about technical issues
Yes, I know IT Auditors don’t know your systems as well as you do, and we never will. We have to ask for dumb things. Be patient and tolerant, and we’re much more likely to be helpful.
Don’t allow my laptop on your network because “it’s a security issue.”
Please don’t embarrass yourself this way. A competent engineer can route us directly out the firewall without ever touching the network. This statement means you’re either incompetent, lazy, or hiding something. Not to mention the fact that I’ve been vetted, ‘scoped and checked across multiple continents AND my company has a boatload of liability insurance. I break it, I own it. Smile, your network is safe due to your competence, isn’t it? Make it look easy.
Stonewall giving me access to critical systems because “you might break something.”
Other than questioning my technical competence (thanks!), it tells me that you’re afraid I’m going to find something you don’t want me to see. Truly secure and resilient systems can recover from almost anything an admin can do to them. If your systems aren’t that secure or able to fail over, acknowledge it upfront. We’ll work out what I need to see together.
I can count on the fingers of one hand (and not use all the fingers) the systems I’ve seen where the engineers and managers have been proud to walk me through and show me what they are doing. I love being “wowed.” I don’t get that very often, and I really enjoy seeing a well run network.
I’ve also had engineers take me aside and reveal security issues they were concerned about that weren’t being addressed, and I keep those sources as confidential as I can. If you tell me where the problems are, then I know you are not the problem. If you are losing sleep over some issue, share the pain – I can lose sleep, too. You can use an auditor’s report to get management to pay attention to security issues.
Make the most of my visit. Ask lots of questions. Understand why I’m asking what I’m asking for. It will make your job easier, and I’ll be out of your hair sooner. And who knows, you might want to be an IT Auditor someday. You’d probably be really good at it, because you would know where to look.
I don’t know about you, but looking at packet captures is right up there with looking at Cisco PIX firewall configuration files. Nonetheless, it’s part of my job, on occasion, and although I enjoy the “capturing” part, the “looking through it” part tends to make my eyes cross.
So, here’s a nifty new FREE tool: “rumint.” (Short for “rumored intelligence” – why the name, who knows?) Load a capture file (it reads a number of formats, including tcpdump), select “Text Rainfall” from the View pulldown, and voila! A screen that pulls the ASCII text from each packet in the capture. Oh my. What a thing of beauty. I had an epiphany, it was so easy to read. You can set it for looping, as well.
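You can get a crude taste of the “Text Rainfall” idea with a few lines of code. This is just a sketch of the concept, not rumint itself: given the raw bytes of a packet, pull out the readable ASCII runs so your eyes don’t have to wade through hex.

```python
import string

# Printable ASCII bytes, keeping the space but dropping the control whitespace
PRINTABLE = set(string.printable[:-5].encode())

def ascii_strings(packet: bytes, min_len: int = 4):
    """Yield runs of printable ASCII of at least min_len bytes from raw packet data."""
    run = bytearray()
    for b in packet:
        if b in PRINTABLE:
            run.append(b)
        else:
            if len(run) >= min_len:
                yield run.decode("ascii")
            run.clear()
    if len(run) >= min_len:
        yield run.decode("ascii")
```

Feed it the payload of each packet in a capture and you get a poor man’s text rainfall – the readable fragments, minus the binary noise.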
This tool is part of an emerging field of “Security Data Visualization.” When I first heard of this topic, I thought of dashboards and graphs, but that’s not what this seems to be about, except in a peripheral way. I’ve just bought the first book out on the subject, Security Data Visualization, and so far it’s gotten some very good reviews from at least one big name in the field. It’s also written by the author of rumint.
I think what they are shooting for is a new way of looking at data flow that uses the best part of the human brain. Computers can do a lot of things around computation and correlation, but they are basically only as good at it as we tell them to be.
You and I can look at a dataset in a certain way, and it comes together in a gestalt. Computers are not yet able to do this. Like looking at enough pieces of a puzzle, suddenly we will see the picture. I had that exact experience with rumint, which, by the way, can also run with real time packet captures.
And in any case, if it makes your life easier reading packet captures, enjoy! Kudos and thanks to the author.
I know it’s a leading question, but I think we’ve got to start asking ourselves where we are when it comes to information security and managing risks to our organizations.
Continuing my quest for how to measure good security, I ran across an excellent article on the Information Systems Audit and Control Association website. (Yes, I admit it, I visit there and read lots of stuff.) The authors grabbed me with a reasonable title: “How Can Security Be Measured?” One of the ways they examine the organization’s security posture as a whole is to use a capability maturity model (CMM). Here’s the good point:
Management needs some measure of how secure the organization is. Organizations need to ask themselves:
* How many resources does it take to be “safe”?
* How can the cost of new security measures be justified?
* Is the organization getting its money’s worth?
* When does the organization know it is “safe”?
* How does the organization compare its posture with others in the industry and with best practice standards?
As you can imagine, there are a number of CMMs out there that relate to information security. The article lists several, and goes on to propose its own. Looking at the different varieties, I scanned over the organizations I have audited over the years, and considered where those organizations were in terms of the size of the business, the number of employees in the IT Department, and the complexity of the IT infrastructure.
The COBIT CMM has a structure I like:
Five levels of progressive maturity:
1. Initial/ad hoc
2. Repeatable but intuitive
3. Defined process
4. Managed and measurable
5. Optimized
Depending on the size of the organization, we can consider it like so:
1. Initial/ad hoc – Policies are informal, everybody in IT knows all the systems, all the employees
2. Repeatable but intuitive – Policies are informal, everybody in IT knows what to do
3. Defined process – Procedures have to start getting written down, because the department is too big for everyone to know everything on the systems
4. Managed and measurable – Policies are put in place so that change is managed and communicated due to the size and structure of IT and the business
5. Optimized – Policies and procedures are developed to optimize change and manage risk – including compliance with regulations
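If you want to turn that self-assessment into something concrete, a tiny sketch helps. The criteria names here are my own hypothetical examples, and the weakest-link rule (your posture is only as mature as your least mature area) is my own simplification, not COBIT’s scoring method:

```python
# The five COBIT-style maturity levels from the list above
LEVELS = {
    1: "Initial/ad hoc",
    2: "Repeatable but intuitive",
    3: "Defined process",
    4: "Managed and measurable",
    5: "Optimized",
}

def overall_maturity(ratings: dict) -> str:
    """Report overall maturity as the weakest link among rated areas.

    ratings: {criterion_name: level 1-5}, e.g. from a self-assessment.
    """
    level = min(ratings.values())
    return f"Level {level}: {LEVELS[level]}"
```

Rate a handful of areas honestly (change management, access control, incident response) and the weakest one tells you where you really stand.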
If you think about your organization today, where are you in this model?
Setting up your Intrusion Detection System to send you email alerts designed by the consultants who put it in and thinking you are secure is the equivalent of wrapping a chain around the server and tossing it in when you go fishing. It will do just as much, if not more good in the lake as it will on your network.
Here are some rules to follow for using an Intrusion Detection System on your network:
1. “Set It and Forget It” makes an IDS useless.
Why? Activity is happening on the network all the time. Suspicious events often start as low-level alerts that aggregate over time – setting your email to high alerts only means you are missing the boat. Plan to spend at least an hour a day looking at the primary console and logs after you’ve finished cruising your firewall logs. And no, using an ESM (Enterprise Security Management) tool to alert you does not get you off the hook. Only the human mind is capable of correlating activities and events in a remotely effective manner. We don’t yet have sufficient heuristics to automate intrusion detection.
Intrusion Detection Systems have a back end database to hold all the signatures they can monitor for. Some IDSs install agents on servers and have remote sensor collectors (say for the other side of the continent). The signatures can be updated almost daily. Do you need to install all the new signatures? No, but you’d better make sure you install the newest and nastiest ones that apply to your network. And keep the servers and database patched.
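The aggregation point in rule 1 is worth making concrete. Here’s a minimal sketch of the kind of correlation a human (or a helper script) does: count low-severity alerts per source and flag any source that piles up too many inside a time window. The event tuple format and the thresholds are my own assumptions, not any particular IDS product’s:

```python
from collections import defaultdict

def aggregate_alerts(events, window_seconds=3600, threshold=10):
    """Flag sources whose low-severity events pile up within a time window.

    events: iterable of (timestamp_seconds, source_ip, severity) tuples.
    Returns the set of source IPs worth a human look.
    """
    by_source = defaultdict(list)
    for ts, src, severity in events:
        if severity == "low":
            by_source[src].append(ts)

    flagged = set()
    for src, stamps in by_source.items():
        stamps.sort()
        # Sliding window: do any `threshold` events fall inside window_seconds?
        for i in range(len(stamps) - threshold + 1):
            if stamps[i + threshold - 1] - stamps[i] <= window_seconds:
                flagged.add(src)
                break
    return flagged
```

Ten “informational” probes from one address in an hour is a very different animal than one – that’s exactly the pattern an email filter set to “high alerts only” will never show you.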
2. “We get too many false positives!” means it has not been configured correctly.
Why? Intrusion Detection Systems must be tuned. That means spending about a month analyzing the traffic your IDS sees and eliminating the normal flow of events from your alerts. IDSs have an enormous database of signatures, and if you turn all alerts for those signatures ON, you’ll be watching for UNIX hacks on your all-Microsoft network. Remove unneeded signatures from monitoring, and little by little you will remove alerts that are really normal traffic on your network. Why a month? Some transmissions only occur once a month. And taking out those signatures gives your sensors more CPU to see the traffic.
3. “One Size Fits All” means you’re not wearing anything.
I usually ask for the individual policies for each IDS sensor. For each sensor placement on your network, you want your intrusion detection system to watch for different traffic. It’s the best way to deploy sensors sparingly (and effectively) on a network. One sensor on the core router will not be enough, unless it can hold multiple policies: one for your internal network, one for your DMZ, and one for your extranet.
Think about it. You want to be watching for web-based attacks on your DMZ, but they will mean very little on your corporate network. Those signatures can be minimized internally, unless your internal web servers are high risk. If your DMZ is accessed from the Internet, many more signatures will need to be enabled. If you have one generic policy, you’re drowning in false positives and missing the REAL nasty traffic in the flotsam. And yes, you will have to spend time tuning and updating them on a regular basis.
4. “We have an IDS!” doesn’t mean it’s working.
Have you tested your IDS to make sure it’s working? I’m ashamed to say that too many IT Auditors don’t take a good look at this or incorporate a simple test into their audits. A well-tuned IDS should report an internal user running a portscan. It damages nothing, and a portscan is one of the most frequent first steps taken by a hacker with ill intent. Make sure that management knows ahead of time, but not the engineers in charge of the IDS. See what happens, and how quickly they report it.
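If you want to see what such a test looks like in code, here’s a bare-bones TCP connect scan – the noisy, easily-detected kind an IDS should flag immediately. This is a sketch for authorized testing only; run it strictly against hosts you have permission to probe:

```python
import socket

def connect_scan(host: str, ports, timeout: float = 0.5):
    """Simple TCP connect scan. Returns the list of open ports found.

    Deliberately loud: a full three-way handshake per port, which is
    exactly the pattern a tuned IDS should report within minutes.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports
```

If a sweep like this across a server segment doesn’t generate an alert, your IDS is a boat anchor.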
5. “Oh, we outsource THAT,” means your risk has gone UP when your costs went down.
Unfortunately, I have yet to see an outsourced policy configuration on an IDS that was truly effective. IDSs are time intensive, and no one knows your network like an admin ON your network. As a result, you may get some very well-formatted canned reports every month, and it is certainly better than no IDS at all, but the effectiveness of the system decreases with every step away from your network. It’s a business decision, I know.
The other risk has to do with intrusions – you can outsource the functions, but you cannot outsource the responsibility, for both fiduciary and reputation risk should a breach occur.
Just buy a real boat anchor.
You would think that with all the news and noise about credit card information being stolen, more folks would pay attention to what they’re signing at restaurants (an especially GOOD place to get your information stolen), gas stations and hotels. With the amount of travel I do, I end up with quite a collection from many places.
But your credit card information (and mine) is only as secure as the hardware at the point of sale. The machine that your card gets swiped through does all the work. And depending on the age of that piece of equipment, all of your information may be transmitted and stored elsewhere to be harvested by thieves. Or the machine may be compromised at the register by a dishonest employee that “harvests” your information. Other machines can be accessed (and hacked) remotely.
So, what do I check? Is the entire credit card number visible on the receipt? What about the expiration date? Some vendors sell machines that save the entire number to their copy and blank out the numbers on mine. You would think that PCI or the FTC’s FACTA law would mandate removal of all numbers on both receipts. True, FACTA does mandate that all but the last five digits be masked, as well as the expiration date. However, it doesn’t apply to manually generated receipts (the old-style imprint) or handwritten invoices or receipts. Notably it also does not require truncation of credit card numbers on the merchant’s transaction record or even the merchant’s copy of the receipt. Does that make sense to you? Me neither.
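The truncation rule itself is trivial to express in code, which makes the exemptions all the stranger. A sketch of FACTA-style masking (the function name is my own; the rule is keeping only the last five digits visible):

```python
def facta_mask(card_number: str, keep: int = 5) -> str:
    """Mask a card number FACTA-style: show at most the last `keep` digits.

    Non-digit characters (spaces, dashes) are left in place.
    """
    digits = [c for c in card_number if c.isdigit()]
    remaining = len(digits) - keep  # leading digits still to hide
    masked = []
    for c in card_number:
        if c.isdigit() and remaining > 0:
            masked.append("*")
            remaining -= 1
        else:
            masked.append(c)
    return "".join(masked)
```

A few lines of logic at the point of sale – yet the merchant’s copy and manual imprints get a pass.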
If you write in a tip, make sure you reconcile that number with what is billed to you….. otherwise you may be paying much more of a gratuity than you intended, AND you will have trouble reconciling expenses (I hate that).
And make sure the card you get back is YOURS. That’s another favorite trick I didn’t know about until recently when someone gave me the heads-up.
I’ve been reading a fascinating book by Andrew Jaquith, Security Metrics – Replacing Fear, Uncertainty and Doubt. This book takes lots of buzzwords, like “ROSI,” “Risk Management,” “key indicators,” “accountability,” and “compliance,” and turns them on their heads.
It has always bothered me that IT security and IT audit pundits and promoters propose all sorts of theories masquerading as fact for assessing risk. Everyone has a different unit of measurement, including some very large standards organizations. This is simply an attempt to justify the cost of securing data. It has always bugged me because I have yet to see a good explanation for measuring events that have not happened. If there is a solid security architecture, Bad Things don’t happen. Mostly. How to get this across in measurable terms is deplorably difficult to the non-IT parts of the business (usually management).
We’ve been reduced to using “compliance requirements” to justify the cost for “security initiatives” across an enterprise, and that limits their applicability to what the regulations require, rather than basing our efforts on solid evidence for security improvements. Measurements and quantification just do not exist. (Gasp! Heresy, I know.)
How do we differentiate between an organization that has no security incidents because of their solid security practices, and an organization that has no incidents due to blind, dumb luck? Or my personal favorite, no incidents because they don’t have any way to even know if such incidents occur? Yes, we’re fine because we have no idea.
Jaquith does a great job of picking apart the BS Bingo, especially flashy terms used by vendors, who must continually sell you something to stay in existence. (When did true improvement turn into the next release?) If you run a Google search on “compliance,” there are 133 million results. Try the same query minus “.com,” and the results fall to a measly 12 million or so. No wonder most of our security spending has gone to product, not process. Companies have turned to compliance as a metric for good security.
Yes, we have no real idea what constitutes good information security practices.
I have a nifty little .vbs script I wrote last year. I send it to the network administrators before I come on site, ask them to run it and send me the results. It tells me username, login ID, description, length of password, last login date, acct locked, etc. It also tells me when the last time the password was changed. I use it to check for terminated users still on the system and that password controls are indeed what they say they are.
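My script is VBScript against the domain, but the stale-password check at its heart is simple. Here’s the same idea as a hypothetical Python sketch, working from (user, password-last-set) pairs however you collect them – the function name and 90-day policy are my own illustration, not my actual script:

```python
from datetime import date, timedelta

def stale_passwords(accounts, max_age_days=90, today=None):
    """Flag accounts whose password is older than policy allows.

    accounts: iterable of (username, password_last_set: date) pairs.
    Returns the usernames that are out of compliance.
    """
    today = today or date.today()
    limit = timedelta(days=max_age_days)
    return [user for user, last_set in accounts if today - last_set > limit]
```

Point it at the export from your user dump and the out-of-compliance accounts fall right out – terminated users who never logged in again tend to show up here too.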
In 9 of my last 10 Windows Domain IT audits, which group of people hadn’t changed their password(s) in over a year (sometimes two)? You guessed it. The last network admin got a little huffy when I inquired, and replied, “We do comply with corporate policy! We just change them manually.” She cc’d my boss and her boss. Ouch.
I guess she didn’t read the file she sent me: it’s right there in plain text – the exact date. I copied and pasted her team’s last change dates, simply replying to ALL, and referencing the attached file. I try to be polite when watching someone loudly and publicly announce how badly they want to eat their shoes. After a pregnant day of silence, she came back with a very polite response telling me they were designing a new group policy just for their group to ensure passwords were changed in compliance with corporate policy. I could tell the shoe leather wasn’t very tasty.
I’ve done it too, as an administrator; somehow we don’t think that the rules should apply to us. After all, we’re the good guys! How, a non-engineer might ask, do they circumvent the group policy? Simply go into the administrative interface and select the checkmark for “password never expires.” All done.
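That checkbox corresponds to a documented bit in the Windows userAccountControl attribute (UF_DONT_EXPIRE_PASSWD, 0x10000), so an auditor can test for it directly in whatever account data you export. A sketch:

```python
# Documented Windows userAccountControl flag (Microsoft's ADS_USER_FLAG values)
UF_DONT_EXPIRE_PASSWD = 0x10000

def never_expires(user_account_control: int) -> bool:
    """True if the account is exempt from password expiry."""
    return bool(user_account_control & UF_DONT_EXPIRE_PASSWD)
```

Any account where this bit is set – especially an admin account – belongs on the audit findings list, group policy or no group policy.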
As an IT auditor, I represent my company’s standard for IT, and so does a network administrator. If I am not following the rules, why should anyone else? Network Administrators have the most powerful rights on the network – capturing their passwords would allow a thief into everything. And the longer you don’t change it, the more time people have to work on getting it.
Plus, it just makes us engineers look bad.
P.S., the next most common group of non-changers? CEOs.
One of the biggest time wasters I experience during an IT audit is having to ask an administrator to:
a. Run tools/scripts for me in order to access information, or
b. Sit with me (“shoulder-surfing”) so I can collect information/screen shots.
It’s a waste of my time, since I know where to go on a network to get what I need, and an even bigger waste of an admin’s time to collect all the stuff for me.
If, of course, they already had it on hand, as a good admin should…..but, I digress.
So, OK, Microsoft, SUN, HP, Red Hat, IBM, etc.: isn’t it about time you created an auditor function/ID? How about an ID that would have administrative READ ONLY access? Look everywhere, touch nothing? And, make the ID uniquely trackable? Like the admin ID should be, but again…..
This would have incredible value in the business world, for in-house auditors, as well as us external folks. How about it?
One of my readers has commented about how badly Hannaford and TJMaxx have been treated by the media and Internet commentary because of their data breaches.
From my perspective, concerning the data breaches, I can only speak as an auditor and an engineer, not having been inside either company’s network, but, like you, I can read the news and read between the lines.
And I think that Hannaford was doing a good job and TJMaxx was not. Why?
TJMaxx was not PCI compliant, and Hannaford was. Big deal, you say, we all know about compliance! It’s the “Gentleman’s C.” Absolutely. But Hannaford cared enough to make the effort, at least, and get in line with some basic good security practices.
They were NOT storing Social Security numbers, names, addresses and PINs. They were doing it right.
TJMaxx, on the other hand (and a bigger company, at that) was using WEP at all their stores, and wasn’t even baseline with their information storage practices. Didn’t even try to put compensating controls in place (like a firewall between the stores and the corporate network). Have they even done anything different? Nothing in the news about that.
Hannaford was out there replacing hardware in a hurry to get rid of the malware. When was the last time a company replaced hardware in all their stores? Not cheap, and an enormous effort. Maybe it was driven by reputation risk, but that’s 150% more than we know about TJMaxx’s efforts.
Hannaford was the victim of a sophisticated attack, probably (??????) from Russia, and possibly with inside help. (More on the Russians, later.) Could they have caught it? We’ll know more, I hope, and soon.
TJMaxx let a script kiddie and his pals in, because they didn’t want to upgrade their registers and hardware until they absolutely had to. The money that went to banks and fines and external auditors for the next 20 years could have covered it. Easily. They took a risk, and had a “plan” for compliance. Their acquiring bank let them do that because it was better than no plan at all.
They’ve paid the fines and settled the suits, but they’ll be an object lesson for a long time to come.
I live in Portland, Maine, the home base of Hannaford, a regional grocery chain. They are owned by Food Lion, headquartered in Charlotte, NC. In turn, Food Lion is owned by an international company in Belgium, Delhaize.
Just in case you were on a desert island, Hannaford reported a breach in their credit card transaction systems.
Unfortunately, they can’t give us very many details right now for a lot of reasons – but careful reading between the lines can give you a lot of information to draw your own conclusions.
First, they replaced the hardware, at all the store locations. That tells me it was pretty bad: formatting the hard drives evidently was not good enough, so they ditched the hardware, and that is not a cheap proposition. And they had to keep it quiet until they got all the hardware replaced, or risk being infected again.
Second, this was not an easy breach – they are saying that malware (probably a rootkit, so undetectable by AV) was installed on ALL their store servers – and that could make it a breach from an entirely different source OR an inside job.
When was the last time you could tell something was installed on your servers without Tripwire? Trying to track down when a change was made, and by who/what? Try finding that in your Event Logs from three months ago. Don’t have them? Start going through backup tapes – they are not having any fun.
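Tripwire’s core trick – a hash baseline you can diff against later – fits in a few lines. This sketch is the miniature version of the idea, not a substitute for the real product:

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 hash for each file -- a known-good baseline."""
    baseline = {}
    for path in paths:
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed_files(baseline, paths):
    """Compare current hashes against the baseline; return what differs."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline.get(p)]
```

Take the snapshot when the server is known-clean, store it off-box, and a later diff answers “did anything change?” without digging through three months of event logs.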
Third, the malware was uploading information to a remote site in another country. The ONLY way I know to catch this is to monitor all outbound traffic through a central firewall/router. Not many organizations have started doing this yet – but I bet more will now. And what if they used encrypted traffic? You would still see it going through the firewall – but if it was being redirected, how could you identify it?
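Monitoring outbound traffic boils down to comparing egress log records against what you expect to see leaving the network. A sketch, with a made-up log tuple format standing in for whatever your firewall exports:

```python
def suspicious_outbound(log_records, allowed_dests):
    """Flag outbound connections to destinations not on the expected list.

    log_records: iterable of (src_ip, dest_ip, dest_port) tuples from an
    egress firewall log (hypothetical format).
    allowed_dests: set of destination IPs you expect traffic to reach.
    """
    return [(src, dst, port)
            for src, dst, port in log_records
            if dst not in allowed_dests]
```

It won’t tell you what’s inside an encrypted stream, but a store server talking to an address in another country will at least show up on the list.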
Fourth, the Feds had to keep this quiet if they were going to catch anybody – the minute it hits the news, the bad guys shut down.
In short, it’s equivalent to a robbery, not someone walking in through an unlocked door. Whoever did this had to work very hard to set it up. Very hard. Capturing streaming transaction data is not the same as cracking a WEP-enabled wireless network.
It’s true that many organizations are doing very poorly with information security, and we have gotten used to blaming bad management practices for breaches – but this is not one of them.