By now I’m sure you’ve heard that Albert Gonzalez is being charged with the attacks on Hannaford, Heartland, 7-Eleven, etc. In between all the excited reporting are some points that admins and auditors ought to pay attention to. We ought to ponder how this attack differs from past attacks, and why it was so successful.
1. Using a “team.” Most of his team have not been captured and likely reside somewhere overseas. Spreading multiple talents across several different technical approaches increases the chances of success. This is becoming more and more common, especially with ATM break-ins.
2. They used SQL-injection attacks. This isn’t new, but all of these folks were having quarterly scans from external vendors as part of PCI compliance. Why didn’t the scans catch the injection vulnerabilities? Makes you want to take another look at the scanning company you may be using, doesn’t it?
3. They broke in via wireless. Anyone still using WEP out there? It’s now trivial to crack the protocol, and someone will certainly do it if you offer it up.
4. There’s a big market for those credit cards and for the people who can get to them. Over 130 million cards made him a LOT of money.
And we still don’t know “exactly” how he was caught, do we?
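On point 2: the injection pattern itself is trivial to demonstrate, which is exactly why the scans should have caught it. Here’s a minimal sketch (hypothetical table and data, made up for illustration) of why concatenating user input into SQL hands an attacker the whole table, and why a parameterized query doesn’t:

```python
import sqlite3

# Hypothetical table; names and data are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cards (num TEXT, owner TEXT)")
db.executemany("INSERT INTO cards VALUES (?, ?)",
               [("4111-1111", "alice"), ("5500-0004", "bob")])

evil = "nobody' OR '1'='1"

# Vulnerable: user input concatenated straight into the SQL text.
rows_vuln = db.execute(
    "SELECT num FROM cards WHERE owner = '" + evil + "'").fetchall()
print(len(rows_vuln))   # 2 -- the tautology dumps every card

# Safe: parameterized query; the input is treated as data, not SQL.
rows_safe = db.execute(
    "SELECT num FROM cards WHERE owner = ?", (evil,)).fetchall()
print(len(rows_safe))   # 0 -- nobody is literally named that
```

Any halfway decent scanner should flag the first query pattern. If yours doesn’t, that’s worth a conversation with the vendor.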
Heartland Payment Systems has attempted to point the “Public Finger of Blame” at the hapless QSA auditor they used for PCI compliance, saying that the “QSA let us down.” So who is in charge of security, Heartland or the auditor?
Security is a corporate posture, not a pass/fail compliance test. You can pass the test and the next day change settings on the firewall that turn it into a router. Is the QSA still responsible? Nope. We don’t really know all the details of what happened at Heartland. But we do know that being compliant does not equal being secure. Never has, never will.
For a well-written post excising this “Finger,” check out this article on CSO, written by Ben Rothke and Anton Chuvakin. Let’s just say that blaming the door lock when you’ve left the windows open is not a viable public relations option.
The corporate security posture should provide a mandate, from the top down, of the company’s position on information security. The power of C-level executives enforcing the mandate has to come into play. Otherwise it’s just window dressing – and open windows are no way to manage the security of your environment.
What IS the corporate policy? How effective is it? Is management promoting AND funding it? Policies that are effective also protect the information of employees. Everybody wins, even, long term, the stockholders.
I ran across a TV station’s news story, entitled Cops: Former Worker Hacked Casino Computers, about a former employee who “broke into” his employer’s computers.
Now, here’s the real story: If you read the article, the guy did not “hack in.” He used his VPN connection from his home (Clueless Number 1) to go into his employer’s network and access computers to mess up some programming.
His VPN connection had obviously not been disabled (Clueless Number 2) by his employer.
The police (Clueless Number 3) referred to him as a “computer whiz” for using his VPN connection from his home to get into his employer’s network.
Whiz? Cheese Whiz, maybe?
I finally asked that deadly question: “What do your Incident Response Procedures say?” Whoops, there goes all the buddy-buddy geekiness: I have morphed into The Auditor Who Asks Questions.
“Umm, well, they pretty much say to do what we just did.” I notice the vagueness of the reply, but decide to let it pass, for the moment. They don’t really know what their procedures say they should do. Probably the procedures are too generic.
“OK. But what if he has jumped to this box from another box he compromised first? How would you know?” More pained and irritated looks coming my way. “By now, you won’t really be able to tell what happened unless you go to a backup and start analyzing whatever you can find for connection information. But that won’t necessarily give you rootkit information. If you’re lucky, you might see a netcat connection, but only if he hasn’t erased the Event Logs.”
“Even so,” I continue, knowing I am now excluded from the Kool Kids Klub, “If he has gotten your SAM database off the server, wouldn’t he know the administrator password? Is that password the same on every server?”
Turns out the password IS the same, and the Event Logs overwrite according to defaults. Now they can’t trust the server OR the administrator password. But I’m leaving, and besides, this isn’t an audit anyway, just some consulting.
So they left the server alone, because “There are all those websites on it, the users would scream and we’ll watch it carefully.” And never mind about passwords because “It’s a really tough one they’ll never crack.”
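They could at least have compared the box against a known-good baseline of file hashes, assuming one had been taken before the incident. A minimal sketch of the idea (a throwaway directory stands in for the web root; paths and content are made up):

```python
import hashlib
import os
import tempfile

def hash_tree(root):
    """Map each file (relative path) under root to its SHA-256 digest."""
    digests = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(
                    f.read()).hexdigest()
    return digests

# Throwaway directory standing in for the web root.
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.html"), "w") as f:
    f.write("<html>clean</html>")
baseline = hash_tree(root)            # snapshot taken while known-good

with open(os.path.join(root, "index.html"), "w") as f:
    f.write("<html>clean<script>bad</script></html>")   # tampering
changed = [p for p in baseline if hash_tree(root).get(p) != baseline[p]]
print(changed)   # ['index.html']
```

Of course, a rootkit can lie to any tool running on the compromised box itself, which is why you hash offline from trusted media, and why “wipe and rebuild” remains the only real answer.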
I wonder what will happen next, don’t you?
The problem with being a “geek” is that we truly love to tinker, to fix, to improve, to test… etc. So when you announce to a bunch of us that a website on the network has been broken into, there’s lots of leaping into action.
Which is exactly what you don’t want to do. At all.
While visiting a client to talk about network architecture, an engineer rushed into our room to announce that one of their websites had been hacked. We all hopped up and went out with him. (My lecture was boring, anyway.) I wanted to see what they were going to do, and if they were going to follow their own Intrusion Detection Policy. Plus, I was, like them, vastly interested.
Turns out it was a fairly generic attack, with the break-in artist simply using the website for cross-site scripting and redirection.
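For the curious, the telltale leavings of that kind of attack (off-site script includes and forced redirects) can be flagged with a few lines. A rough sketch, using a made-up page and patterns that are illustrative rather than exhaustive:

```python
import re

# Two common injection signatures: off-site script includes and
# meta-refresh redirects. Real scanners check far more than this.
SUSPECT = [
    re.compile(r"<script[^>]+src\s*=\s*['\"]https?://", re.I),
    re.compile(r"<meta[^>]+http-equiv\s*=\s*['\"]refresh", re.I),
]

def suspicious_lines(html):
    return [(n, line.strip())
            for n, line in enumerate(html.splitlines(), 1)
            if any(p.search(line) for p in SUSPECT)]

# Hypothetical defaced page.
page = """<html><body>Welcome!
<script src="http://evil.example/x.js"></script>
<meta http-equiv="refresh" content="0;url=http://evil.example/">
</body></html>"""
flagged = suspicious_lines(page)
print([n for n, _ in flagged])   # [2, 3]
```

Finding and removing the injected code is the easy part; as we’re about to see, it’s nowhere near the whole job.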
By the time we got there, two engineers had already been working on the web server, analyzing the code in the HTML and checking other settings on the server. They took the web server offline, removed the offending code, looked at the event logs, and brought it back up. All good, they said.
“Not really,” I said. “You do know that you can never trust this box again?”
“Not to be a party-pooper, but there’s no way of really knowing if a rootkit has been installed, is there? He could come back tomorrow.”
The four geeks looked pained. “What should we do?”
“Well, we can start with reformatting the disk and reinstalling the OS.” I knew the minute I said that I was not going to be the most popular girl in the room. That sort of thing is awfully tedious and boring; no fun for geeks.
“But there’s ten other websites on this server!” Oops, this was going to be a LOT of work.
We segued briefly into the advantages of virtual machine backups, and then returned to the discussion of what to do.
I finally asked that deadly question: “What do your Incident Response Procedures say?”
Articles are being released today about a flaw discovered by security researchers Charlie Miller and Collin Mulliner. They informed Apple a month ago about this flaw, but no fix had been issued. So they decided to go public at the Black Hat conference with a demo of just how easy it is to take over an iPhone. The demo will be done today and I’m sure details of how to do it will be flying. From here, it sounds like a buffer overflow.
Experts are warning that a text message containing a square character means someone is in the process of taking over the phone. They recommend that you shut down the phone immediately and “wait awhile.”
I suppose they think waiting awhile will motivate the hacker to move on to other iPhones. I’d suggest, however, that you turn OFF text messaging until they get this fixed. Shocking to some, I know, but it would be much more shocking to have all your information compromised.
Have a new iPhone with 3G? There’s a video on YouTube that demonstrates the ease of bypassing both the passcode and the encrypted backup. The poster has a number of other videos that are equally painful.
Once again, security has taken a backseat to speedy software development. Now Apple is getting a lot (more) bad press.
It always pains me when I get this question from a client’s IT staff. It usually means that auditing has never penetrated to that level, and people are used to doing pretty much what they please around the network. It usually goes with:
“This is a development shop. Those are not production servers or databases – so why are you asking to see users, patching, inventory, etc????”
These are the kinds of questions that will keep me employed as a successful penetration tester AND a digital forensics analyst. When I’m dead someone will prop me up to keep going.
A development environment is EXACTLY where a penetration tester goes first for exactly this reason. When you don’t know what’s running on your network, you don’t know who is on your network.
If it’s on your network, the company is responsible. Legally responsible. And that question will not hold up in court.
It’s a great version of the “sniff test:” Imagine saying it on the witness stand to a judge.
While doing a PCI exam not long ago, I visited a company that was very proud of its security measures, and rightly so. They had done a lot of work to secure their environment.
Sometimes it’s the smallest things that we are so used to seeing that we stop “seeing” them. They become part of the background noise of everyday functions and escape our notice. Social engineers are masters of acquiring those functions and using them for the wrong reasons. For example, the building cleaners. Do they have keys to everything in order to clean your offices? What if they decide to clean out your data?
Corporate espionage agents have been known to offer cleaners $50.00 per bag of trash. Another source of easy cash: backup tapes.
When we walked into the tape storage room, I inquired, “Do you have an inventory of the tapes in this room? How often do you check that the inventory is all accounted for?” Unfazed, the CIO replied that the door was secured and only he and one other IT person had the key, which was signed out in the Data Center whenever it was used. So they weren’t “bothering” to inventory the tapes in the room.
Looking down, I noticed that the wastebasket was empty, with a fresh plastic bag neatly wrapped around it. I said, “Do your cleaners have a key to this room?” “Why, yes,” the CIO replied blankly. Then comprehension dawned on his face.
Next day, a new policy was posted by the tape storage door: all trash receptacles were to be placed outside the door. The CIO informed me that the lock on the door had been changed, and inventories would be done monthly.
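For what it’s worth, the monthly reconciliation itself is the easy part. It’s just a set difference between the recorded inventory and what’s actually on the shelf (tape IDs below are made up):

```python
# Tape IDs are made up; the point is the set difference.
recorded = {"T001", "T002", "T003", "T004"}   # the inventory list
on_shelf = {"T001", "T002", "T004"}           # what's actually in the room

missing = recorded - on_shelf    # tapes that walked out the door?
unlisted = on_shelf - recorded   # tapes nobody logged
print(sorted(missing), sorted(unlisted))   # ['T003'] []
```

The hard part, as always, is actually doing it every month.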
There are some companies that go the extra mile of encrypting tapes or requiring that their cleaning companies be bonded AND employees have an annual background check.
It’s expensive, but so is losing the company’s reputation to a building cleaner……
I’m attending an absolutely fascinating course on Digital Forensics provided by SANS. One of the things we will be doing is collecting data from hard drives for various practice exercises.
Imagine my amusement when the handout and appendixes recommend where to get used hard drives to practice on: eBay or Craigslist. Didn’t Simson Garfinkel do this a few years ago? And come up with a whole bunch of juicy information?
How do you dispose of hard drives? There are overwriting programs and businesses that will pick them up and dispose of them securely, providing a certificate (and thus transferring your risk). But how do you know they are performing as agreed?
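Verifying “performing as agreed” doesn’t have to be hard: overwrite, then read back. A minimal sketch using an ordinary file in place of a real drive (real disposal wipes the device itself and usually takes more than one pass):

```python
import os
import tempfile

# A temp file stands in for the retired drive.
fd, img = tempfile.mkstemp()
os.close(fd)
with open(img, "wb") as f:
    f.write(os.urandom(1 << 20))        # 1 MiB of "old data"

with open(img, "r+b") as f:
    size = f.seek(0, 2)                 # find the end
    f.seek(0)
    f.write(b"\x00" * size)             # one zeroing pass

with open(img, "rb") as f:
    wiped = f.read() == b"\x00" * size  # read back and verify
print(wiped)   # True
os.remove(img)
```

If your disposal vendor can’t show you the equivalent of that read-back check, the certificate is just paper.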
I’m looking forward to my eBay hard drives and what they will disclose. Hope they’re not yours!
I’m still amazed that folks are going about their business believing that bad things won’t happen. Is it human nature? I thought I’d share with you some of my latest adventures in traveling about and auditing various companies. Just when I think it’s strange, it gets stranger.
I was doing an audit and I routinely check for wireless connections. The manager had assured me that their policy was: no wireless. OK, but I check anyway. It’s the nature of my work: controls should be in place and they should be working. Essentially a very simple rule.
Behold, a Linksys wireless router popped up with an obvious default configuration. I followed my trusty wireless signal scanner downstairs through several departments until I came upon it sitting out in the open near a group of desks.
I headed back upstairs and asked the manager about it. His face flushed, and he said, “Where is it?” He followed me downstairs, I pointed out the router, and he reached over and yanked the network cable right out of the wall, looked around, and said, “Who plugged this in?” When no one responded, he took the casing off and stomped on it. A silence ensued.
He was peeved. Glad it wasn’t my router. Not because of the router, mind you, but the person who owned it was obviously going to have a discussion with this manager before long.
Back upstairs, his dignity somewhat restored, the manager asked about my wireless signal scanner, and I promptly demonstrated its virtues (electronics can be soothing). Canary makes a great one that scans for b/g and n networks, giving me the type of encryption AND the SSID so that I don’t have to even open my laptop. It has a visual meter so I can home in on the source of the signal and actually find the access point without my laptop (which is rather obvious).
I was ready to give it to him in hopes of escaping any further compliance corrections, but he seemed calmer at that point and thought getting one of his own was a smashingly good idea. (Sorry, I couldn’t resist).
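Incidentally, the control the manager needed is simple to state in code: reconcile whatever a scan finds against an authorized list, and anything left over is a rogue. A toy sketch (SSIDs and MAC addresses are made up):

```python
# SSIDs and MAC addresses are made up for illustration.
authorized = {("CorpNet", "aa:bb:cc:dd:ee:01")}
seen = {("CorpNet", "aa:bb:cc:dd:ee:01"),
        ("linksys", "aa:bb:cc:dd:ee:99")}   # default SSID: likely rogue

rogues = seen - authorized
print(sorted(rogues))   # [('linksys', 'aa:bb:cc:dd:ee:99')]
```

Walk the floor with a scanner once a month, feed the results into a check like this, and the next Linksys won’t sit out in the open until an auditor finds it.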