I’ve visited any number of schools, colleges, and universities in the last five years that suffer from “Open Campus” syndrome.
Fundamentally, it’s an attitude on the part of students, faculty, and management that their environment won’t be “really” damaged by hackers. In the “Open Campus,” everyone is good, only a few naughty students exist, and information should be available to all.
That attitude changes quickly, I would imagine, when the students and professors discover that the information lost or stolen happens to be their own personal information. To wit:
1. In May 2009, officials at the University of California at Berkeley notified students and the public that hackers had breached a healthcare database at the school, potentially gaining access to the personal information of up to 160,000 students, with records dating back to 1999.
Complicating matters: The breach is thought to have initially occurred in October 2008. Administrators said they didn’t notice it until April 9, 2009, however.
2. In December 2009, Eastern Illinois University officials found that hackers had accessed a server holding the personal information of more than 9,000 former, current, and prospective students. The hackers had access from March 2000 through November 2009.
3. At Penn State University, more than 30,000 students were told that a series of malware-induced data breaches at computers hosted at three different campus locations had exposed their personal information for an unknown period of time.
I would imagine that the parents aren’t too thrilled, either.
About ten days ago, a splash page appeared when I went to log into my Gmail, indicating I could click the button labeled “Sweet! Check out Buzz” or “Nah, go to my inbox.” I just said “Nah” and went to my inbox, thinking no more about it. Sometime later, I noticed a little icon at the top of my Gmail. I didn’t pay any attention to it. (Bad auditor, bad!)
Turns out “Buzz” is a function that integrates social networking, instant messaging, blogging and any of the other applications within Google. Unfortunately, it does a whole lot more.
When I read that EPIC (the Electronic Privacy Information Center) had filed a complaint with the FTC about Buzz, I was surprised. Then I read the complaint. I strongly recommend you read it, too; you will see why a class action suit was also filed today on behalf of the 37 million users of Gmail.
It turns out that regardless of whether a user clicked the button labeled “Sweet! Check out Buzz” or “Nah, go to my inbox,” Google Buzz was activated. No big deal? Ohhh yes it was. According to EPIC, and others:
Once Google Buzz was activated, the tool automatically populated my “following” list with my most frequent email contacts. This happened automatically after I logged in, regardless of what I selected at the splash screen. In other words, if I didn’t change any of the default settings in Google Buzz, anyone could go to my profile and see the people I email and chat with most.
Google Buzz did not warn me that creating a “Profile” in Buzz would turn my frequent email contacts into “followers” and “followed by,” and that this list would automatically be made available to those people and public on the web.
As we all know, web pages are archived and stored all the time. I can’t take that information “back.”
And neither can anyone else.
If you’re as horrified as I am, here’s a link to disable the thing. You start by clicking that tiny little colored icon at the top right of the mail page.
That splash page was NOT an opt-out. I had no choice about whether to start using it, or not.
Check it out before you disable it. See how much default information about you, and about who you contact, is available. I’m furious. So are a lot of other people. Google made some changes last week that do not go far enough:
“Google will stop auto-following the people you regularly email and chat with, but will instead suggest that you follow these people when you first start using Buzz. You’ll be shown a bunch of faces and check boxes to make sure you’re really interested in following these people.” Those “check boxes” are still turned on by default.
If you are using this BAD IDEA, you must go in and manually set privacy settings.
Not good news for privacy. Big bad news for Google.
When doing a physical security audit, there’s always the “security by walking around” phase. I find PCs with no screensavers, passwords under keyboards and keys labeled “server room.”
Consider the cigarette smoker. Every company has them. (Better, by far, than the cigar smokers, in my opinion.) Now that they all have to go outside, so as not to pollute everyone’s air inside, they tend to congregate in certain spots.
After having been shooed away from the front door (you DO shoo them away from there, don’t you?), where do they go to commiserate together and stay out of sight from management? Ideally, close to shelter (not wanting to stand in the rain) and the ability to get back in quickly (if the boss calls on the cell phone wanting to know where they are).
Today during a walk around, I headed for the break room to grab some coffee. I noticed a door leading outside directly from the break room. Hmmm, I thought, a perfect “smoker door!” I popped it open and looked down. What did I see?
The “smoker door” special rock! You know, the one that props the door open while the smoker grabs his “air time,” especially if there is no easy way to get back in.
Sure enough, there was no card reader so that the smoker could get back in, AND no other door in sight, either. Thus, the “special rock.”
Obviously, nicotine craving trumps physical security here. The front door has a sign-in sheet, cameras, escorts, and access cards. The “smoker door” has none of these things. It faces the unsecured rear of the building – no chain-link fence, just the local woods.
If I were a pen tester, I’d grab a pack of smokes, walk around to the back of the building where the “smoker door” can be found, and hang out sheepishly holding a smoke until someone comes out. After that, it’s a matter of social engineering. I won’t wait very long!
So, what’s the simplest solution (other than hanging them up by their toes)? Add a keypad to that door. Get rid of the “special rock,” announce the keypad to all employees, and monitor the times that access there is used. That will give you some good information about who uses that door and when.
If usage drops off, and/or another “special rock” appears, announce the intent to add a camera to monitor that back door. They’ll eventually get the hint. And the camera might just be a good idea anyway.
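If you do log that keypad, even a trivial script will surface the pattern. Here’s a minimal sketch in Python; the log line format, door name, and badge IDs are hypothetical, not from any real access-control product:

```python
from collections import Counter
from datetime import datetime

# Hypothetical log lines: "2010-02-21 23:41:07,back_door,badge_1234"
def summarize_door_log(lines, evening_hour=19, morning_hour=6):
    """Count uses per badge and flag entries outside business hours."""
    per_badge = Counter()
    flagged = []
    for line in lines:
        stamp, door, badge = line.strip().split(",")
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
        per_badge[badge] += 1
        # Flag anything in the evening/overnight window.
        if when.hour >= evening_hour or when.hour < morning_hour:
            flagged.append((badge, stamp))
    return per_badge, flagged
```

A weekly glance at the flagged list tells you who is using the back door at 11 p.m., which is exactly the question the “special rock” was hiding.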
I’m sure you’ve noticed the usual plethora of “studies” about 2009 erupting from various security vendors. Some are good, some are barely concealed pitches for product.
Looking over about a dozen of these, I gleaned some significant facts that might be useful to both admins and auditors:
Social Networks have seen significant increases in malware.
Small to medium businesses are being heavily targeted with software written specifically to capture banking credentials (usernames and passwords), take screenshots, and even scan hard drives for sensitive information. These customized trojans are for sale on the dark web.
And my forensic peers are discussing the idea that there will be an increase in forensic examinations of mobile devices, given that banks are starting to release programs that allow purchases and bank access from mobile phones.
Is anyone besides me thinking that banking on mobile phones is a Really Bad idea?
It’s worthwhile to consider the printers, copiers and scanners (or all three together – multifunction devices) on your network. How many of your printers allow open access? Open ports? Can I telnet to your printers?
Why worry? Why bother? Well, if you google “printer hard drive,” you can see that hard drive sizes range from 32 megs to 80 gigabytes.
I read some years ago that hackers were using printer hard drives to store warez and other bad things, as well as using the hard drive as a “jumping off” site into the rest of the network. Your network.
For some nifty examples, you can visit Iron Geek’s page on the issue as it relates to HP printers. The info is a little dated (2006) but still quite good. And do note that printers and copiers have default passwords that are easily found on the Web.
If you think this is so NOT news, consider that printer vulnerabilities are still coming out. Yet how many admins think to update their printer firmware? If you’re like me, only when there’s a problem with a printer.
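Answering my own “Can I telnet to your printers?” question takes about a dozen lines with a raw socket. A hedged sketch (the port list reflects services commonly open on networked printers; only probe gear you’re authorized to test):

```python
import socket

# Ports commonly open on networked printers: telnet, web admin, JetDirect raw print.
PRINTER_PORTS = {23: "telnet", 80: "http", 9100: "jetdirect"}

def probe_printer(host, ports=PRINTER_PORTS, timeout=1.0):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = {}
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports[port] = name
    return open_ports
```

If 9100 or 23 answers from a guest VLAN, that printer needs an ACL in front of it, default password or not.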
Given that a printer/copier/scanner has that much storage on the network, what would be the kind of data it would be storing (until overwritten)? Consider the kinds of things that people print – reports, spreadsheets, confidential .PDFs, etc.
Sold or bought a used printer/copier/scanner recently? Did you clear the hard drive?
The confidential information on those hard drives might be medical information, financial information and/or credit card information. As a result, under regulatory laws, the company is liable.
80 gigabytes? Big OUCH.
Not long ago (needless to say I can’t mention time or client name) I was asked by a medium-sized business to investigate some problems they were having with spam, malware, and “weird stuff” on their network.
Their network contained at least 200 users spread out over multiple sites. I asked to speak with their network admin, and they said, “Oh, that’s Sally over in clerical. She’s part-time. We have a local company that comes in when we have any problems with hardware. They monitor our firewall, too. It’s too expensive to have a full-time person.”
Rule of Thumb (I forget how many I’ve got now): More than 50 users? Rent or hire a full-time administrator (and not from the clerical department, either). It’s not fair to the part-time employee, and no one is there to monitor what the IT company is or isn’t doing.
I asked for a network diagram, and they said, “Oh, we don’t have one. Do you really need it?”
Rule of Thumb number something-or-other: If you don’t have a network diagram, you don’t have your network. The hackers do.
I suggested some free tools for acquiring a network diagram, such as Spiceworks, which is nice for monitoring (you have to put up with some ads, but you can get rid of them for a fee) and Look-at-LAN, available for free at CNET, along with other free tools. They said they’d ask the IT company to do it.
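Even without a commercial tool, a first-cut diagram is just a host inventory plus Graphviz. A small sketch that emits DOT text from a list of hosts; the host addresses and descriptions are made up for illustration:

```python
def hosts_to_dot(hosts, network_name="office-lan"):
    """Render a flat host inventory as Graphviz DOT for a starter diagram."""
    lines = ["graph network {", f'  hub [label="{network_name}", shape=box];']
    for ip, description in hosts:
        node = ip.replace(".", "_")  # DOT identifiers can't contain dots
        lines.append(f'  n{node} [label="{ip}\\n{description}"];')
        lines.append(f"  hub -- n{node};")
    lines.append("}")
    return "\n".join(lines)
```

Feed the output to `dot -Tpng` and you have something to hand the auditor. It’s flat and crude, but it’s infinitely better than “Oh, we don’t have one.”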
At that point, I thought I ought to look at their server room. It was a good sign that they had one, and the door even locked. I went in, looked around. The part-time clerical person said that they had just moved from an older building and the IT company had moved their computers and server room/data center/storage closet over to the new building. It really was a nice room. No temperature monitoring, no fire alarm, and overhead water sprinklers, but a nice new room.
After looking at the equipment for a few minutes, I said, “So, where’s the firewall?” She didn’t know what a firewall looked like (bad sign). She called up the IT company, who said it was in the building, because they were getting reports from it.
At that point, I had a brainstorm. I asked, “Which building, exactly?”
A story appeared on Techdirt concerning a woman who has sued Yahoo because her name was linked to an erectile dysfunction drug in search results. How this happened the first time, we really don’t know, but now the dark humor begins.
Actually, this is her second lawsuit concerning the same issue (the first one was tossed). She can’t seem to understand why her name is now linked with the same drug across hundreds of web sites, and blames Yahoo, yet again.
Of course, she fails to realize that filing yet another suit, with her name linked (yet again!) to the same drug, has virtually enshrined her in the hall of fame for people who wish they could disappear. Given how long material stays on the Web (I can find posts I wrote 12 years ago), she can plan to be perpetually linked, thanks to the media exposure generated by her lawsuit.
Now, in truly bizarre Internet fashion, sleazy drug sites have been adding her name to advertisements for that same set of drugs, assuming (and rightly so) that the sites will come up when her name is entered in the search engine. The Yahoo search engine lists 290 results, 90% of which are drug/pharmacy/sleazy websites.
This is, of course, in addition to the paid ads for the same drugs that appear alongside the search engine results.
I can’t add to the torture by giving you her name, but you can Google it. Only Yahoo has the errant connection, which also demonstrates the error in their search algorithm.
Rule of Thumb: Whatever you say on the Web, STAYS on the Web. (Especially if you don’t want it to.)
From a security perspective, hackers can find the same sort of information on search engines and in cached web files.
OK, I admit it’s a pet peeve. And it’s certainly not going to be an annual report any more. But I can’t imagine how people outside the technical sector could possibly understand why we have so many different acronyms for the same thing. In the material below, there are four acronyms in four sentences (six, if you count the repeats). I’m ROTFL.
Enterprises will give enterprise rights management (ERM) software a second look as an enforcement option coupled with DLP.
The idea of two acronyms “coupling” makes me nervous. Will they have lower case acronyms as offspring?
Information Rights Management (IRM) is a term that applies to a technology which protects sensitive information from unauthorized access. It is sometimes referred to as E-DRM, Enterprise Digital Rights Management. Sensitive data and information such as patient records, personal tax or financial information in .PDF, .XLS, .DOC, .TXT etc., needs security.
It would be smart business for healthcare, legal, and any organization to incorporate DLP in the form of IRM now, before a breach occurs and data is lost.
The marketing stuff above is being pumped out by a “security expert” (names eliminated to protect the guilty) as an add-on alternative, or “enforcement option.” (Where did that come from?)
This same “expert” is promoting a particular product using Forrester’s 2010 predictions (which you can purchase for $499.00).
DLP, IRM, ERM – they all still have the same issues as other products that employ encryption:
Who keeps the keys? How many places are they stored? Is access to them logged?
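Whatever the acronym on the box, those three questions have concrete answers, and you can sketch what a good answer looks like in a few lines. A toy key store that logs every retrieval (the class and method names are my own invention, not from any vendor’s product):

```python
import logging
from datetime import datetime, timezone

class AuditedKeyStore:
    """Toy key store: one place keys live, and every access is recorded."""

    def __init__(self):
        self._keys = {}        # answers "how many places are they stored?" - one
        self._access_log = []  # answers "is access to them logged?" - yes

    def put(self, key_id, key_bytes):
        self._keys[key_id] = key_bytes

    def get(self, key_id, requester):
        # Record who asked for which key, and when, before handing it over.
        entry = (datetime.now(timezone.utc).isoformat(), requester, key_id)
        self._access_log.append(entry)
        logging.info("key access: %s by %s", key_id, requester)
        return self._keys[key_id]

    def accesses(self):
        return list(self._access_log)
```

If the DLP/IRM/ERM vendor can’t show you the real-product equivalent of that access log, keep your checkbook closed.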
We’ll be so busy deciphering marketing materials, we won’t even notice the data is gone.
I’m sure you’ve heard about the Chinese hack into Google, and there are some interesting goings-on behind the scenes to identify and fix the hack, deemed “very sophisticated.” It’s even been given a name: “Aurora.” Not only Google has been hit, but also Juniper, Adobe, and some 29 other firms.
Microsoft has been active on this issue, releasing an advisory, an out-of-cycle patch, and frequent updates to their security blogs, and this blog link has a patch to make sure DEP (Data Execution Prevention) is enabled on your system.
Information about this issue is still see-sawing back and forth as further data is released to the public. It definitely involves all versions of Internet Explorer, but especially IE6. If you’re still running IE6 on your systems, change it now. Microsoft has specifically reported that IE6 was used in the targeted attacks.
I came across an article on a sister TechTarget site for VMware. Its title immediately got my attention:
“How to steal a virtual machine and its data in 3 easy steps” by Eric Siebert, who has a VMware site of his own and has authored at least one book on VMware.
I have to sing his praises, because this article lays it all out in a very coherent package, and it is something every admin and auditor ought to think about when it comes to virtual servers. He makes the excellent point that it’s much easier to steal virtual data – making a copy of a virtual image is not logged by the console, so a savvy engineer could walk home with the data in his pocket. It’s a very educational read. Not to mention a little scary to think about.
My only (VERY) minor issue is that he seems to think the image with its data will fit on a USB drive – Gee, Eric, how big is that USB drive you’ve got? Mine only go up to 16 gigabytes!
I’ve been wondering for a while now about virtual machines. Most bad people try to get in through the hypervisor – the remote attack. Why do that when you can just copy the data from the inside?
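If the console won’t log copies for you, you can at least watch the disk files themselves. This won’t catch a copy in progress, but a periodic inventory of the datastore tells you what virtual disks exist and when they change, which is more than most shops have. A hedged sketch; the datastore path and file extensions are assumptions on my part:

```python
import hashlib
import os

def inventory_disks(datastore_dir, extensions=(".vmdk", ".vhd")):
    """Build a {path: (size, sha256)} inventory of virtual disk files."""
    inventory = {}
    for root, _dirs, files in os.walk(datastore_dir):
        for name in files:
            if not name.lower().endswith(extensions):
                continue
            path = os.path.join(root, name)
            digest = hashlib.sha256()
            with open(path, "rb") as fh:
                # Hash in 1 MB chunks so large images don't exhaust memory.
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    digest.update(chunk)
            inventory[path] = (os.path.getsize(path), digest.hexdigest())
    return inventory

def diff_inventories(old, new):
    """Report files that appeared, disappeared, or changed since the baseline."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(p for p in set(old) & set(new) if old[p] != new[p])
    return added, removed, changed
```

Run it from cron, diff against yesterday’s baseline, and a .vmdk that suddenly appears in someone’s home directory becomes a question you get to ask on Monday instead of a headline.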