We security wonks always seem to be put into a position of having to say “no.” That makes us unpopular with the I’m-not-hurting-anything crowd who insist on checking their webmail, IMing their friends, and running assorted and sundry downloaded and web-based applications (but only on their time, of course). Maybe they’re right on some level; many of those things are benign and don’t represent security threats. But there are also potentially dangerous applications such as peer-to-peer (P2P) file sharing that can expose your network to hackers via an open P2P connection (See P2P Leads to Major Leak at Citigroup Unit and Pfizer Falls Victim to P2P Hack). What’s one to do?
Start saying “Yes.” You read that right. Look at it from the user’s standpoint: A blanket prohibition against anything and everything usually foments rebellion on the part of some and they’ll do whatever they want to do with wild abandon. Your network is less secure as a result. But, if you develop policies that allow webmail, online shopping, and IM instead of blocking them at the gateway, while prohibiting the potentially dangerous stuff, you just might find the users starting to ask you if it’s OK to do certain things.
And they just might listen to you if you say “No.”
This has to be one of the most evergreen security topics to come along; no matter how much anyone writes about the dangers of clicking on links or opening attachments in unsolicited email, people continue to do it. SANS NewsBites, March 25, 2008, Vol. 10, Num. 24, begins with this statement:
The Excel story is number two in Top of the News this week because of the critical lesson it teaches: When you see your anti-virus package scanning a Word or Excel file, the odds are VERY high that it won’t find any of the important new vulnerabilities nation states and rich criminals are using to get past the most sophisticated defenses. Don’t open email attachments unless you were expecting them. [Emphasis added] Send a note back and ask the person to embed the text in a simple email. This matters to your career. The people who break this rule will be the reason their organization’s data are stolen and they won’t be able to hide.
(They’re referring to a months-old Excel vulnerability for which the exploit code has just been widely released. For more information on that, you can check out this ComputerWorld article.)
I remember, years ago, a client got a nasty malware infection that resulted in my finally resorting to a full wipe/reload of the OS and all her data. I had solved a couple of minor adware issues for her in the past and, as is my custom, gave her my standard admonition, “NEVER, EVER click on anything if you don’t know where it came from.”
“But I clicked on CANCEL!” she replied. She just couldn’t get her head wrapped around the idea that no means yes, yes means yes, cancel means yes, exit means yes, ANY click means yes.
I’m thankful that most of my clients now either call me or drop me an email if they see a message or pop-up they don’t understand, and malware-related emergencies are way down. But they’re not completely gone. Occasionally, I still get that one dull client who calls to say they clicked on something and now they’ve got popups all over their screen.
All I can say (think) is, “You clicked? Really? Are you nuts?”
Being a Ham Radio operator, I’ve always understood the risk inherent in using radio signals to transmit sensitive information: anyone with the right equipment can receive and record anything transmitted over the air. These days, I’m noticing a lot of people in various offices walking around with these cute wireless headsets hooked up to their office phones.
Ever wondered what kind of security risk these things might pose to your company? Yeah, me too. So did the folks at Secure Network Technologies, as evidenced by their article “Hacking Wireless Headsets,” which appeared Jan. 22, 2008 at DarkReading.com, a site that provides in-depth security news and analysis. Here’s an excerpt:
To perform the work, we purchased a commercially available radio scanner. These devices are available at any local electronics retailer at prices ranging from $80 to several thousand dollars. We chose a scanner capable of monitoring the 900-928 MHz and 1.2 GHz ranges, which is where many of the popular hands-free headsets operate.
We took a position across the street from the facility and started up the scanner. Within seconds of turning on the device we were able to listen to conversations that appeared to be coming from our client’s employees. Several of these conversations discussed the business in detail, as well as very sensitive topics. After some careful listening, we determined that the conversations were indeed coming from our customer.
See the nightmare coming? With the right information you can then use social engineering techniques to get your tentacles very deep into the company. And that’s exactly what they did:
Our plan was to assume an identity of an employee who had never been to the office we were testing. Using that identity, we would enter the building, commandeer a place to sit and work, then see how long we could stay inside the building. After zeroing in on a particular employee, we gathered as much intelligence on him as we could. To prepare for the entry into the facility, we printed a business card with our assumed identity. I put on my best suit, and then went to work.
In all, they spent three days “working” in the company, gaining access to all sorts of information, technology, and resources. Not only that, but they also discovered that the headsets acted as bugging devices; even when disconnected, the headsets continued to transmit. The impersonators were able to listen in on conversations carried on by the wearers.
Be afraid. Be very afraid. Seriously, read the article, and if your office uses these things, do your own tests to find out where you’re leaking. Then, plug the leaks.
One of the clients I service has information that falls under HIPAA. Prior to last week, all of the data was stored on a server located behind a strong firewall in a building with good physical security. Last week, however, this organization decided to deploy laptops for their field operatives. Major security problem. Full-drive encryption was my first thought. The good thing is that there was nothing on the laptops except for the OS–they were brand new. Nobody had seen them except me. I was able to encrypt the hard drives before any data had been written, thus ensuring that no remnants of unencrypted data exist. Every future write to the hard drive will be encrypted.
If you think about it, this is the safest way to do full drive encryption. But what if you want to re-deploy equipment that has had data on it? In this case, you’ll want to first wipe the drive using a good tool like Darik’s Boot and Nuke (DBAN) or CMRR’s Secure Erase, depending on the sensitivity of the data. DBAN will let you write multiple passes of pseudorandom data, which is usually “good enough.” Then, reinstall your OS of choice and run your full drive encryption program, assigning a passphrase at least 20 characters long (mine’s 45). All this working of the drive should sufficiently scramble any data remnants.
My company serves as the IT department for several medical, legal, social service, and banking organizations in our area. I don’t have to tell you that every one of these organizations deals with information that falls under various government data security and privacy acts. Every one of these organizations depends on and expects us to put in place measures to protect their data. In other words, if they suffer a breach, they’re going to assign responsibility to us on some level. So, when I decommission a server or PC, I take steps to make sure that no one is going to be able to read anything off the hard drives. Call me paranoid, but consider this: seven in 10 secondhand hard drives still have data. What’s one to do?
It’s well known that simply wiping out partitions and re-formatting drives doesn’t erase anything. It’s equally well known that overwriting every sector with pseudo-random data is considered a secure method of erasure. I give you a two-step approach that may be overkill, but is certainly a procedure that any court would consider a mitigating factor if I or my company were accused of negligence. (I work in a Microsoft environment, so that is the context here.)
Step one is to install TrueCrypt 5 (my hands-down favorite) or another full-drive encryption program and perform the steps for full-drive encryption; this effectively writes pseudo-random noise to every sector of the hard drive. (Don’t fret about the 20-character password TrueCrypt warns you about–just type “password.” You’re not worried about logon security; you just want to encrypt the hard drive.) This one-pass encryption is probably sufficient for a home PC hard drive, but not for anything else.
Step two is to run a disk erase program that overwrites every sector with pseudo-random bits. I use Darik’s Boot and Nuke (DBAN), without question a best-of-breed open source program. One pass auto-wipe should be sufficient since you’re overwriting what already amounts to pseudo-random noise (created by TrueCrypt) on the hard disk.
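For the curious, the “overwrite every sector with pseudo-random bits” idea in step two can be sketched in a few lines of Python. This is strictly an illustration of the concept, not a substitute for DBAN: a vetted boot-disk tool handles things this sketch ignores, like remapped bad sectors, wear leveling, and host-protected areas. The function name and parameters are my own.

```python
import os

CHUNK = 1024 * 1024  # write in 1 MiB chunks

def overwrite_with_noise(path, size, passes=1):
    """Overwrite `size` bytes of the file or device at `path`
    with pseudo-random data, `passes` times."""
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(CHUNK, remaining)
                f.write(os.urandom(n))  # cryptographic-quality noise
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push the data past the OS cache
```

Run against a raw device node (with administrator rights), this is essentially what a one-pass wipe does; the multiple-pass option just repeats the loop.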
After this treatment, any adversary would find it virtually impossible to recover anything usable off of the drive. Give it away, sell it on eBay, do whatever.
And have a good night’s sleep.
Some of these tips may very well be “everybody knows” types of things, but I find that these are often the things that get overlooked. That’s why I’m publishing them as computer security maxims. Take a look at the recent furor surrounding the cold boot attack against disk encryption. That was an “everybody knows,” too.
I get questions all the time over at Ask the Geek about using a mail client’s message preview feature. Opinions vary, of course, but for this geek, it’s a bad idea. In order to preview a message, it has to be opened and rendered by the HTML engine. Think about how a PC can be infected by a malicious web site and you’ll immediately understand the danger: the same malicious scripts can exist in HTML messages. It’s a serious security risk.
Security Maxim #6: Always disable any message preview or auto-open features in your e-mail client. View messages as text-only until you know they are safe.
A while back, I wrote an article entitled “Will You Be Used As a Weapon Against Your Own Country?” The flip side of that is being used as a weapon for your own country. It seems the United States Air Force is looking for a few good cyber warriors. From The Register:
In a document [PDF here] released this week, the US Air Force is laying out plans for a new cyber command, which is scheduled to become operational in October. It tries to make the case that the ability to wage war and parry attacks over electronic networks is crucial to maintaining national security.
The document does a good job of making the case:
Mastery of cyberspace is essential to America’s national security. Controlling cyberspace is the prerequisite to effective operations across all strategic and operational domains—securing freedom from attack and freedom to attack.
You have to bear in mind how the Air Force defines cyberspace:
Cyberspace encompasses the electromagnetic spectrum with its distinctive physical properties and those of the man-made electronic systems created to operate across the domain.
This would encompass the entire radio spectrum as well as “wired” cyberspace. The Internet, of course, also relies on wireless technology. And much of military command and control relies on radio communications, so the concept makes sense. Communications must be maintained at all costs. This involves mastering many electronic technologies and even, perhaps, physical signaling methods for use in the event an electromagnetic bomb disrupts electronic transmissions.
The Air Force Cyber Command is certainly no place for the technologically challenged, but for those of us who love and understand technology, it could be a great career.
Geek warriors: now that’s one for the books.
True computer and network security takes a lot of work to implement and a lot of work to use. Despite training (if any) and admonitions from their supervisors and the IT department, the lazy create simple, easily guessable passwords, write them down, and post them on sticky notes right in their cubicles or on their monitors. Even though we IT folks enforce password complexity policies, the effort is wasted if users post their passwords in plain sight.
Maybe I’m dreaming, but I think that even the lazy can take the time to come up with serious passwords and take measures to make them memorable and/or write them down in a secure way. My article on generating secure passwords describes a method of doing this; it takes a bit of work at first, but once implemented, it’s a simple system that even the lazy can appreciate. (You may guess that I’m no fan of password managers or stored passwords and your guess would be right.)
If more of us IT geeks put more work into developing simple password generation and mnemonic systems for the lazy users, perhaps our networks would be more secure; perhaps not, but it can’t hurt now, can it?
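To give a flavor of what such a system can look like, here is a minimal diceware-style passphrase generator in Python. This is a generic sketch, not the method from my article: the word list here is a tiny placeholder, and a real system would draw from a large published list (e.g. a 7,776-word list gives roughly 12.9 bits of entropy per word, so four words is about 51 bits).

```python
import secrets

# Placeholder word list -- purely illustrative. A real system would
# use a large published list; with only 12 words, four words give a
# mere ~14 bits of entropy, far too little for a real password.
WORDS = ["copper", "violet", "anchor", "meadow", "sprocket", "lantern",
         "glacier", "nimble", "orbit", "quartz", "rustic", "thistle"]

def passphrase(n_words=4, sep="-"):
    """Join randomly chosen words into a memorable passphrase,
    using the OS's cryptographic random source via `secrets`."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # something like "meadow-quartz-copper-orbit"
```

The point is that randomly chosen words are both memorable and, with a big enough list, plenty strong; the lazy user only has to remember a short phrase instead of line noise.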
According to researchers at Princeton University, it’s possible to recover encryption keys from memory for some time after a computer is powered down. Their paper, “Lest We Remember: Cold Boot Attacks on Encryption Keys,” begins with this abstract:
Contrary to popular assumption, DRAMs used in most modern computers retain their contents for seconds to minutes after power is lost, even at operating temperatures and even if removed from a motherboard. Although DRAMs become less reliable when they are not refreshed, they are not immediately erased, and their contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access. We use cold reboots to mount attacks on popular disk encryption systems — BitLocker, FileVault, dm-crypt, and TrueCrypt — using no special devices or materials. We experimentally characterize the extent and predictability of memory remanence and report that remanence times can be increased dramatically with simple techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for partially mitigating these risks, we know of no simple remedy that would eliminate them.
Check out the researchers’ video demo of the attack:
Video: http://www.youtube.com/v/JDaicPIgn9U
While I don’t consider this a great concern for the average user, it’s a real problem in terms of corporate espionage and national security.
Aside from simply never using standby modes or screen locking, possible solutions would be for encryption programs to require two-factor authentication or for operating systems to securely erase memory as part of the shutdown routine. This article at SANS Internet Storm Center gives further insight into the issue.
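To get an intuition for the “finding cryptographic keys in memory images” part, here is a toy Python sketch of my own, not the researchers’ algorithm (their tools match AES key-schedule structure, which is far more precise): it simply scans a memory image for 16-byte windows whose byte entropy is near the maximum, a crude heuristic for key material sitting among mostly structured or zeroed data.

```python
import math
from collections import Counter

def entropy(block):
    """Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(block)
    total = len(block)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def high_entropy_regions(image, window=16, step=16, threshold=3.8):
    """Yield offsets of windows whose entropy looks key-like.
    For a 16-byte window the maximum possible entropy is 4.0
    bits/byte (all bytes distinct), so 3.8 is 'nearly random'."""
    for off in range(0, len(image) - window + 1, step):
        if entropy(image[off:off + window]) >= threshold:
            yield off
```

An entropy scan like this throws plenty of false positives (compressed data looks random too); the paper’s approach of verifying the mathematical relationships within an expanded AES key schedule is what makes real key recovery practical.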
Sometimes, it’s a good thing to take a breather from the routine, to venture off into something more fun than the serious day-to-day concerns of network and computer security. One of my interests is cryptography, especially its history, and I love to play around with cryptograms in the daily newspaper, even though they’re just simple substitution ciphers (though there are some puzzle books out there that use polyalphabetic and transposition ciphers).
There’s no question that computers have taken cryptography well out of the realm of human-generated codes and ciphers. Done properly, modern encryption systems produce output that appears to be nothing more than random noise to a human–and no human will ever be able to break those ciphertexts without the help of powerful computers. Yet, there are human-generated ciphers that haven’t been cracked. One of those is the D’Agapeyeff cipher, which appears as “…a cryptogram upon which the reader is invited to test his skill” in the first edition of “Codes & Ciphers,” written by Alexander D’Agapeyeff and published by Oxford University Press in April 1939.
The book is an elementary text on classic encryption methods, and the cryptogram appears on the final page of the final chapter, which details methods of decrypting the various types of ciphers. Here’s the cryptogram as it appears in the book (it was omitted from later editions for reasons unknown):
75628 28591 62916 48164 91748 58464 74748 28483 81638 18174
74826 26475 83828 49175 74658 37575 75936 36565 81638 17585
75756 46282 92857 46382 75748 38165 81848 56485 64858 56382
72628 36281 81728 16463 75828 16483 63828 58163 63630 47481
91918 46385 84656 48565 62946 26285 91859 17491 72756 46575
71658 36264 74818 28462 82649 18193 65626 48484 91838 57491
81657 27483 83858 28364 62726 26562 83759 27263 82827 27283
82858 47582 81837 28462 82837 58164 75748 58162 92000
I assumed (correctly, I think–see this article) that each two-digit pair represents one letter and that this is some sort of simple substitution cipher. I divided the cryptogram thus, omitting the three trailing zeros that are obviously nulls:
75 62 82 85 91 62 91 64 81 64 91 74 85 84 64 74 74 82 84 83 81 63 81 81 74
74 82 62 64 75 83 82 84 91 75 74 65 83 75 75 75 93 63 65 65 81 63 81 75 85
75 75 64 62 82 92 85 74 63 82 75 74 83 81 65 81 84 85 64 85 64 85 85 63 82
72 62 83 62 81 81 72 81 64 63 75 82 81 64 83 63 82 85 81 63 63 63 04 74 81
91 91 84 63 85 84 65 64 85 65 62 94 62 62 85 91 85 91 74 91 72 75 64 65 75
71 65 83 62 64 74 81 82 84 62 82 64 91 81 93 65 62 64 84 84 91 83 85 74 91
81 65 72 74 83 83 85 82 83 64 62 72 62 65 62 83 75 92 72 63 82 82 72 72 83
82 85 84 75 82 81 83 72 84 62 82 83 75 81 64 75 74 85 81 62 92
You can see that (with the lone exception of the stray “04”) no pair begins with a digit less than six and none ends with a digit greater than five. This suggests a matrix like this:
   1 2 3 4 5
6  a b c d e
Using this hypothetical grid, 61 is “a,” 65 is “e,” etc. That’s as far as I’ve managed to go.
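If you’d like a head start on the analysis, a short Python script can split the digit stream into pairs and tally the frequencies (the digit groups below are transcribed from the cryptogram above):

```python
from collections import Counter

# The cryptogram's five-digit groups, as printed in the book.
GROUPS = """
75628 28591 62916 48164 91748 58464 74748 28483 81638 18174
74826 26475 83828 49175 74658 37575 75936 36565 81638 17585
75756 46282 92857 46382 75748 38165 81848 56485 64858 56382
72628 36281 81728 16463 75828 16483 63828 58163 63630 47481
91918 46385 84656 48565 62946 26285 91859 17491 72756 46575
71658 36264 74818 28462 82649 18193 65626 48484 91838 57491
81657 27483 83858 28364 62726 26562 83759 27263 82827 27283
82858 47582 81837 28462 82837 58164 75748 58162 92000
"""

digits = "".join(GROUPS.split())
digits = digits[:-3]  # drop the three trailing zeros (nulls)

# Split into two-digit pairs and count how often each occurs.
pairs = [digits[i:i + 2] for i in range(0, len(digits), 2)]
freq = Counter(pairs)

print(len(pairs), "pairs,", len(freq), "distinct")
print("Most common:", freq.most_common(5))
```

A frequency table like this is the natural first step for any substitution cipher: if the pairs really do map one-to-one to letters, the most common pairs should line up with the most common letters of English plaintext.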
Anyone else like to play with this?