Good information security requires…good information.
That’s why logs are so important, and why so many regulatory and industry directives require companies not only to gather logs but to monitor, read and analyze them.
By the same token, if we’re going to get log management right, we need to share our experiences and pain points with one another and with the vendors who want to make their log management products more responsive to our needs, so that we, in turn, will keep giving them money.
So, if you have not yet taken the fifth annual SANS Log Management Survey, please take a few minutes to do so. The survey will be up through January, and obviously, the more respondents SANS gets, the more reliable the results. The findings will be released at the SANS WhatWorks Log Management and Analysis Summit, to be held in Washington April 6-7.
The survey has evolved as organizations’ experience with log management has evolved, said Stephen Northcutt, SANS CEO. Compliance is now well established as a driver for developing and improving log management programs and deploying automated tools. Notably, the 2008 report showed that compliance was only the second highest reason for collecting log data, behind detection and analysis of security and performance incidents.
With this year’s survey, SANS wants to emphasize getting full value from log management for both security and operations.
“The biggest thing in the survey that’s new and different is looking for the ROI,” Northcutt said. “We’re trying to see what the biz case for this is; the compliance case is established. Two years ago you had to go to the CFO and say, look, I need 200,000 bucks. Here are the findings of the audit report. So, you spent the money and now you’re saying, ‘Gosh, what can I DO with this?'”
I’m not a big fan of states’ rights, which made a lot more sense in 1791 than they do today (Why should one state’s misdemeanor be another state’s felony? Why is a gay couple married in one state and unmarried when they drive over the state line?).
My 18-year-old son wonders why I vote Republican and sound so much like a Democrat. I guess it’s because I like standards but don’t like government spending a lot of money on what it thinks will improve people.
I also share Gary McGraw’s skepticism about Top 10 lists (see his column “Software [In]security: Top 11 Reasons Why Top 10 (or Top 25) Lists Don’t Work”).
So it’s hardly surprising that I have mixed feelings about New York state’s plan to use the freshly minted CWE/SANS Top 25 Dangerous Programming Errors list as a key component of its procurement requirements for software developers.
But while we can quibble over the value of this list or that list, what I do like is that it is incorporated as a component of the Application Security Procurement Language, drafted by New York CISO William Pelgrin and posted on the SANS website. The critical point is that if and when the procurement requirement is adopted, developers of custom software will be held accountable. They’ll be required to demonstrate that security is a core element of their application development lifecycle, from coding through final pen testing, if they want to do business with New York.
What gives me pause, though, is that we’re likely to see a cascade of initiatives in many states, a la the breach disclosure laws that followed California SB 1386 and what we will no doubt see in the wake of the new Nevada and Massachusetts data protection laws. I’d rather see federal standards that can be applied across state borders. The states step into the, ahem, breach, because the feds are slow and/or reluctant to act. California enacted 1386 more than five years ago, and something like 42 states, the District of Columbia and Puerto Rico have followed suit, while a variety of bills have been introduced, debated and left to wither in Congress.
Companies doing business with customers in more than one state (that is, almost everyone except Sam’s Hardware over on Sycamore Street, that Sam III runs since Sam, Jr. retired 20 years after he took over from Sam, Sr.) have had to develop policies and procedures that fit the most stringent of these laws, and they’re relatively straightforward.
What happens when we start to see a smorgasbord of data protection laws (consider that the Massachusetts law, which is to go into effect in May, is far more demanding than Nevada’s)? And secure software development? Now that’s complex. Will one state adopt the SANS guidelines and another, perhaps, insist on incorporating the secure development lifecycle directive being drafted by NIST? Will one state require developers to expunge the SANS 25, another the OWASP Top 10, and yet another an assortment from among the 700 or so errors that can leave code vulnerable?
So, applause to all who try to put teeth into security. Now if only it weren’t like pulling teeth to get everyone pulling together.
The messages trick users into giving up passwords, account numbers and other sensitive information. Sometimes the messages appear after users have logged into an online banking or other financial website, Trusteer said.
Trusteer CTO Amit Klein said the method makes phishing attacks more likely to be successful because they try to trick people after they have logged into a legitimate website. Klein said the major browser makers have been notified.
I can see how the phishing attack can easily trick people. Trusteer said the pop-up window sometimes asks the user to retype their username and password because the session has supposedly expired. How many times have you had that happen? It sometimes also asks users to complete a customer satisfaction survey or participate in a promotion. I typically stay away from those, and so should you.
Two researchers recently wrote a report outlining how phishers are failing to make a ton of money. The report, which we wrote about last week, said there were too many phishers, driving down the price cybercriminals pay for stolen information. There are varying opinions on this report, and some immediately doubt it because it came from Microsoft Research. More on that in another post.
One of the peculiar properties of the security research community is the reflexive reactions of some of its members to new work by other researchers. In most cases, researchers tend to compliment one another when they’ve produced something new. But there always seems to be a small subset of researchers who race to be the first one to point out that, regardless of the scope or originality of the work, it is: A. nothing new, B. not as severe as it looks, C. easily defended against, or D. all of the above.
The news this week about the MD5 SSL attack from Alex Sotirov, Jake Appelbaum and friends brought out the knives in a sadly predictable way. The team was very careful to point out at every opportunity that its work was based heavily on the previous work done on MD5 collisions by a group of Chinese researchers in 2004, and the further work done by several European researchers in 2007. (In fact, the team that produced the 2007 work, which showed the much stronger likelihood of MD5 collisions, also worked with Sotirov and Appelbaum.) The latest research simply extended the earlier work and took advantage of advances in computing power and technology to take it a couple of steps farther than the previous research could go. But for whatever reason, that wasn’t good enough for some people.
I’ve never really understood this impulse to knock down other people’s work to try to look smarter yourself. How does that follow? In other news, there are a number of really well-done and readable analyses of the MD5 attack out there, starting with Eric Rescorla’s. He lays out the attack in layman’s terms and describes exactly what new contributions Sotirov and his team made. Nate Lawson also wrote a very useful description of the attack:
The attack is interesting since they take advantage of more than one flaw in a CA. First, they find a CA that still uses MD5 for signing certs. MD5 has been broken for years, and no CA should have been doing this. Next, they prepared an innocent-looking cert request containing the “magic values” necessary to cause an MD5 collision. They were able to do this because of a second flaw. The CA in question used an incrementing serial number instead of a random one. Since the serial is part of the signed data, it is a cheap way to get some randomness. This would have thwarted this particular attack until a pre-image vulnerability was found in MD5. Don’t count on this for security! MD4 fell to a second pre-image attack a few years after the first collision attacks, and attacks only get better over time.
Helpful, cogent analysis of the problem and its mitigating factors without bravado and sniping. What a concept.
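Lawson’s point about serial numbers is worth making concrete. A minimal sketch (the function names and the 64-bit size are illustrative assumptions, not any real CA’s implementation) of the two issuance schemes he contrasts:

```python
import secrets

def next_sequential_serial(last_serial: int) -> int:
    # The flawed scheme: an attacker who buys one certificate can predict
    # the serial the CA will embed in the next one, which is what let the
    # researchers precompute the "magic values" for their MD5 collision.
    return last_serial + 1

def random_serial(bits: int = 64) -> int:
    # The mitigation Lawson describes: the serial number is part of the
    # signed data, so a cryptographically random value injects entropy
    # the attacker cannot control or predict ahead of time.
    return secrets.randbits(bits)
```

The difference is entirely about predictability: a collision attack requires the attacker to know the exact bytes the CA will sign before submitting the request, and a random serial denies them that knowledge (at least until, as Lawson warns, a stronger pre-image attack on MD5 arrives).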
When the researchers who produced the elegant MD5 attack I wrote about this morning realized the severity of what they had found, they took two highly unusual steps. First, they consulted with lawyers from the Electronic Frontier Foundation, describing their findings and voicing their concerns about the potential legal ramifications. The researchers were afraid that if the certificate authorities found out about their work and its implications for the security of their digital certificates, the CAs would move to stop their talk at the 25C3 conference today in Berlin, at the very least, and perhaps sue them for good measure.
Second, the group approached Microsoft and Mozilla, the two dominant browser vendors, and explained that they had a serious browser security issue they’d like to share. But first, they needed some assurances from the two vendors that they wouldn’t share what they heard with the CAs before the researchers were ready to announce their findings. So they asked Microsoft and Mozilla officials to sign non-disclosure agreements. It was a 180-degree reversal from the way that these things normally work.
In most cases, researchers who approach a vendor with a security problem are asked by the vendor to keep quiet about the vulnerability until a patch is ready. But in this instance, the researchers held the upper hand and chose not to even tell the vendors what the issue was until they had the signed NDAs in hand. Alex Sotirov, one of the researchers involved in the project, said that it took some negotiations to get Microsoft officials to agree to the NDA, but they eventually signed on. As did Mozilla.
During their presentation on Tuesday, the researchers said they were hopeful that other researchers would follow their lead. And Dino Dai Zovi, a researcher who was not part of the project but who was briefed on the team’s work, agreed. “A letter from a lawyer is usually enough to stop any researcher,” he said. “But showing up with your own lawyer changes the balance of power.”
In a move that has been anticipated for some time, Nokia on Monday said it has an agreement in place to sell its security business. What did come as a surprise was the identity of the buyer: Check Point. The two companies have been working together for years, with Nokia deploying Check Point’s software on its own security appliances. The terms of the agreement were not disclosed, though Nokia said it expects the deal to be finalized by the end of March.
“As a pioneer in security appliances, the Nokia security appliance business has been an important strategic partner for Check Point and has helped us achieve early leadership in the security appliance market,” said Gil Shwed, Chairman and CEO at Check Point. “Adding Nokia’s security appliance portfolio into Check Point’s broad range of security solutions is the natural conclusion of our long collaboration, and will assure a smooth path forward for our mutual customers.”
Check Point and Nokia have long provided customers with a range of best-of-breed security solutions, proven in high-performance, mission critical environments. Nokia’s security appliance business provides purpose-built security platforms optimized for Check Point Firewall, virtual private network (VPN) and unified threat management (UTM) software.
Nokia’s main focus for years has been its mobile handset business, and its security unit has always been something of an odd fit. It’s an enterprise business in the midst of a company that does most of its work selling consumer handsets. Now, with Check Point taking the reins, Nokia will be free to focus on that business, while Check Point can bring the appliances in-house and have an extra revenue stream.
Several undersea cables in the Mediterranean Sea that carry the bulk of Internet traffic between Asia and Europe have been cut, resulting in a massive Internet outage in Egypt and problems in other countries. Early reports are speculating that the cut, which happened Friday morning, may have been the result of an anchor drop. A report on Fibresystems.org said the three cable cuts happened separately, but within a few minutes of one another. The cuts seem to be in the link between Sicily and Tunisia in the Mediterranean.
France Telecom observed today that 3 major underwater cables were cut: “Sea Me We 4” at 7:28am, “Sea Me We 3” at 7:33am and FLAG at 8:06am. The causes of the cuts, which are located in the Mediterranean between Sicily and Tunisia, on sections linking Sicily to Egypt, remain unclear.
Most of the business-to-business traffic between Europe and Asia is being rerouted through the USA. Traffic from Europe to Algeria and Tunisia is not affected, but traffic from Europe to the Near East and Asia is interrupted to a greater or lesser extent.
There is apparently a work crew on the way to fix the cuts, but it likely will be several days before the repairs are complete.
The list of things to worry about with the soon-to-be-patched MS08-078 XML data binding vulnerability is getting longer by the minute. The researchers at McAfee’s AVERT Labs report that they have been seeing exploits using Word documents to download and install malicious ActiveX controls on user machines.
Upon opening the Word document, the embedded ActiveX control with the following classid is instantiated and executed.
This control stores configuration data for the policy setting Microsoft Scriptlet Component.
The control then makes a request to the Web page hosting the IE 7 exploit. The charm with this approach is that the exploit is downloaded and run without the knowledge or permission of the user. To the unsuspecting user, it will just appear as yet another normal .doc file.
Not good news. Most of the other attacks that have been seen against the vulnerability have been of the drive-by download variety. But this puts things in a different light. The emergency patch for the MS08-078 vulnerability is due later today, and this new attack vector makes applying the fix an even higher priority.
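For defenders triaging suspicious documents while waiting on the patch, the kind of check McAfee describes can be approximated by scanning a file’s raw bytes for a known-bad CLSID. This is only a rough sketch: the CLSID below is a placeholder (the actual value is in McAfee’s advisory, not quoted here), and a real tool would parse the OLE streams rather than scan the whole file.

```python
import uuid

def clsid_to_guid_bytes(clsid: str) -> bytes:
    # Windows stores a CLSID on disk as a mixed-endian GUID: the first
    # three fields are little-endian, the final eight bytes are in order.
    # uuid's bytes_le attribute produces exactly that on-disk layout.
    return uuid.UUID(clsid.strip("{}")).bytes_le

def doc_references_clsid(path: str, clsid: str) -> bool:
    # Naive triage check: look for the binary GUID anywhere in the raw
    # bytes of the .doc file. Sufficient for a quick scan, not a parser.
    target = clsid_to_guid_bytes(clsid)
    with open(path, "rb") as f:
        return target in f.read()

# Usage (placeholder CLSID, for illustration only):
# doc_references_clsid("suspect.doc", "{01234567-89ab-cdef-0123-456789abcdef}")
```

Byte-scanning will produce false positives and misses compressed or obfuscated embeddings, but as a first-pass filter on inbound mail attachments it is cheap and easy to deploy.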
Microsoft on Wednesday will release an emergency out-of-band patch for the XML handling flaw in Internet Explorer that has been the target of malware attacks for the last week or more. This is the second time in the last few months that the company has released a patch outside of its monthly scheduled update cycle. Microsoft issued a security bulletin about the vulnerability last week and later updated it to inform customers that all supported versions of IE are vulnerable to the attack, not just IE 7.
The patch will be rated critical, as you’d expect from an emergency fix, and Microsoft is planning to hold a webcast tomorrow at 1 p.m. PST to explain the vulnerability, the attacks and the fix. Microsoft also released an emergency patch for the MS08-067 RPC vulnerability in October. In that case, just as in the case of the IE XML flaw, Microsoft and other security companies had warned that there were targeted attacks being used against the vulnerability.
The recent release of the “Securing Cyberspace for the 44th President” report spawned a flood of analysis and criticism, and much of it was positive and complimentary. I’ve written before about the idea behind this report and the fact that many, if not most, of the recommendations in it can also be found in the National Strategy to Secure Cyber Space, which was released nearly six years ago. That document has been largely ignored and we have all been paying the price in the interim. The federal government’s virtual abandonment of cybersecurity policy in the last eight years has left all of us more vulnerable, and will end up costing the government, and taxpayers, far more money in the long term.
In reading the various analyses of the report, I found that many people were commending the commission for suggestions that either have failed in the past or have little chance of working now. I ran across Steve Bellovin’s blog post on the report, and it came as no surprise that his analysis was right on the money. Bellovin’s as smart as they come, and it’s worth the time to read through his entire post on the report, but in the meantime, here are a few key points:
The analysis of the threat environment is, in my opinion, superb; I don’t think I’ve seen it explicated better. Briefly, the U.S. is facing threats at all levels, from individual cybercriminals to actions perpetrated by nation-states. The report pulls no punches (p. 11):
America’s failure to protect cyberspace is one of the most urgent national security problems facing the new administration that will take office in January 2009. It is, like Ultra and Enigma, a battle fought mainly in the shadows. It is a battle we are losing.
That’s it exactly. In fact, it’s a battle we’re not even fighting right now.
The most important technical point in this report, in my opinion, is its realization that one cannot achieve cybersecurity solely by protecting individual components: “There is no way to determine what happens when NIAP-reviewed products are all combined into a composite IT system” (p. 58). Quite right, and too little appreciated; security is a systems property. The report also notes that “security is, in fact, part of the entire design-and-build process”.
It should be, but that hasn’t been the case in many systems for far too long. Bellovin then skewers what has been the dominant federal strategy for remedying this problem:
The discussion of using Federal market powers to “remedy the lack of demand for secure protocols” is too terse, perhaps by intent. As I read that section (p. 58), it is calling for BGP and DNS security. These are indeed important, and were called out by name in the 2002 National Strategy to Secure Cyberspace. However, I fear that simply saying that the Federal government should only buy Internet services from ISPs that support these will do too little. DNSSEC to protect .gov and .mil does not require ISP involvement; in fact, the process is already underway within the government itself. Secured BGP is another matter; that can only be done by ISPs. However, another recent Federal cybersecurity initiative — the Trusted Internet Connection program — has ironically reduced the potential for impact by limiting the government to a very small number of links to ISPs. Furthermore, given how many vital government dealings are with the consumer and private sectors, and given that secured BGP doesn’t work very well without widespread adoption, U.S. cybersecurity really needs mass adoption. This is a clear case where regulation is necessary; furthermore, it must be done in conjunction with other governments.
Bellovin also criticizes the report for calling on the Obama administration to protect online privacy without providing any guidance as to what that means or how to do it. But he leaves the best for last: the omission of any mention of software security and its cascading effect on system and network security.
The buggy software issue is also the problem with the discussion of acquisitions and regulation (p. 55). There are certainly some things that regulations can mandate, such as default secure configurations. Given how long the technical security community has called for such things, it is shameful that vendors still haven’t listened. But what else should be done to ensure that “providers of IT products and systems are accountable and … certify that they have adhered to security and configuration guidelines?” Will we end up with more meaningless checklists demanding antivirus software on machines that shouldn’t need it? Of course, I can’t propose better wording. Quite simply, we don’t know what makes a system secure unless it’s been designed for security from the start. It is quite clear to me that today’s systems are not secure and cannot be made secure.