Security Bytes


May 9, 2012  5:59 PM

Organizations lagging on cloud security training, survey shows

Marcia Savage

Symantec recently released some interesting findings from a survey the company conducted with the Cloud Security Alliance at the CSA Summit in February. The survey went beyond the usual sorts of basic questions to delve into organizations’ knowledge of cloud security. The results — albeit from a small sample size (128 respondents) — were a bit curious.

While 63% rated their cloud security efforts as good, 58% said their staff isn’t well prepared to secure their use of public cloud services. And although 68% said they think cloud security training is important to their organizations’ ability to use public cloud services, fewer than half (48%) planned to attend cloud security training over the next year. Eighty-six percent of respondents said protecting their organizations’ data was the top factor driving them to such training.

In a blog post, Dave Elliott, senior product marketing manager of global cloud marketing at Symantec, summarized the survey findings:

“In short, what this survey reveals is that it’s important to have your own security for the cloud but that IT staff are not yet well prepared to secure the cloud.”

He added, “Cloud security needs leadership, and it requires standardized training and skills that will enable IT staff to confidently move into the cloud.”

Now, Symantec has a vested interest in promoting cloud security training – it’s partnered with the CSA to offer training for the CSA’s Certificate of Cloud Security Knowledge (CCSK). But if organizations don’t feel prepared to secure their cloud computing deployments, it’s a little strange that more aren’t seeking out cloud security training for their staff.

I recall seeing a discussion on LinkedIn a few months ago in which security pros debated the value of the CCSK. Some noted that employers don’t recognize the relatively new certification. That will probably change sooner rather than later, though, as cloud services become more prevalent in the enterprise.

Interestingly, 56% of respondents to the Symantec-CSA survey said advancing their careers was a major factor in their decision to attend cloud security training.

May 9, 2012  4:48 PM

Windows exploits: Data finds Windows Vista infections outpace Windows XP

Robert Westervelt

When Microsoft issued version 12 of its Security Intelligence Report (.pdf) last month, its marketing machine had one message it wanted journalists to communicate to businesses: Conficker worm infections are a serious concern.

The messaging about Conficker was extremely strong. Prior to a briefing with a Microsoft executive, reporters were given a slide deck largely devoid of information except for data about Conficker; Microsoft’s 126-page report had been boiled down to 16 slides. Microsoft proclaimed Conficker “the No. 1 threat facing businesses over the past 2.5 years.” It was “detected on 1.7 million machines in the fourth quarter of 2011;” it was “detected almost 220 million times since 2009;” and quarterly detections have increased 225% since 2009, Microsoft said.

It sounds alarming, but that’s just marketing at its worst.

Conficker has no payload, and no cybercriminals are controlling it. The worm was designed to spread quickly and establish the infrastructure for a botnet: once installed on an infected machine, it opens connections to receive instructions from a remote server. But that function has been neutralized by the Conficker Working Group, which reverse-engineered the worm’s domain-generation algorithm and preemptively registers or blocks the domains it contacts, cutting infected machines off from instructions.
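To illustrate the mechanism (this is a toy example, not Conficker’s actual algorithm), here is a minimal Python sketch of a date-seeded domain-generation algorithm; once defenders reverse such an algorithm, they can compute tomorrow’s rendezvous domains and register or block them first. The seed, domain count and domain suffix are all hypothetical.

    import hashlib
    from datetime import date, timedelta

    def generate_domains(day, count=5, seed=b"toy-seed"):
        # Derive a deterministic list of rendezvous domains from the date.
        # Defenders who know the algorithm can compute the same list in advance.
        domains = []
        for i in range(count):
            digest = hashlib.md5(seed + day.isoformat().encode() + bytes([i])).hexdigest()
            domains.append(digest[:10] + ".example.com")
        return domains

    # Pre-compute tomorrow's domains so they can be registered or sinkholed,
    # leaving infected machines with nowhere to fetch instructions from.
    print(generate_domains(date.today() + timedelta(days=1)))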

If Conficker isn’t a serious threat, what is? Here are a few data points to consider from the Microsoft SIR that may be more important than Microsoft’s Conficker message:

Windows exploits rise significantly: Operating system exploits, specifically those targeting Microsoft Windows, doubled in 2011.

Despite a security update in August 2010 addressing a publicly disclosed vulnerability in Windows Shell, attackers have continued to successfully target the flaw using malicious shortcut files. Exploits against that vulnerability and several others detected by Microsoft increased from 400,000 in the first quarter of 2011 to more than 800,000 in the fourth quarter. The statistics point to the Ramnit worm as the culprit targeting the flaw; it was recently detected morphing into financial malware capable of draining bank accounts.

The other targeted Microsoft Windows flaw was a vulnerability in the Windows Help and Support Center that can be exploited via a drive-by attack. It was repaired in a security update issued in July 2010.

Windows Vista infection rate higher than Windows XP: The infection rate for 32- and 64-bit editions of Windows Vista SP1 and the 64-bit edition of Windows Vista SP2 outpaced Windows XP SP3. Microsoft says attackers are targeting the newer platforms because companies are migrating to them. Infection rates for the 64-bit editions of Windows Vista and Windows 7 have increased since the first half of 2011, Microsoft said.

Microsoft said the increase is also due to detection signatures it added to its Malicious Software Removal Tool for several malware families in the second half of 2011. “Detections of these families increased significantly on all of the supported platforms after MSRT coverage was added,” the company said in its report. In addition, a security update addressing the Windows Autorun feature in Windows was issued last year and was likely a major factor in driving down the infection rate in Windows XP, the software maker said.

Java exploits are out of control: Java, which is platform independent, has no doubt become a favorite attack vector for cybercriminals. Combined, the top six Java exploits represented millions of unique infections, according to the Microsoft SIR. Exploits delivered through HTML or JavaScript skyrocketed in the second half of 2011, and a single Sun Java Runtime vulnerability was responsible for 1.4 million infections in the fourth quarter of 2011. Infections exploiting another lone Java flaw, which uses a MIDI file with a malicious MixerSequencer object, also exploded in the fourth quarter. Most of the activity is driven by the Black Hole exploit kit.

Adobe Reader, Acrobat attacks: While not out of control, attacks on Adobe Reader and Acrobat remain a favorite method of cybercriminals. “Exploits that affect Adobe Reader and Adobe Acrobat accounted for most document format exploits detected throughout the last four quarters,” the report said. There were nearly 1 million of them.


May 3, 2012  12:04 PM

Creativity makes information security awareness training stick

Jane Wright

It often seems security pros place great expectations on users, then are amazed when users fall for an obvious security trap or common social engineering attack. But instead of being amazed, the more appropriate response may be to recognize that traditional information security awareness training programs often don’t work.

According to Bob Rudis, director of enterprise security at Boston-based Liberty Mutual Group, too many companies rely on the computer-based security training courses that each employee must complete once a year to meet compliance requirements. Speaking at the Source Boston conference last month, Rudis shared some more creative ideas he has used to elevate security awareness and reduce security incidents at his company.

For example, Rudis’ team created some simple Flash-based game applications for employees to play. Players win the games by making correct security choices. Even though the games were voluntary, about 25% of Liberty Mutual employees played each game at least once.

For companies that don’t have the budget to create games, Rudis offered cheap, outside-the-box security awareness ideas. For example, consider your computer-based training (CBT), which probably contains slides showing stock photos of people working at computers. Rather than using stock images, Rudis suggested photographing your company’s own employees: one of your IT people scratching their head and looking puzzled, say, or a help desk worker looking tired but triumphant. Seeing actual colleagues helps users feel more connected to the training material, making them more likely to remember what they’ve learned. Plus, it will make stars of your staff – an added benefit.

As a security manager, you are competing with many other demands for users’ attention, from their own job responsibilities to Facebook and Pinterest and Angry Birds. Making your security lessons visually compelling and a little more fun may go a long way toward ensuring security awareness messages stick in users’ minds.


May 2, 2012  3:38 PM

Virtualization security best practices in wake of ESX hypervisor code leak

Marcia Savage

As security pros wait for more details about the VMware ESX hypervisor source code leak, should they be panicking?

Well, no; not yet, anyway. Without knowing exactly what source code was leaked, it’s hard to know the extent of the threat, security experts have said. However, the answer may come soon — there are rumors that hackers will release more source code on Saturday.

Until then, virtualization security experts are offering some advice for enterprises running ESX. As with most things in security, much of the advice has to do with simply following best practices. However, virtualization security best practices may not always be at the top of an organization’s to-do list; the code leak should provide some prodding.

First off, organizations should block all Internet access to the hypervisor platform — especially to the Service Console — which is something they should already be doing, according to Dave Shackleford, principal consultant at Voodoo Security and senior vice president of research and CTO at IANS. They should also make sure all VMs are patched and restrict any copy/paste or other functionality between the VM and ESX host, he said in an email. (On the patching front, organizations using ESX should pay attention to last week’s security bulletin from VMware about an update for the ESX Service Console Operating System (COS) kernel that fixes several security issues.)

“Finally, they could set up ACLs or IDS monitoring rules to look for any weird traffic to those systems from both internal and external networks, and do the same on any virtual security tools if they’ve got them,” Shackleford said.
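As a rough sketch of that monitoring idea, the following Python snippet flags traffic to hypervisor management addresses that originates outside an approved management subnet. The subnet, host addresses and flow-log format are all assumptions for illustration.

    import ipaddress

    MGMT_SUBNET = ipaddress.ip_network("10.10.0.0/24")    # hypothetical management network
    ESX_HOSTS = {ipaddress.ip_address("10.10.0.5"),       # hypothetical Service Console
                 ipaddress.ip_address("10.10.0.6")}       # addresses to watch

    def flag_suspicious(flows):
        # flows: iterable of (src, dst) address strings parsed from a flow log
        for src, dst in flows:
            src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
            if dst_ip in ESX_HOSTS and src_ip not in MGMT_SUBNET:
                print("ALERT: %s -> %s (non-management source)" % (src, dst))

    flag_suspicious([("192.0.2.44", "10.10.0.5"),   # external source: flagged
                     ("10.10.0.9", "10.10.0.6")])   # management source: ignored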

Edward Haletky, owner of AstroArch Consulting and analyst at the Virtualization Practice, wrote in a blog post that organizations should follow virtualization security best practices to pre-plan for the release of the ESX hypervisor code.

“Segregate your management networks, employ proper role-based access controls, ensure any third-party drivers come from well-known sources, set all isolation settings within your virtual machine containers, [use] at-rest disk encryption, properly apply resource limits and limit para-virtualized driver usage,” he wrote.

Any attacks arising from the code leak will show up shortly after the code is made available, but won’t increase the risk beyond where it is today, Haletky wrote.

“The use of best practices for securing virtual environments is on the rise, but we are still a long way from our goal. Just getting proper management network segregation is an uphill battle. If there is currently a real risk to your virtual environment, it is the lack of following current best practices, not an impending leak of code,” he said.


May 1, 2012  8:38 PM

Oracle trips on TNS zero-day workaround

Michael Mimoso

Oracle has a problem. And it’s summed up pretty well by the current uproar over the lack of a patch for a zero-day vulnerability in the Oracle TNS Listener. It’s the same problem Microsoft had a decade ago, and the same problem Adobe has had when it comes to security fixes: the perception of arrogance Oracle gives off when serious security issues such as this one become public.

Oracle won’t patch a zero-day in its flagship database management system, and instead offered a workaround with the promise of fixing the vulnerability in the product’s next release. Swish that one around for a while: Oracle won’t patch a zero-day.

And to top it off, the vulnerability in question was reported to Oracle four years ago. In its April Critical Patch Update (CPU), Oracle finally got around to addressing the problem, and did so with a workaround. Unfortunately for Oracle, the researcher who reported the vulnerability, Joxean Koret, misunderstood and believed a patch was available, so he spilled the beans on the Full Disclosure list. The TNS Listener poison attack is a man-in-the-middle attack that can hijack connections and route data from the client through the attacker, where it can be stored, dropped or modified via SQL commands. Bad stuff.

According to Ray Stell, a database administrator at Virginia Tech, the workaround suggested in the CPU is fairly simple to deploy. “You stop the listener, apply a configuration command and edit another configuration file and you’re up and running,” Stell said. Stell has a busy time ahead of him, having to patch, er, fix, er, apply the workaround to 40 Oracle boxes in his department alone.
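For the curious, the widely circulated form of Oracle’s mitigation for this flaw (CVE-2012-1675) restricts service registration to local IPC connections, so a remote attacker can no longer register a rogue instance with the listener. The Python sketch below automates the steps Stell describes for a single-instance database with the default listener name; the listener.ora path is an assumption, and this is an illustration rather than Oracle’s official procedure.

    import subprocess

    LISTENER_ORA = "/u01/app/oracle/network/admin/listener.ora"  # hypothetical path

    # Append the commonly cited workaround line: only local IPC connections
    # may register services with the listener.
    with open(LISTENER_ORA, "a") as f:
        f.write("\nSECURE_REGISTER_LISTENER = (IPC)\n")

    # Bounce the listener so the new setting takes effect.
    subprocess.check_call(["lsnrctl", "stop"])
    subprocess.check_call(["lsnrctl", "start"])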

The worst-kept secret in database security circles is that companies are very reluctant to take database servers down for patching. Few can afford the downtime, much less the testing required to determine whether a patch will break functioning processes. It’s an unacceptable risk for most enterprises.

What should be unacceptable is Oracle’s continued thumbing of its nose toward security. Oracle said it won’t fix the vulnerability until the next full release because, according to its alert: “such back-porting is very difficult or impossible because of the amount of code change required, or because the fix would create significant regressions…”

Experts say the available workaround will keep Oracle installations secure against working exploits. Long term, however, Oracle needs to have its come-to-Jesus moment on security. It needs its version of Trustworthy Computing, which put Microsoft on a better course toward securing Windows and its other products. Unbreakable was a huge misstep in 2001, putting a massive target on the company’s software, one that researchers like David Litchfield made a living off of for a long time.

Publicly tripping over a zero-day vulnerability and working exploit code is just another indication that Oracle doesn’t entirely get it when it comes to security. Too bad, because it’s about time it did.


April 27, 2012  5:25 PM

CISPA intelligence information sharing bill passes House, headed to Senate

Robert Westervelt

The Cyber Intelligence Sharing and Protection Act (CISPA) clears security vendors of any liability for sharing customer attack data with the government.

A new cybersecurity bill designed to foster threat intelligence information sharing between the public and private sectors cleared its first major legislative hurdle this week, gaining passage from the House of Representatives. If the bill makes it into law, it would clear security vendors of any legal ramifications for sharing their customer data with federal officials. That’s right: Symantec or any “certified” security vendor would be able to report your company’s infections directly to the feds.

If the Cyber Intelligence Sharing and Protection Act (CISPA), which is opposed by the White House, privacy advocates and many Democrats, narrowly passes the Senate, political observers say it is likely to be vetoed by the president. The bill is supported by a variety of tech companies, including Symantec, Facebook, Oracle and Microsoft.

The bill enables voluntary sharing of attack and threat information between the federal government and technology companies, manufacturers and other businesses. It’s a fascinating piece of legislation because, under the voluntary program the bill creates, it essentially gives security vendors the ability to share specific threat data collected from their customers with federal authorities, and that data is not anonymous. The goal is to protect networks against attack, giving the government some oversight in protecting critical infrastructure facilities owned by private-sector companies. The controversial bill is being compared to the Stop Online Piracy Act (SOPA) by privacy advocates, who say the legislation is too general and offers few safeguards for civil liberties.

CISPA amends the National Security Act and requires the director of national intelligence to establish procedures to allow intelligence community elements to share cyberthreat intelligence with private-sector entities and encourage information sharing – a common theme from the Feds at annual security conferences.

The bill would require procedures to ensure threat intelligence is shared only with “certified entities or a person with an appropriate security clearance.” It doesn’t delineate how an organization or individual becomes “certified.” But certification is needed, according to the legislation, to prevent unauthorized disclosure. Certification would be provided to “entities or officers or employees of such entities.”

Security companies, referred to in the bill as “cybersecurity providers,” are authorized under the proposed legislation to use cybersecurity systems to identify and obtain cyberthreat information from their customers and share that data with the federal government. The data would not be sanitized, giving the federal government unprecedented visibility into attacks and their specific targets. Many security providers already collect data on their customers and disseminate it in threat intelligence reports, but the bill would give federal officials more visibility into attacks on specific private-sector firms, such as utilities, chemical rendering companies, manufacturers and other organizations deemed essential to the protection of national security.

There is a provision in the bill “encouraging” the private sector to anonymize or minimize the cyberthreat information it voluntarily shares with others, including the government.
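The bill doesn’t say what anonymizing would look like in practice, but as a minimal illustration, a provider could replace customer identifiers with salted hashes before sharing an indicator. The record fields and salt in this Python sketch are hypothetical.

    import hashlib

    SALT = b"per-provider-secret"   # hypothetical salt held only by the vendor

    def anonymize(record):
        # Replace the victim's identity with a salted hash; keep the indicator.
        scrubbed = dict(record)
        scrubbed["victim"] = hashlib.sha256(SALT + record["victim"].encode()).hexdigest()[:16]
        return scrubbed

    print(anonymize({"victim": "utilityco.example",
                     "indicator": "198.51.100.7",
                     "malware_family": "Ramnit"}))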

The bill is being supported by Symantec primarily because it removes the company’s legal liability for sharing the data with the government. In a letter of support from Symantec, Cheri F. McGuire, vice president of global government affairs and cybersecurity policy, praised the goal of the bill (.pdf).

“In order for information sharing to be effective, information must be shared in a timely manner, with the right people or organizations, and with the understanding that so long as an entity shares information in good faith, it will not be faced with legal liability,” McGuire said. “This bi-partisan legislation exemplifies a solid understanding of the shortfalls in the current information-sharing environment, and provides common sense solutions to improve bi-directional, real time information sharing to mitigate cyberthreats.”

The Internet Security Alliance, an industry group that represents VeriSign, Raytheon and others, submitted a similar letter supporting the bill (.pdf).

Some protections were written into the bill. For example, it prevents security firms from being sued over threat data they share with the government; it says the threat data cannot be used by the federal government for regulatory purposes; and it prohibits the federal government from searching the information for any purpose other than the protection of U.S. national security.

It also directs the Inspector General of the Intelligence Community to submit an annual report on how the threat data is being used and whether any changes are needed to protect privacy and civil liberties.


April 26, 2012  12:24 PM

For data security, cloud computing customers must accept a DIY approach

Jane Wright

Earlier this week, Context Information Security revealed the astonishing findings of its investigation into a sampling of four cloud service providers (CSPs) – Amazon EC2, Gigenet, Rackspace and VPS.net. Context found unpatched systems, missing antivirus and back doors left open, leaving cloud customers vulnerable to attacks and breaches.

Perhaps the most dismaying finding from U.K.-based Context’s investigation was the discovery of remnant data left behind by previous cloud customers. As part of its research, Context created virtual machines (VMs) on the CSPs’ platforms and was able to see data stored by previous tenants on Rackspace and VPS.net disks. (VPS.net was using the OnApp platform.) Context referred to this finding as the “dirty disk” problem.

At first it may seem Context’s report serves as notice to CSPs that they are falling short of basic security expectations. Yet, in many ways, the problems can be tied to customers’ own shortcomings. Too often, customers count on their CSP to lock down their applications and safeguard their data, even though most CSPs explicitly state these precautions are not included in their standard offerings. Unfortunately, this sometimes comes as an unpleasant surprise for customers.

The base service offered by many CSPs does not include antivirus, patching or data deletion services. To protect their data security, cloud computing customers need to treat VMs in the cloud as if they were on-site servers. Customers must adopt a “do-it-yourself” (DIY) mindset and apply their own security applications and procedures to their cloud implementation, or pay their CSP for more security services.
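In that DIY spirit, one self-help measure against the “dirty disk” problem is to overwrite a volume before handing it back to the provider (encrypting data at rest is the stronger control, since remnants of ciphertext are useless to the next tenant). Below is a minimal Python sketch, assuming a Linux guest and a hypothetical device path; it is destructive by design.

    CHUNK = 1024 * 1024  # write one mebibyte of zeros at a time

    def zero_fill(path):
        # Overwrite the device or file with zeros until no space remains.
        zeros = b"\x00" * CHUNK
        with open(path, "wb") as dev:
            try:
                while True:
                    dev.write(zeros)
            except OSError:  # device full: the overwrite is complete
                pass

    # zero_fill("/dev/xvdf")  # hypothetical attached volume; uncomment only
                              # for a disk you truly intend to erase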

The four CSPs investigated by Context are likely representative of the data security problems to be found on many cloud platforms. Companies storing data in the cloud need to act quickly to find out how their CSP is protecting the confidentiality of their data, and do their own part as well.


April 25, 2012  2:55 PM

Amazon cloud services: AWS Marketplace offers one-click cloud security

Marcia Savage

The new AWS Marketplace, launched by Amazon last week, is an interesting development on the cloud security front. The Amazon cloud services marketplace allows customers to choose from a menu of software products and SaaS offerings, and launch the applications in their EC2 environments with one click.

Several security applications are among the options, including a virtual appliance from Check Point Software Technologies, SaaS endpoint protection from McAfee, and SaaS network IDS and vulnerability assessment from Alert Logic. Customers are charged for what they use on an hourly or monthly basis, and the charges appear on the same bill as their other AWS services.

“We wanted to shrink the time between finding what you want and getting it up and running,” Werner Vogels, CTO at Amazon.com, wrote in a blog post.

By making it easy for organizations to add security to their cloud environments, AWS has made a promising move. Integrating security can be complicated, but the AWS Marketplace appears to eliminate much of the heavy lifting. It could leave organizations with fewer excuses not to implement cloud security.

But not all is hunky-dory with the AWS Marketplace, according to a blog post by Joe Brockmeier at ReadWriteCloud. While the AWS Marketplace makes it simpler to consume single-server apps, it “still leaves a lot of configuration to the end users,” he wrote. For example, he said, deploying SharePoint with Amazon Virtual Private Cloud involves an architecture that’s “much less simple than a single EC2 image,” which means the marketplace doesn’t offer anything right now for those with needs beyond a single EC2 image.
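The “one click” also has a programmatic equivalent. Here is a minimal sketch using the boto library of that era; the region is real, but the AMI ID and security group name are placeholders rather than actual Marketplace listings, and credentials are assumed to come from the environment.

    import boto.ec2

    # Connect to EC2 and launch a (hypothetical) security-appliance AMI
    # obtained through the AWS Marketplace.
    conn = boto.ec2.connect_to_region("us-east-1")
    reservation = conn.run_instances(
        "ami-00000000",                 # placeholder Marketplace AMI ID
        instance_type="m1.small",
        security_groups=["mgmt-only"]   # placeholder security group
    )
    print(reservation.instances[0].id)

Hourly Marketplace charges for the image then show up on the same AWS bill, as Vogels described.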

Still, it will be interesting to see what other security services are offered via the marketplace and whether other cloud providers follow Amazon’s lead in easing the path to cloud security.


April 24, 2012  1:11 PM

Spam filter gets the better of Microsoft SDL—almost

Michael Mimoso

You’d have to be a serious security curmudgeon to try to pick holes in the Microsoft SDL. The company’s security development lifecycle grew out of the Trustworthy Computing initiative, which turned 10 years old this year, and in many organizations, it sets the standard for secure development practices. At a minimum, it put secure development into the consciousness of many organizations and inspired a lot of companies to adopt bits and pieces, if not all, of the SDL.

No program is perfect, however.

Two security program managers working in the Microsoft Security Response Center (MSRC) shared a story during the SOURCE Boston conference last week that’s worth retelling. It seems not too long ago, a security researcher reported a fairly serious vulnerability to Microsoft via its secure@microsoft.com email address. Turns out, however, that Microsoft’s spam filter kicked in, and the report sat in limbo for months in a spam folder (sometimes it’s the simplest details that get ya).

The researcher waited a responsible, er, respectable period of time and eventually went public with details on the vulnerability, thinking Microsoft had ignored the report. Once the details were publicly disclosed, Microsoft had to rush an out-of-band fix for the vulnerability; the two program managers declined last week to identify the flaw.

“Don’t trust spam filtering,” said Jeremy Tinder, one of the managers. “This one was a crisis. Now we read them all (up to 500 a day). We have dedicated individuals to this triage stage of our security response.”
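The lesson generalizes beyond Microsoft: if you run a security reporting alias, audit its spam folder rather than trusting the filter. A minimal Python sketch, assuming an IMAP mailbox; the server, credentials and junk-folder name are hypothetical and will vary by mail system.

    import imaplib

    # Hypothetical server, account and junk-folder name.
    imap = imaplib.IMAP4_SSL("mail.example.com")
    imap.login("secure-alias", "app-password")
    imap.select("Junk")

    # Surface anything that looks like a misfiled vulnerability report.
    status, data = imap.search(None, '(OR SUBJECT "vulnerability" SUBJECT "security")')
    if status == "OK" and data[0]:
        print("Possible misfiled reports:", data[0].split())
    imap.logout()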

Tinder and his colleague David Seidman explained the MSRC’s role in the SDL at Microsoft and how vulnerabilities are handled once reported—and suggested these are minimal steps that organizations building their own software could follow. It’s a well-documented process that involves several stages:

· Triage: Microsoft determines whether a reported issue is a security vulnerability or, for example, a coding or configuration error.

· Reproduce the issue: Microsoft tries to reproduce the security bug using the information provided by the researcher who reported it.

· Analyze the root cause: Once the MSRC can reproduce the issue, it determines how much user interaction is required to trigger it and whether the affected configuration is widely used by customers.

· Planning: Schedule a fix and move forward after determining the scope of what needs to be fixed and any variants that could also trigger the vulnerability.

· Variant testing/investigation: A critical stage in which all possible variants are tested before a fix is released; the last thing the MSRC wants to do is release a fix and then have to re-release it.

· Implementation: The MSRC starts developing fixes immediately and tests in parallel, checking whether the fixes cause regressions elsewhere.

· Verification: Functional and regression testing ensures the patch fixes all attack vectors, doesn’t revert previous patches and doesn’t break applications.

· Release: More than the click of a button, Tinder and Seidman said; it involves having the infrastructure in place to push automatic downloads of patches, or to give enterprises the ability to choose when and how to apply fixes.

Ultimately, the MSRC shoots for 60 to 90 days to turn around a patch, depending on testing and any issues that could arise and cause a regression forcing the MSRC to start over.

And oh yeah, check those spam folders.


April 19, 2012  1:04 PM

Experts differ on European ‘cookie law’ advice

Jane Wright

Many IT managers in the U.K. are in a quandary right now as they decide how, and how far, to comply with the impending European “cookie law.” IT managers in the U.S. will soon face the same dilemma.

Beginning May 26, the U.K. Information Commissioner’s Office (ICO) will enforce the Privacy and Electronic Communications Regulations (PECR), requiring website operators to explicitly ask visitors’ permission before placing a cookie in their browsers. As you can imagine, many organizations are unhappy about this. They believe asking permission for cookies will drive their customers to other websites, and they worry about abandoning established programs (such as Google Analytics) that require cookies to function properly. As a result, many IT managers feel stuck between compliance (with the possible loss of customers and information) and non-compliance (with possible penalties from the ICO).
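In implementation terms, the regulation inverts the usual pattern: the server sets nothing until the visitor opts in. Below is a minimal, framework-free Python sketch of the idea; the cookie name and the consent flag are hypothetical, and a real site would also need a consent-collection mechanism.

    from http.cookies import SimpleCookie

    def response_headers(user_consented):
        # Only emit a Set-Cookie header once the visitor has explicitly agreed.
        headers = [("Content-Type", "text/html")]
        if user_consented:
            cookie = SimpleCookie()
            cookie["analytics_id"] = "abc123"        # hypothetical analytics cookie
            cookie["analytics_id"]["path"] = "/"
            headers.append(("Set-Cookie", cookie["analytics_id"].OutputString()))
        return headers

    print(response_headers(False))  # first visit: no cookie until consent is given
    print(response_headers(True))   # after opting in: cookie is set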

The dilemma doesn’t stop with the U.K. Other countries in the European Union will likely implement the underlying EU privacy directive soon, so organizations operating anywhere in Europe will need to develop a cookie compliance strategy. It’s not an easy task, though, when many of the details remain unclear. For example, it is not yet known how the ICO will find out about errant websites, or whether the ICO will respond to non-compliance with fines or just warnings, at least at first.

U.S. organizations are equally baffled by the cookie law. Must a U.S.-based organization comply if it serves customers in the U.K. or anywhere in the European Union? Does it matter where the website is hosted? To answer these questions, we’ve recently published two articles offering advice for U.S. organizations contemplating the cookie law. But even our two expert contributors do not agree on the best course of action: one advises U.S. organizations to begin taking proactive steps toward compliance, while the other suggests they hold off for now.

As the enforcement date draws near, SearchSecurity.co.UK will continue to bring you updated news and advice from a variety of expert perspectives so you can decide on the best strategy for your organization.

