Security Bytes


May 3, 2012  12:04 PM

Creativity makes information security awareness training stick

Jane Wright

It often seems security pros place great expectations on users, then are amazed when users fall for an obvious security trap or common social engineering attack. But instead of being amazed, the more appropriate response may be to recognize that traditional information security awareness training programs often don’t work.

According to Bob Rudis, director of enterprise security at Boston-based Liberty Mutual Group, too many companies rely on the computer-based security training courses that each employee must complete once a year to meet compliance requirements. Speaking at the Source Boston conference last month, Rudis shared some more creative ideas he has used to elevate security awareness and reduce security incidents at his company.

For example, Rudis’ team created some simple Flash-based game applications for employees to play. Players win the games by making correct security choices. Even though the games were voluntary, about 25% of Liberty Mutual employees played each game at least once.

For companies that don’t have the budget to create games, Rudis offered cheap, outside-the-box security awareness ideas.  For example, consider your computer-based training (CBT), which probably contains slides showing photos of people working at computers. Rather than using stock images of people in your CBT, Rudis suggested taking photos of your company’s own employees, such as a photo of one of your IT people scratching their head and looking puzzled, or a photo of one of your help desk people looking tired but triumphant. Seeing actual colleagues helps users feel more connected to the training material and thus more likely to remember what they’ve learned. Plus, it will make stars of your staff – an added benefit.

As a security manager, you are competing with many other demands for users’ attention, from their own job responsibilities to Facebook, Pinterest and Angry Birds. Making your security lessons visually compelling and a little more fun may go a long way toward making security awareness messages stick in users’ minds.

May 2, 2012  3:38 PM

Virtualization security best practices in wake of ESX hypervisor code leak

Marcia Savage

As security pros wait for more details about the VMware ESX hypervisor source code leak, should they be panicking?

Well no, not yet, anyway. Without knowing exactly what source code was leaked, it’s hard to know the extent of the threat, security experts have said. However, the answer may come soon — there are rumors that hackers will release more source code on Saturday.

Until then, virtualization security experts are offering some advice for enterprises running ESX. As with most things in security, much of the advice has to do with simply following best practices. However, virtualization security best practices may not always be at the top of an organization’s to-do list; the code leak should provide some prodding.

First off, organizations should block all Internet access to the hypervisor platform — especially to the Service Console — which is something they should already be doing, according to Dave Shackleford, principal consultant at Voodoo Security and senior vice president of research and CTO at IANS. They should also make sure all VMs are patched and restrict any copy/paste or other functionality between the VM and ESX host, he said in an email. (On the patching front, organizations using ESX should pay attention to last week’s security bulletin from VMware about an update for the ESX Service Console Operating System (COS) kernel to fix several security issues).
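
The copy/paste restriction Shackleford mentions maps to VMware’s guest isolation settings. As a minimal sketch (parameter names taken from VMware’s hardening guidance; verify them against the documentation for your ESX version), the relevant .vmx entries for a VM would look something like this:

    isolation.tools.copy.disable = "true"
    isolation.tools.paste.disable = "true"
    isolation.tools.dnd.disable = "true"
    isolation.tools.setGUIOptions.enable = "false"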

“Finally, they could set up ACLs or IDS monitoring rules to look for any weird traffic to those systems from both internal and external networks, and do the same on any virtual security tools if they’ve got them,” Shackleford said.
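
The ACL rules don’t have to be elaborate, either. A hypothetical iptables sketch for a management interface, with placeholder addresses, might permit only the dedicated management subnet and log everything else so the “weird traffic” Shackleford describes actually gets seen:

    # allow only the management subnet to reach SSH and the web UI
    iptables -A INPUT -s 10.10.50.0/24 -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -s 10.10.50.0/24 -p tcp --dport 443 -j ACCEPT
    # log, then drop, anything else headed for the console
    iptables -A INPUT -j LOG --log-prefix "esx-console-deny: "
    iptables -A INPUT -j DROP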

Edward Haletky, owner of AstroArch Consulting and analyst at the Virtualization Practice, wrote in a blog post that organizations should follow virtualization security best practices to pre-plan for the release of the ESX hypervisor code.

“Segregate your management networks, employ proper role-based access controls, ensure any third-party drivers come from well-known sources, set all isolation settings within your virtual machine containers, at-rest disk encryption, properly apply resource limits and limit para-virtualized driver usage,” he wrote.

Any attacks arising from the code leak will show up shortly after the code is made available, but won’t increase the risk beyond where it is today, Haletky wrote.

“The use of best practices for securing virtual environments is on the rise, but we are still a long way from our goal. Just getting proper management network segregation is an uphill battle. If there is currently a real risk to your virtual environment, it is the lack of following current best practices, not an impending leak of code,” he said.


May 1, 2012  8:38 PM

Oracle trips on TNS zero-day workaround

Michael Mimoso

Oracle has a problem. And it’s summed up pretty well by the current uproar over the lack of a patch for a zero-day vulnerability in the Oracle TNS Listener. It’s the same problem Microsoft had a decade ago, and the same problem Adobe has had when it comes to security fixes: the perception of arrogance Oracle gives off when serious security issues like this one become public.

Oracle won’t patch a zero-day in its flagship database management system, and instead offered a workaround with the promise of fixing the vulnerability in the product’s next release. Swish that one around for a while: Oracle won’t patch a zero-day.

And to top it off, the vulnerability in question was reported to Oracle four years ago. In its April Critical Patch Update (CPU), Oracle finally got around to addressing the problem, and did so with a workaround. Unfortunately for Oracle, the researcher who reported the vulnerability, Joxean Koret, misunderstood and believed a patch was available, so he spilled the beans on the Full Disclosure mailing list. The TNS Listener poison attack is a man-in-the-middle attack that could hijack connections and route data from the client through the attacker, where it could be stored, dropped or modified via SQL commands. Bad stuff.

According to Ray Stell, a database administrator at Virginia Tech, the workaround suggested in the CPU is fairly simple to deploy. “You stop the listener, apply a configuration command and edit another configuration file and you’re up and running,” Stell said. Stell has a busy time ahead of him having to patch, er, fix, er, apply the workaround to 40 Oracle boxes in his department alone.
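
For reference, the mitigation Oracle published for this flaw (tracked as CVE-2012-1675) restricts which transports an instance may use to register with the listener. A rough sketch of the sequence Stell describes, with the parameter name to be checked against Oracle’s alert for your version:

    lsnrctl stop LISTENER
    # then add a line like this to listener.ora to allow
    # instance registration over local IPC only:
    #   SECURE_REGISTER_LISTENER = (IPC)
    lsnrctl start LISTENER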

The worst-kept secret in database security circles is that companies are very reluctant to take database servers down for patching. Few can afford the downtime, much less the testing required to determine whether a patch will break functioning processes. It’s an unacceptable risk for most enterprises.

What should be unacceptable is Oracle’s continued thumbing of its nose toward security. Oracle said it won’t fix the vulnerability until the next full release because, according to its alert: “such back-porting is very difficult or impossible because of the amount of code change required, or because the fix would create significant regressions…”

Experts say the available workaround will keep Oracle installations secure against working exploits. Long term, however, Oracle needs to have its come-to-Jesus moment on security. It needs its version of Trustworthy Computing, which put Microsoft on a better course toward securing Windows and its other products. Oracle’s “Unbreakable” marketing campaign was a huge misstep in 2001, painting a massive target on the company’s software that researchers like David Litchfield made a living off of for a long time.

Publicly tripping over a zero-day vulnerability and working exploit code is just another indication that Oracle doesn’t entirely get it when it comes to security. Too bad, because it’s about time it did.


April 27, 2012  5:25 PM

CISPA intelligence information sharing bill passes House, headed to Senate

Robert Westervelt

The Cyber Intelligence Sharing and Protection Act (CISPA) clears security vendors of any liability for sharing customer attack data with the government.

A new cybersecurity bill designed to foster threat intelligence information sharing between the public and private sectors cleared its first major legislative hurdle this week, gaining passage from the House of Representatives. If the bill makes it into law, it would shield security vendors from legal ramifications for sharing their customer data with federal officials. That’s right: Symantec or any “certified” security vendor would be able to report your company’s infections directly to the feds.

The Cyber Intelligence Sharing and Protection Act (CISPA) is opposed by the White House, privacy advocates and many Democrats; even if it narrowly passes the Senate, political observers say it is likely to be vetoed by the president. The bill is supported by a variety of tech companies, including Symantec, Facebook, Oracle and Microsoft.

The bill enables attack and threat information sharing on a voluntary basis between the federal government and technology companies, manufacturers and other businesses. It’s a fascinating piece of legislation because, under the voluntary program the bill creates, it essentially gives security vendors the ability to share specific threat data collected from their customers with federal authorities – data that is not anonymized. The goal is to protect networks against attack, giving the government some oversight of the critical infrastructure facilities owned by private-sector companies. Privacy advocates compare the controversial bill to the Stop Online Piracy Act (SOPA), saying the legislation is too general and offers few safeguards for civil liberties.

CISPA amends the National Security Act and requires the director of national intelligence to establish procedures to allow intelligence community elements to share cyberthreat intelligence with private-sector entities and encourage information sharing – a common theme from the Feds at annual security conferences.

The bill would require procedures to ensure threat intelligence is shared only with “certified entities or a person with an appropriate security clearance.” It doesn’t delineate how an organization or individual becomes “certified.” But certification is needed, according to the legislation, to prevent unauthorized disclosure. Certification would be provided to “entities or officers or employees of such entities.”

Security companies, termed “cybersecurity providers,” are authorized under the proposed legislation to use cybersecurity systems to identify and obtain cyberthreat information from their customers and share that data with the federal government. The data would not be sanitized, giving the federal government unprecedented visibility into attacks and their specific targets. Many security providers already collect data on their customers and disseminate it in threat intelligence reports, but the bill would give federal officials more visibility into attacks on specific private-sector firms, such as utilities, chemical rendering companies, manufacturers and other organizations deemed essential to the protection of national security.

There is a provision in the bill “encouraging” the private sector to anonymize or minimize the cyberthreat information it voluntarily shares with others, including the government.

The bill is supported by Symantec primarily because it removes the company’s legal liability for sharing the data with the government. In a letter of support, Cheri F. McGuire, Symantec’s vice president of global government affairs and cybersecurity policy, praised the goal of the bill (.pdf).

“In order for information sharing to be effective, information must be shared in a timely manner, with the right people or organizations, and with the understanding that so long as an entity shares information in good faith, it will not be faced with legal liability,” McGuire said. “This bi-partisan legislation exemplifies a solid understanding of the shortfalls in the current information-sharing environment, and provides common sense solutions to improve bi-directional, real time information sharing to mitigate cyberthreats.”

The Internet Security Alliance, an industry group that represents VeriSign, Raytheon and others, submitted a similar letter supporting the bill (.pdf).

Some protections were put into the bill. For example, it prevents security firms from being sued over threat data they share with the government, states that the threat data cannot be used by the federal government for regulatory purposes, and prohibits the federal government from searching the information for any purpose other than the protection of U.S. national security.

It also directs the Inspector General of the Intelligence Community to submit an annual report on how the threat data is being used and whether any changes are needed to protect privacy and civil liberties.


April 26, 2012  12:24 PM

For data security, cloud computing customers must accept a DIY approach

Jane Wright

Earlier this week, Context Information Security revealed the astonishing findings of its investigation into a sampling of four cloud service providers (CSPs) – Amazon EC2, Gigenet, Rackspace and VPS.net. Context found unpatched systems, missing antivirus and back doors left open, leaving cloud customers vulnerable to attacks and breaches.

Perhaps the most dismaying finding from U.K.-based Context’s investigation was the discovery of remnant data left behind by previous cloud customers. As part of its research, Context created virtual machines (VMs) on the CSPs’ platforms and was able to see data stored by previous tenants on Rackspace and VPS.net disks. (VPS.net was using the OnApp platform.) Context referred to this finding as the “dirty disk” problem.

At first it may seem Context’s report serves as notice to CSPs that they are falling short of basic security expectations. Yet, in many ways, the problems can be tied to customers’ own shortcomings. Too often, customers count on their CSP to lock down their applications and safeguard their data, even though most CSPs explicitly state these precautions are not included in their standard offerings. Unfortunately, this sometimes comes as an unpleasant surprise for customers.

The base service offered by many CSPs does not include antivirus, patching or data deletion services. To protect their data security, cloud computing customers need to treat VMs in the cloud as if they were on-site servers. Customers must adopt a “do-it-yourself” (DIY) mindset and apply their own security applications and procedures to their cloud implementation, or pay their CSP for more security services.
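
The “dirty disk” finding is a case in point. A tenant who treats a cloud volume like an on-site disk would overwrite it before handing it back, rather than trusting the provider to scrub it. A crude illustration on a Linux VM (the device name is a placeholder; double-check which volume you are wiping):

    # zero the block device before detaching or terminating
    dd if=/dev/zero of=/dev/xvdf bs=1M
    # or, equivalently, a single zeroing pass with shred
    shred -n 0 -z /dev/xvdf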

The problems Context found at these four CSPs are likely representative of data security problems across many cloud platforms. Companies storing data in the cloud need to act quickly to find out how their CSP protects the confidentiality of that data, and to do their own part in protecting it.


April 25, 2012  2:55 PM

Amazon cloud services: AWS Marketplace offers one-click cloud security

Marcia Savage

The new AWS Marketplace, launched by Amazon last week, is an interesting development on the cloud security front. The Amazon cloud services marketplace allows customers to choose from a menu of various software products and SaaS services, and launch the applications in their EC2 environments with one click.

Several security applications are among the options, including a virtual appliance from Check Point Software Technologies, SaaS endpoint protection from McAfee, and SaaS network IDS and vulnerability assessment from Alert Logic. Customers are charged for what they use on an hourly or monthly basis, and the charges appear on the same bill as their other AWS services.
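
Software products in the Marketplace are typically delivered as Amazon Machine Images (the SaaS options work differently), so the one-click launch is roughly the same operation as starting the product’s AMI yourself. A hypothetical sketch with Amazon’s EC2 command-line tools, where the IDs and names are placeholders:

    # launch the security appliance's AMI into your EC2 environment
    ec2-run-instances ami-12345678 -t m1.small -k my-keypair -g my-security-group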

“We wanted to shrink the time between finding what you want and getting it up and running,” Werner Vogels, CTO at Amazon.com, wrote in a blog post.

By making it easy for organizations to add security to their cloud environments, AWS has made a promising move. Integrating security can be complicated, but the AWS Marketplace appears to eliminate the heavy lifting. It could leave organizations with fewer excuses not to implement cloud security.

But not all is hunky dory with the AWS Marketplace, according to a blog post by Joe Brockmeier at ReadWriteCloud. While the AWS Marketplace makes it simpler to consume single-server apps, it “still leaves a lot of configuration to the end users,” he wrote. For example, he said, deploying SharePoint with Amazon Virtual Private Cloud involves an architecture that’s “much less simple than a single EC2 image,” which means the marketplace doesn’t offer anything right now for those whose needs go beyond a single EC2 image.

Still, it will be interesting to see what other security services are offered via the marketplace and whether other cloud providers follow Amazon’s lead in easing the path to cloud security.


April 24, 2012  1:11 PM

Spam filter gets the better of Microsoft SDL—almost

Michael Mimoso

You’d have to be a serious security curmudgeon to try to pick holes in the Microsoft SDL. The company’s security development lifecycle grew out of the Trustworthy Computing initiative, which turned 10 years old this year, and in many organizations, it sets the standard for secure development practices. At a minimum, it put secure development into the consciousness of many organizations and inspired a lot of companies to adopt bits and pieces, if not all, of the SDL.

No program is perfect, however.

Two security program managers working in the Microsoft Security Response Center (MSRC) told a story during the SOURCE Boston conference last week that’s worth repeating. It seems not too long ago, a security researcher reported a fairly serious vulnerability to Microsoft via its secure@microsoft.com email address. Turns out, however, that Microsoft’s spam filter kicked in, and the report sat in limbo in a spam folder for months (sometimes it’s the simplest details that get ya).

The researcher waited a responsible, er, respectable period of time and, thinking Microsoft had ignored the report, eventually went public with details on the vulnerability. Once the details went to full disclosure, Microsoft had to rush an out-of-band fix; the two program managers declined to reveal which flaw it was.

“Don’t trust spam filtering,” said Jeremy Tinder, one of the managers. “This one was a crisis. Now we read them all (up to 500 a day). We have dedicated individuals to this triage stage of our security response.”
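
The operational lesson generalizes beyond Microsoft: mail to a security-reporting alias should bypass content filtering entirely, or the spam folder should be swept on a schedule. A hypothetical procmail rule illustrating the first approach (the alias is a placeholder):

    # deliver anything addressed to the security alias straight to the
    # default mailbox, ahead of any later spam-filing recipes
    :0
    * ^TO_security@example\.com
    $DEFAULT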

Tinder and his colleague David Seidman explained the MSRC’s role in the SDL at Microsoft and how vulnerabilities are handled once reported—and suggested these are minimal steps that organizations building their own software could follow. It’s a well-reported process that involves several stages:

· Triage: Microsoft determines whether reported vulnerabilities are genuine security issues or, for example, coding or configuration errors.

· Reproduce the issue: Microsoft tries to reproduce the security bug with the information provided by the researcher who reported it.

· Analyze the root cause: Once the MSRC is able to reproduce the issue, it determines how much user interaction is required to trigger it and whether the affected configuration is widely used by customers.

· Planning: Schedule a fix and move forward after determining the scope of what needs to be fixed and any variants that could also trigger the vulnerability.

· Variant testing/investigation: A critical stage in which all possible variants are tested before releasing a fix; the last thing the MSRC wants to do is release a fix and then have to re-release it.

· Implementation: The MSRC starts developing fixes immediately and tests in parallel, checking whether fixes cause regressions elsewhere.

· Verification: Functional and regression testing ensures the patch fixes all attack vectors, doesn’t revert previous patches and doesn’t break applications.

· Release: More than the click of a button, Tinder and Seidman said; it involves having the infrastructure in place to push automatic downloads of patches, or to give enterprises the ability to choose when and how to apply fixes.

Ultimately, the MSRC shoots for 60 to 90 days to turn around a patch, depending on testing and any issues that could arise and cause a regression forcing the MSRC to start over.

And oh yeah, check those spam folders.


April 19, 2012  1:04 PM

Experts differ on European ‘cookie law’ advice

Jane Wright

Many IT managers in the U.K. are in a quandary right now as they decide how, and how far, to comply with the impending European “cookie law.” IT managers in the U.S. will soon face the same dilemma.

Beginning May 26, the U.K. Information Commissioner’s Office (ICO) will enforce the Privacy and Electronic Communications Regulations (PECR), requiring website operators to explicitly ask permission from visitors before placing a cookie in a visitor’s browser. As you can imagine, many organizations are unhappy about this. They believe asking permission for cookies will cause their customers to flee to other websites, and they worry about abandoning some established programs (such as Google Analytics), which require the use of cookies to function properly. As a result, many IT managers feel stuck between compliance (with the possible loss of customers and information) and non-compliance (with possible penalties from the ICO).
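
In implementation terms, compliance mostly means gating non-essential cookies behind an explicit opt-in. A bare-bones JavaScript sketch of the pattern (the element ID, cookie name and function names are made up for illustration):

    // set the analytics cookie only after the visitor explicitly agrees
    function onConsentGranted() {
      document.cookie = "analytics_ok=1; max-age=31536000; path=/";
      loadAnalytics(); // placeholder: initialize e.g. Google Analytics here
    }
    document.getElementById("accept-cookies")
        .addEventListener("click", onConsentGranted);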

The dilemma doesn’t stop with the U.K. Other countries in the European Union will likely implement their own versions of the underlying EU directive soon, so organizations operating anywhere in Europe will need to develop a cookie compliance strategy. It’s not an easy task, though, when a lot of the details remain unclear. For example, it is not yet known how the ICO will find out about errant websites, or whether the ICO will respond to non-compliance with fines or, at least at first, just warnings.

U.S. organizations are equally baffled by the cookie law. Must a U.S.-based organization comply if it serves customers in the U.K. or anywhere in the European Union? Does it matter where the website is hosted? To answer these questions, we’ve recently published two articles offering advice for U.S. organizations contemplating the cookie law. But even our two expert contributors do not agree on the best course of action. One expert advises U.S. organizations to begin taking proactive steps toward compliance, while the other suggests U.S. organizations hold off for now.

As the enforcement date draws near, SearchSecurity.co.uk will continue to bring you updated news and advice from a variety of expert perspectives so you can decide on the best strategy for your organization.


April 18, 2012  4:12 PM

Cloud security vendors win funding for technologies

Marcia Savage

The rush to the cloud can often make security an afterthought, but if recent funding announcements are any indication, the VC community wants to reverse that trend.

CloudPassage, CloudLock and Symplified are among the cloud security vendors winning funding this year.

San Francisco-based CloudPassage announced last week that it won $14 million in funding. The company said it would use the money, which brings its total funding to $21 million, to further market and develop its Halo cloud server security platform.

In late March, Waltham, Mass.-based CloudLock said it raised $8.7 million in funding to expand its engineering and sales efforts and extend its cloud security technologies to new platforms. The cloud security vendor provides a security SaaS add-on for Google Apps. When I met with Tsahy Shapsa, CloudLock co-founder and vice president of sales and marketing, at RSA Conference 2012, he said the company planned to expand its service to protect other cloud platforms.

Earlier this year, Boulder, Colo.-based Symplified garnered a whopping $20 million in VC financing led by Ignition Partners.

When announcing the CloudLock funding, Luke Burns, a partner with Ascent Venture Partners — CloudLock’s new investor — noted that increased collaboration is a major benefit of cloud computing, but organizations “lose sight and control of the data being shared, both internally and externally.” CloudLock, he added, bridges a “critical, emerging security gap.”

Meanwhile, Brian Melton, managing director at Tenaya Capital – which led CloudPassage’s latest funding – said the cloud security vendor’s technology addresses a large market opportunity. He noted that security has been a “key barrier to cloud adoption.”

The fact that VCs see cloud security as an opportunity is a promising sign. It should help cloud service providers understand that security is critical and provide cloud users with more options for securing their cloud environments.


April 12, 2012  1:49 PM

The importance of using a full security threat definition

Jane Wright

How do you define a security threat? If you’re like most IT security professionals, your security threat definition is probably: “The potential occurrence of an attack against an organization’s infrastructure and assets.”

If this were a pop quiz, you could get half credit for that answer. It’s partly true, but it’s not the whole answer, and it’s not the answer your executive leaders and board of directors need to hear.

Christopher Armstrong, CISO of Livermore, Calif.-based Allgress Inc., popped this quiz on the audience during a business risk session at SecureWorld last month, and almost everyone gave the IT-centric answer above. But Armstrong made a strong case for changing our perspective when we talk about security threats.

When you talk to a CEO or a board member about the threats to his or her organization, Armstrong said, there’s no need to go into great detail about the type of attack that may occur, the motivation of the attacker and so on. All he or she really wants to know is: What will it cost us? And what’s the probability it will happen?

Telling the CEO or the board that a widespread threat could steal your sensitive customer data isn’t likely to get you the funding you need to stop that threat. But tell them the threat could cost the organization $10 million and there’s a 50% chance it will happen, and they just may open the checkbook for you.
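
Put concretely, that framing is just expected loss. A back-of-the-envelope illustration with made-up numbers:

    Expected annual loss = probability x impact = 0.5 x $10,000,000 = $5,000,000
    If a $1,000,000 control cuts the probability to 10%:
    Residual expected loss = 0.1 x $10,000,000 = $1,000,000

Spending $1 million to remove an expected $4 million in losses is the kind of math a board can act on.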

By looking at security projects from a board member’s perspective, as well as your own infosec perspective, you’re more likely to get the resources you need to advance your security initiatives.

