Security Bytes

August 29, 2012  1:21 PM

Trend Micro shoots down Crisis Trojan threat to VMware

Marcia Savage

Last week, when Symantec researchers said they had discovered the Windows version of the Crisis Trojan could spread to VMware virtual machines, it was big news. But Trend Micro doesn’t see Crisis as a major threat for enterprises using VMware. In fact, executives at the company think Crisis’s potential to spread to virtual machines was overblown.

“There was a fair amount of hype,” Harish Agastya, director of product marketing for data center security at Trend Micro, told me in a meeting this week at VMworld in San Francisco.

The Crisis malware only affects Windows-based Type 2 hypervisor deployments, not the Type 1 hypervisor deployments most enterprises use. “It’s specific to Type 2,” he said.

Warren Wu, director of product group management in the data center business unit, wrote a blog post that provided more details on the different deployments and attack scenarios. Here’s his description:

Type 1 Hypervisor deployment – Prime examples are VMware ESX, Citrix XenSource, etc. It helps to think of these products as replacing the host OS (Windows/Linux) and executing directly on the actual machine hardware. This software is like an operating system and directly controls the hardware. In turn, the hypervisor allows multiple virtual machines to execute simultaneously. Almost all data center deployments use this kind of virtualization. This is NOT the deployment this malware attacks. I’m not aware of any malware in the wild capable of infecting Type 1 hypervisors.

Type 2 Hypervisor deployment – Examples are VMware Workstation, VMware Player, etc. In this case the hypervisor installs on TOP of a standard operating system (Windows/Linux) and in turn hosts multiple virtual machines on top. It is this second scenario that the malware infects. First, the host operating system is compromised. This could be a well-known Windows/Mac OS attack (with the only added wrinkle being that the OS is detected and the appropriate executable is installed). The malware then looks for VMDK files, probably instantiates the VM (using VMware Player), and then uses the same infection method as for the host OS. This type of infection can be stopped with up-to-date endpoint antimalware solutions.

What makes Crisis unique, Wu wrote, is that it specifically seeks out virtual machines and tries to infect them. It also infects the VM through the underlying infrastructure, by modifying the VMDK file, rather than through more conventional avenues such as file shares, he said.
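Because Crisis’s distinguishing trick is tampering with VMDK files on the host, one defensive angle is file-integrity monitoring of those files. Here’s a minimal sketch of the idea in Python (hypothetical code, not Trend Micro’s product) that hashes VMDK files and flags changes between runs:

```python
import hashlib
import json
from pathlib import Path

BASELINE = Path("vmdk_baseline.json")  # hypothetical state file for this sketch

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a potentially multi-gigabyte file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_vmdks(root: Path) -> dict:
    """Find every .vmdk under root and compute its hash."""
    return {str(p): sha256_of(p) for p in root.rglob("*.vmdk")}

def check(root: Path) -> None:
    current = scan_vmdks(root)
    previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    for path, digest in current.items():
        if path in previous and previous[path] != digest:
            print(f"ALERT: {path} was modified between scans")
    BASELINE.write_text(json.dumps(current, indent=2))

if __name__ == "__main__":
    check(Path.home() / "Virtual Machines")  # a common Type 2 VM folder
```

The big caveat: VMDK files change legitimately whenever a VM runs, so a real monitor would have to correlate changes with VM power state before alerting.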

Trend Micro has made a name for itself in virtualization security, so what the company says about Crisis carries a lot of weight. Trend Micro was the first security vendor to partner with VMware and produce an agentless antivirus product. At VMworld, the company launched the latest version of its Deep Security server security platform, which provides anti-malware and firewall protection, intrusion prevention and integrity monitoring to protect virtual servers and desktops.

The new version features caching and de-duplication functions that reduce file scanning and improve performance, along with hypervisor integrity monitoring. Deep Security 9 also includes integration with VMware’s vCloud Director and Amazon Web Services. That integration, combined with a unified management console, will enable customers to manage security of their physical, virtual and cloud servers from a single console, Agastya said.
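The caching and de-duplication idea is straightforward: identical files replicated across many VMs (OS binaries, for example) only need to be scanned once. A toy illustration of hash-keyed scan de-duplication in Python (my sketch, not Deep Security’s implementation):

```python
import hashlib

scan_cache: dict = {}  # file hash -> previously determined verdict

def scan_bytes(data: bytes) -> bool:
    """Stand-in for a real AV engine; returns True if the content is clean."""
    return b"EICAR" not in data  # placeholder heuristic for this sketch

def scan_with_cache(data: bytes) -> bool:
    key = hashlib.sha256(data).hexdigest()
    if key in scan_cache:        # same content already seen on another VM:
        return scan_cache[key]   # skip the expensive engine call entirely
    verdict = scan_bytes(data)
    scan_cache[key] = verdict
    return verdict
```

In an agentless deployment a cache like this pays off across an entire host, since guest VMs cloned from the same base image share most of their files.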

Trend also launched Trend Ready for Cloud Service Providers, a program that certifies that Trend Micro’s cloud security products – Deep Security and SecureCloud – are compatible within a service provider’s environment, said Scott Montgomery, global strategic director of cloud provider business development at Trend. AWS, Dell, HP Cloud Services and Savvis are among the cloud service providers that have received the Trend Ready designation.

July 26, 2012  6:46 PM

FFIEC cloud computing risks document: Where’s the beef?

Marcia Savage

It seems the Federal Financial Institutions Examination Council could have done a little better with its cloud computing advisory. Earlier this month, the FFIEC issued a statement on outsourced cloud computing. The resource document outlines key cloud computing risks financial institutions should consider.

In the document, the FFIEC said it considers cloud computing to be another form of outsourcing with “the same basic risk characteristics and risk management requirements as traditional outsourcing.”

Right there, I think a lot of security experts would disagree. Cloud computing involves new elements, most notably multi-tenancy, that present different risks than traditional outsourcing models do. The FFIEC cloud computing statement covers multi-tenancy and other issues associated with cloud computing, such as potential complications with regulatory compliance due to data location, but at a high level without much detail. The document also covers familiar ground, such as vendor management and due diligence, stressing the importance of both in cloud computing arrangements.

Perhaps the FFIEC figured others, such as the National Institute of Standards and Technology (NIST), have already provided ample guidance on cloud computing risks. Late last year, NIST released its Guidelines on Security and Privacy in Public Cloud Computing (.pdf), which covers threats and risks associated with public cloud computing and provides organizations with recommendations.

Still, banks look to the FFIEC for guidance, and if any industry needs to be careful with moving data into the cloud, it’s banks. The FFIEC’s rather cursory treatment of the subject is puzzling indeed.

July 18, 2012  9:04 PM

Federal cloud computing strategy faces challenges, GAO finds

Marcia Savage

A recent audit into U.S. federal agencies’ adoption of cloud computing services highlighted challenges that likely would resonate with private enterprises looking to move applications to the cloud.

The report by the U.S. Government Accountability Office looked at the progress seven agencies have made in implementing the White House’s “Cloud First” policy. According to GAO, agencies need to do better planning – of the 20 plans for implementing cloud solutions the agencies submitted, seven didn’t include estimated costs. None of the plans for meeting the federal cloud computing strategy included details on how legacy systems would be retired or repurposed.

What’s telling, though, is the list of cloud challenges GAO compiled after talking to agencies. Topping the list is concern over the ability – or inability – of cloud providers to meet federal security requirements. For example, State Department officials reported that cloud providers can’t match the department’s ability to monitor its systems in real time. Also, Treasury officials noted that meeting a FISMA requirement for maintaining a physical inventory is tough since they don’t have insight into the cloud provider’s infrastructure and assets.
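There is a partial workaround for the inventory problem: agencies can’t walk a provider’s data center floor, but they can build a logical inventory from the provider’s APIs. A hedged sketch using the AWS boto3 SDK (my illustration, assuming configured AWS credentials; whether this satisfies a given FISMA auditor is a separate question):

```python
import boto3

def ec2_inventory(region: str = "us-east-1") -> list:
    """Enumerate EC2 instances as a logical stand-in for a physical inventory."""
    ec2 = boto3.client("ec2", region_name=region)
    inventory = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for inst in reservation["Instances"]:
            inventory.append({
                "id": inst["InstanceId"],
                "type": inst["InstanceType"],
                "state": inst["State"]["Name"],
                "launched": str(inst["LaunchTime"]),
            })
    return inventory

if __name__ == "__main__":
    for item in ec2_inventory():
        print(item)
```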

Other challenges cited in the GAO report include agencies not having the necessary expertise to implement cloud services; a Health and Human Services official reported that it’s difficult to teach staff a new set of procedures, such as monitoring performance in a cloud environment. Another challenge: Ensuring data portability and interoperability by avoiding vendor lock-in.

Sounds familiar, right? Security, cloud provider transparency, lack of expertise and vendor lock-in are all issues that organizations, both public and private, are wrestling with as they try to take advantage of cloud services. In its report, the GAO notes that recently issued federal guidance and initiatives, including NIST guidance and FedRAMP, address some of these issues. Still, the issues are far from easy to solve. The journey to the cloud is a pretty bumpy one right now, as the GAO report found.

July 13, 2012  10:56 PM

Yahoo fixes flaw that led to password breach

Marcia Savage

Well, it could have been worse. Yahoo on Friday said it has fixed the vulnerability – reportedly a SQL injection flaw – that allowed hackers to expose approximately 450,000 email addresses and passwords belonging to the Yahoo Contributor Network. That’s a huge number, but still small potatoes compared to the half a billion visitors Yahoo claims each month.

The online giant said in a blog post Friday that the compromised data was an older file containing email addresses and passwords provided by writers who joined Associated Content prior to May 2010, when Yahoo acquired it and renamed it the Yahoo Contributor Network. “This compromised file was a standalone file that was not used to grant access to Yahoo systems and services,” Yahoo said.

In addition to fixing the vulnerability that led to the breach, the company said it deployed additional security measures for affected Yahoo users, boosted its underlying security controls and is notifying affected users. “In addition, we will continue to take significant measures to protect our users and their data,” Yahoo said.

Yahoo’s blog post touted its response to the breach as “swift” but the company had already taken a lot of punches since the reports of the breach were published Thursday. Some security pros berated Yahoo for lack of security while others expressed mock surprise that the struggling company still had so many members. For sure, the breach – the latest in a series of password breaches – is yet another reminder of the need for users to be more careful about the passwords they create and for companies to take proper steps to secure those passwords.


July 11, 2012  5:04 PM

AWS outage doesn’t discourage Netflix from banking on the cloud

Marcia Savage

Last month’s Amazon Web Services cloud outage sparked a lot of online discussion and debate over the viability of cloud services. According to published reports, an online dating company ditched AWS after massive storms caused power outages and knocked out service in one of the Availability Zones in Amazon’s US East-1 Region on June 29.

But Netflix – one of Amazon’s biggest cloud customers – said it remains “bullish on the cloud” despite the AWS outage. In a blog post Friday, Greg Orzell, software architect at Netflix, and Ariel Tseitlin, director of cloud solutions at the company, wrote a post-mortem of the outage, which they said was one of the most significant Netflix had experienced in over a year. The outage revealed things that both AWS and Netflix could do better, they wrote.

“Our own root-cause analysis uncovered some interesting findings, including an edge-case in our internal mid-tier load-balancing service,” they wrote. “This caused unhealthy instances to fail to deregister from the load balancer which black-holed a large amount of traffic into the unavailable zone. In addition, the network calls to the instances in the unavailable zone were hanging, rather than returning no route to host.”

Netflix is working to improve its resiliency and is working closely with Amazon on ways to improve the cloud provider’s systems, “focusing our efforts on eliminating single points of failure that can cause region-wide outage and isolating the failures of individual zones,” Orzell and Tseitlin wrote.

“While it’s easy and common to blame the cloud for outages because it’s outside of our control, we found that our overall availability over the past several years has steadily improved,” they wrote. “When we dig into the root causes of our biggest outages, we find that we can typically put in resiliency patterns to mitigate service disruption.”
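The “hanging, rather than returning no route to host” detail from the post-mortem points at one of the simplest resiliency patterns available: fail fast. Bound every cross-zone call with an aggressive timeout and treat a hang like a down host. A minimal sketch in Python (hosts and ports are illustrative; this is my simplification, not Netflix’s code):

```python
import socket

def call_service(host: str, port: int, payload: bytes,
                 timeout_s: float = 0.5):
    """Fail fast: a hung connection is treated like a down host."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s) as sock:
            sock.settimeout(timeout_s)
            sock.sendall(payload)
            return sock.recv(4096)
    except (socket.timeout, OSError):
        return None  # caller can retry against a healthy zone

# Usage: prefer an instance in another zone when one call times out.
for host in ["10.0.1.5", "10.0.2.5"]:  # instances in two zones (illustrative)
    reply = call_service(host, 8080, b"ping")
    if reply is not None:
        break
```

Real tooling layers circuit breakers and health checks on top of this, but the principle is the same: a quick, explicit failure lets the caller route around an unhealthy zone instead of black-holing traffic into it.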

Last summer, I attended a session at the Gartner Catalyst Conference 2011 on planning for resiliency in the cloud. Richard Jones, a managing vice president at Gartner, said the public cloud is a utility, and utilities fail, making it critical that customers prepare for downtime. Enterprises often assume cloud services are reliable, but they need to take responsibility for uptime, he said.

Seemed like sound advice to me. Other companies may want to look to Netflix for cues on planning for cloud resiliency.

July 9, 2012  1:20 PM

DNSChanger malware problems unlikely

Robert Westervelt

DNSChanger infections have declined precipitously, but remaining systems could have Internet access turned off today.

It appears the Internet will not be thrown into turmoil as a result of the FBI shutting down the servers feeding systems containing DNSChanger malware.

The DNS Changer Working Group, made up of experts from security firms, DNS providers and the government, has been tracking infections. As of June 11, there were only about 69,000 DNSChanger infections in the United States and far fewer in any other single country. The working group also estimated that globally, approximately 303,000 systems still contained the malware.

When the FBI arrested six Estonian nationals in November, charging them with running a sophisticated Internet fraud ring, investigators seized servers in data centers in Estonia, New York, and Chicago that were pointing victims to spoofed websites. The FBI estimated at the time that there were 500,000 infections in the U.S. and up to 4 million abroad.

Given the news coverage aimed at consumers with little knowledge of the malware, it is very likely the number of infections has declined further, although the working group hasn’t released updated figures. When the replacement DNS servers set up to avoid disruption are turned off today, there likely won’t be any serious problems. The story has still generated a number of hyped headlines, including “Internet doomsday virus” and “Internet blackout looms.” Let’s put this in context: There are still 2.5 million machines infected with Conficker.

The DNSChanger malware is a good example of the need for increased security vigilance on the part of average computer users. That vigilance can go a long way toward reducing the number of serious incidents by disrupting the spread of malware. The working group has a great security protection Web page that leads computer users to additional information about phishing, antimalware and Windows 7 security features. The links lead to solid information from the U.S. Computer Emergency Readiness Team, the Carnegie Mellon CyLab Usable Privacy and Security Laboratory and the FBI. The advice is good, and it comes without the marketing spin designed to sell security software.

Another great resource that puts the DNSChanger problem into context comes from Public Safety Canada, which published a document in November. Its DNS Changer TDSS/Alureon/TidServ/TDL4 Malware Web page has been updated to help people determine whether their systems have been infected, and it contains tools to help victims remove the infection.

Checking a system can be done simply by visiting a website, or manually, depending on your operating system.
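The manual check boils down to comparing your configured DNS servers against the rogue address ranges the FBI published for DNSChanger. A rough Python sketch (the ranges below are the ones published at the time; how you find your resolver address varies by operating system):

```python
from ipaddress import ip_address, ip_network

# Rogue DNS server ranges published by the FBI for DNSChanger.
ROGUE_RANGES = [
    ip_network(cidr) for cidr in (
        "85.255.112.0/20", "67.210.0.0/20", "93.188.160.0/21",
        "77.67.83.0/24", "213.109.64.0/20", "64.28.176.0/20",
    )
]

def is_rogue(dns_server: str) -> bool:
    addr = ip_address(dns_server)
    return any(addr in net for net in ROGUE_RANGES)

# On Unix-like systems the resolver usually appears in /etc/resolv.conf:
with open("/etc/resolv.conf") as fh:
    for line in fh:
        if line.startswith("nameserver"):
            server = line.split()[1]
            print(server, "ROGUE" if is_rogue(server) else "ok")
```

On Windows, the equivalent manual check is comparing the DNS servers listed by ipconfig /all against the same ranges.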

June 28, 2012  1:08 PM

Putting the mobile botnet threat in perspective

Jane Wright

Mobile device security threats are taking center stage as IT managers strive to protect and control these nimble creatures, which contain company information and access the company network. But looking at the big picture of all IT security concerns, just how significant are specific types of mobile device threats? According to one expert, mobile botnets, at least, should not keep you awake at night.

Mobile botnets are created when an attacker infects a number of mobile devices with malicious software. The infected devices communicate with other mobile devices, spreading the infection and growing the botnet. The attacker’s goal, in theory, is to gain root control of the mobile devices in order to use their combined bandwidth and computing power for nefarious ends.

In an interview with News Director Rob Westervelt, Joe Stewart, director of malware research at Dell SecureWorks, provided his perspective on the relative importance of the mobile botnet threat. Because mobile networks don’t have as much bandwidth as broadband connections, Stewart said, mobile botnets are not likely to be very profitable for the botnet operator.

“I don’t think you can say at this time that someone will get a whole lot of value out of a mobile botnet,” Stewart said. “There are certain categories where it is useful, but as a DDoS botnet, it would probably be pretty abysmal.”

However, findings from Symantec Corp. suggest revenue for the mobile botnet “industry” may be on the rise. Writing on Symantec’s official blog in February, Symantec Security Response Engineer Cathal Mullaney noted the discovery of one particular mobile botnet with the ability to use premium SMS scamming to generate millions of dollars a year.
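The economics are easy to sketch: premium-rate SMS turns each infected handset directly into revenue. A back-of-the-envelope calculation with made-up, illustrative numbers (not Symantec’s figures) shows how quickly it adds up:

```python
# Back-of-the-envelope math with purely illustrative numbers.
infected_devices = 50_000      # bots silently sending premium SMS
messages_per_day = 2           # kept low to avoid the user noticing
revenue_per_message = 0.50     # USD skimmed per premium-rate message

daily = infected_devices * messages_per_day * revenue_per_message
print(f"${daily:,.0f}/day -> ${daily * 365:,.0f}/year")
# -> $50,000/day -> $18,250,000/year
```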

Still, all indications suggest mobile botnets are a small niche in the overall threat landscape. Antimalware investments might be better spent in other areas right now, but be wary of a possible surge in mobile botnets in the future as attackers prey on the relatively easy-to-exploit vulnerabilities of mobile platforms.

June 27, 2012  9:55 PM

Operation High Roller: Server-side automation in online bank fraud

Marcia Savage

I’ve covered online bank fraud a lot in the past – there seems to be no end to the increasingly sneaky techniques cybercriminals develop to siphon money out of victims’ bank accounts. This week, McAfee Inc. and Guardian Analytics Inc. released the findings of their investigation into a global fraud ring that takes the old techniques up a notch.

In their report, “Dissecting Operation High Roller” (.pdf), the companies report that cybercriminals, building on older Zeus and SpyEye tactics, are targeting high-balance bank accounts belonging to businesses and individuals. Unlike past online bank fraud attacks using Zeus and SpyEye, though, these new attacks use server-side components and heavy automation. According to the report, the attacks have been mostly in Europe but are now spreading to the U.S.

Criminals have tried to steal more than $78 million in fraudulent transfers from at least 60 financial institutions, including large global banks, credit unions and regional banks, the report said.

In a blog post, Dave Marcus, director of advanced research and threat intelligence at McAfee, noted that by shifting from traditional man-in-the-browser attacks on a victim’s PC to server-side automation attacks, criminals have moved from multipurpose botnet servers to cloud-based servers that are purpose-built and dedicated to processing fraudulent transactions. The strategy, he said, helps criminals move faster and avoid detection.
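Automated, server-driven fraud tends to have a signature: rapid-fire transfers, amounts tuned to stay under review thresholds, brand-new payees. A toy velocity check of the kind fraud-analytics systems build on (thresholds are illustrative; this is my sketch, not Guardian Analytics’ algorithm):

```python
from datetime import datetime, timedelta

def flag_velocity(transfers: list,
                  window: timedelta = timedelta(minutes=10),
                  max_count: int = 3,
                  max_total: float = 10_000.0) -> bool:
    """Flag an account with too many, or too much, transferred in a short window.

    transfers: list of (datetime, amount) tuples for one account.
    """
    transfers = sorted(transfers)
    for i, (start, _) in enumerate(transfers):
        recent = [amt for ts, amt in transfers[i:] if ts - start <= window]
        if len(recent) > max_count or sum(recent) > max_total:
            return True
    return False
```

A human rarely fires off four wire transfers in ten minutes; a fraud server processing transactions automatically does exactly that, which is what makes even crude velocity rules useful.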

The report describes attacks in the U.S. and the Netherlands that used a server hosted at an ISP with “crime-friendly usage policies” and moved frequently to avoid discovery.

All pretty unsettling stuff, suffice it to say. And, according to the report, financial institutions can expect even more automated and creative forms of fraud in the future.

June 21, 2012  12:50 PM

Review your security contingency plan during the 2012 Olympic Games

Jane Wright

As the opening day of the 2012 Olympic Games nears, IT teams in the U.K. are busy expanding their companies’ security policies and reviewing their security contingency plans. They are preparing for 17 days of Games, which will surely produce crowded transportation systems, overloaded Internet connections, and employees whose attention may be diverted by swimming relays and equestrian events.

The Olympics provide a good opportunity for companies in the U.S. and around the world to review their security policies and plans, too. Security pros can watch how their peers in the U.K. handle the pressures and disruptions caused by the Olympics, and consider how they would handle such an event if it occurred in their city.

Security contingency plans, which are similar to disaster recovery plans or business continuity plans, lay out the steps IT should take as soon as a disruptive event occurs. The idea is to make important decisions in advance, and have the necessary resources already in place, so the team can react quickly to maintain the security of their company’s data and other IT assets. Yet, according to our application security expert Michael Cobb, many companies’ security contingency plans are either unrealistic or woefully out-of-date.

Could your company continue operating securely in a chaotic environment — whether that chaos is caused by a scheduled event, such as the Olympics, or by an unplanned natural event? The 2012 Olympics serve as a reminder for all firms to review and revise their security contingency plans in light of current concerns and resources.

The relatively quiet summer months may be a good time to set up components of your security contingency plan. One of the most important components to handle in advance is widespread telecommuting. During a major event, more employees may have to work from home. You can prepare now by having all employees sign a remote working policy agreement, test the security of their home Internet connections, and receive training on topics such as securely filing sensitive documents from their home offices.

The Olympics also provide a hook for continued security awareness training. The IT department could send out an email educating users about Olympic ticket scams, providing a helpful lesson for any too-good-to-be-true email offer. Or the IT department could run a summertime security contest, posting a short information security quiz on the company’s intranet and awarding gold, silver and bronze medals to the employees or departments that score highest.

Even if the Olympics are not held in the U.S. until at least 2024, there is bound to be a significant event that will affect your company’s security posture in the near future. Prepare and practice now so your security team can execute flawlessly and take home the gold.

June 14, 2012  2:49 PM

Opinion: LinkedIn hacking incident betrays users’ trust

Jane Wright

Security awareness training often teaches the importance of password length and password complexity, but these best practices, as it turns out, may be creating a false sense of security. Even worse, users who cooperate and create long, complex passwords may feel betrayed when the organizations they trusted prove fallible and their passwords are hacked.

The recent LinkedIn hacking incident, in which 6.4 million LinkedIn passwords were stolen (or possibly leaked), demonstrated that the strength of a user’s password is no defense when an Internet application provider is attacked. Even if every LinkedIn password had been as long and complex as possible, it wouldn’t have mattered; the hackers still found the hashed LinkedIn passwords and posted them on a Russian forum for all to see.

According to some analysts reviewing the LinkedIn breach, the social networking site had failed to protect users’ passwords with a strong hashing scheme, reportedly storing them as unsalted SHA-1 hashes. That’s where the sense of betrayal comes in. If users are doing their part by using strong passwords, they should be able to trust the application provider to take strong precautions, too.

The incident spurred LinkedIn to take stronger precautions. In a blog post, LinkedIn said it would use better hashing and salting to protect its account databases going forward.
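“Hashing and salting” in practice means storing a per-user random salt plus the output of a deliberately slow key-derivation function, so a leaked database can’t be cracked wholesale with precomputed tables. A minimal sketch using Python’s standard library (PBKDF2 here; the parameters are illustrative, and nothing below reflects LinkedIn’s actual implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, derived_key) for storage alongside the account."""
    salt = os.urandom(16)  # unique random salt per user
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              iterations=100_000)  # deliberately slow
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                              iterations=100_000)
    return hmac.compare_digest(key, stored_key)  # constant-time compare
```

The salt defeats precomputed rainbow tables, and the iteration count makes each individual guess expensive, which is exactly what an unsalted, fast hash like plain SHA-1 fails to do.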

Organizations can learn from LinkedIn’s public mea culpa. If your IT staff has been lecturing users on strong passwords, but your organization’s passwords are stolen, how will your users react? After years of building trust between IT and users, an incident like this could destroy the relationship in one day.

The LinkedIn incident is a reminder of the need to properly balance responsibility for secure access management among users and IT. Yes, user training is important, but IT security teams must go the extra mile to protect account credentials and prove themselves worthy of users’ trust.
