It seems the Federal Financial Institutions Examination Council could have done a little better with its cloud computing advisory. Earlier this month, the FFIEC issued a statement on outsourced cloud computing. The resource document outlines key cloud computing risks financial institutions should consider.
In the document, the FFIEC said it considers cloud computing to be another form of outsourcing with “the same basic risk characteristics and risk management requirements as traditional outsourcing.”
Right there, I think a lot of security experts would disagree. Cloud computing involves new elements, namely multi-tenancy, that present different risks than traditional outsourcing models. The FFIEC cloud computing statement covers multi-tenancy and other issues associated with cloud computing, such as potential complications with regulatory compliance due to data location, but at a high level without much detail. The document also covers familiar ground like vendor management and due diligence, stressing the importance of both in cloud computing arrangements.
Perhaps the FFIEC figured others, such as the National Institute of Standards and Technology (NIST), have already provided ample guidance on cloud computing risks. Late last year, NIST released its Guidelines on Security and Privacy in Public Cloud Computing (.pdf), which covers threats and risks associated with public cloud computing and provides organizations with recommendations.
Still, banks look to the FFIEC for guidance, and if any industry needs to be careful with moving data into the cloud, it’s banks. The FFIEC’s rather cursory treatment of the subject is puzzling indeed.
A recent audit into U.S. federal agencies’ adoption of cloud computing services highlighted challenges that likely would resonate with private enterprises looking to move applications to the cloud.
The report by the U.S. Government Accountability Office looked at the progress seven agencies have made in implementing the White House’s “Cloud First” policy. According to GAO, agencies need to do better planning – of the 20 plans for implementing cloud solutions the agencies submitted, seven didn’t include estimated costs. None of the plans for meeting the federal cloud computing strategy included details on how legacy systems would be retired or repurposed.
What’s telling, though, is the list of cloud challenges GAO compiled after talking to agencies. Topping the list is concern over the ability – or inability – of cloud providers to meet federal security requirements. For example, State Department officials reported that cloud providers can’t match the department’s ability to monitor its systems in real time. Also, Treasury officials noted that meeting a FISMA requirement for maintaining a physical inventory is tough since they don’t have insight into the cloud provider’s infrastructure and assets.
Other challenges cited in the GAO report include agencies not having the necessary expertise to implement cloud services; a Health and Human Services official reported that it’s difficult to teach staff a new set of procedures, such as monitoring performance in a cloud environment. Another challenge: Ensuring data portability and interoperability by avoiding vendor lock-in.
Sounds familiar, right? Security, cloud provider transparency, lack of expertise and vendor lock-in are all issues that organizations, both public and private, are wrestling with as they try to take advantage of cloud services. In its report, the GAO notes that recently issued federal guidance and initiatives, including guidance from NIST and FedRAMP, address some of these issues. Still, the issues are far from easy to solve. The journey to the cloud is a pretty bumpy one right now, as the GAO report found.
Well, it could have been worse. Yahoo on Friday said it has fixed the vulnerability that allowed hackers to expose approximately 450,000 email addresses and passwords belonging to members of the Yahoo Contributor Network. That’s a huge number, but still small potatoes compared to the half billion visitors Yahoo claims each month.
The online giant said in a blog post Friday that the compromised data was an older file containing email addresses and passwords provided by writers who joined Associated Content prior to May 2010, when Yahoo acquired it and renamed it the Yahoo Contributor Network. “This compromised file was a standalone file that was not used to grant access to Yahoo systems and services,” Yahoo said.
In addition to fixing the vulnerability that led to the breach, the company said it deployed additional security measures for affected Yahoo users, boosted its underlying security controls and is notifying affected users. “In addition, we will continue to take significant measures to protect our users and their data,” Yahoo said.
Yahoo’s blog post touted its response to the breach as “swift” but the company had already taken a lot of punches since the reports of the breach were published Thursday. Some security pros berated Yahoo for lack of security while others expressed mock surprise that the struggling company still had so many members. For sure, the breach – the latest in a series of password breaches – is yet another reminder of the need for users to be more careful about the passwords they create and for companies to take proper steps to secure those passwords.
Last month’s Amazon Web Services cloud outage sparked a lot of online discussion and debate over the viability of cloud services. According to published reports, an online dating company ditched AWS after massive storms caused power outages and knocked out service in one of Amazon’s U.S. East-1 availability zones on June 29.
But Netflix – one of Amazon’s biggest cloud customers – said it remains “bullish on the cloud” despite the AWS outage. In a blog post Friday, Greg Orzell, a software architect at Netflix, and Ariel Tseitlin, the company’s director of cloud solutions, wrote a post mortem of the outage, which they said was one of the most significant Netflix had experienced in over a year. The outage revealed things that both AWS and Netflix could do better, they wrote.
“Our own root-cause analysis uncovered some interesting findings, including an edge-case in our internal mid-tier load-balancing service,” they wrote. “This caused unhealthy instances to fail to deregister from the load balancer which black-holed a large amount of traffic into the unavailable zone. In addition, the network calls to the instances in the unavailable zone were hanging, rather than returning no route to host.”
Netflix is working to improve its resiliency and is working closely with Amazon on ways to improve the cloud provider’s systems, “focusing our efforts on eliminating single points of failure that can cause region-wide outage and isolating the failures of individual zones,” Orzell and Tseitlin wrote.
“While it’s easy and common to blame the cloud for outages because it’s outside of our control, we found that our overall availability over the past several years has steadily improved,” they wrote. “When we dig into the root causes of our biggest outages, we find that we can typically put in resiliency patterns to mitigate service disruption.”
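One resiliency pattern the post mortem points to is failing fast: the hanging network calls Orzell and Tseitlin describe are exactly what an explicit timeout prevents. A minimal Python sketch of the idea, not Netflix’s implementation; the host below is a non-routable documentation address (RFC 5737), so the check fails quickly rather than hanging:

```python
import socket

# Fail-fast health check: without an explicit timeout, a call into an
# unavailable zone can hang indefinitely; with one, it fails quickly
# and the caller can route traffic to a healthy zone instead.
def check_backend(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refused and unreachable connections
        return False

# 192.0.2.1 is reserved for documentation (RFC 5737) and won't answer,
# so this returns False after at most one second instead of hanging.
print("backend healthy:", check_backend("192.0.2.1", 443))
```

The same principle applies to the deregistration bug they describe: a load balancer that treats a non-responsive instance as failed after a bounded wait stops black-holing traffic into an unavailable zone.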
Last summer, I attended a session at the Gartner Catalyst Conference 2011 on planning for resiliency in the cloud. Richard Jones, a managing vice president at Gartner, said the public cloud is a utility, and utilities fail, making it critical that customers prepare for downtime. Enterprises often assume cloud services are reliable, but they need to take responsibility for uptime, he said.
Seemed like sound advice to me. Other companies may want to look to Netflix for cues on planning for cloud resiliency.
DNSChanger infections have declined precipitously, but remaining systems could have Internet access turned off today.
It appears the Internet will not be thrown into turmoil as a result of the FBI shutting down the servers feeding systems containing DNSChanger malware.
The DNS Working Group, made up of a number of experts from security firms, DNS providers and the government, has been tracking infections. As of June 11, there were only about 69,000 DNSChanger infections in the United States and far fewer in other countries. The working group also estimated that globally there were approximately 303,000 systems containing the malware.
When the FBI arrested six Estonian nationals in November, charging them with running a sophisticated Internet fraud ring, investigators seized servers in data centers in Estonia, New York, and Chicago that were pointing victims to spoofed websites. The FBI estimated at the time that there were 500,000 infections in the U.S. and up to 4 million abroad.
Given the news coverage aimed at consumers with little knowledge of the malware, it is very likely that the number of infections has declined further, although the working group hasn’t released updated figures. When the replacement DNS servers designed to avoid disruption are turned off today, there aren’t likely to be any serious problems. The shutdown has nonetheless generated a number of hyped headlines, including “Internet doomsday virus” and “Internet blackout looms.” Let’s put this in context: There are still 2.5 million machines infected with Conficker.
The DNSChanger malware is a good example of the need for increased security vigilance on the part of average computer users. Such vigilance can go a long way toward reducing the number of serious incidents by disrupting the spread of malware. The working group has a great security protection Web page that leads computer users to additional information about phishing, antimalware and Windows 7 security features. The links lead to solid information from the U.S. Computer Emergency Readiness Team, the Carnegie Mellon CyLab Usable Privacy and Security Laboratory and the FBI. The advice is good, and comes without the marketing spin designed to sell security software.
Another great resource that puts the DNSChanger problem into context is Canada’s Public Safety office, which published a document in November. The Canadian DNS Changer TDSS/Alureon/TidServ/TDL4 Malware Web page has been updated to help people determine if their systems have been infected and contains tools to help victims remove the infection.
Checking a system can be done simply by visiting a website, or manually, depending on your operating system.
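For the manual route, the check amounts to comparing the DNS servers a machine is configured to use against the rogue address ranges publicized for DNSChanger. A small Python sketch, assuming the ranges the FBI published at the time; verify them against the working group’s current list before relying on this:

```python
import ipaddress

# Rogue DNS ranges published by the FBI for DNSChanger (assumed here
# from public reporting; confirm against the working group's list).
ROGUE_RANGES = [
    ipaddress.ip_network(n) for n in (
        "85.255.112.0/20",
        "67.210.0.0/20",
        "93.188.160.0/21",
        "77.67.83.0/24",
        "213.109.64.0/20",
        "64.28.176.0/20",
    )
]

def is_rogue_dns(server: str) -> bool:
    """Return True if a configured DNS server falls in a known rogue range."""
    addr = ipaddress.ip_address(server)
    return any(addr in net for net in ROGUE_RANGES)

# Check whichever resolvers the machine is configured to use,
# e.g. from /etc/resolv.conf or ipconfig /all output.
for server in ("85.255.113.5", "8.8.8.8"):
    print(server, "rogue" if is_rogue_dns(server) else "clean")
```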
Mobile device security threats are taking center stage as IT managers strive to protect and control these nimble creatures that contain company information and access the company network. But looking at the big picture of all IT security concerns, just how significant are specific types of mobile device threats? According to one expert, mobile botnets, at least, should not keep you awake at night.
Mobile botnets are created when an attacker infects a number of mobile devices with malicious software. The infected devices communicate with other mobile devices, thus spreading the infection and growing the botnet. The attacker’s goal, in theory, is to gain root control of the mobile devices in order to use their combined bandwidth and computing power for nefarious means.
In an interview with SearchSecurity.com News Director Rob Westervelt, Joe Stewart, director of malware research at Dell SecureWorks, provided his perspective on the relative importance of the mobile botnet threat. Because mobile networks don’t have as much bandwidth as broadband connections, Stewart said, mobile botnets are not likely to be very profitable for the botnet operator.
“I don’t think you can say at this time that someone will get a whole lot of value out of a mobile botnet,” Stewart said. “There are certain categories where it is useful, but as a DDoS botnet, it would probably be pretty abysmal.”
However, findings by Symantec Corp. suggest revenue for the mobile botnet “industry” may be on the rise. Writing in Symantec’s official blog in February, Symantec Security Response Engineer Cathal Mullaney noted the discovery of one particular mobile botnet that had the ability to use premium SMS scamming to generate millions of dollars a year.
Still, all indications suggest mobile botnets are a small niche in the overall threat landscape. Antimalware investments might be better spent in other areas right now, but be wary of a possible invasion of mobile botnets in the future as attackers prey on the relatively easy vulnerabilities of mobile platforms.
I’ve written a lot about online bank fraud in the past – there seems to be no end to the increasingly sneaky techniques cybercriminals develop to siphon money out of victims’ bank accounts. This week, McAfee Inc. and Guardian Analytics Inc. released the findings of their investigation into a global fraud ring that takes the old techniques up a notch.
In their report, “Dissecting Operation High Roller” (.pdf), the companies report cybercriminals – building on older Zeus and SpyEye tactics – are targeting high-balance bank accounts belonging to businesses and individuals. Unlike past online bank fraud attacks using Zeus and SpyEye, though, these new attacks use server-side components and heavy automation. According to the report, the attacks have been mostly in Europe, but are now spreading to the U.S.
Criminals have tried to steal more than $78 million in fraudulent transfers from at least 60 financial institutions, including large global banks, credit unions and regional banks, the report said.
In a blog post, Dave Marcus, director of advanced research and threat intelligence at McAfee, noted that by shifting from traditional man-in-the-browser attacks on a victim’s PC to server-side automation attacks, criminals have moved from multipurpose botnet servers to cloud-based servers that are purpose-built and dedicated to processing fraudulent transactions. The strategy, he said, helps criminals move faster and avoid detection.
The report describes attacks in the U.S. and the Netherlands as using a server hosted at an ISP with “crime-friendly usage policies” and moved frequently to avoid discovery.
All pretty unsettling stuff, suffice it to say. And, according to the report, financial institutions can expect even more automated and creative forms of fraud in the future.
As the opening day of the 2012 Olympic Games nears, IT teams in the U.K. are busy expanding their companies’ security policies and reviewing their security contingency plans. They are preparing for 17 days of Games, which will surely produce crowded transportation systems, overloaded Internet connections, and employees whose attention may be diverted by swimming relays and equestrian events.
The Olympics provide a good opportunity for companies in the U.S. and around the world to review their security policies and plans, too. Security pros can watch how their peers in the U.K. handle the pressures and disruptions caused by the Olympics, and consider how they would handle such an event if it occurred in their city.
Security contingency plans, which are similar to disaster recovery plans or business continuity plans, lay out the steps IT should take as soon as a disruptive event occurs. The idea is to make important decisions in advance, and have the necessary resources already in place, so the team can react quickly to maintain the security of their company’s data and other IT assets. Yet, according to our application security expert Michael Cobb, many companies’ security contingency plans are either unrealistic or woefully out-of-date.
Could your company continue operating securely in a chaotic environment — whether that chaos is caused by a scheduled event, such as the Olympics, or by an unplanned natural event? The 2012 Olympics serve as a reminder for all firms to review and revise their security contingency plans in light of current concerns and resources.
The relatively quiet summer months may be a good time to set up components of your security contingency plan. One of the most important components to handle in advance is widespread telecommuting. During a major event, more employees may have to work from home. You can prepare now by having all employees sign a remote working policy agreement, test the security of their home Internet connections, and receive training on topics such as securely filing sensitive documents from their home office.
The Olympics also provide a hook for continued security awareness training. The IT department could send out an email educating users about Olympic ticket scams, providing a helpful lesson for any too-good-to-be-true email offer. Or the IT department could run a summertime security contest, posting a short information security quiz on the company’s Intranet and awarding gold, silver and bronze medals to the employees or departments who score highest on the quiz.
Even if the Olympics are not held in the U.S. until at least 2024, there is bound to be a significant event that will affect your company’s security posture in the near future. Prepare and practice now so your security team can execute flawlessly and take home the gold.
Security awareness training often teaches the importance of password length and password complexity, but these best practices, as it turns out, may be creating a false sense of security. Even worse, users who cooperate and create long, complex passwords may feel betrayed when the organizations they trusted prove fallible and their passwords are hacked.
The recent LinkedIn hacking incident, in which 6.4 million LinkedIn passwords were stolen (or possibly leaked), demonstrated that the strength of a user’s password is no defense when an Internet application provider is attacked. Even if every LinkedIn password had been as long and complex as possible, it wouldn’t have mattered; the Russian hackers still found the hashed LinkedIn passwords and posted them for all to see.
According to some analysts reviewing the LinkedIn breach, the social networking site had failed to protect users’ passwords with a strong hashing algorithm. That’s where the sense of betrayal comes in. If users are doing their part by using strong passwords, they should be able to trust the application provider to take strong precautions, too.
The incident spurred LinkedIn to take stronger precautions. In a blog post, LinkedIn said it would use better hashing and salting to protect its account databases in the future.
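For readers wondering what “better hashing and salting” actually buys, here is a minimal sketch using Python’s standard library and PBKDF2. The iteration count and salt size are illustrative choices, not LinkedIn’s actual parameters; a per-user random salt defeats precomputed rainbow tables, and key stretching makes brute-forcing each stolen hash expensive:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune the work factor to your hardware

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage; never store the plain password."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Contrast this with a single unsalted hash per password, where one precomputed table cracks every account that chose the same password.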
Organizations can learn from LinkedIn’s public mea culpa. If your IT staff has been lecturing users on strong passwords, but your organization’s passwords are stolen, how will your users react? After years of building trust between IT and users, an incident like this could destroy the relationship in one day.
The LinkedIn incident is a reminder of the need to properly balance responsibility for secure access management among users and IT. Yes, user training is important, but IT security teams must go the extra mile to protect account credentials and prove themselves worthy of users’ trust.
Wednesday’s Cornerstones of Trust Conference featured an interesting CSO discussion of some of the hottest topics infosecurity pros are dealing with today, including the BYOD trend, cloud computing and big data security. The annual conference, held in Foster City, Calif., is sponsored by ISSA’s Silicon Valley and San Francisco chapters, and San Francisco Bay Area InfraGard.
Mobile, cloud and BYOD are all part of an overarching trend towards consumerization of IT that’s driving demand for convenient, easy access to corporate data, said Preston Wood, CSO at Zions Bancorporation, a Salt Lake City-based bank holding company. “We need to find a way to enable that and not be a roadblock,” he said.
At Cisco Systems, the mobile trend is far from new, said Steve Martino, a Cisco vice president in charge of information security for the networking giant. Thirty percent of the workforce has more than two mobile devices. “If we try to prevent it, they’ll find ways around it,” he said.
Instead, organizations should consider flexible mobile policies that permit network access based on the user, device and location, Martino said. For example, a user with a phone that doesn’t have mobile device management (MDM) software may get access to some services but not others.
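A hypothetical sketch of how such an attribute-based policy might look in code; the roles, service names and rules below are invented for illustration and are not Cisco’s actual policy:

```python
# Access decision keyed on user role, device posture (MDM-managed or not)
# and location, rather than a flat allow/deny. All names are illustrative.
def allowed_services(role: str, has_mdm: bool, on_corp_network: bool) -> set:
    services = {"email", "calendar"}  # baseline for any authenticated user
    if has_mdm:
        services |= {"intranet", "file_shares"}  # managed devices get more
    if has_mdm and on_corp_network and role == "engineer":
        services.add("source_control")  # most sensitive, most conditions
    return services

# A phone without MDM software gets some services but not others.
print(sorted(allowed_services("engineer", has_mdm=False, on_corp_network=True)))
```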
With cloud computing, information security’s historic reliance on preventative controls won’t work so well, Wood said. The cloud trend presents the opportunity to focus more on detective controls, rapid response and risk mitigation. Each organization will have a different risk appetite, and some aspects of the business will still require preventative controls. “There’s no one-size-fits-all,” Wood said. “You need to ask the business that risk question.”
On the topic of big data security – using big data techniques for security analytics – Wood suggested organizations can get started on that path by digging into data they already have on hand, such as firewall or IDS logs. Administrators often don’t look back to see whether firewall policies are still working; that might be an area to explore, he said. The approach of mining data for security insight builds on itself.
“Start with what you already have,” Wood said. “And start by asking some innovative questions of that data.”
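Wood’s advice is easy to make concrete: even a few lines run against logs you already collect can surface questions worth asking. A toy Python sketch tallying denied connections per source; the log format here is invented for illustration:

```python
from collections import Counter

# Mine existing firewall logs for denied connections by source address.
# The format below is a made-up example; adapt the parsing to your logs.
sample_log = """\
2012-06-20T10:01:02 DENY 203.0.113.7 -> 10.0.0.5:22
2012-06-20T10:01:03 ALLOW 198.51.100.2 -> 10.0.0.8:443
2012-06-20T10:01:05 DENY 203.0.113.7 -> 10.0.0.5:23
2012-06-20T10:01:09 DENY 192.0.2.44 -> 10.0.0.5:3389
"""

denied = Counter()
for line in sample_log.splitlines():
    timestamp, action, src, _, dst = line.split()
    if action == "DENY":
        denied[src] += 1

# Sources with repeated denials are a starting point for the kind of
# "innovative questions" Wood describes: is this scanner new? Is the
# rule that blocked it still doing its job?
for src, count in denied.most_common():
    print(src, count)
```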
Earlier in the day, Wood presented a keynote on big data and security analytics, which unfortunately I missed, but I did cover his presentation at RSA Conference 2012, as did many other reporters. His RSA presentation was widely covered, and justly so. He’s put into practice what others are only talking about at a conceptual level. At RSA, he and others from Zions detailed how the company harnessed information from its disparate security data sources by developing a Hadoop-based security data warehouse. Using big data techniques enabled the company to speed forensics investigations and improve fraud detection and overall security, they said.
On Wednesday, Wood also offered some career advice to security pros: Don’t limit yourself to the “echo chamber of security.” Security pros should try to learn about other disciplines; big data security, for example, offers the opportunity to reach out to business units that have experience with analytics, he said.
At Cisco, employees are rotated, for example, from security to IT or from a business unit into security, Martino said. That practice helps the security organization understand the pain points throughout the business, he said. The company also has created security advocates in other parts of the business, which gets others involved in security.
Wood also urged attendees to spend more time on strategy. A lot of security organizations find themselves fighting fires all the time instead of looking at the big picture, he said. Security teams need people with the skills to deal with daily operations but who can also look ahead and strategize.