I’ve written a lot about online bank fraud in the past – there seems to be no end to the increasingly sneaky techniques cybercriminals develop to siphon money out of victims’ bank accounts. This week, McAfee Inc. and Guardian Analytics Inc. released the findings of their investigation into a global fraud ring that takes those old techniques up a notch.
In their report, “Dissecting Operation High Roller” (.pdf), the companies report that cybercriminals – building on older Zeus and SpyEye tactics – are targeting high-balance bank accounts belonging to businesses and individuals. Unlike past online bank fraud attacks using Zeus and SpyEye, though, these new attacks use server-side components and heavy automation. According to the report, the attacks have been mostly in Europe, but are now spreading to the U.S.
Criminals have tried to steal more than $78 million in fraudulent transfers from at least 60 financial institutions, including large global banks, credit unions and regional banks, the report said.
In a blog post, Dave Marcus, director of advanced research and threat intelligence at McAfee, noted that by shifting from traditional man-in-the-browser attacks on a victim’s PC to server-side automation attacks, criminals have moved from multipurpose botnet servers to cloud-based servers that are purpose-built and dedicated to processing fraudulent transactions. The strategy, he said, helps criminals move faster and avoid detection.
The report describes attacks in the U.S. and the Netherlands as using a server hosted by an ISP with “crime-friendly usage policies” and moved frequently to avoid discovery.
All pretty unsettling stuff, suffice it to say. And, according to the report, financial institutions can expect even more automated and creative forms of fraud in the future.
As the opening day of the 2012 Olympic Games nears, IT teams in the U.K. are busy expanding their companies’ security policies and reviewing their security contingency plans. They are preparing for 17 days of Games, which will surely produce crowded transportation systems, overloaded Internet connections, and employees whose attention may be diverted by swimming relays and equestrian events.
The Olympics provide a good opportunity for companies in the U.S. and around the world to review their security policies and plans, too. Security pros can watch how their peers in the U.K. handle the pressures and disruptions caused by the Olympics, and consider how they would handle such an event if it occurred in their city.
Security contingency plans, which are similar to disaster recovery plans or business continuity plans, lay out the steps IT should take as soon as a disruptive event occurs. The idea is to make important decisions in advance, and have the necessary resources already in place, so the team can react quickly to maintain the security of their company’s data and other IT assets. Yet, according to our application security expert Michael Cobb, many companies’ security contingency plans are either unrealistic or woefully out-of-date.
Could your company continue operating securely in a chaotic environment — whether that chaos is caused by a scheduled event, such as the Olympics, or by an unplanned natural event? The 2012 Olympics serve as a reminder for all firms to review and revise their security contingency plans in light of current concerns and resources.
The relatively quiet summer months may be a good time to set up components of your security contingency plan. One of the most important components to handle in advance is widespread telecommuting. During a major event, more employees may have to work from home. You can prepare now by having all employees sign a remote working policy agreement, test the security of the Internet connections in their homes, and receive training on topics such as securely filing sensitive documents from their home offices.
The Olympics also provide a hook for continued security awareness training. The IT department could send out an email educating users about Olympic ticket scams, providing a helpful lesson for any too-good-to-be-true email offer. Or the IT department could run a summertime security contest, posting a short information security quiz on the company’s intranet and awarding gold, silver and bronze medals to the employees or departments who score highest on the quiz.
Even though the Olympics won’t be held in the U.S. until at least 2024, there is bound to be a significant event that will affect your company’s security posture in the near future. Prepare and practice now so your security team can execute flawlessly and take home the gold.
Security awareness training often teaches the importance of password length and password complexity, but these best practices, as it turns out, may be creating a false sense of security. Even worse, users who cooperate and create long, complex passwords may feel betrayed when the organizations they trusted prove fallible and their passwords are hacked.
The recent LinkedIn hacking incident, in which 6.4 million LinkedIn passwords were stolen (or possibly leaked), demonstrated that the strength of a user’s password is no defense when an Internet application provider is attacked. Even if each LinkedIn password was as long and complex as possible, it wouldn’t have mattered; the Russian hackers still found the hashed LinkedIn passwords and posted them for all to see.
According to some analysts reviewing the LinkedIn breach, the social networking site had failed to protect users’ passwords with a strong hashing algorithm. That’s where the sense of betrayal comes in. If users are doing their part by using strong passwords, they should be able to trust the application provider to take strong precautions, too.
The incident spurred LinkedIn to take stronger precautions. In a blog post, LinkedIn said it would use better hashing and salting to protect its account databases in the future.
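For readers curious what “hashing and salting” actually buys you, here is a minimal sketch of the technique using only Python’s standard library. The iteration count, salt size and storage format are illustrative assumptions, not LinkedIn’s actual scheme; the point is that a random per-user salt defeats precomputed lookup tables, and a slow, iterated hash makes brute-forcing stolen hashes far more expensive than a single fast pass of SHA-1.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> str:
    """Hash a password with a random per-user salt and an iterated KDF."""
    salt = os.urandom(16)  # unique salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()  # store salt alongside the hash

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    salt_hex, digest_hex = stored.split("$")
    salt = bytes.fromhex(salt_hex)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest.hex(), digest_hex)
```

Even with a scheme like this, an attacker who steals the database can still guess weak passwords one at a time; salting and stretching only raise the cost per guess.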
Organizations can learn from LinkedIn’s public mea culpa. If your IT staff has been lecturing users on strong passwords, but your organization’s passwords are stolen, how will your users react? After years of building trust between IT and users, an incident like this could destroy the relationship in one day.
The LinkedIn incident is a reminder of the need to properly balance responsibility for secure access management among users and IT. Yes, user training is important, but IT security teams must go the extra mile to protect account credentials and prove themselves worthy of users’ trust.
Wednesday’s Cornerstones of Trust Conference featured an interesting CSO discussion of some of the hottest topics infosecurity pros are dealing with today, including the BYOD trend, cloud computing and big data security. The annual conference, held in Foster City, Calif., is sponsored by ISSA’s Silicon Valley and San Francisco chapters, and San Francisco Bay Area InfraGard.
Mobile, cloud and BYOD are all part of an overarching trend towards consumerization of IT that’s driving demand for convenient, easy access to corporate data, said Preston Wood, CSO at Zions Bancorporation, a Salt Lake City-based bank holding company. “We need to find a way to enable that and not be a roadblock,” he said.
At Cisco Systems, the mobile trend is far from new, said Steve Martino, a Cisco vice president in charge of information security for the networking giant. Thirty percent of its workforce has more than two mobile devices. “If we try to prevent it, they’ll find ways around it,” he said.
Instead, organizations should consider flexible mobile policies that permit network access based on the user, device and location, Martino said. For example, a user with a phone that doesn’t have mobile device management (MDM) software may get access to some services but not others.
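A tiered policy like the one Martino describes boils down to a small decision table. The sketch below is hypothetical (the service names and rules are invented for illustration, not Cisco’s actual policy), but it shows the shape of access decisions based on user, device posture and location:

```python
def allowed_services(user_role: str, has_mdm: bool, on_corporate_network: bool) -> set:
    """Hypothetical tiered-access decision: more trust earns more services."""
    services = {"email"}  # baseline for any authenticated user
    if has_mdm:
        # Managed devices (MDM enrolled) unlock more sensitive services
        services |= {"intranet", "file_shares"}
    if has_mdm and on_corporate_network and user_role == "engineer":
        # The most sensitive service requires every condition at once
        services |= {"source_code"}
    return services

# An unmanaged personal phone gets email and nothing else
print(allowed_services("engineer", has_mdm=False, on_corporate_network=True))
```

The design choice worth noting is that the policy is additive: rather than blocking unmanaged devices outright (which, as Martino warns, drives users to work around IT), each level of verified trust unlocks another tier of access.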
With cloud computing, information security’s historic reliance on preventative controls won’t work so well, Wood said. The cloud trend presents the opportunity to focus more on detective controls, rapid response and risk mitigation. Each organization will have a different risk appetite, and some aspects of the business will still require preventative controls. “There’s no one-size-fits-all,” Wood said. “You need to ask the business that risk question.”
On the topic of big data security – using big data techniques for security analytics – Wood suggested organizations can get started on that path by digging into data they already have on hand, such as firewall or IDS logs. Administrators often don’t look back to see if firewall policies are still working – that might be an area to explore, he said. The approach of mining existing data for security insight builds on itself.
“Start with what you already have,” Wood said. “And start by asking some innovative questions of that data.”
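Wood’s “start with what you already have” advice can be taken quite literally. As a toy illustration (the log format and the flagging threshold here are assumptions, not from any product), a few lines of Python can ask one simple question of existing firewall logs: which source addresses are being denied most often?

```python
from collections import Counter

def top_denied_sources(log_lines, threshold=2):
    """Count DENY events per source IP; flag sources at or above threshold."""
    denies = Counter()
    for line in log_lines:
        parts = line.split()  # assumed format: ACTION src=IP dst=IP port=N
        if parts[0] != "DENY":
            continue
        fields = dict(p.split("=", 1) for p in parts[1:])
        denies[fields["src"]] += 1
    return {ip: n for ip, n in denies.items() if n >= threshold}

# Made-up sample entries using documentation IP ranges
logs = [
    "DENY src=203.0.113.7 dst=10.0.0.5 port=22",
    "ALLOW src=198.51.100.2 dst=10.0.0.8 port=443",
    "DENY src=203.0.113.7 dst=10.0.0.5 port=23",
    "DENY src=192.0.2.9 dst=10.0.0.5 port=3389",
]
print(top_denied_sources(logs))
```

A repeat offender probing multiple ports, as in the sample above, is exactly the kind of pattern that sits unexamined in logs most shops already collect.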
Earlier in the day, Wood presented a keynote on big data and security analytics, which unfortunately I missed, but I did cover his presentation at RSA Conference 2012, as did many other reporters. His RSA presentation was widely covered, and justly so. He’s put into practice what others are only talking about at a conceptual level. At RSA, he and others from Zions detailed how the company harnessed information from its disparate security data sources by developing a Hadoop-based security data warehouse. Using big data techniques enabled the company to speed forensics investigations and improve fraud detection and overall security, they said.
On Wednesday, Wood also offered some career advice to security pros: Don’t limit yourself to the “echo chamber of security.” Security pros should try to learn about other disciplines; big data security, for example, offers the opportunity to reach out to business units that have experience with analytics, he said.
At Cisco, employees are rotated, for example, from security to IT or from a business unit into security, Martino said. That practice helps the security organization understand the pain points throughout the business, he said. The company also has created security advocates in other parts of the business, which gets others involved in security.
Wood also urged attendees to spend more time on strategy. A lot of security organizations find themselves fighting fires all the time instead of looking at the big picture, he said. Security teams need people with the skills to deal with daily operations but who can also look ahead and strategize.
Unemployment is at 0% for information security professionals! This good news was reported this spring in CompTIA’s 9th annual Information Security Trends report. The report cited U.S. Bureau of Labor Statistics (BLS) research conducted in the spring of 2011, which also noted the unemployment rate at just under 4% for the IT industry overall. Clearly, skilled security professionals should have no trouble getting information technology security jobs right now.
But companies are having trouble filling those jobs. According to CompTIA’s survey of 500 IT and business executives in the U.S., conducted at the end of 2011, 40% of companies are having difficulty hiring IT security specialists.
During a recent conversation with Todd Thibodeaux, president and CEO of CompTIA, I asked him why companies are having hiring problems, and I expected his answer would relate to the need for more CompTIA certifications. Or perhaps he’d say companies can’t pay enough to hire the talent they need. But Thibodeaux’s response brought up another perspective on the hiring challenge. He believes organizations are having trouble hiring IT security pros in the U.S. partly because of depressed housing values.
“The challenge is recruiting within physical regions,” Thibodeaux said. “Organizations don’t want to outsource their security, and they certainly don’t want to off-shore their security. So they need to hire locally.”
Yet with many IT professionals underwater on their mortgages right now, would-be employees are unable to move to take new jobs. So even though hiring organizations are willing to pay good salaries, they are largely at the mercy of larger economic forces beyond their control.
This phenomenon is more noticeable in some parts of the country, Thibodeaux said. Areas with high concentrations of technology companies are fortunate enough to have a larger pool of IT professionals from which to hire. But for companies not located in high-tech regions, it appears hiring has stalled. Companies and employees alike are waiting for home values to rise so people can move to fill IT security job gaps.
Is the answer simply to wait out the housing market? Thibodeaux believes a better answer may lie in college education. “Many colleges want to teach, not train,” Thibodeaux said. “But companies need people coming out of college who have been trained in technical skills.”
Perhaps this unusual situation of low unemployment in IT security combined with low home values will motivate some U.S. colleges to beef up their IT security courses with more hands-on training. Sure, that will take time — at least four years if incoming freshmen start now. But with home values inching back up slowly, those four years may turn out to be the quicker fix.
Security experts have warned about the potential problems caused by military cyberstrikes. Experts say cyberwarfare is difficult to plan and, worse, puts innocent people at risk.
Stuxnet was part of a secret joint U.S.-Israeli cyberattack operation which began with approval by the Bush Administration and continued with the nod from the Obama White House, according to a detailed account of the attack written by David Sanger in a report published today in the New York Times.
To put the pieces of the Stuxnet puzzle together, Sanger conducted interviews with unnamed sources involved with the Stuxnet operation dubbed “Olympic Games.” While it confirms a lot of speculation about the nation-states behind the Stuxnet worm, it also raises a lot of questions about cyberwarfare and its use by a sitting president. Should members of Congress have been notified of the operation? Were any U.S. citizens put at risk?
Even well-planned military cyberstrikes go wrong
A 2009 study by the nonprofit research firm RAND Corp. urged the United States not to invest in offensive cyberweapons. It is too difficult to predict the outcome of an attack, making strategic planning a guessing game, according to the report’s author, Martin C. Libicki. “Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked,” Libicki wrote. Indeed, according to the Times story, Stuxnet clearly caused some disruption, but it was anyone’s guess as to how far it set back Iran’s nuclear program.
Even worse, Sanger’s account of the operation detailed a major coding error that enabled the offensive malware to escape into the wild, leading to its detection and analysis by antimalware vendors. There were facilities in the United States using the Siemens systems the worm sought out. While the threat was minimal – Stuxnet still would have had to get through the buffer zone isolating a facility from the Internet – those quoted in Sanger’s story said it was easy to get through the Iranian facility’s buffer zone using a simple thumb drive. I’ve heard of penetration testers using this trick to great success: dropping thumb drives in areas throughout a targeted organization to see if any curious employees would insert the device into their computer. “It turns out there is always an idiot around who doesn’t think much about the thumb drive in their hand,” an unnamed official said of how Stuxnet was planted at the underground uranium enrichment site in Natanz, Iran.
If that’s the case, then the operation certainly could have put U.S. citizens at risk right here on our own soil. It also has the potential to fan the flames of retaliation or similar offensive cyberwarfare operations from our adversaries. We’ve already seen reports that government agencies and even critical infrastructure facilities, such as power plants, have been penetrated in some way.
Network security luminary Marcus Ranum, CSO of Tenable Network Security, told SearchSecurity about his concern over militarized cyberspace and even outlined the problem caused by the Stuxnet-like strikes.
Critical infrastructure protection
I wrote about a 2010 report by the Center for Strategic and International Studies (CSIS), which consisted of a global survey of more than 600 IT pros at critical infrastructure facilities. The main finding was that systems that run power plants, manage the distribution of hazardous chemicals and help monitor water treatment plants are in dire need of stronger safeguards. The survey found that those facilities are under a constant barrage of attacks. A U.S.-China Economic Review Commission report last October cited a significant attack targeting U.S. satellites. The examples go on and on.
But the problem goes beyond the potential threat to power plants and oil and chemical refineries. Earlier this year researchers demonstrated a theoretical attack targeting the systems that control the locking mechanisms at a prison. Imagine the chaos cybercriminals could cause if they were to target the prison system.
There is plenty of recognition of the seriousness of the problem, but very little transparency into where the nation stands on protecting critical assets, said Andy Purdy, chief cybersecurity strategist at CSC and a member of the team that developed the U.S. National Strategy to Secure Cyberspace in 2003. In an interview I had with Purdy at the 2012 RSA Conference, he cited some progress, but acknowledged that the lack of transparency leaves authorities very little information with which to track the nation’s progress in protecting critical systems. Purdy pointed to substantial federal funding being invested in SCADA system security, the progress of the Industrial Control Systems CERT, and several plans and reports outlining the role of the public and private sectors in protecting critical systems, establishing digital identities for Internet users, and defining the role ISPs should play in handling customers with compromised systems.
Perhaps security luminary Dan Geer is thinking ahead to disaster recovery after a cyberstrike. He speaks tirelessly at security conferences and summits about the need for system redundancy and manual processes to help lessen the disruption and chaos when Internet-connected systems fail. Not only do we need redundant systems and manual processes, but we need skilled people who know how they function, Geer says.
Stuxnet details conclusion
The details about the planning of the operation behind Stuxnet should be a reminder that military action, whether physical or digital, needs to be thoroughly vetted or else innocent citizens could be inadvertently put at risk. It should be a call to action for stricter oversight of the security of critical infrastructure, whether publicly or privately owned. It’s amazing to me that despite all of the increased rhetoric about better protecting the nation’s critical infrastructure, there has been very little evidence of progress. Just words.
After working hard to create sound security policies, it’s easy for enterprise information security managers to be dismayed when users ignore the rules and knowingly bypass security controls. When those rule-breakers are executives, it feels like salt on the wound. After all, who should understand the importance of protecting an organization’s assets better than its top executives? Yet, a survey at Infosecurity Europe revealed that, in 43% of organizations, senior managers and even the board of directors do not follow their organizations’ security policies and procedures.
The survey was conducted last month by security consulting firm Cryptzone Group, which asked 300 IT professionals who within their organizations is least likely to follow security policies and procedures. According to the Cryptzone report, Perceptions of security awareness (.pdf), 20% said senior managers are least likely to follow the rules, and 23% pointed their finger directly at the CEO or CTO.
The Cryptzone report didn’t dig into the reasons behind these perturbing findings, but I’d venture there are five primary reasons why executives disobey corporate security policies. (You’ll either laugh or cry about the last one.)
1. They are discreetly excused from taking security training programs;
2. They do not agree wholeheartedly with the security policy;
3. They believe the risks they are taking aren’t all that bad;
4. They are in a hurry;
5. They think IT will take care of things if something (like a data breach) occurs.
The antidote for all these reasons can, of course, be found in corporate security training. But because senior managers probably can’t or won’t take time out of their workdays to attend more training (see reason #4), security pros will have to keep finding creative ways to get the message out. Multimedia playing in the office kitchen, occasional text reminders sent to managers’ phones, and other friendly methods of interjecting bits of the security policy into managers’ minds must be a never-ending process in every organization.
A bane for U.S.-based cloud providers in recent months has been the assumption among cloud customers and service providers outside the U.S. – especially in Europe – that the Patriot Act gives the U.S. government more access to cloud data than other governments have. The idea, then, is that it’s safer to store your data with a cloud provider in a location free from such governmental access. A recent study debunked this notion by showing that, in fact, other governments have just as much access as the U.S. for national security or law enforcement reasons.
The study, published by the global law firm Hogan Lovells (.pdf), looked at the laws of ten countries, including the U.S., France, Germany, Canada and Japan, and found each one vested authority in the government to require a cloud service provider to disclose customer data. The study showed that even countries with strict privacy laws have anti-terrorism laws that allow for expedited government access to cloud data.
“On the fundamental question of governmental access to data in the cloud, we conclude …that it is not possible to isolate data in the cloud from governmental access based on the physical location of the cloud service provider or its facilities,” wrote Christopher Wolf, co-director of Hogan Lovells’ privacy and information practice, and Winston Maxwell, a partner in the firm’s Paris office.
In a blog post, Dave Asprey, vice president of cloud security at Trend Micro, said the research “proves a bigger point; that your data will be disclosed with or without your permission, and with or without your knowledge, if you’re in one of the 10 countries covered.”
The only solution to this problem, he added, is encryption. But how encryption keys are handled is critical; encryption keys need to be on a policy-based management server at another cloud provider or under your own control, Asprey wrote. Now, Trend Micro has a vested interest here since it provides encryption key management, but it’s a point worth noting for organizations concerned about protecting cloud data not just from governments, but from cybercriminals.
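The encrypt-before-you-upload idea Asprey describes can be sketched in a few lines. This illustrative example uses the third-party Python cryptography package (not any Trend Micro product); the point is that the key is generated and held on your side, so a disclosure order served on the cloud provider yields only ciphertext:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on your side (e.g., an on-premises
# key-management server). The cloud provider never sees it.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer records destined for cloud storage"
ciphertext = cipher.encrypt(plaintext)  # this is all the provider ever stores

# Only a party holding the key can recover the original data.
assert cipher.decrypt(ciphertext) == plaintext
```

The hard part in practice is exactly what Asprey flags: where the key lives. If the key is stored with the same provider that holds the ciphertext, the government-access problem simply reappears one layer down.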
For another examination of the Patriot Act’s impact on cloud computing, check out the article by SearchCloudSecurity.com contributor Francoise Gilbert. She looks at the rules for the federal government to access data and how they undercut concerns about the Patriot Act and cloud providers based in the U.S.
For years, the mantra of the security industry has been to get enterprises to look internally for weaknesses and activity that can raise a red flag to a malware-infected machine or an employee with malicious intentions. But how do you know how secure your partners and clients are?
It’s not difficult to see the security risks posed by a contractor taking care of payroll, a managed services provider, or the string of businesses that make up the supply chain. A breach at any of those businesses could have a serious impact on your company’s security. An enterprise CISO or IT director has little control over the security of partner networks. Managing business partner security risks has largely been left to writing protections into service-level agreements. From a technology perspective, if partners are given access to company resources, enterprises can review the logs for suspicious behavior.
Derek Gabbard and his team at Lookingglass Cyber Solutions aim to change all that. The company’s technology, which is being used by a variety of government and financial organizations, can map out the networks of partners and clients and apply a layer of threat intelligence data to determine if there are any potential compromises. The technology provides companies with third-party risk management capabilities.
Called ScoutVision, the technology can gather information about a company’s business partner networks once the partner’s IP address range is fed into it. It bases its threat analysis on security vendor intelligence feeds licensed by Lookingglass, honeypots and other proprietary threat intelligence data. Lookingglass monitors communication in cybercriminal networks and ties intelligence on botnets and malware attacks to trace a threat back to a network that has been penetrated.
The company boasts that nearly 40 distinct sources of threat intelligence data are used in the analysis. It looks at dark IP space and passive DNS data globally. The service can provide all the threat intelligence data it has about an entire network and describe, for example, whether it has 20 to 30 bad hosts, Gabbard said. If any Microsoft IP addresses have been communicating directly with a darknet, for instance, the company can characterize the nature of the communication to determine the nature of the threat.
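Conceptually, the first step of this kind of analysis is intersecting a partner’s address space with known-bad indicators. The sketch below is a toy version built on Python’s standard library (the feed entries and the partner range are made-up documentation addresses, and a product like ScoutVision does far more than this):

```python
import ipaddress

def flag_bad_hosts(partner_cidr, threat_feed):
    """Return threat-feed IPs that fall inside a partner's address range."""
    network = ipaddress.ip_network(partner_cidr)
    return [ip for ip in threat_feed
            if ipaddress.ip_address(ip) in network]

# Hypothetical indicator feed and partner netblock (documentation ranges)
feed = ["203.0.113.40", "198.51.100.7", "203.0.113.200"]
print(flag_bad_hosts("203.0.113.0/24", feed))
```

The value a commercial service adds is everything around this intersection: the breadth and freshness of the feeds, passive DNS context, and the ability to characterize what a flagged host was actually doing.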
Gabbard was previously CTO of network traffic analysis firm Soteria Network Technologies, a firm that appears to be synonymous with Lookingglass and has held a number of contracts with the Department of Homeland Security. Before that, he served as a senior member of the technical staff at Carnegie Mellon University’s CERT Coordination Center. Gabbard told me that, up until now, companies have been focusing internally with little regard for the security of their partners’ systems.
I can’t find another company taking Lookingglass’ approach. SIEM systems such as HP ArcSight and network appliances like RSA NetWitness or Solera Networks don’t provide external network visibility in the same context, Gabbard said. The technology could eventually be integrated with a network appliance, he said. As CEO of Lookingglass, Gabbard is looking to extend ScoutVision to a broader set of customers.
So what does a company do with the threat data provided by Lookingglass?
Gabbard said he believes the information gleaned by the service can be actionable. The first commercial customers consisted of pilot projects conducted in 2010. So far the service has resulted in mainly reporting and phone calls to third parties. Some early adopters create reports and inform their partners of the potential security issues. Depending on their relationship, they’ll say “hey, your network’s messed up,” he said. “Clean it up or we’ll have to restrict access.”
The firm is gaining interest. In January, the fledgling company received $5 million in funding from Alsop Louie Partners, a firm that includes Gilman Louie, the founder and former CEO of In-Q-Tel – the investment arm of the Central Intelligence Agency. It will be interesting to watch if other security vendors attempt to take a similar approach with existing security appliances. The potential exists to apply the technology to companies with an extensive supply chain.
Ethical hackers hired by an organization to assess its vulnerabilities must always be careful not to “cross the line” and get themselves into trouble with the law. With so many computer security laws in the U.S., it can be a challenge for ethical hackers to ensure they are obeying them all.
But according to David Snead, an attorney in Washington D.C. who frequently represents IT security providers and consultants, it is possible to focus on just a handful of laws to avoid lawsuits and stay out of jail.
During a session at the Source Conference in Boston last month, Snead listed the overwhelming number of laws related to IT security in the U.S. But ethical hackers can focus on just three laws that are most likely to lead to litigation, according to Snead:
• Computer Fraud and Abuse Act (CFAA), which makes it illegal to access a computer or network without proper authorization.
• Wiretap Act, which can be applied to packet sniffing.
• Stored Communications Act (SCA), which can be applied to any email that was meant to be confidential.
Similarly, each state has different laws, and few organizations have the time or resources to ensure they are compliant in all 50 states. Snead recommended ethical hackers and security consultants assist their client organizations by ensuring they are compliant in just three states, at least initially. The three states should be:
• The organization’s own headquarters state;
• The state where most of the organization’s employees work;
• The state where most of the organization’s customers live or work.
In some cases, these three scenarios may point to just one or two states, making the consultant’s job that much easier.
In my view, Snead was bold to make these recommendations. Many lawyers, when asked which IT security laws their clients should obey, would probably say, “All of them.” But Snead’s advice comes from a real-world perspective, and it’s this kind of realistic advice that’s greatly appreciated by security practitioners — especially the many independent penetration testers out there — who are often grappling with their budgets.
Still, security pros must understand the risks of following this advice. As Snead explained, triaging the laws this way will avert most legal problems. But the pen tester’s client organization could still get tripped up by a lesser-known law if a creative prosecutor convinces the court it applies to the organization’s security practices.