Ethical hackers hired by an organization to assess its vulnerabilities must always be careful not to “cross the line” and get themselves into trouble with the law. With all the computer security laws in the U.S., it can be a challenge for ethical hackers to ensure they are obeying all of them.
But according to David Snead, an attorney in Washington D.C. who frequently represents IT security providers and consultants, it is possible to focus on just a handful of laws to avoid lawsuits and stay out of jail.
During a session at the Source Conference in Boston last month, Snead listed the overwhelming number of laws related to IT security in the U.S. But ethical hackers can focus on just three laws that are most likely to lead to litigation, according to Snead:
• Computer Fraud and Abuse Act (CFAA), which makes it illegal to access a computer or network without proper authorization.
• Wiretap Act, which can be applied to packet sniffing.
• Stored Communications Act (SCA), which can be applied to any email that was meant to be confidential.
Similarly, each state has its own laws, and few organizations have the time or resources to ensure they are compliant in all 50 states. Snead recommended ethical hackers and security consultants assist their client organizations by ensuring they are compliant in just three states, at least initially. The three states should be:
• The organization’s own headquarters state;
• The state where most of the organization’s employees work;
• The state where most of the organization’s customers live or work.
In some cases, these three scenarios may point to just one or two states, making the consultant’s job that much easier.
In my view, Snead was bold to make these recommendations. Many lawyers, when asked which IT security laws their clients should obey, would probably say, “All of them.” But Snead’s advice comes from a real-world perspective, and it’s this kind of realistic advice that’s greatly appreciated by security practitioners, especially the many independent penetration testers out there, who are often grappling with tight budgets.
Still, security pros must understand the risks of following this advice. As Snead explained, triaging the laws this way will avert most legal problems. But the pen tester’s client organization could still get tripped up by a lesser-known law if a creative prosecutor convinces the court it applies to the organization’s security practices.
It’s anyone’s guess how the FedRAMP cloud security initiative will pan out, but the pieces are coming together. Last week, the U.S. General Services Administration released an initial list of approved third-party assessment organizations (3PAOs).
Launched by the Obama administration in December, the Federal Risk and Authorization Management Program (FedRAMP) aims to set a standard approach for assessing the security of cloud services. The goal is to cut the cost and time spent on agency cloud assessments and authorizations.
3PAOs will assess cloud service providers’ security controls to validate they meet FedRAMP requirements. Their assessments will be reviewed by the FedRAMP Joint Authorization Board, which can grant provisional authorizations that federal agencies can use.
Here’s the list of accredited 3PAOs: COACT, Department of Transportation Enterprise Service Center, Dynamics Research Corp., J.D. Biggs and Associates, Knowledge Consulting Group, Logyx, Lunarline, SRA International and Veris Group.
If you’re wondering how these companies became 3PAOs, they had to submit an application demonstrating technical competence in assessing security of cloud-based systems, according to the GSA. They also had to meet ISO/IEC 17020:1998 requirements for companies performing assessments.
When I wrote about FedRAMP earlier this year, the program drew praise, criticism and cautious optimism. Will it get bogged down in bureaucracy? Will it become simply another paper-pushing compliance exercise? Will it help advance cloud security standards for the private sector? It’s hard to say how long it will take until we know those answers, but at least FedRAMP appears to be on schedule. With the release of the 3PAO list, the program moves closer to its target of becoming operational next month.
I’m planning to speak with one of the 3PAOs tomorrow; hopefully I’ll have some additional information from that interview about the 3PAO process and FedRAMP in general. If I do, I’ll post it on SearchCloudSecurity.com.
Chief information security officers have a lot on their plate. Between data protection, malware detection, compliance regulations, social media security, mobile device management (MDM) and many more areas that fall into the realm of the security team, the chief information security officer (CISO) is obliged to wear many hats each day.
A recent survey by IBM highlighted this multitude of CISO responsibilities. In the report, Finding a strategic voice: Insights from the 2012 IBM Chief Information Security Officer assessment (.pdf), IBM said the ideal CISO must “assume a business leadership position and dispel the idea that information security is a technology support function. Their purview must encompass education and cultural change, not just security technology and processes. Leaders will need to reorient their security organizations around proactive risk management rather than crisis response and compliance. And the management of information security must migrate from discrete and fragmented initiatives to an integrated, systemic approach.”
That’s a tall order, and trying to accomplish it all could lead to CISO burnout. It’s not so much that there’s too much to do (although there is). The real problem causing CISOs to reach for the Pepto Bismol is there are too many conflicting demands coming at them from different angles.
But changes to the CISO role may be on the way, according to Jon Oltsik, a security analyst at research firm Enterprise Strategy Group. Oltsik believes the CISO function will naturally and of necessity divide into two roles: CSO and CISTO.
The chief security officer (CSO) will focus on the intersection of risk and business. The CSO will deal with compliance and legal issues, and be the person who goes before the board of directors to explain the expected return on a $1 million security investment.
The chief information security technology officer (CISTO) will focus on IT security architecture and infrastructure. The CISTO will handle security controls, including monitoring and reporting on the company’s defenses.
Oltsik sums it up like this: CSOs create cybersecurity policies; CISTOs enforce them.
Allocating responsibilities in this way will probably be greatly appreciated by today’s overburdened CISOs. Training programs could focus on the two different career paths, and security professionals could aspire in the direction that best suits their personalities and skills.
Information security spending is thought to be recession proof, but does it have the legs to outrun the current downturn? In-Q-Tel partner Peter Kuper thinks so, but there are still some rough times ahead.
Kuper, who has handled some high-profile IPOs in the security market, told Information Security Decisions 2012 attendees this week in New York City to stop spending on technology that doesn’t work. Investments in legacy security standbys (hello AV, firewalls et al.) need to be tempered. Maybe Kuper has a vested interest in his remarks, but he’s also right. Signature-based defenses don’t work anymore. Kuper said it; analysts tell you the same thing and so do research firms. The Verizon Data Breach Investigations Report is probably the most sobering barometer of the ineffectiveness of today’s security technology: 96% of the attacks behind the breaches Verizon investigated were not complicated attacks; 97% could have been prevented with rudimentary controls; and 92% of incidents were discovered by a third party, and only after months of constant infection.
Checkbox security driven by PCI and other mandates is heavily to blame here as well. Security managers are using compliance as a life preserver and as a way to beg for budget. Budgets, meanwhile, are largely flat to slightly up, yet nearly 100% of companies are getting owned.
“Where is the ROI there?” Kuper asked. “You’re asking for increased budget, yet three-quarters of you get your butt handed to you in minutes or less. How is that a good ROI for a CFO? Try explaining that to someone that doesn’t understand security.”
Couple that with some weak economic indicators that foreshadow another downturn (despite the market being back to pre-recession 2007 levels) and you’ve got a rocky road ahead, friends.
Looking for a silver lining? OK. Venture capital firms are looking at security companies, and acquisitions are still happening in security, which are indications of innovation and some areas of strength. SIM vendors were the last market segment in play, with Q1 Labs (IBM), NitroSecurity (McAfee), LogLogic (Tibco) and ArcSight (HP) getting scooped up by larger vendors. Palo Alto, meanwhile, is going public soon, Kuper said, after booking $200 million last summer alone. Qualys is also perpetually in the IPO conversation. Sourcefire has been public since 2007, and after a rocky start, is trading 113% higher than last year.
“VCs were not investing much in security for a long while,” Kuper said. “But security is looking good again. I know a lot of VCs and they’re starting to call back. VCs are making money in security investing in innovative technology. It’s a good sign VCs are investing. Innovation cycles are up and a lot of good companies are getting funding.”
At an event last week in San Francisco that covered a variety of cloud security issues, infosec expert Kevin Walker told attendees to be aggressive with cloud service providers and hold them accountable when it comes to security.
“The key for us practitioners is to go into this with eyes wide open,” said Walker, who has held senior security positions at Symantec and Cisco, among other global firms. He spoke at the Cloud Security Symposium, which was sponsored by Trend Micro.
The traditional focus on building fortresses with firewalls and IPSes won’t translate to the cloud, he said. Cloud provider requirements include increased transparency about their operations and how they detect rogue tenants, and information security pros need to be aggressive in making sure providers meet security requirements, he said.
That’s certainly easier said than done, especially when business units are going around IT and signing up on Amazon. It’s hard to press for security when you don’t even know what cloud services your company is using.
In many cases, lines of business aren’t waiting for IT when they need something – they simply use their credit card to buy cloud services, said JJ DiGeronimo, senior accelerate practice manager and cloud strategist at VMware. “IT departments have true competition from outside service providers,” she told attendees.
“People are used to securing a box, but now we’re moving to securing the data,” she said. “Data is going to sit everywhere and you’ll have to manage it regardless of where it sits.”
Data-centric security has been an ongoing theme in the industry for several years as corporate network boundaries crumble and employees become more mobile. Enterprise adoption of cloud computing is becoming yet another driver.
“If you can’t control the systems anymore. … That’s the only way to do it [security] — to protect the data,” Trend Micro CTO Raimund Genes told me in an interview.
Trend Micro naturally has a vested interest in this trend – the company sells encryption products including a key management service for cloud and virtual environments – but it does make sense given that enterprise data is increasingly flowing to cloud environments and becoming harder to track. Maybe the rise of cloud computing will help push data-centric security into the mainstream.
In the meantime, if you’re looking for ways to track down unauthorized use of cloud services by your developers or sales executives, we published tips in this article.
In the world of financial cybercrime, there are three primary groups of fraudsters at work. First up are the developers who write the applications to grab credit card and bank account data. In the middle are the “carders” who sell the ill-gotten data to, if you will, end users. The final group consists of these users or buyers who pay for the hot data and use it to make purchases or move funds to their own accounts.
Those fighting the battles have to make tough decisions about where to focus their resources. Should they go after the developers, the carders or the end users of the stolen financial data? The answer is surely a multi-pronged approach, with different tactics aimed at flushing out and stopping each group of criminals.
Law enforcement officials recently trained their sights on the middle group. In Operation Wreaking hAVoC, the FBI worked with the Serious Organized Crime Agency (SOCA) in the U.K. and authorities in other countries to shut down 36 carder sites. (The word hAVoC reflects the Automated Vending Carts, or AVCs, which are websites used by carders to sell financial information.)
SOCA said the successful operation will reduce international financial crime by £500 million (more than $800 million) in the coming years. A SOCA representative told me they came to this figure by considering the average cost of the damage that could be incurred from each piece of stolen financial data. Credit card numbers with CVV codes have a damage value of up to $500 in the U.S. or £200 in the U.K., he said. If a full data dump from the card’s magnetic stripe is included, or if bank account details are associated with the card, the potential damages go up significantly.
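SOCA’s method, as described, is simple multiplication: an average per-item damage value times the volume of each kind of stolen data recovered. A minimal sketch of that arithmetic is below; the per-item values for the fuller data dumps and the item counts are hypothetical, chosen only to illustrate how such an aggregate figure is built (SOCA has not published its exact model).

```python
# Hedged sketch of a SOCA-style damage estimate: assign each class of stolen
# data an average potential-damage value, then sum value * count over the
# seized data. Only the $500 card-with-CVV figure comes from the article;
# the other value and the counts below are illustrative assumptions.
DAMAGE_PER_ITEM_USD = {
    "card_with_cvv": 500,     # up to $500 per card in the U.S., per SOCA
    "full_track_dump": 2000,  # assumed: article says only "significantly" more
}

def estimated_damage(counts):
    """Sum potential damages (USD) over the recovered data items."""
    return sum(DAMAGE_PER_ITEM_USD[kind] * n for kind, n in counts.items())

# Hypothetical haul from a takedown: a million CVV records, 150k track dumps.
total = estimated_damage({"card_with_cvv": 1_000_000, "full_track_dump": 150_000})
print(f"Estimated averted damage: ${total:,}")
```

With made-up volumes like these, the averted-damage figure quickly reaches the hundreds of millions, which is the scale SOCA cited.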
Operation hAVoC is a good example of the effective ways law enforcement agencies around the world can work together to successfully combat global networks of cybercriminals. But they won’t be able to bask in their success for long. Other carders are probably already dusting off their wares and pulling their vending carts onto the streets.
Symantec recently released some interesting findings from a survey the company conducted with the Cloud Security Alliance at the CSA Summit in February. The survey went beyond the usual sorts of basic questions to delve into organizations’ knowledge of cloud security. The results – albeit from a small sample size (128 respondents) — were a bit curious.
While 63% rated their cloud security efforts as good, 58% said their staff isn’t well prepared to secure their use of public cloud services. And although 68% said they think cloud security training is important for their organizations’ ability to use public cloud services, less than half (48%) planned to attend cloud security training over the next year. Eighty-six percent of respondents said protecting their organizations’ data was the top factor driving them to cloud security training.
In a blog post, Dave Elliott, senior product marketing manager of global cloud marketing at Symantec, summarized the survey findings:
“In short, what this survey reveals is that it’s important to have your own security for the cloud but that IT staff are not yet well prepared to secure the cloud.”
He added, “Cloud security needs leadership, and it requires standardized training and skills that will enable IT staff to confidently move into the cloud.”
Now, Symantec has a vested interest in promoting cloud security training – it’s partnered with the CSA to offer training for the CSA’s Certificate of Cloud Security Knowledge (CCSK). But if organizations don’t feel prepared for securing cloud computing deployments, it’s a little strange more aren’t seeking out cloud security training for their staff.
I recall seeing a discussion on LinkedIn a few months ago in which security pros debated the value of the CCSK. Some noted that employers don’t recognize the relatively new certification. That will probably change sooner rather than later, though, as cloud services become more prevalent in the enterprise.
Interestingly, 56% of respondents to the Symantec-CSA survey said advancing their careers was a major factor in their decision to attend cloud security training.
When Microsoft issued version 12 of its Security Intelligence Report (.pdf) last month, its marketing machine had one message it wanted journalists to communicate to businesses: Conficker worm infections are a serious concern.
The messaging about Conficker was extremely strong. Prior to a briefing with a Microsoft executive, reporters were given a slide deck largely void of information except for data about Conficker; Microsoft’s 126-page report had been boiled down to 16 slides. Microsoft proclaimed Conficker “the No. 1 threat facing businesses over the past 2.5 years.” It was “detected on 1.7 million machines in the fourth quarter of 2011;” it was “detected almost 220 million times since 2009;” and there has been a 225% increase in quarterly detections since 2009, Microsoft said.
It sounds alarming, but that’s just marketing at its worst.
Conficker has no payload. There are no cybercriminals controlling it. The worm itself was designed to spread quickly to establish the infrastructure for a botnet. Once it’s installed on an infected machine, it opens connections to receive instructions from a remote server. But that function has been neutralized by the Conficker Working Group, which uses the worm’s own domain generation algorithm against it, predicting and blocking the domains it would contact for instructions.
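The reason that blocking strategy works is that a domain generation algorithm is deterministic: anyone who knows the algorithm and the seed (typically the date) can compute the same domain list the worm will try. The toy sketch below is not Conficker’s actual algorithm; it just illustrates why a working group can pre-register or sinkhole the rendezvous domains before infected machines ever reach them.

```python
# Toy date-seeded domain generation algorithm (NOT Conficker's real DGA).
# Both the malware and defenders can compute the same daily domain list;
# that symmetry is what lets a working group block or pre-register the
# domains, cutting bots off from their command-and-control servers.
import hashlib
from datetime import date

def toy_dga(seed_date, count=5):
    """Derive `count` pseudo-random domains from a date seed."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed_date.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:10] + ".example")
    return domains

# A defender running the same algorithm a day ahead gets an identical list
# to sinkhole, so the worm's check-in attempts all dead-end.
print(toy_dga(date(2012, 5, 1)))
```

Conficker’s later variants generated tens of thousands of candidate domains per day across many TLDs to make exactly this kind of preemptive blocking harder, which is why the Working Group’s sustained effort was notable.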
If Conficker isn’t a serious threat, what is? Here are a few data points to consider from the Microsoft SIR that may be more important than Microsoft’s Conficker message:
Windows exploits rise significantly: Operating system exploits, specifically those targeting Microsoft Windows, doubled in 2011.
Despite a security update in August 2010 addressing a publicly disclosed vulnerability in Windows Shell, attackers have been successfully targeting the flaw using malicious shortcut files. Exploits against the vulnerability and several others that were detected by Microsoft increased from 400,000 in the first quarter of 2011, to more than 800,000 in the fourth quarter of 2011. The statistics point to the Ramnit worm as the culprit targeting the flaw. It was recently detected transforming into financial malware capable of draining bank accounts.
The other Microsoft Windows flaw being targeted was a Microsoft Windows Help and Support Center vulnerability that can be targeted via a drive-by attack. It was repaired in a security update issued in July 2010.
Windows Vista infection rate higher than Windows XP: The infection rate for 32- and 64-bit editions of Windows Vista SP1 and the 64-bit edition of Windows Vista SP2 outpaced Windows XP SP3. Microsoft says attackers are targeting the newer platforms because companies are migrating to them. Infection rates for the 64-bit editions of Windows Vista and Windows 7 have increased since the first half of 2011, Microsoft said.
Microsoft said the increase is also due to detection signatures it added to its Malicious Software Removal Tool for several malware families in the second half of 2011. “Detections of these families increased significantly on all of the supported platforms after MSRT coverage was added,” the company said in its report. In addition, a security update addressing the Windows Autorun feature in Windows was issued last year and was likely a major factor in driving down the infection rate in Windows XP, the software maker said.
Adobe Reader, Acrobat attacks: While not out of control, attacks on the two products remain a favorite method of cybercriminals. “Exploits that affect Adobe Reader and Adobe Acrobat accounted for most document format exploits detected throughout the last four quarters.” There were nearly 1 million of them.
It often seems security pros place great expectations on users, and are amazed when they fall for an obvious security trap or common social engineering attack. But instead of being amazed, the more appropriate response may be to recognize that traditional information security awareness training programs often don’t work.
According to Bob Rudis, director of enterprise security at Boston-based Liberty Mutual Group, too many companies rely on the computer-based security training courses that each employee must complete once a year to meet compliance requirements. Speaking at the Source Boston conference last month, Rudis shared some more creative ideas he has used to elevate security awareness and reduce security incidents at his company.
For example, Rudis’ team created some simple Flash-based game applications for employees to play. Players win the games by making correct security choices. Even though the games were voluntary, about 25% of Liberty Mutual employees played each game at least once.
For companies that don’t have the budget to create games, Rudis offered cheap, outside-the-box security awareness ideas. For example, consider your computer-based training (CBT), which probably contains slides showing photos of people working at computers. Rather than using stock images of people in your CBT, Rudis suggested taking photos of your company’s own employees, such as a photo of one of your IT people scratching their head and looking puzzled, or a photo of one of your help desk people looking tired but triumphant. Seeing actual colleagues helps users feel more connected to the training material and thus more likely to remember what they’ve learned. Plus, it will make stars of your staff – an added benefit.
As a security manager, you are competing with so many other demands for users’ attentions, from their own job responsibilities to Facebook and Pinterest and Angry Birds. Making your security lessons visually compelling and a little more fun may go a long way toward ensuring security awareness messages stick in users’ minds for a long time.
As security pros wait for more details about the VMware ESX hypervisor source code leak, should they be panicking?
Well no, not yet, anyway. Without knowing exactly what source code was leaked, it’s hard to know the extent of the threat, security experts have said. However, the answer may come soon — there are rumors that hackers will release more source code on Saturday.
Until then, virtualization security experts are offering some advice for enterprises running ESX. As with most things in security, much of the advice has to do with simply following best practices. However, virtualization security best practices may not always be at the top of an organization’s to-do list; the code leak should provide some prodding.
First off, organizations should block all Internet access to the hypervisor platform — especially to the Service Console — which is something they should already be doing, according to Dave Shackleford, principal consultant at Voodoo Security and senior vice president of research and CTO at IANS. They should also make sure all VMs are patched and restrict any copy/paste or other functionality between the VM and ESX host, he said in an email. (On the patching front, organizations using ESX should pay attention to last week’s security bulletin from VMware about an update for the ESX Service Console Operating System (COS) kernel to fix several security issues).
“Finally, they could set up ACLs or IDS monitoring rules to look for any weird traffic to those systems from both internal and external networks, and do the same on any virtual security tools if they’ve got them,” Shackleford said.
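Shackleford’s last suggestion, watching for “weird traffic” to the hypervisor hosts, boils down to a simple rule: management interfaces should only ever be reached from the dedicated management network. A minimal sketch of that check is below; the IP addresses, the subnet, and the function name are all made up for illustration, and a real deployment would enforce this in firewall ACLs or IDS signatures rather than application code.

```python
# Hedged sketch of an ACL-style monitoring rule in the spirit of
# Shackleford's advice: flag any connection to the ESX management
# addresses that does not originate from the management subnet.
# All addresses below are hypothetical examples.
import ipaddress

MGMT_HOSTS = {ipaddress.ip_address("10.0.50.10")}   # e.g., ESX Service Console
MGMT_SUBNET = ipaddress.ip_network("10.0.50.0/24")  # admins' management network

def is_suspicious(src, dst):
    """True if traffic targets a management host from outside the mgmt subnet."""
    src_ip = ipaddress.ip_address(src)
    dst_ip = ipaddress.ip_address(dst)
    return dst_ip in MGMT_HOSTS and src_ip not in MGMT_SUBNET

print(is_suspicious("192.168.1.77", "10.0.50.10"))  # flagged: outside subnet
print(is_suspicious("10.0.50.5", "10.0.50.10"))     # allowed: admin network
```

The same predicate, applied to internal as well as external sources, covers Shackleford’s point that an attacker who has already landed inside the network should not get a free pass to the hypervisor either.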
Edward Haletky, owner of AstroArch Consulting and analyst at the Virtualization Practice, wrote in a blog post that organizations should follow virtualization security best practices to pre-plan for the release of the ESX hypervisor code.
“Segregate your management networks, employ proper role-based access controls, ensure any third-party drivers come from well-known sources, set all isolation settings within your virtual machine containers, at-rest disk encryption, properly apply resource limits and limit para-virtualized driver usage,” he wrote.
Any attacks arising from the code leak will show up shortly after the code is made available, but won’t increase the risk beyond where it is today, Haletky wrote.
“The use of best practices for securing virtual environments is on the rise, but we are still a long way from our goal. Just getting proper management network segregation is an uphill battle. If there is currently a real risk to your virtual environment, it is the lack of following current best practices, not an impending leak of code,” he said.