Unemployment is at 0% for information security professionals! This good news was reported this spring in CompTIA’s 9th annual Information Security Trends report. The report cited U.S. Bureau of Labor Statistics (BLS) research conducted in the spring of 2011, which also noted the unemployment rate at just under 4% for the IT industry overall. Clearly, skilled security professionals should have no trouble getting information technology security jobs right now.
But companies are having trouble filling those jobs. According to CompTIA’s survey of 500 IT and business executives in the U.S., conducted at the end of 2011, 40% of companies are having difficulty hiring IT security specialists.
During a recent conversation with Todd Thibodeaux, president and CEO of CompTIA, I asked him why companies are having hiring problems, and I expected his answer would relate to the need for more CompTIA certifications. Or perhaps he’d say companies can’t pay enough to hire the talent they need. But Thibodeaux’s response brought up another perspective on the hiring challenge. He believes organizations are having trouble hiring IT security pros in the U.S. partly because of depressed housing values.
“The challenge is recruiting within physical regions,” Thibodeaux said. “Organizations don’t want to outsource their security, and they certainly don’t want to off-shore their security. So they need to hire locally.”
Yet with many IT professionals underwater on their mortgages right now, would-be employees are not able to move to take new jobs. So even though hiring organizations are willing to pay good salaries, they are largely at the mercy of larger economic forces beyond their control.
This phenomenon is more noticeable in some parts of the country, Thibodeaux said. Areas with high concentrations of technology companies are fortunate enough to have a larger pool of IT professionals from which to hire. But for companies not located in high-tech regions, it appears hiring has stalled. Companies and employees alike are waiting for home values to rise so people can move to fill IT security job gaps.
Is the answer simply to wait out the housing market? Thibodeaux believes a better answer may lie in college education. “Many colleges want to teach, not train,” Thibodeaux said. “But companies need people coming out of college who have been trained in technical skills.”
Perhaps this unusual situation of low unemployment in IT security combined with low home values will motivate some U.S. colleges to beef up their IT security courses with more hands-on training. Sure, that will take time — at least four years if incoming freshmen start now. But with home values inching back up slowly, those four years may turn out to be the quicker fix.
Security experts have warned about the potential problems caused by military cyberstrikes. Cyberwarfare, they say, is difficult to plan and, worse, puts innocent people at risk.
Stuxnet was part of a secret joint U.S.-Israeli cyberattack operation that began with the approval of the Bush administration and continued with a nod from the Obama White House, according to a detailed account of the attack written by David Sanger in a report published today in the New York Times.
To put the pieces of the Stuxnet puzzle together, Sanger conducted interviews with unnamed sources involved with the operation, dubbed “Olympic Games.” While the account confirms a lot of speculation about the nation-states behind the Stuxnet worm, it also raises a lot of questions about cyberwarfare and its use by a sitting president. Should members of Congress have been notified of the operation? Were any U.S. citizens put at risk?
Even well-planned military cyberstrikes go wrong
A 2009 study by the nonprofit research firm RAND Corp. urged the United States not to invest in offensive cyberweapons. It is too difficult to predict the outcome of an attack, making strategic planning a guessing game, according to the report’s author, Martin C. Libicki. “Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked,” Libicki wrote. Indeed, according to the Times story, Stuxnet clearly caused some disruption, but it was anyone’s guess as to how far it set back Iran’s nuclear program.
Even worse, Sanger’s account of the operation detailed a major coding error that enabled the offensive malware to escape into the wild. This led to its detection and analysis by antimalware vendors. Indeed there were facilities in the United States using the Siemens systems that the worm could have sought out. While the threat was minimal – Stuxnet still would have to get through the buffer zone isolating a facility from the Internet – those quoted in Sanger’s story said it was easy to get through the Iranian facility’s buffer zone using a simple thumb drive. I’ve heard of penetration testers using this trick to great success: dropping thumb drives in areas throughout a targeted organization to see if any curious employees would insert the device into their computer. “It turns out there is always an idiot around who doesn’t think much about the thumb drive in their hand,” according to an unnamed official referring to how Stuxnet was planted at the underground uranium enrichment site in Natanz, Iran.
If that’s the case, then the operation certainly could have put U.S. citizens at risk right here on our own soil. It also has the potential to fan the flames of retaliation or similar offensive cyberwarfare operations from our adversaries. We’ve already encountered reports that government agencies and even critical infrastructure facilities, such as power plants, have been penetrated in some way.
Network security luminary Marcus Ranum, CSO of Tenable Network Security, told SearchSecurity about his concern over militarized cyberspace and even outlined the problems caused by Stuxnet-like strikes.
Critical infrastructure protection
I wrote about a 2010 report by the Center for Strategic and International Studies (CSIS), which consisted of a global survey of more than 600 IT pros at critical infrastructure facilities. The main finding was that the systems that run power plants, manage the distribution of hazardous chemicals and help monitor water treatment plants are in dire need of stronger safeguards. The survey found that those facilities are under a constant barrage of attacks. A U.S.-China Economic and Security Review Commission report last October cited a significant attack targeting U.S. satellites. The examples go on and on.
But the problem goes beyond the potential threat to power plants and oil and chemical refineries. Earlier this year, researchers demonstrated a theoretical attack targeting the systems that control the locking mechanisms at a prison. Imagine the chaos that would ensue if cybercriminals targeted the prison system.
There is plenty of recognition of the seriousness of the problem, but very little transparency into where the nation stands on protecting critical assets, said Andy Purdy, chief cybersecurity strategist at CSC and a member of the team that developed the U.S. National Strategy to Secure Cyberspace in 2003. In an interview I had with Purdy at the 2012 RSA Conference, he cited some progress, but admitted that the lack of transparency leaves authorities very little information with which to track how well the nation is protecting critical systems. Purdy pointed to substantial federal funding being invested in SCADA system security, the progress of the Industrial Control Systems CERT, and several plans and reports outlining the roles of the public and private sectors in protecting critical systems, digital identities for Internet users and the role ISPs should play in controlling customers with compromised systems.
Perhaps security luminary Dan Geer is thinking ahead to disaster recovery after a cyberstrike. He speaks frequently at security conferences and summits about the need for system redundancy and manual processes to help lessen the disruption and chaos when Internet-connected systems fail. Not only do we need redundant systems and manual processes, but we need skilled people who know how they function, Geer says.
Stuxnet details conclusion
The details about the planning operation behind Stuxnet should be a reminder that military action, whether physical or digital, needs to be thoroughly vetted or else innocent citizens could be inadvertently put at risk. It should be a call to action for stricter oversight of the security of critical infrastructure, whether publicly or privately owned. It’s amazing to me that despite all of the increased rhetoric about better protecting the nation’s critical infrastructure, there has been very little evidence of progress. Just words.
After working hard to create sound security policies, it’s easy for enterprise information security managers to be dismayed when users ignore the rules and knowingly bypass security controls. When those rule-breakers are executives, it feels like salt on the wound. After all, who should understand the importance of protecting an organization’s assets better than its top executives? Yet, a survey at Infosecurity Europe revealed that, in 43% of organizations, senior managers and even the board of directors do not follow their organizations’ security policies and procedures.
The survey was conducted last month by security consulting firm Cryptzone Group, which asked 300 IT professionals who within their organizations is least likely to follow security policies and procedures. According to the Cryptzone report, Perceptions of security awareness (.pdf), 20% said senior managers are least likely to follow the rules, and 23% pointed the finger directly at the CEO or CTO.
The Cryptzone report didn’t dig into the reasons behind these perturbing findings, but I’d venture there are five primary reasons why executives disobey corporate security policies. (You’ll either laugh or cry about the last one.)
1. They are discreetly excused from taking security training programs;
2. They do not agree wholeheartedly with the security policy;
3. They believe the risks they are taking aren’t all that bad;
4. They are in a hurry;
5. They think IT will take care of things if something (like a data breach) occurs.
The antidote for all these reasons can, of course, be found in corporate security training. But because senior managers probably can’t or won’t take time out of their workdays to attend more training (see reason #4), security pros will have to keep finding creative ways to get the message out. Multimedia playing in the office kitchen, occasional text reminders sent to managers’ phones, and other friendly methods of interjecting bits of the security policy into managers’ minds must be a never-ending process in every organization.
A bane for U.S.-based cloud providers for several months now has been the assumption among cloud customers and service providers outside the U.S. – especially in Europe – that the Patriot Act gives the U.S. more access to cloud data than other governments. The idea, then, is that it’s safer to store your data with a cloud provider in a location free from such governmental access. A recent study debunked this Patriot Act cloud notion by showing that, in fact, other governments have just as much access as the U.S. for national security or law enforcement reasons.
The study, published by the global law firm Hogan Lovells (.pdf), looked at the laws of ten countries, including the U.S., France, Germany, Canada and Japan, and found each one vested authority in the government to require a cloud service provider to disclose customer data. The study showed that even countries with strict privacy laws have anti-terrorism laws that allow for expedited government access to cloud data.
“On the fundamental question of governmental access to data in the cloud, we conclude …that it is not possible to isolate data in the cloud from governmental access based on the physical location of the cloud service provider or its facilities,” wrote Christopher Wolf, co-director of Hogan Lovells’ privacy and information practice, and Winston Maxwell, a partner in the firm’s Paris office.
In a blog post, Dave Asprey, vice president of cloud security at Trend Micro, said the research “proves a bigger point: that your data will be disclosed with or without your permission, and with or without your knowledge, if you’re in one of the 10 countries covered.”
The only solution to this problem, he added, is encryption. But how encryption keys are handled is critical; encryption keys need to be on a policy-based management server at another cloud provider or under your own control, Asprey wrote. Now, Trend Micro has a vested interest here since it provides encryption key management, but it’s a point worth noting for organizations concerned about protecting cloud data not just from governments, but from cybercriminals.
For another examination of the Patriot Act’s impact on cloud computing, check out the article by SearchCloudSecurity.com contributor Francoise Gilbert. She looks at the rules for the federal government to access data and how they undercut concerns about the Patriot Act and cloud providers based in the U.S.
For years, the mantra of the security industry has been to get enterprises to look internally for weaknesses and activity that can raise a red flag to a malware-infected machine or an employee with malicious intentions. But how do you know how secure your partners and clients are?
It’s not difficult to see the security risks posed by a contractor taking care of payroll, a managed services provider, or the string of businesses that make up the supply chain. A breach at any of those businesses could have a serious impact on your company’s security. An enterprise CISO or IT director has little control over the security of partner networks. Managing business partner security risk has largely been limited to writing protections into service-level agreements. From a technology perspective, enterprises can review logs to look for suspicious behavior if partners are given access to company resources.
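As a concrete illustration of that log-review approach, a simple script can flag partner accounts touching resources outside the scope they were granted. The log format, account names and resource paths below are all hypothetical; this is a minimal sketch of the idea, not any vendor's implementation:

```python
import csv
from io import StringIO

# Hypothetical access log: timestamp, partner account, resource touched.
ACCESS_LOG = """\
2012-06-01T02:14:00,payroll-vendor,/hr/payroll
2012-06-01T02:15:30,payroll-vendor,/engineering/source
2012-06-01T09:01:12,msp-support,/it/tickets
"""

# Resources each partner account is actually authorized to reach,
# e.g. as spelled out in the service-level agreement.
ALLOWED = {
    "payroll-vendor": {"/hr/payroll"},
    "msp-support": {"/it/tickets", "/it/monitoring"},
}

def flag_out_of_scope(log_text):
    """Return (timestamp, account, resource) rows outside the account's scope."""
    suspicious = []
    for timestamp, account, resource in csv.reader(StringIO(log_text)):
        if resource not in ALLOWED.get(account, set()):
            suspicious.append((timestamp, account, resource))
    return suspicious

for row in flag_out_of_scope(ACCESS_LOG):
    print(row)  # the payroll vendor poking at source code gets flagged
```

In practice the allow-list would come from the partner contract and the log from a SIEM export, but the triage logic is the same: anything a partner account touches beyond its agreed scope warrants a phone call.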
Derek Gabbard and his team at Lookingglass Cyber Solutions aim to change all that. The company’s technology, which is being used by a variety of government and financial organizations, can map out the networks of partners and clients and apply a layer of threat intelligence data to determine if there are any potential compromises. The technology provides companies with third-party risk management capabilities.
Called ScoutVision, the technology can get information about a company’s business partner networks once the partner’s IP address range is fed into it. It bases its threat analysis on security vendor intelligence feeds licensed by Lookingglass, honeypots and other proprietary threat intelligence data. Lookingglass monitors communication in cybercriminal networks. It ties intelligence on botnets and malware attacks to trace a threat back to a network that has been penetrated.
The company boasts that nearly 40 distinct sources of threat intelligence data are used in the analysis. It looks at dark IP space and passive DNS data globally. The service can provide all the threat intelligence data it has about an entire network and describe, for example, whether it has 20 to 30 bad hosts, Gabbard said. If any Microsoft IP addresses had been communicating directly with a darknet, for instance, the company could characterize that communication to determine the nature of the threat.
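Lookingglass hasn't published how ScoutVision works internally, but the core idea of scoring a partner's announced netblock against aggregated threat-intelligence feeds can be sketched in a few lines. The feed contents and IP ranges here are made up (they use documentation-reserved addresses), and a real service would aggregate dozens of feeds rather than one dictionary:

```python
import ipaddress

# Hypothetical aggregated threat-intelligence indicators: hosts observed
# talking to botnet controllers, scanning dark IP space, and so on.
THREAT_FEED = {
    "203.0.113.7": "botnet C2 traffic",
    "203.0.113.45": "darknet scanning",
    "198.51.100.9": "malware distribution",
}

def score_partner(cidr):
    """Return the flagged hosts that fall inside a partner's IP range."""
    network = ipaddress.ip_network(cidr)
    hits = []
    for ip, reason in THREAT_FEED.items():
        if ipaddress.ip_address(ip) in network:
            hits.append((ip, reason))
    return hits

# Feed in a partner's IP address range, as the article describes:
for ip, reason in score_partner("203.0.113.0/24"):
    print(f"{ip}: {reason}")
```

The hard part of the real product is obviously the feed quality and attribution, not the set-membership check, but this is the shape of the question a customer is asking: "do any known-bad hosts live inside my partner's address space?"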
Gabbard was CTO of network traffic analysis firm Soteria Network Technologies, a firm that appears to be synonymous with Lookingglass; Soteria has had a number of contracts with the Department of Homeland Security. He also served as a senior member of the technical staff at Carnegie Mellon University’s CERT Coordination Center. Gabbard told me that, up until now, companies have been focusing internally with little regard for the security of their partners’ systems.
I can’t find another company taking Lookingglass’ approach. SIEM systems such as HP ArcSight, and network appliances like RSA NetWitness or Solera Networks, don’t provide external network visibility in the same context, Gabbard said. The technology could eventually be integrated with a network appliance, he said. As CEO of Lookingglass, Gabbard is looking to extend ScoutVision to a broader set of customers.
So what does a company do with the threat data provided by Lookingglass?
Gabbard said he believes the information gleaned by the service can be actionable. The first commercial customers consisted of pilot projects conducted in 2010. So far the service has resulted in mainly reporting and phone calls to third parties. Some early adopters create reports and inform their partners of the potential security issues. Depending on their relationship, they’ll say “hey, your network’s messed up,” he said. “Clean it up or we’ll have to restrict access.”
The firm is gaining interest. In January, the fledgling company received $5 million in funding from Alsop Louie Partners, a firm that includes Gilman Louie, the founder and former CEO of In-Q-Tel – the investment arm of the Central Intelligence Agency. It will be interesting to watch if other security vendors attempt to take a similar approach with existing security appliances. The potential exists to apply the technology to companies with an extensive supply chain.
Ethical hackers hired by an organization to assess its vulnerabilities must always be careful not to “cross the line” and get themselves into trouble with the law. With so many computer security laws in the U.S., it can be a challenge for ethical hackers to ensure they are obeying them all.
But according to David Snead, an attorney in Washington D.C. who frequently represents IT security providers and consultants, it is possible to focus on just a handful of laws to avoid lawsuits and stay out of jail.
During a session at the Source Conference in Boston last month, Snead listed the overwhelming number of laws related to IT security in the U.S. But ethical hackers can focus on just three laws that are most likely to lead to litigation, according to Snead:
• Computer Fraud and Abuse Act (CFAA), which makes it illegal to access a computer or network without proper authorization.
• Wiretap Act, which can be applied to packet sniffing.
• Stored Communications Act (SCA), which can be applied to any email that was meant to be confidential.
Each state also has its own laws, and few organizations have the time or resources to ensure they are compliant in all 50 states. Snead recommended that ethical hackers and security consultants assist their client organizations by ensuring they are compliant in just three states, at least initially. The three states should be:
• The organization’s own headquarters state;
• The state where most of the organization’s employees work;
• The state where most of the organization’s customers live or work.
In some cases, these three scenarios may point to just one or two states, making the consultant’s job that much easier.
In my view, Snead was bold to make these recommendations. Many lawyers, when asked which IT security laws their clients should obey, would probably say, “All of them.” But Snead’s advice comes from a real-world perspective, and it’s this kind of realistic advice that’s greatly appreciated by security practitioners — especially the many independent penetration testers out there — who are often grappling with their budgets.
Still, security pros must understand the risks of following this advice. As Snead explained, triaging the laws this way will avert most legal problems. But the pen tester’s client organization could still get tripped up by a lesser-known law if a creative prosecutor convinces the court it applies to the organization’s security practices.
It’s anyone’s guess how the FedRAMP cloud security initiative will pan out, but the pieces are coming together. Last week, the U.S. General Services Administration released an initial list of approved third-party assessment organizations (3PAOs).
Launched by the Obama administration in December, the Federal Risk and Authorization Management Program (FedRAMP) aims to set a standard approach for assessing the security of cloud services. The goal is to cut the cost and time spent on agency cloud assessments and authorizations.
3PAOs will assess cloud service providers’ security controls to validate they meet FedRAMP requirements. Their assessments will be reviewed by the FedRAMP Joint Authorization Board, which can grant provisional authorizations that federal agencies can use.
Here’s the list of accredited 3PAOs: COACT, Department of Transportation Enterprise Service Center, Dynamics Research Corp., J.D. Biggs and Associates, Knowledge Consulting Group, Logyx, Lunarline, SRA International and Veris Group.
If you’re wondering how these companies became 3PAOs, they had to submit an application demonstrating technical competence in assessing security of cloud-based systems, according to the GSA. They also had to meet ISO/IEC 17020:1998 requirements for companies performing assessments.
When I wrote about FedRAMP earlier this year, the program drew praise, criticism and cautious optimism. Will it get bogged down in bureaucracy? Will it become simply another paper-pushing compliance exercise? Will it help advance cloud security standards for the private sector? Hard to say how long it will take until we know those answers, but at least FedRAMP appears to be on schedule. With the release of the 3PAOs, the program moves closer to its target of becoming operational next month.
I’m planning to speak with one of the 3PAOs tomorrow; hopefully I’ll have some additional information from that interview about the 3PAO process and FedRAMP in general. If I do, I’ll post it on SearchCloudSecurity.com.
Chief information security officers have a lot on their plate. Between data protection, malware detection, compliance regulations, social media security, mobile device management (MDM) and many more areas that fall into the realm of the security team, the chief information security officer (CISO) is obliged to wear many hats each day.
A recent survey by IBM highlighted this multitude of CISO responsibilities. In the report, Finding a strategic voice: Insights from the 2012 IBM Chief Information Security Officer assessment (.pdf), IBM said the ideal CISO must “assume a business leadership position and dispel the idea that information security is a technology support function. Their purview must encompass education and cultural change, not just security technology and processes. Leaders will need to reorient their security organizations around proactive risk management rather than crisis response and compliance. And the management of information security must migrate from discrete and fragmented initiatives to an integrated, systemic approach.”
That’s a tall order, and trying to accomplish it all could lead to CISO burnout. It’s not so much that there’s too much to do (although there is). The real problem causing CISOs to reach for the Pepto Bismol is there are too many conflicting demands coming at them from different angles.
But changes to the CISO role may be on the way, according to Jon Oltsik, a security analyst at research firm Enterprise Strategy Group. Oltsik believes the CISO function will naturally and of necessity divide into two roles: CSO and CISTO.
The chief security officer (CSO) will focus on the intersection of risk and business. The CSO will deal with compliance and legal issues, and be the person who goes before the board of directors to explain the expected return on a $1 million security investment.
The chief information security technology officer (CISTO) will focus on IT security architecture and infrastructure. The CISTO will handle security controls, including monitoring and reporting on the company’s defenses.
Oltsik sums it up like this: CSOs create cybersecurity policies; CISTOs enforce them.
Allocating responsibilities in this way will probably be greatly appreciated by today’s overburdened CISOs. Training programs could focus on the two different career paths, and security professionals could aspire in the direction that best suits their personalities and skills.
Information security spending is thought to be recession proof, but does it have the legs to outrun the current downturn? In-Q-Tel partner Peter Kuper thinks so, but there are still some rough times ahead.
Kuper, who has handled some high-profile IPOs in the security market, told Information Security Decisions 2012 attendees this week in New York City to stop spending on technology that doesn’t work. Investments in legacy security standbys (hello AV, firewalls et al.) need to be tempered. Maybe Kuper has a vested interest in his remarks, but he’s also right. Signature-based defenses don’t work anymore. Kuper said it; analysts tell you the same thing and so do research firms. The Verizon Data Breach Investigations Report is probably the most sobering barometer of the ineffectiveness of today’s security technology: 96% of the attacks behind the breaches Verizon investigated were not complicated attacks; 97% could have been prevented with rudimentary controls; 92% of incidents were discovered by a third party, and only after months of constant infection.
Checkbox security driven by PCI and other mandates is also heavily to blame here. Security managers are using compliance as a life preserver and to beg for budget. Budgets, meanwhile, are largely flat to slightly up, yet nearly 100% of companies are getting owned.
“Where is the ROI there?” Kuper asked. “You’re asking for increased budget, yet three-quarters of you get your butt handed to you in minutes or less. How is that a good ROI for a CFO? Try explaining that to someone that doesn’t understand security.”
Couple that with some weak economic indicators that foreshadow another downturn (despite the market being back to pre-recession 2007 levels) and you’ve got a rocky road ahead, friends.
Looking for a silver lining? OK. Venture capital firms are looking at security companies, and acquisitions are still happening in security, which are indications of innovation and some areas of strength. SIM vendors were the last market segment in play, with Q1 Labs (IBM), NitroSecurity (McAfee), LogLogic (Tibco) and ArcSight (HP) getting scooped up by larger vendors. Palo Alto Networks, meanwhile, is going public soon, Kuper said, after booking $200 million last summer alone. Qualys is also perpetually in the IPO conversation. Sourcefire has been public since 2007 and, after a rocky start, is trading 113% higher than last year.
“VCs were not investing much in security for a long while,” Kuper said. “But security is looking good again. I know a lot of VCs and they’re starting to call back. VCs are making money in security investing in innovative technology. It’s a good sign VCs are investing. Innovation cycles are up and a lot of good companies are getting funding.”
At an event last week in San Francisco that covered a variety of cloud security issues, infosec expert Kevin Walker told attendees to be aggressive with cloud service providers and hold them accountable when it comes to security.
“The key for us practitioners is to go into this with eyes wide open,” said Walker, who has held senior security positions at Symantec and Cisco, among other global firms. He spoke at the Cloud Security Symposium, which was sponsored by Trend Micro.
The traditional focus on building fortresses with firewalls and IPSes won’t translate to the cloud, he said. Cloud provider requirements include increased transparency about their operations and how they detect rogue tenants, and information security pros need to be aggressive in making sure providers meet security requirements, he said.
That’s certainly easier said than done, especially when business units are going around IT and signing up with Amazon. It’s hard to press for security when you don’t even know what cloud services your company is using.
In many cases, lines of business aren’t waiting for IT when they need something – they simply use their credit card to buy cloud services, said JJ DiGeronimo, senior accelerate practice manager and cloud strategist at VMware. “IT departments have true competition from outside service providers,” she told attendees.
“People are used to securing a box, but now we’re moving to securing the data,” she said. “Data is going to sit everywhere and you’ll have to manage it regardless of where it sits.”
Data-centric security has been an ongoing theme in the industry for several years as corporate network boundaries crumble and employees become more mobile. Enterprise adoption of cloud computing is becoming yet another driver.
“If you can’t control the systems anymore. … That’s the only way to do it [security] — to protect the data,” Trend Micro CTO Raimund Genes told me in an interview.
Trend Micro naturally has a vested interest in this trend – the company sells encryption products including a key management service for cloud and virtual environments – but it does make sense given that enterprise data is increasingly flowing to cloud environments and becoming harder to track. Maybe the rise of cloud computing will help push data-centric security into the mainstream.
In the meantime, if you’re looking for ways to track down unauthorized use of cloud services by your developers or sales executives, we published tips in this article.