Security awareness training often teaches the importance of password length and password complexity, but these best practices, as it turns out, may be creating a false sense of security. Even worse, users who cooperate and create long, complex passwords may feel betrayed when the organizations they trusted prove fallible and their passwords are hacked.
The recent LinkedIn hacking incident, in which 6.4 million LinkedIn passwords were stolen (or possibly leaked), demonstrated that the strength of a user’s password is no defense when an Internet application provider is attacked. Even if every LinkedIn password had been as long and complex as possible, it wouldn’t have mattered; the Russian hackers still obtained the hashed LinkedIn passwords and posted them for all to see.
According to some analysts reviewing the LinkedIn breach, the social networking site had failed to protect users’ passwords with a strong hashing algorithm. That’s where the sense of betrayal comes in. If users are doing their part by using strong passwords, they should be able to trust the application provider to take strong precautions, too.
The incident has spurred LinkedIn to take stronger precautions. In a blog post, the company said it would use better hashing and salting to protect its account databases going forward.
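To illustrate what salting and stronger hashing involve, here is a generic sketch using Python’s standard library; it is not LinkedIn’s actual implementation. Each password gets a unique random salt before hashing, so identical passwords produce different stored values and precomputed rainbow tables become useless:

```python
import hashlib
import hmac
import secrets

def hash_password(password: str):
    """Derive a salted hash; a unique salt per user defeats rainbow tables."""
    salt = secrets.token_bytes(16)
    # PBKDF2 with many iterations makes brute-forcing stolen hashes slow.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

A deliberately slow key-derivation function such as PBKDF2 (used here), bcrypt or scrypt raises the attacker’s cost dramatically compared with a single fast hash, which is exactly the weakness analysts flagged in LinkedIn’s reportedly unsalted hashes.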
Organizations can learn from LinkedIn’s public mea culpa. If your IT staff has been lecturing users on strong passwords, but your organization’s passwords are stolen, how will your users react? After years of building trust between IT and users, an incident like this could destroy the relationship in one day.
The LinkedIn incident is a reminder of the need to properly balance responsibility for secure access management among users and IT. Yes, user training is important, but IT security teams must go the extra mile to protect account credentials and prove themselves worthy of users’ trust.
Wednesday’s Cornerstones of Trust Conference featured an interesting CSO discussion of some of the hottest topics infosecurity pros are dealing with today, including the BYOD trend, cloud computing and big data security. The annual conference, held in Foster City, Calif., is sponsored by ISSA’s Silicon Valley and San Francisco chapters, and San Francisco Bay Area InfraGard.
Mobile, cloud and BYOD are all part of an overarching trend towards consumerization of IT that’s driving demand for convenient, easy access to corporate data, said Preston Wood, CSO at Zions Bancorporation, a Salt Lake City-based bank holding company. “We need to find a way to enable that and not be a roadblock,” he said.
At Cisco Systems, the mobile trend is far from new, said Steve Martino, a Cisco vice president in charge of information security for the networking giant. Thirty percent of the company’s workforce has more than two mobile devices. “If we try to prevent it, they’ll find ways around it,” he said.
Instead, organizations should consider flexible mobile policies that permit network access based on the user, device and location, Martino said. For example, a user with a phone that doesn’t have mobile device management (MDM) software may get access to some services but not others.
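A policy like the one Martino describes can be sketched as a simple decision function. The roles, service names and rules below are illustrative assumptions of mine, not Cisco’s actual scheme:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str        # e.g. "employee", "contractor" (hypothetical roles)
    has_mdm: bool         # is the device enrolled in mobile device management?
    on_corp_network: bool # is the request coming from a corporate location?

def allowed_services(req: AccessRequest) -> set:
    """Grant services based on user, device and location, not all-or-nothing."""
    services = {"email", "calendar"}              # baseline for any authenticated user
    if req.has_mdm:
        services |= {"intranet", "file-shares"}   # managed devices get more access
    if req.user_role == "employee" and req.has_mdm and req.on_corp_network:
        services |= {"source-code"}               # most sensitive access needs all three
    return services

# A phone without MDM software gets some services but not others:
print(sorted(allowed_services(AccessRequest("employee", has_mdm=False, on_corp_network=True))))
# ['calendar', 'email']
```

The point of the sketch is the shape of the decision: access is a graded function of several attributes rather than a single yes/no gate at the network edge.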
With cloud computing, information security’s historic reliance on preventative controls won’t work so well, Wood said. The cloud trend presents an opportunity to focus more on detective controls, rapid response and risk mitigation. Each organization will have a different risk appetite, and some aspects of the business will still require preventative controls. “There’s no one-size-fits-all,” Wood said. “You need to ask the business that risk question.”
On the topic of big data security (using big data techniques for security analytics), Wood suggested organizations can get started on that path by digging into data they already have on hand, such as firewall or IDS logs. Administrators often don’t look back to see whether firewall policies are still working; that might be an area to explore, he said. The approach of mining data for security insight builds on itself.
“Start with what you already have,” Wood said. “And start by asking some innovative questions of that data.”
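As a concrete (and admittedly simplified) example of asking questions of data you already have, a few lines of Python can surface the noisiest sources in a firewall log. The log format here is invented for illustration, since real formats vary by vendor:

```python
import re
from collections import Counter

# Hypothetical firewall log lines; real vendors use their own formats.
SAMPLE_LOG = """\
2012-06-20 10:01:02 DENY TCP 203.0.113.7:4242 -> 10.0.0.5:22
2012-06-20 10:01:03 DENY TCP 203.0.113.7:4243 -> 10.0.0.5:23
2012-06-20 10:01:09 ALLOW TCP 198.51.100.2:80 -> 10.0.0.9:51000
2012-06-20 10:02:17 DENY TCP 203.0.113.7:4244 -> 10.0.0.5:3389
"""

LINE = re.compile(r"(DENY|ALLOW) TCP (\d+\.\d+\.\d+\.\d+):\d+ -> (\d+\.\d+\.\d+\.\d+):(\d+)")

def noisy_sources(log_text: str, top: int = 5):
    """Count denied connections per source IP -- a first question to ask of the data."""
    denies = Counter()
    for line in log_text.splitlines():
        m = LINE.search(line)
        if m and m.group(1) == "DENY":
            denies[m.group(2)] += 1
    return denies.most_common(top)

print(noisy_sources(SAMPLE_LOG))  # [('203.0.113.7', 3)]
```

A source IP repeatedly probing SSH, Telnet and RDP ports, as in the sample above, is exactly the kind of pattern a "look back at the logs" exercise is meant to surface.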
Earlier in the day, Wood presented a keynote on big data and security analytics, which unfortunately I missed, but I did cover his presentation at RSA Conference 2012, as did many other reporters. His RSA presentation was widely covered, and justly so: he has put into practice what others are only talking about at a conceptual level. At RSA, he and others from Zions detailed how the company harnessed information from its disparate security data sources by developing a Hadoop-based security data warehouse. Using big data techniques enabled the company to speed forensics investigations and improve fraud detection and overall security, they said.
On Wednesday, Wood also offered some career advice to security pros: Don’t limit yourself to the “echo chamber of security.” Security pros should try to learn about other disciplines; big data security, for example, offers the opportunity to reach out to business units that have experience with analytics, he said.
At Cisco, employees are rotated, for example, from security to IT or from a business unit into security, Martino said. That practice helps the security organization understand the pain points throughout the business, he said. The company also has created security advocates in other parts of the business, which gets others involved in security.
Wood also urged attendees to spend more time on strategy. A lot of security organizations find themselves fighting fires all the time instead of looking at the big picture, he said. Security teams need people with the skills to deal with daily operations but who can also look ahead and strategize.
Unemployment is at 0% for information security professionals! This good news was reported this spring in CompTIA’s 9th annual Information Security Trends report. The report cited U.S. Bureau of Labor Statistics (BLS) research conducted in the spring of 2011, which also noted the unemployment rate at just under 4% for the IT industry overall. Clearly, skilled security professionals should have no trouble getting information technology security jobs right now.
But companies are having trouble filling those jobs. According to CompTIA’s survey of 500 IT and business executives in the U.S., conducted at the end of 2011, 40% of companies are having difficulty hiring IT security specialists.
During a recent conversation with Todd Thibodeaux, president and CEO of CompTIA, I asked him why companies are having hiring problems, and I expected his answer would relate to the need for more CompTIA certifications. Or perhaps he’d say companies can’t pay enough to hire the talent they need. But Thibodeaux’s response brought up another perspective on the hiring challenge. He believes organizations are having trouble hiring IT security pros in the U.S. partly because of depressed housing values.
“The challenge is recruiting within physical regions,” Thibodeaux said. “Organizations don’t want to outsource their security, and they certainly don’t want to off-shore their security. So they need to hire locally.”
Yet with many IT professionals’ mortgages underwater right now, would-be employees are not able to move to take new jobs. So even though hiring organizations are willing to pay good salaries, they are largely at the mercy of larger economic forces beyond their control.
This phenomenon is more noticeable in some parts of the country, Thibodeaux said. Areas with high concentrations of technology companies are fortunate enough to have a larger pool of IT professionals from which to hire. But for companies not located in high-tech regions, it appears hiring has stalled. Companies and employees alike are waiting for home values to rise so people can move to fill IT security job gaps.
Is the answer simply to wait out the housing market? Thibodeaux believes a better answer may lie in college education. “Many colleges want to teach, not train,” Thibodeaux said. “But companies need people coming out of college who have been trained in technical skills.”
Perhaps this unusual situation of low unemployment in IT security combined with low home values will motivate some U.S. colleges to beef up their IT security courses with more hands-on training. Sure, that will take time — at least four years if incoming freshmen start now. But with home values inching back up slowly, those four years may turn out to be the quicker fix.
Security experts have warned about the potential problems caused by military cyberstrikes. Experts say cyberwarfare is difficult to plan and worse, it puts innocent people at risk.
Stuxnet was part of a secret joint U.S.-Israeli cyberattack operation which began with approval by the Bush Administration and continued with the nod from the Obama White House, according to a detailed account of the attack written by David Sanger in a report published today in the New York Times.
To put the pieces of the Stuxnet puzzle together, Sanger conducted interviews with unnamed sources involved with the Stuxnet operation dubbed “Olympic Games.” While it confirms a lot of speculation about the nation-states behind the Stuxnet worm, it also raises a lot of questions about cyberwarfare and its use by a sitting president. Should members of Congress have been notified of the operation? Were any U.S. citizens put at risk?
Even well-planned military cyberstrikes go wrong
A 2009 study by the nonprofit research firm RAND Corp. urged the United States not to invest in offensive cyberweapons. It is too difficult to predict the outcome of an attack, making strategic planning a guessing game, according to the report’s author, Martin C. Libicki. “Predicting what an attack can do requires knowing how the system and its operators will respond to signs of dysfunction and knowing the behavior of processes and systems associated with the system being attacked,” Libicki wrote. Indeed, according to the Times story, Stuxnet clearly caused some disruption, but it was anyone’s guess as to how far it set back Iran’s nuclear program.
Even worse, Sanger’s account of the operation detailed a major coding error that enabled the offensive malware to escape into the wild. This led to its detection and analysis by antimalware vendors. Indeed there were facilities in the United States using the Siemens systems that the worm could have sought out. While the threat was minimal – Stuxnet still would have to get through the buffer zone isolating a facility from the Internet – those quoted in Sanger’s story said it was easy to get through the Iranian facility’s buffer zone using a simple thumb drive. I’ve heard of penetration testers using this trick to great success: dropping thumb drives in areas throughout a targeted organization to see if any curious employees would insert the device into their computer. “It turns out there is always an idiot around who doesn’t think much about the thumb drive in their hand,” according to an unnamed official referring to how Stuxnet was planted at the underground uranium enrichment site in Natanz, Iran.
If that’s the case, then the operation certainly could have put U.S. citizens at risk right here on our own soil. It also has the potential to fan the flames of retaliation or similar offensive cyberwarfare operations from our adversaries. We’ve already seen reports that government agencies and even critical infrastructure facilities, such as power plants, have been penetrated in some way.
Network security luminary Marcus Ranum, CSO of Tenable Network Security, told SearchSecurity about his concern over militarized cyberspace and even outlined the problem caused by the Stuxnet-like strikes.
Critical infrastructure protection
I wrote about a 2010 report by the Center for Strategic and International Studies (CSIS), which consisted of a global survey of more than 600 IT pros at critical infrastructure facilities. The main finding was that the systems that run power plants, manage the distribution of hazardous chemicals and help monitor water treatment plants are in dire need of stronger safeguards. The survey found that those facilities are under a constant barrage of attacks. A U.S.-China Economic Review Commission report last October cited a significant attack targeting U.S. satellites. The examples go on and on.
But the problem goes beyond the potential threat to power plants and oil and chemical refineries. Earlier this year researchers demonstrated a theoretical attack targeting the systems that control the locking mechanisms at a prison. Imagine the chaos that would cause if cybercriminals were to target the prison system.
There is plenty of recognition of the seriousness of the problem, but very little transparency about where the nation stands on protecting critical assets, said Andy Purdy, chief cybersecurity strategist at CSC and a member of the team that developed the U.S. National Strategy to Secure Cyberspace in 2003. In an interview I had with Purdy at the 2012 RSA Conference, he cited some progress, but admitted that the lack of transparency leaves authorities very little information with which to track the nation’s progress in protecting critical systems. Purdy pointed to substantial federal funding being invested in SCADA system security, the progress of the Industrial Control Systems CERT, and several plans and reports outlining the roles of the public and private sectors in protecting critical systems, digital identities for Internet users, and the role ISPs should play in controlling customers with compromised systems.
Perhaps security luminary Dan Geer is thinking ahead to disaster recovery after a cyberstrike. He speaks incessantly at security conferences and summits about the need for system redundancy and manual processes to help lessen the disruption and chaos when Internet-connected systems fail. Not only do we need redundant systems and manual processes, but we need skilled people who know how they function, Geer says.
Stuxnet details conclusion
The details about the planning operation behind Stuxnet should be a reminder that military action, whether physical or digital, needs to be thoroughly vetted, or else innocent citizens could be inadvertently put at risk. It should be a call to action for stricter oversight of the security of critical infrastructure, whether publicly or privately owned. It’s amazing to me that despite all of the increased rhetoric about better protecting the nation’s critical infrastructure, there has been very little evidence of progress. Just words.
After working hard to create sound security policies, it’s easy for enterprise information security managers to be dismayed when users ignore the rules and knowingly bypass security controls. When those rule-breakers are executives, it feels like salt on the wound. After all, who should understand the importance of protecting an organization’s assets better than its top executives? Yet, a survey at Infosecurity Europe revealed that, in 43% of organizations, senior managers and even the board of directors do not follow their organizations’ security policies and procedures.
The survey was conducted last month by security consulting firm Cryptzone Group, which asked 300 IT professionals who within their organizations is least likely to follow security policies and procedures. According to the Cryptzone report, Perceptions of security awareness (.pdf), 20% said senior managers are least likely to follow the rules, and 23% pointed their fingers directly at the CEO or CTO.
The Cryptzone report didn’t dig into the reasons behind these perturbing findings, but I’d venture there are five primary reasons why executives disobey corporate security policies. (You’ll either laugh or cry about the last one.)
1. They are discreetly excused from taking security training programs;
2. They do not agree wholeheartedly with the security policy;
3. They believe the risks they are taking aren’t all that bad;
4. They are in a hurry;
5. They think IT will take care of things if something (like a data breach) occurs.
The antidote for all these reasons can, of course, be found in corporate security training. But because senior managers probably can’t or won’t take time out of their workdays to attend more training (see reason #4), security pros will have to keep finding creative ways to get the message out. Multimedia playing in the office kitchen, occasional text reminders sent to managers’ phones, and other friendly methods of interjecting bits of the security policy into managers’ minds must be a never-ending process in every organization.
A bane for U.S.-based cloud providers for several months now has been the assumption among cloud customers and service providers outside the U.S. – especially in Europe – that the Patriot Act gives the U.S. more access to cloud data than other governments. The idea, then, is that it’s safer to store your data with a cloud provider in a location free from such governmental access. A recent study debunked this Patriot Act cloud notion by showing that, in fact, other governments have just as much access as the U.S. for national security or law enforcement reasons.
The study, published by the global law firm Hogan Lovells (.pdf), looked at the laws of ten countries, including the U.S., France, Germany, Canada and Japan, and found each one vested authority in the government to require a cloud service provider to disclose customer data. The study showed that even countries with strict privacy laws have anti-terrorism laws that allow for expedited government access to cloud data.
“On the fundamental question of governmental access to data in the cloud, we conclude …that it is not possible to isolate data in the cloud from governmental access based on the physical location of the cloud service provider or its facilities,” wrote Christopher Wolf, co-director of Hogan Lovells’ privacy and information practice, and Winston Maxwell, a partner in the firm’s Paris office.
In a blog post, Dave Asprey, vice president of cloud security at Trend Micro, said the research “proves a bigger point: that your data will be disclosed with or without your permission, and with or without your knowledge, if you’re in one of the 10 countries covered.”
The only solution to this problem, he added, is encryption. But how encryption keys are handled is critical; encryption keys need to be on a policy-based management server at another cloud provider or under your own control, Asprey wrote. Now, Trend Micro has a vested interest here since it provides encryption key management, but it’s a point worth noting for organizations concerned about protecting cloud data not just from governments, but from cybercriminals.
For another examination of the Patriot Act’s impact on cloud computing, check out the article by SearchCloudSecurity.com contributor Francoise Gilbert. She looks at the rules for the federal government to access data and how they undercut concerns about the Patriot Act and cloud providers based in the U.S.
For years, the mantra of the security industry has been to get enterprises to look internally for weaknesses and activity that can raise a red flag to a malware-infected machine or an employee with malicious intentions. But how do you know how secure your partners and clients are?
It’s not difficult to see the security risks posed by a contractor taking care of payroll, a managed services provider, or the string of businesses that make up the supply chain. A breach at any of those businesses could have a serious impact on your company’s security. An enterprise CISO or IT director has little control over the security of partner networks. Managing business partner security risks has largely been left to writing protections into service-level agreements. From a technology perspective, enterprises can review logs for suspicious behavior if partners are given access to company resources.
Derek Gabbard and his team at Lookingglass Cyber Solutions aim to change all that. The company’s technology, which is being used by a variety of government and financial organizations, can map out the networks of partners and clients and apply a layer of threat intelligence data to determine if there are any potential compromises. The technology provides companies with third-party risk management capabilities.
Called ScoutVision, the technology can gather information about a company’s business partner networks once the partner’s IP address range is fed into it. It bases its threat analysis on security vendor intelligence feeds licensed by Lookingglass, honeypots and other proprietary threat intelligence data. Lookingglass monitors communication in cybercriminal networks, tying intelligence on botnets and malware attacks to trace a threat back to a network that has been penetrated.
The company boasts that nearly 40 distinct sources of threat intelligence data are used in the analysis, including dark IP space and passive DNS data gathered globally. The service can provide all the threat intelligence data it has about an entire network and describe, for example, whether it contains 20 to 30 bad hosts, Gabbard said. If any Microsoft IP addresses had been communicating directly with a darknet, for instance, the company could characterize that communication to determine the nature of the threat.
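The underlying check is conceptually simple, even if the hard part is the breadth and quality of the intelligence feeds. Here is a hypothetical sketch of my own, assuming a feed of known-bad hosts and a partner’s announced IP range (Lookingglass’ actual analysis is far richer than this):

```python
import ipaddress

# Hypothetical feed data; services like ScoutVision aggregate dozens of feeds.
KNOWN_BAD_HOSTS = {"203.0.113.9", "203.0.113.44", "198.51.100.77"}

def flag_partner_hosts(partner_cidr: str, bad_hosts: set) -> list:
    """Return known-bad hosts that fall inside a partner's announced IP range."""
    network = ipaddress.ip_network(partner_cidr)
    return sorted(ip for ip in bad_hosts
                  if ipaddress.ip_address(ip) in network)

print(flag_partner_hosts("203.0.113.0/24", KNOWN_BAD_HOSTS))
# ['203.0.113.44', '203.0.113.9']
```

A hit means a host inside the partner’s address space appears in threat intelligence, which is the kind of finding that would prompt the "your network’s messed up" phone call Gabbard describes.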
Gabbard was CTO of network traffic analysis firm Soteria Network Technologies, a firm that appears to be synonymous with Lookingglass. Soteria has had a number of contracts with the Department of Homeland Security. He served as senior member of the technical staff at Carnegie Mellon University’s CERT Coordination Center. Gabbard told me that up until now companies have been focusing internally with little regard to the security of their partner systems.
I can’t find another company taking Lookingglass’ approach. SIEM systems such as HP ArcSight, and network appliances like RSA NetWitness or Solera Networks, don’t provide external network visibility in the same context, Gabbard said. The technology could eventually be integrated with a network appliance, he said. As CEO of Lookingglass, Gabbard is looking to extend ScoutVision to a broader set of customers.
So what does a company do with the threat data provided by Lookingglass?
Gabbard said he believes the information gleaned by the service can be actionable. The first commercial customers consisted of pilot projects conducted in 2010. So far the service has resulted in mainly reporting and phone calls to third parties. Some early adopters create reports and inform their partners of the potential security issues. Depending on their relationship, they’ll say “hey, your network’s messed up,” he said. “Clean it up or we’ll have to restrict access.”
The firm is gaining interest. In January, the fledgling company received $5 million in funding from Alsop Louie Partners, a firm that includes Gilman Louie, the founder and former CEO of In-Q-Tel – the investment arm of the Central Intelligence Agency. It will be interesting to watch if other security vendors attempt to take a similar approach with existing security appliances. The potential exists to apply the technology to companies with an extensive supply chain.
Ethical hackers hired by an organization to assess its vulnerabilities must always be careful to not “cross the line” and get themselves into trouble with the law. With all the computer security laws in the U.S., it can be a challenge for ethical hackers to ensure they are obeying all the laws.
But according to David Snead, an attorney in Washington D.C. who frequently represents IT security providers and consultants, it is possible to focus on just a handful of laws to avoid lawsuits and stay out of jail.
During a session at the Source Conference in Boston last month, Snead listed the overwhelming number of laws related to IT security in the U.S. But ethical hackers can focus on just three laws that are most likely to lead to litigation, according to Snead:
• Computer Fraud and Abuse Act (CFAA), which makes it illegal to access a computer or network without proper authorization.
• Wiretap Act, which can be applied to packet sniffing.
• Stored Communications Act (SCA), which can be applied to any email that was meant to be confidential.
Similarly, each state has different laws, and few organizations have the time or resources to ensure they are compliant in all 50 states. Snead recommended ethical hackers and security consultants assist their client organizations by ensuring they are compliant in just three states, at least initially. The three states should be:
• The organization’s headquarters state;
• The state where most of the organization’s employees work;
• The state where most of the organization’s customers live or work.
In some cases, these three scenarios may point to just one or two states, making the consultant’s job that much easier.
In my view, Snead was bold to make these recommendations. Many lawyers, when asked which IT security laws their clients should obey, would probably say, “All of them.” But Snead’s advice comes from a real-world perspective, and it’s this kind of realistic advice that’s greatly appreciated by security practitioners — especially the many independent penetration testers out there — who are often grappling with their budgets.
Still, security pros must understand the risks of following this advice. As Snead explained, triaging the laws this way will avert most legal problems. But the pen tester’s client organization could still get tripped up by a lesser-known law if a creative prosecutor convinces the court it applies to the organization’s security practices.
It’s anyone’s guess how the FedRAMP cloud security initiative will pan out, but the pieces are coming together. Last week, the U.S. General Services Administration released an initial list of approved third-party assessment organizations (3PAOs).
Launched by the Obama administration in December, the Federal Risk and Authorization Management Program (FedRAMP) aims to set a standard approach for assessing the security of cloud services. The goal is to cut the cost and time spent on agency cloud assessments and authorizations.
3PAOs will assess cloud service providers’ security controls to validate they meet FedRAMP requirements. Their assessments will be reviewed by the FedRAMP Joint Authorization Board, which can grant provisional authorizations that federal agencies can use.
Here’s the list of accredited 3PAOs: COACT, Department of Transportation Enterprise Service Center, Dynamics Research Corp., J.D. Biggs and Associates, Knowledge Consulting Group, Logyx, Lunarline, SRA International and Veris Group.
If you’re wondering how these companies became 3PAOs, they had to submit an application demonstrating technical competence in assessing security of cloud-based systems, according to the GSA. They also had to meet ISO/IEC 17020:1998 requirements for companies performing assessments.
When I wrote about FedRAMP earlier this year, the program drew praise, criticism and cautious optimism. Will it get bogged down in bureaucracy? Will it become simply another paper-pushing compliance exercise? Will it help advance cloud security standards for the private sector? It’s hard to say how long it will take until we know those answers, but at least FedRAMP appears to be on schedule. With the release of the 3PAO list, the program moves closer to its target of becoming operational next month.
I’m planning to speak with one of the 3PAOs tomorrow; hopefully I’ll have some additional information from that interview about the 3PAO process and FedRAMP in general. If I do, I’ll post it on SearchCloudSecurity.com.
Chief information security officers have a lot on their plate. Between data protection, malware detection, compliance regulations, social media security, mobile device management (MDM) and many more areas that fall into the realm of the security team, the chief information security officer (CISO) is obliged to wear many hats each day.
A recent survey by IBM highlighted this multitude of CISO responsibilities. In the report, Finding a strategic voice: Insights from the 2012 IBM Chief Information Security Officer assessment (.pdf), IBM said the ideal CISO must “assume a business leadership position and dispel the idea that information security is a technology support function. Their purview must encompass education and cultural change, not just security technology and processes. Leaders will need to reorient their security organizations around proactive risk management rather than crisis response and compliance. And the management of information security must migrate from discrete and fragmented initiatives to an integrated, systemic approach.”
That’s a tall order, and trying to accomplish it all could lead to CISO burnout. It’s not so much that there’s too much to do (although there is). The real problem causing CISOs to reach for the Pepto Bismol is there are too many conflicting demands coming at them from different angles.
But changes to the CISO role may be on the way, according to Jon Oltsik, a security analyst at research firm Enterprise Strategy Group. Oltsik believes the CISO function will naturally and of necessity divide into two roles: CSO and CISTO.
The chief security officer (CSO) will focus on the intersection of risk and business. The CSO will deal with compliance and legal issues, and be the person who goes before the board of directors to explain the expected return on a $1 million security investment.
The chief information security technology officer (CISTO) will focus on IT security architecture and infrastructure. The CISTO will handle security controls, including monitoring and reporting on the company’s defenses.
Oltsik sums it up like this: CSOs create cybersecurity policies; CISTOs enforce them.
Allocating responsibilities in this way will probably be greatly appreciated by today’s overburdened CISOs. Training programs could focus on the two different career paths, and security professionals could aspire in the direction that best suits their personalities and skills.