When it comes to IT governance, it’s one thing to have staff completing compliance risk management processes; it’s quite another to be confident that everything is indeed in line and secure. Understanding your level of compliance and how it relates to business risk is more than simply asking IT staff: “How are things?” or “Are we secure?”
The best way to ensure that you’re getting good information about compliance risk management is to trust but verify. Asking the right questions and getting involved in the security management process are sure ways to shed light on issues that have been shrugged off or have gone undetected, sometimes for years. Here are some pointed questions you can ask of those responsible for day-to-day network and system administration to ensure that you’re not creating a monster by making high-risk assumptions:
1. What high-priority items were found during our most recent Web application penetration test? What’s the plan for fixing these issues?
2. What patches were missing during our last vulnerability scan?
3. Why are patches continually showing up as missing on our Windows servers and database systems?
4. How are we managing event logs and correlating potential security incidents? How long are these logs being kept?
5. Our passwords seem pretty secure for our main network logons, but what about for our Web applications, firewalls and all the random database servers scattered around the network?
6. Given our current configurations, what’s the business risk of someone losing a laptop or having their smartphone or iPad stolen?
7. What security incidents have been prevented over the past “X” number of months?
8. How do we know our traditional desktop antivirus software is actually keeping our endpoints secure?
9. What are we doing to proactively prevent data from leaking out of the network unnoticed?
10. Have you seen any protocol anomalies on the network recently when compared with your known baseline? Are any odd systems such as workstations, smartphones or rarely used servers showing up as top talkers on the network?
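That last question lends itself to a quick automated check. Here is a minimal sketch, with entirely hypothetical host names and thresholds, of how traffic totals from a flow capture could be compared against a known baseline to flag unexpected top talkers:

```python
from collections import Counter
from statistics import mean

def top_talkers(flows, baseline, threshold=3.0):
    """Flag hosts whose observed traffic far exceeds their baseline.

    flows:     iterable of (host, bytes_sent) records from a capture.
    baseline:  dict of host -> typical bytes per interval (must be non-empty).
    threshold: multiple of baseline that counts as anomalous.
    Hosts absent from the baseline are compared against the average baseline.
    """
    totals = Counter()
    for host, nbytes in flows:
        totals[host] += nbytes
    typical = mean(baseline.values())  # fallback for unknown hosts
    return sorted(
        host for host, total in totals.items()
        if total > threshold * baseline.get(host, typical)
    )

# Hypothetical example: a workstation suddenly moving 11 MB against a 1 MB norm.
flows = [("ws-17", 5_000_000), ("srv-db", 800_000), ("ws-17", 6_000_000)]
baseline = {"ws-17": 1_000_000, "srv-db": 2_000_000}
```

Feeding a day’s flow records through something like this alongside last month’s baseline surfaces exactly the “odd workstation as top talker” case the question describes.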
This is hardly an exhaustive list, but these are some of the major security oversights and risks I see on a consistent basis. If everything appears to be hunky-dory in IT, odds are you need to probe further. Complacency, poor time management and the desire for job security often obscure what’s really going on.
One of your main goals for compliance risk management should be to ensure you’re getting the right information at the right time so you, your peers and your executives can make the right decisions. Anything short of this will merely set your compliance program up for failure in the long term.
Federal governments all over the world have become increasingly hands-on with cybersecurity strategy and online privacy, but businesses have sometimes been critical of new rules that they say will hurt their bottom line.
Look at the controversy surrounding the U.S. House of Representatives’ Stop Online Piracy Act. The act would allow the Attorney General to seek injunctions against foreign websites that steal and sell American innovations and products, and would increase criminal penalties for individuals who traffic in counterfeit medicine and military goods. While these provisions may sound like music to online businesses’ ears, a letter protesting the act (signed by representatives from names you may have heard of like AOL, eBay, Facebook, Google and Twitter) expresses concern that it poses a “serious risk to our industry’s continued track record of innovation and job creation, as well as to our nation’s cybersecurity.”
But in announcing new details that are part of its new £650m cybersecurity strategy, the U.K. government is trying to strike a balance between protecting consumers, online information and good business sense. Just look at the government’s tagline when heralding the initiative, which it calls “a new era of unprecedented cooperation between the government and the private sector on cybersecurity.”
The cybersecurity strategy is unique in that it sets up a joint public/private-sector cybersecurity “hub” designed to allow the U.K. government and the private sector to exchange actionable information on cyberthreats and manage cyberattack response. A pilot program surrounding this initiative will begin in December with five business sectors: defense, telecommunications, finance, pharmaceuticals and energy.
The strategy also encourages industry-led cybersecurity standards for private-sector companies. Rather than selling these as new mandatory regulations, the U.K. cabinet says the standards would give businesses a competitive edge by allowing them to promote themselves as certifiably cybersecure. The U.K. will also develop a program to certify cybersecurity specialists by March, with the ultimate goal of increasing the skill levels of information assurance and cybersecurity professionals.
Minister for Cyber Security Francis Maude said a closer partnership between the public and private sectors is crucial to the success of the cybersecurity strategy, and this is what some of the U.S. efforts are missing. When working to strike this proper balance between the interests of cybersecurity and business, it’s obviously important to take into consideration the best interests of both parties. The U.S. and other countries could learn from the U.K.’s cybersecurity initiative. Working closely with the private sector will likely create a more congenial environment by demonstrating that the government is trying to help, rather than impose heavy-handed restrictions to secure online information.
Information risk management affects each and every one of us, both professionally and personally. Yet we still can’t seem to grasp information risk management properly, much less put it into action. The problem is that the bad guys (external hackers, organized cybercrime rings, malicious employees and the like) know what’s really going on.
They know that compliance is a joke in many enterprises. They know that security audits often gloss over the real issues. They know they have free rein and that the odds are in their favor. The reality is that many people don’t know which side of the risk equation they’re on. They assume they have the clarity, context and visibility they need for managing information risk. But in reality, they’re way behind the eight ball, and don’t realize it until it’s too late.
As IT professionals, we all have a choice about how information risk management is handled in our business. It really boils down to when we address the critical issues. We can do it before an incident occurs, which is not done often enough. We can do it during an incident, which is unrealistic because odds are we won’t even know when it’s taking place. We can do it after an incident, which is still the most common approach I see. Finally, we can just ignore the problem and hope we don’t get bitten.
Savvy IT professionals who see the big picture and think long term choose the first option. They put the proper information risk management systems and processes in place to handle the issues immediately, before the going gets tough.
The essence of effective information risk management involves perspective and good old-fashioned common sense. It’s easy to get caught up in the minutiae and overlook the fact that information risk can be tied directly to business risk. The formula for making information risk management work is to show, for every control, which requirement or risk it satisfies and which business need it meets. You have to apply this test to every IT and security-related decision you make, consistently over time.
The inability to stop doing things that are no longer working is the primary failure of information security. In IT security, you cannot change that which you tolerate. In most cases, there is no “right” or “wrong” way of managing information risk.
Every business and every situation is different. The key is to do whatever it takes to get the job done in your own environment based on your own circumstances. Taking a proactive information risk management approach is the only viable way to keep things in check over the long haul.
It was the shot heard round the social media world: This week, a Facebook spam attack resulted in pornographic and violent images showing up on users’ news feeds. Facebook has always prided itself on avoiding such attacks, and this was a big one. There are predictions that the site will lose some of its more prudish users because of the attack, which could hurt the social media juggernaut’s business model.
But who should really be held responsible for the Facebook spam attack? Do people using Facebook really not realize that they should avoid copying and pasting a suspicious-looking link from an unknown source into their browsers? I know a gift certificate to a themed chain restaurant is enticing, but come on. Facebook says it’s providing users with “educational checkpoints” to protect themselves. Is one of these points “Don’t be stupid”?
I think Helen A.S. Popkin said it best in the Technolog blog: “Viral scams persist on Facebook because Facebook users continue to click malicious links.” A study this week by the National Cyber Security Alliance and McAfee found that of 2,337 U.S. adults surveyed, 24% are not at all confident in their ability to use the privacy and security settings in their social networks. Another 15% of respondents have never checked their social networking privacy and security settings, and only 18% said they had checked their settings within the past year.
These findings are just an example of the disconnect between the threats to everyday Internet users and what these users consider “safe and secure” Internet use. As more incidents like the Facebook spam attack occur, companies will no doubt try to comply with consumer protection rules and establish their own policies to protect customers. But perhaps users need to do a little more to protect themselves as well.
A few months ago, it was Google in regulators’ crosshairs. In the past couple of weeks, however, it seems that Facebook is regulators’ new focus, as they push for consumer data protection.
Facebook is close to a settlement with the U.S. government over charges that it misled users about its use of their personal information, according to The Wall Street Journal. The settlement — currently waiting for Federal Trade Commission (FTC) approval — reportedly would require Facebook to submit privacy audits for 20 years and to obtain users’ consent before making “material retroactive changes” to its privacy policies.
The report comes as the FTC and other global regulators continue their consumer data protection efforts. In March Google agreed to adopt a privacy program (which also included 20 years of privacy audits) in response to charges that it deceived users and potentially violated user privacy when it launched the social networking service Buzz. And today the FTC announced that the Asia-Pacific Economic Cooperation forum has approved an initiative to create cross-border data privacy protection among APEC members. Companies that wish to participate in the APEC privacy system will undergo a third-party review and certification process that will examine their corporate privacy practices.
The New York Times reported last week that the European justice commissioner is planning to insert wording into a revision of the European Commission’s Data Protection Directive law that would require non-European Union companies to abide by Europe’s rules on data collection or face fines and prosecution. The move could create a global commerce dispute surrounding Internet privacy, the Times reported. Facebook is also being examined by Ireland, Germany, Sweden, Finland, Norway and Denmark for potential violations of consumer data protection regulations.
Speaking of consumer data protection in the U.K., there was another noteworthy news item from the past couple of weeks: The U.K. Parliament’s Justice Select Committee has suggested jail terms for violations of the country’s Data Protection Act. Although fines are used to punish breaches of U.K. data protection laws, they provide little deterrent when the financial gain exceeds the penalty, Sir Alan Beith, the committee’s chairman, said in a recent report. “Magistrates and judges need to be able to hand out custodial sentences when serious misuses of personal information come to light,” he added. “Parliament has provided that power, but ministers have not yet brought it into force — they must do so.”
Although it seems Facebook is the prime target in these consumer data protection inquiries, perhaps it’s being used as a very high-profile example. If companies see their own vulnerabilities in the lapses of one with seemingly endless resources, they might start taking a long look at their own consumer data protection practices. They probably will soon have to anyway, as regulators increase their vigilance.
Early in my career I was influenced by the work of Christopher Alexander, an architecture professor at the University of California, Berkeley. Alexander and his team researched and cataloged patterns representing building, city and community construction best practices that had evolved over a considerable period of time. I used their seminal work, A Pattern Language, to guide the construction of my own home, and many of their principles to teach software engineering as a discipline.
Alexander, et al., note that, “Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”
Each of the architectural patterns includes a picture and a paragraph explaining how it works in context. Architectural patterns don’t constrain or inhibit creativity as much as they free designers to focus on the differentiations that have the greatest impact on the end user.
Twenty years ago, I documented some of my thoughts on software development patterns in an article titled “Systems Design: Lessons from Architecture.” I have been recently writing about the relationship between enterprise risk management and sustainability, and it occurred to me that GRC managers could benefit from taking a pattern-based approach to their work — especially for organizing their teams and system architecture.
Patterns are like musical forms — there are infinite varieties and parts to be created, but the overall structure is known to “stand the test of time.” We already have well-established sets of controls for GRC, such as COBIT and ISACA’s Risk IT. These are all important, but not an alternative to patterns because their intent is to support auditing rather than to provide a creativity framework. Instead, patterns should complement controls.
A GRC pattern language, like a programming language or even a natural language, would be a shared resource to enable faster and more reliable enterprise system development. GRC patterns should include all the key constructs needed to ensure best governance and compliance practices (in this context, controls would be embedded in each pattern).
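To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and an illustrative control mapping) of what one entry in such a pattern catalogue might look like, with controls embedded in the pattern rather than bolted on afterwards:

```python
from dataclasses import dataclass, field

@dataclass
class GRCPattern:
    """One entry in a hypothetical GRC pattern catalogue.

    Mirrors Alexander's form: a recurring problem, the context in
    which it occurs, and the core of a reusable solution, with the
    relevant controls embedded in the pattern itself.
    """
    name: str
    problem: str           # the recurring problem this pattern addresses
    context: str           # when and where the pattern applies
    solution: str          # the core of the solution, adaptable per use
    controls: list = field(default_factory=list)  # e.g. control framework IDs
    retired: bool = False  # patterns can be retired when conditions change

# Hypothetical example entry; the control ID is illustrative only.
access_review = GRCPattern(
    name="Periodic Access Review",
    problem="Entitlements accumulate as people change roles",
    context="Any system with role-based access",
    solution="Review and recertify entitlements on a fixed cycle",
    controls=["COBIT DSS05.04"],
)
```

A community wiki of such entries could then be queried, audited and pruned programmatically, which is what distinguishes a living pattern repository from a static control checklist.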
They also must be flexible. For example, with governance we know that it’s alright to have exceptions as long as there is a repeatable, auditable process for justifying and documenting them. Given the pace of technological advancement that drives business model changes, any pattern repository must allow for rapid changes, too.
I believe we need a GRC pattern guidebook, similar in spirit to Alexander’s work but one that leverages a broad community supported by collaboration tools and assembled by a flexible process. Changes in the environment may lead to the identification of new patterns based on analytics, and pattern retirement when conditions change is equally important. In other words, we need a Wiki to capture, catalogue, review and update patterns as a community.
With that in mind, SIG411 LLC is launching an open source patterns project that will include GRC patterns contributed by practitioners and academics who will be recognized for their contributions. The scope of the project is broader than GRC, as it will include patterns for all aspects of sustainable enterprises and societies. But given my personal interest in the intersection of enterprise risk management and sustainability, GRC will be an early focal point. I encourage all interested parties to get involved and contribute, as well as use, the patterns from this Wiki.
Adrian Bowles has more than 25 years of experience as an analyst, practitioner and academic in IT, with a focus on strategy and management. He is the founder of SIG411 LLC, a Westport, Conn.-based research and advisory firm. Write to him at email@example.com.
National Cybersecurity Awareness Month has drawn to a close, but it’s clear that much still needs to be done to protect information online. One recent survey has found that small businesses, which likely don’t have the resources to bounce back from a major data breach, could be particularly vulnerable to cybersecurity threats.
The online survey of 1,045 small business owners, sponsored by Symantec Corp. and the National Cyber Security Alliance, found that 70% have no formal Internet security policy for employees and that of those, 49% do not have even an informal policy. In addition, 45% of the small business owners surveyed said they do not provide Internet safety training to their employees.
These findings are in stark contrast to SMBs’ apparently false sense of security. Eighty-five percent of the survey respondents said they believe their company is safe from hackers, viruses, malware or a cybersecurity breach; and 69% agreed that Internet security is “critical to their business’s success.”
It’s clear that the survey respondents aren’t following the main theme of this year’s Cybersecurity Awareness Month: the importance of educating everyone and making them aware that they need to do their part to protect their information online.
Other survey highlights (or lowlights, as the case may be):
- 56% of respondents have no Internet use policies to clarify which websites and Web services employees can use; 52% have a plan in place for keeping their business cybersecure.
- 67% have become more dependent on the Internet in the last year; 66% depend on it for day-to-day operations.
- 57% of respondents say a loss of Internet access for 48 hours would be disruptive to their business, and 76% say that most of their employees use the Internet daily.
- 37% have an employee policy or guidelines in place for the remote use of company information on mobile devices, and 36% have a policy outlining employees’ acceptable use of social media.
- 59% do not use multifactor authentication to access their networks.
- 50% report they always wipe data off their machines completely before they dispose of them; 21% never do.
The survey also found that SMBs are woefully unprepared to react after a data breach. Forty percent of respondents said they don’t have a contingency plan outlining procedures for handling and reporting a data breach or loss of information.
Ignoring the problem of cybersecurity threats can be very costly. Data released by Symantec shows that 40% of all targeted cyberattacks are directed at companies with fewer than 500 employees. In 2010, the average annual cost of cyberattacks to SMBs was $188,242. Business Insider reported in September that approximately 60% of small businesses will close within six months of a cyberattack.
What is it going to take for these small businesses to realize the impact of cybersecurity threats? They need to realize that lax cybersecurity measures, combined with their sparse resources, make them particularly vulnerable. It might be costly and time-consuming to shore up online security, but these businesses need to take these threats seriously, before it’s too late.
After hackers gained access to the personal information of more than 100 million user accounts last spring, Sony overhauled online security and created a chief information security officer (CISO) position. On Sept. 6, Philip Reitinger joined Sony as its senior vice president and CISO — and he’s already been busy.
In a post to the PlayStation blog last week, Reitinger said Sony detected attempts on Sony Entertainment Network (SEN), PlayStation Network (PSN) and Sony Online Entertainment (SOE) to test “a massive set” of sign-in IDs and passwords against the company’s network database. The attempts appeared to include a large amount of data obtained from one or more compromised lists from other companies, sites or other sources, Reitinger said.
“As a preventative measure, we are requiring secure password resets for those PSN/SEN accounts that had both a sign-in ID and password match through this attempt,” Reitinger wrote in the blog post.
Less than one-tenth of 1% of the PSN, SEN and SOE audiences may have been affected by the data security breach, and Reitinger assured users that credit card numbers were not at risk. This was a relatively low-risk data security breach, but perhaps Sony’s reaction was a case of lessons learned: After the April breach, Sony was criticized for waiting a week to notify customers that their personal information might have been compromised. In addition, it took more than two weeks to fully restore the network. Needless to say, Sony users (and federal regulators) were not impressed by what some viewed as a lackadaisical reaction.
There has been much public outcry over Sony’s data security breach, and those of other companies, in the past year. This likely influenced the SEC last week to mandate the “disclosure of timely, comprehensive and accurate information” surrounding cybersecurity risks.
Did Sony’s online security overhaul help detect this breach before it became another fiasco? Although critics have said Sony simply hired Reitinger as an insurance policy to pacify investors and customers after the April data security breach, he showed his value here. At least now the Sony brass and their customers have someone to go to for information about any further breaches — what happened, how it happened, how they are going to handle it in the future. (Unfortunately for Reitinger, it also gives them someone to blame.)
But if nothing else, the reaction to last week’s data security breach might be indicative of a new trend of taking a proactive approach and letting online customers know what they can do to protect themselves and their information. Judging by the comments made to Reitinger’s blog post, people are mostly happy with Sony’s reaction to the potential data security breach. Many praised Reitinger and Sony for keeping them informed.
Perhaps Sony and companies like them have learned their lesson about the futility of trying to keep a breach out of the spotlight, and know now that transparency is the best course of action. If the SEC’s recent mandate is any indication, federal regulators and customers are going to be watching companies closely to ensure cybersecurity is kept above board.
Compliance means different things to different people. Indeed, regulatory compliance requirements are — and should be — handled differently based on the unique needs of the business. The ugly reality is that so many assumptions are being made about compliance that they often skew the perception of what’s really going on.
Here are what I believe to be the most dangerous assumptions we make about regulatory compliance requirements and how they can get us — and our businesses — into hot water:
1. We’re compliant, so our information is safe. The most common assumption is that compliance equals security. It doesn’t. Never has and never will. Your business may be “compliant” at the moment, but odds are you’ve still got tons of low-hanging fruit that needs to be fixed. It’s time to dig deeper.
2. Our lawyer is in charge, so all is well. Lawyers should have the final say-so, but they shouldn’t be calling all the shots. Compliance is much more complex than audit reports and contracts. There are information risk assessments, vulnerability management, incident response, access controls and more. All the right people across the board need to be involved throughout the compliance process.
3. It’s not worth the money to become — and stay — compliant. According to the Ponemon Institute, the cost of noncompliance is 2.65 times the cost of adhering to regulatory compliance requirements. Do what needs to be done, and you’ll save a tremendous amount of money and effort. As time goes on, you’re going to be forced into compliance eventually. Why not get started now?
4. We encrypt our PII — that’s the ultimate security control. Even though data is encrypted, there are numerous ways to exploit known flaws, especially if the encryption wasn’t properly implemented or isn’t being managed the way it needs to be. You need encryption — but don’t assume it’s working as intended.
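As a toy illustration (deliberately not real cryptography) of how “encrypted” data can still leak: any stream cipher whose keystream or nonce is reused lets an attacker cancel the key entirely by XORing two ciphertexts together, exposing exactly how the underlying records differ, all without ever recovering the key:

```python
from itertools import cycle

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """Toy stream cipher: XOR data with a (repeating) keystream.

    Stands in for any real stream cipher whose nonce is reused.
    Do not use for actual encryption.
    """
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

key = b"\x13\x37\xc0\xde" * 8  # hypothetical keystream, reused below
ct1 = xor_stream(b"card=4111111111111111", key)
ct2 = xor_stream(b"card=0000000000000000", key)

# XORing the two ciphertexts cancels the shared keystream, leaving
# the XOR of the two plaintexts: an attacker learns where and how
# the records differ without touching the key.
leak = bytes(a ^ b for a, b in zip(ct1, ct2))
```

The same class of implementation mistake (static IVs, ECB mode, unauthenticated ciphertext) is what turns “we encrypt our PII” into a false sense of security.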
5. Our tools are telling us that we’re compliant; enough said. Good network and security tools are essential for visibility and control, but you can never rely on them completely. Be it identity management, network monitoring, vulnerability management — you name it — canned reports from such tools most often do not reflect reality. You have to look closer and validate for yourself.
6. We’ve done everything required by the regulations; that’s all we need to do. Focusing on what’s required doesn’t mean you’ve covered all your bases. The minimum regulatory compliance requirements are often just a baseline, and they may not be what your business really needs. Furthermore, I can’t tell you how many times I’ve seen businesses “become” compliant without ever performing a single information risk assessment. You can’t possibly put the right security controls in place if you don’t even know what needs attention.
7. We had a breach and subsequent compliance sanctions, so we learned our lessons and are much more secure now. Humans often assume they know what other people are thinking and that others are taking care of what’s needed. A prime time for this to happen is after a breach occurs: we get back into our day-to-day work and become complacent. Assuming that everything has been uncovered and fixed is a prime opportunity for people to let their guard down and for something else to go awry. Experiencing a data breach means you’ve got to up your game, big time, and stay on top of things without ever letting your guard down.
You cannot change what you tolerate. Fix your oversights and gaps surrounding regulatory compliance requirements now, before they bite when you’re least expecting it.
Kevin Beaver is an information security consultant and expert witness, as well as a seminar leader and keynote speaker at Atlanta-based Principle Logic LLC. Beaver has authored/co-authored eight books on information security, including The Practical Guide to HIPAA Privacy and Security Compliance and the newly updated Hacking For Dummies, 3rd edition. In addition, he’s the creator of the Security On Wheels information security audiobooks and blog.
October is National Cybersecurity Awareness Month, and the overarching theme this year is to spread awareness of every Internet user’s role in securing their information. In other words, YOU are the first line of defense in protecting your information, so pay attention to security vulnerabilities stemming from your devices.
But certainly not everyone who goes online is overly familiar with the persistent threats. Luckily, it appears some watchdogs are here to help. This was evident when the Android Police blog recently reported a “massive security vulnerability” in HTC’s Android devices.
Android Police researchers found that in recent updates to some of HTC’s devices, the company introduced a suite of logging tools designed to collect user information — way too much information, according to the researchers. Researchers found that on affected HTC devices, any application that requests a single Internet permission (normal for any app that connects to the Web or shows ads) can access:
- The list of user accounts, including email addresses and sync status for each.
- Last-known network and GPS locations, and a limited history of previous locations.
- Phone numbers from the phone log.
- SMS data, including phone numbers.
- System logs likely to include email addresses, phone numbers and other private info.
“If you, as a company, plant these information collectors on a device, you better be DAMN sure the information they collect is secured and only available to privileged services or the user, after opting in,” wrote Android Police blogger Artem Russakovskii when announcing the HTC device security vulnerability. “That is not the case.”
After the flaw was exposed by the Android Police, HTC confirmed that it found in its software a “vulnerability that could potentially be exploited by a malicious third-party application,” and that it was working on a fix. Customers will be notified of how to download and install the security fix, the company said. HTC also urged customers to use caution when downloading, using, installing and updating applications from untrusted sources.
At least HTC moved quickly to correct the problem and inform its customers of the vulnerabilities, right? Well, not so fast. After finding the security lapse, the Android Police contacted HTC on Sept. 24 and received no real response for five business days, after which the Android Police released the information to the public.
Perhaps HTC was waiting to tie in its response to the vulnerabilities with Cybersecurity Awareness Month.
The point to take from the story surrounding HTC mobile device security is that companies are not going to come out and announce when there is a huge risk to using their products — especially those designed for consumers. The problem is the average consumer is not going to know what to look for, and will trust that information is protected when using devices for everyday use.
As shown with the HTC mobile device security issue, this is not always the case. How many more security vulnerabilities are there in other mobile devices that have not been exposed yet? And it’s not just individual consumers who need to be concerned: The spread of personal devices (and their associated security risks) in the workplace makes due diligence necessary. People can obviously no longer just assume that they’re protected.