November 7, 2011 7:01 PM
Posted by: KevinBeaver2
enterprise risk management
Early in my career I was influenced by the work of Christopher Alexander, an architecture professor at the University of California, Berkeley. Alexander and his team researched and cataloged patterns representing building, city and community construction best practices that had evolved over a considerable period of time. I used their seminal work, A Pattern Language, to guide the construction of my own home, and many of their principles to teach software engineering as a discipline.
Alexander, et al., note that, “Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”
Each of the architectural patterns includes a picture and a paragraph explaining how it works in context. Architectural patterns don’t constrain or inhibit creativity as much as they free designers to focus on the differentiations that have the greatest impact on the end user.
Twenty years ago, I documented some of my thoughts on software development patterns in an article titled “Systems Design: Lessons from Architecture.” I have recently been writing about the relationship between enterprise risk management and sustainability, and it occurred to me that GRC managers could benefit from taking a pattern-based approach to their work — especially for organizing their teams and system architecture.
Patterns are like musical forms — there are infinite varieties and parts to be created, but the overall structure is known to “stand the test of time.” We already have well-established sets of controls for GRC, such as COBIT and ISACA’s Risk IT. These are all important, but not an alternative to patterns because their intent is to support auditing rather than to provide a creativity framework. Instead, patterns should complement controls.
A GRC pattern language, like a programming language or even a natural language, would be a shared resource to enable faster and more reliable enterprise system development. GRC patterns should include all the key constructs needed to ensure best governance and compliance practices (in this context, controls would be embedded in each pattern).
They also must be flexible. For example, with governance we know that it’s alright to have exceptions as long as there is a repeatable, auditable process for justifying and documenting them. Given the pace of technological advancement that drives business model changes, any pattern repository must allow for rapid changes, too.
I believe we need a GRC pattern guidebook, similar in spirit to Alexander’s work but one that leverages a broad community supported by collaboration tools and assembled by a flexible process. Changes in the environment may lead to the identification of new patterns based on analytics, and pattern retirement when conditions change is equally important. In other words, we need a Wiki to capture, catalogue, review and update patterns as a community.
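To make the idea concrete, here is a minimal sketch of what one entry in such a pattern repository might look like. The field names, the sample pattern and the control identifier are all illustrative assumptions on my part, not from any published standard — the point is simply that Alexander's context/problem/solution form can carry embedded controls and an auditable exception process:

```python
# Hypothetical sketch of a GRC pattern record. All names and the sample
# control identifier are illustrative, not drawn from an actual repository.
from dataclasses import dataclass, field

@dataclass
class GRCPattern:
    name: str
    context: str                 # where the pattern applies
    problem: str                 # the recurring problem it addresses
    solution: str                # the core of the solution, adaptable per use
    controls: list = field(default_factory=list)  # embedded control references
    exception_process: str = ""  # repeatable, auditable justification path

# An example entry a practitioner might contribute to the wiki:
access_review = GRCPattern(
    name="Periodic Access Review",
    context="Any system holding regulated or sensitive data",
    problem="Entitlements accumulate and drift away from business need",
    solution="Review and recertify access on a fixed cycle, with owner sign-off",
    controls=["COBIT DSS05.04"],  # illustrative control identifier
    exception_process="Document and time-box any deferred review, with approval",
)
```

Because each pattern carries its own controls, an auditor can trace compliance evidence pattern by pattern, while designers remain free to vary the solution on every use.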
With that in mind, SIG411 LLC is launching an open source patterns project that will include GRC patterns contributed by practitioners and academics who will be recognized for their contributions. The scope of the project is broader than GRC, as it will include patterns for all aspects of sustainable enterprises and societies. But given my personal interest in the intersection of enterprise risk management and sustainability, GRC will be an early focal point. I encourage all interested parties to get involved and contribute, as well as use, the patterns from this Wiki.
Adrian Bowles has more than 25 years of experience as an analyst, practitioner and academic in IT, with a focus on strategy and management. He is the founder of SIG411 LLC, a Westport, Conn.-based research and advisory firm. Write to him at firstname.lastname@example.org.
October 31, 2011 7:00 PM
Posted by: Ben Cole
Regulatory compliance training, Risk management and compliance, Vulnerability assessment for compliance
National Cybersecurity Awareness Month has drawn to a close, but it’s clear that much still needs to be done to protect information online. One recent survey found that small businesses — which likely don’t have the resources to bounce back from a major data breach — could be particularly vulnerable to cybersecurity threats.
The online survey of 1,045 small business owners, sponsored by Symantec Corp. and the National Cyber Security Alliance, found that 70% have no formal Internet security policy for employees and that of those, 49% do not have even an informal policy. In addition, 45% of the small business owners surveyed said they do not provide Internet safety training to their employees.
These findings are in stark contrast to SMBs’ apparently false sense of security. Eighty-five percent of the survey respondents said they believe their company is safe from hackers, viruses, malware or a cybersecurity breach; and 69% agreed that Internet security is “critical to their business’s success.”
It’s clear that the survey respondents aren’t following the main theme of this year’s Cybersecurity Awareness Month: the importance of educating everyone and making them aware that they need to do their part to protect their information online.
Other survey highlights (or lowlights, as the case may be):
- 56% of respondents have no Internet use policies to clarify which websites and Web services employees can use; only 52% have a plan in place for keeping their business cybersecure.
- 67% have become more dependent on the Internet in the last year; 66% depend on it for day-to-day operations.
- 57% of respondents say a loss of Internet access for 48 hours would be disruptive to their business, and 76% say that most of their employees use the Internet daily.
- 37% have an employee policy or guidelines in place for the remote use of company information on mobile devices, and 36% have a policy outlining employees’ acceptable use of social media.
- 59% do not use multifactor authentication to access their networks.
- 50% report they always wipe data off their machines completely before they dispose of them; 21% never do.
The survey also found that SMBs are woefully unprepared to react after a data breach. Forty percent of respondents said they don’t have a contingency plan outlining procedures for handling and reporting a data breach or loss of information.
Ignoring the problem of cybersecurity threats can be very costly. Data released by Symantec shows that 40% of all targeted cyberattacks are directed at companies with fewer than 500 employees. In 2010, the average annual cost of cyberattacks to SMBs was $188,242. Business Insider reported in September that approximately 60% of small businesses will close within six months of a cyberattack.
What is it going to take for these small businesses to realize the impact of cybersecurity threats? They need to realize that lax cybersecurity measures, combined with their sparse resources, make them particularly vulnerable. It might be costly and time-consuming to shore up online security, but these businesses need to take these threats seriously, before it’s too late.
October 18, 2011 1:32 PM
Posted by: Ben Cole
Data Security, Personally identifiable information
After hackers gained access to the personal information of more than 100 million user accounts last spring, Sony overhauled online security and created a chief information security officer (CISO) position. On Sept. 6, Philip Reitinger joined Sony as its senior vice president and CISO — and he’s already been busy.
In a post to the PlayStation blog last week, Reitinger said Sony detected attempts on Sony Entertainment Network (SEN), PlayStation Network (PSN) and Sony Online Entertainment (SOE) to test “a massive set” of sign-in IDs and passwords against the company’s network database. The attempts appeared to include a large amount of data obtained from one or more compromised lists from other companies, sites or other sources, Reitinger said.
“As a preventative measure, we are requiring secure password resets for those PSN/SEN accounts that had both a sign-in ID and password match through this attempt,” Reitinger wrote in the blog post.
Less than one-tenth of 1% of the PSN, SEN and SOE audiences may have been affected by the data security breach, and Reitinger assured users that credit card numbers were not at risk. This was a relatively low-risk data security breach, but perhaps Sony’s reaction was a case of lessons learned: After the April breach, Sony was criticized for waiting a week to notify customers that their personal information might have been compromised. In addition, it took more than two weeks to fully restore the network. Needless to say, Sony users (and federal regulators) were not impressed by what some viewed as a lackadaisical reaction.
There has been much public outcry over Sony’s data security breach, and those of other companies, in the past year. This likely influenced the SEC, which last week mandated the “disclosure of timely, comprehensive and accurate information” surrounding cybersecurity risks.
Did Sony’s online security overhaul help detect this breach before it became another fiasco? Although critics have said Sony simply hired Reitinger as an insurance policy to pacify investors and customers after the April data security breach, he showed his value here. At least now the Sony brass and their customers have someone to go to for information about any further breaches — what happened, how it happened, how they are going to handle it in the future. (Unfortunately for Reitinger, it also gives them someone to blame.)
But if nothing else, the reaction to last week’s data security breach might be indicative of a new trend of taking a proactive approach and letting online customers know what they can do to protect themselves and their information. Judging by the comments made to Reitinger’s blog post, people are mostly happy with Sony’s reaction to the potential data security breach. Many praised Reitinger and Sony for keeping them informed.
Perhaps Sony and companies like them have learned their lesson about the futility of trying to keep a breach out of the spotlight, and know now that transparency is the best course of action. If the SEC’s recent mandate is any indication, federal regulators and customers are going to be watching companies closely to ensure cybersecurity is kept above board.
October 17, 2011 7:10 PM
Posted by: Kevin Beaver
information risk assessment, regulatory compliance requirements
Compliance means different things to different people. Indeed, regulatory compliance requirements are — and should be — handled differently based on the unique needs of the business. The ugly reality is that so many assumptions are being made about compliance that they often skew the perception of what’s really going on.
Here are what I believe to be the most dangerous assumptions we make about regulatory compliance requirements and how they can get us — and our businesses — into hot water:
1. We’re compliant, so our information is safe. The most common assumption is that compliance equals security. It doesn’t. Never has and never will. Your business may be “compliant” at the moment, but odds are you’ve still got tons of low-hanging fruit that needs to be fixed. It’s time to dig deeper.
2. Our lawyer is in charge, so all is well. Lawyers should have the final say-so, but they shouldn’t be calling all the shots. Compliance is much more complex than audit reports and contracts. There are information risk assessments, vulnerability management, incident response, access controls and more. All the right people across the board need to be involved throughout the compliance process.
3. It’s not worth the money to become — and stay — compliant. According to the Ponemon Institute, the cost of noncompliance is 2.65 times the cost of adhering to regulatory compliance requirements. Do what needs to be done, and you’ll save a tremendous amount of money and effort. As time goes on, you’re going to be forced into compliance eventually. Why not get started now?
4. We encrypt our PII — that’s the ultimate security control. Even though data is encrypted, there are numerous ways to exploit known flaws, especially if the encryption wasn’t properly implemented or isn’t being managed the way it needs to be. You need encryption — but don’t assume it’s working as intended.
5. Our tools are telling us that we’re compliant; enough said. Good network and security tools are essential for visibility and control, but you can never rely on them completely. Be it identity management, network monitoring, vulnerability management — you name it — canned reports from such tools most often do not reflect reality. You have to look closer and validate for yourself.
6. We’ve done everything required by the regulations, so that’s all we need to do. Focusing on what’s required doesn’t mean you’ve covered all your bases. The minimum regulatory compliance requirements are often a baseline of suggestions, but they may not be what your business really needs. Furthermore, I can’t tell you how many times I’ve seen businesses “become” compliant without ever performing a single information risk assessment. You can’t possibly put the right security controls in place if you don’t even know what needs attention.
7. We had a breach and subsequent compliance sanctions, so we learned our lessons and are much more secure now. People often assume that others are taking care of what’s needed — especially once a breach has passed and everyone settles back into day-to-day work and grows complacent. Assuming that everything has been uncovered and fixed is a prime opportunity for something else to go awry. Experiencing a data breach means you’ve got to up your game, big time, and stay on top of things without ever letting your guard down.
You cannot change what you tolerate. Fix your oversights and gaps surrounding regulatory compliance requirements now, before they bite when you’re least expecting it.
Kevin Beaver is an information security consultant and expert witness, as well as a seminar leader and keynote speaker at Atlanta-based Principle Logic LLC. Beaver has authored/co-authored eight books on information security, including The Practical Guide to HIPAA Privacy and Security Compliance and the newly updated Hacking For Dummies, 3rd edition. In addition, he’s the creator of the Security On Wheels information security audiobooks and blog.
October 10, 2011 6:56 PM
Posted by: Ben Cole
mobile device security
October is National Cybersecurity Awareness Month, and the overarching theme this year is to spread awareness of every Internet user’s role in securing their information. In other words, YOU are the first line of defense in protecting your information, so pay attention to security vulnerabilities stemming from your devices.
But certainly not everyone who goes online is familiar with the persistent threats. Luckily, it appears some watchdogs are here to help. This was evident when the Android Police blog recently reported a “massive security vulnerability” in HTC’s Android devices.
Android Police researchers found that in recent updates to some of HTC’s devices, the company introduced a suite of logging tools designed to collect user information — way too much information, according to the researchers. On affected HTC devices, any application that requests a single Internet permission (normal for any app that connects to the Web or shows ads) can access:
· The list of user accounts, including email addresses and sync status for each.
· Last-known network and GPS locations, and a limited history of previous locations.
· Phone numbers from the phone log.
· SMS data, including phone numbers.
· System logs likely to include email addresses, phone numbers and other private info.
“If you, as a company, plant these information collectors on a device, you better be DAMN sure the information they collect is secured and only available to privileged services or the user, after opting in,” wrote Android Police blogger Artem Russakovskii when announcing the HTC device security vulnerability. “That is not the case.”
After the flaw was exposed by the Android Police, HTC confirmed that it found in its software a “vulnerability that could potentially be exploited by a malicious third-party application,” and that it was working on a fix. Customers will be notified of how to download and install the security fix, the company said. HTC also urged customers to use caution when downloading, using, installing and updating applications from untrusted sources.
At least HTC moved quickly to correct the problem and inform its customers of the vulnerabilities, right? Well, not so fast. After finding the security lapse, the Android Police contacted HTC on Sept. 24 and received no real response for five business days, after which the Android Police released the information to the public.
Perhaps HTC was waiting to tie in its response to the vulnerabilities with Cybersecurity Awareness Month.
The point to take from the story surrounding HTC mobile device security is that companies are not going to come out and announce when there is a huge risk to using their products — especially those designed for consumers. The problem is that the average consumer is not going to know what to look for, and will trust that their information is protected during everyday use.
As shown with the HTC mobile device security issue, this is not always the case. How many more security vulnerabilities are there in other mobile devices that have not been exposed yet? And it’s not just individual consumers that need to be concerned: The spread of personal devices (and their associated security risks) in the workplace makes due diligence necessary. People can obviously no longer just assume that they’re protected.
October 3, 2011 6:18 PM
Posted by: Ben Cole
October is National Cyber Security Awareness Month, and this year’s theme is meant to remind individuals of their role in securing information, as well as the devices and the networks they use. Failure to understand this relatively simple cybersecurity message can have embarrassing consequences, as banking giant The Goldman Sachs Group, Inc. learned earlier this week when hackers published the personal information of several employees, including CEO Lloyd Blankfein.
Goldman Sachs was not the only big name in cybersecurity news this week. After news surfaced that Facebook had been gathering information about the websites its users visited even after users logged out of the social network, two congressmen urged the Federal Trade Commission (FTC) to investigate the company’s practices.
In a letter to the FTC, Congressmen Edward J. Markey (D-Mass.) and Joe Barton (R-Texas) said tracking users’ behavior without their knowledge “raises serious privacy concerns.” Facebook says it is working to correct the matter, but Barton and Markey want the FTC to investigate and make sure the practice is stopped. Barton and Markey also urged the FTC to investigate the use of so-called “supercookies” that allow websites to capture personal data about consumers.
Daniel Conroy, CISO and global head of information security at BNY Mellon Corp., says organizations should make clear to employees their role in protecting their own — as well as the company’s — sensitive information. Conroy said protecting data starts with communicating to employees what is acceptable and what is not with regard to risk management. He suggests providing security awareness training to all employees.
“If employees don’t know what information is important to the company, how are they going to know what not to post?” Conroy asked during a presentation at the MIS Training Institute’s IT Governance, Risk and Compliance Summit in Boston last month.
Conroy focused on how the expansion of social media makes sensitive company information especially vulnerable — and noted that it’s important to establish a balance between satisfying business needs and mitigating risk when using such sites. He noted that avoiding practices as simple as posting organizational charts online could go a long way toward preventing leaks of business info. Most importantly, he suggests companies anticipate evolving risks as part of a cybersecurity strategy, and communicate these risks to all employees. Companies could go so far as to create a security awareness campaign using techniques such as posters, videos and email blasts to get the message out and encourage employees to participate.
The bottom line is that protecting information starts with the individual. With more people incorporating personal technology in their business activities, companies can be hurt when personal information is leaked. As a result, companies would be well served to show employees how they can protect themselves and the information they offer online … which will in turn help protect the business.
September 26, 2011 7:17 PM
Posted by: Ben Cole
cloud computing standards
Recognizing the “significant opportunities” surrounding cloud computing, the Subcommittee on Technology and Innovation held a hearing last week to examine the benefits — and obstacles — of widespread cloud adoption. The hearing could be a first step to more exacting cloud computing standards.
Subcommittee members said cloud computing can provide users with increased computing capability, greater efficiency and lower energy and infrastructure costs. However, cybersecurity remains a major concern for many users, said Subcommittee Chairman Rep. Ben Quayle (R-Ariz.). Quayle pointed out that users must have confidence that their data and applications, as well as their privacy, will be protected. Quayle added that cloud service providers would need to offer users different tiers of security depending on the sensitivity of their data in order to alleviate these concerns.
Nick Combs, federal chief technology officer at EMC Corp., and Dr. Dan Reed, corporate vice president of the technology policy group at Microsoft, were among those testifying at the hearing. In response to Quayle’s concerns, Combs suggested cloud security be driven by a “flexible policy” aligned to the business or mission need, and that a common framework would be needed to ensure that cloud security policies are consistently applied. Reed added that clear policy goals surrounding cloud security are necessary, but regulators need to be careful to avoid rules that will hinder cloud innovation or quickly become outdated.
These cloud security concerns echoed statements recently made by Alan Barnes, director of risk and advisory at Services Assurant Inc., at a GRC training summit in Boston. Barnes noted that cloud computing creates additional third-party security risks, such as hacking, a lack of compliance standards and intellectual property vulnerabilities. Barnes added that the current lack of agreement on cloud computing standards means that cloud provider risk evaluation will remain inexact and inconvenient for the next several years.
The National Institute of Standards and Technology (NIST) is spearheading stakeholder efforts to develop cloud data security and interoperability standards, which witnesses at last week’s hearing said are critical to the cloud’s success.
“As an agency considers migrations to cloud computing, NIST must develop the appropriate consensus standards and guidelines to ensure a secure and trustworthy environment for federal information,” according to a statement from the Subcommittee on Technology and Innovation.
Developing such “consensus standards and guidelines” is an appropriate first step to alleviate concerns surrounding the mass migration to the cloud. But until these cloud computing standards are established and implemented, users need to remain cautious moving to the cloud.
September 19, 2011 4:08 PM
Posted by: Ben Cole
Data Security, online privacy
A few weeks ago in this space, I wondered if increased scrutiny of Google’s business practices was just the beginning of the federal government’s efforts to regulate the Internet. Judging by a handful of news stories and announcements last week, online data security and online privacy concerns have shot to the top of at least some lawmakers’ lists of concerns.
For starters, Sen. Richard Blumenthal (D-Conn.) introduced the Personal Data Protection and Breach Accountability Act of 2011. The legislation is designed to protect consumers’ personally identifiable information and improve online data security.
The bill would create a process for companies to establish appropriate online data security, and it would hold companies accountable for failing to comply with those plans. In what may be spurred by Sony’s slow response to a huge data breach earlier this year, Blumenthal’s bill also requires companies to promptly notify consumers after a breach has occurred, and to provide consumers with solutions to alleviate online security threats.
To help prevent future breaches, the bill encourages better information sharing among federal agencies, law enforcement and the private sector to alert businesses of specific online security threats.
Also last week, an Op-Ed piece in The New York Times highlighted an upcoming Supreme Court case that could have huge ramifications for online privacy. But this time, it concerns how much information the government should have access to.
The case, United States v. Antoine Jones, concerns a GPS device placed on the car of a suspected drug dealer without a warrant, which the man says was a violation of the Fourth Amendment.
“If the court rejects his logic and sides with those who maintain that we have no expectation of privacy in our public movements, surveillance is likely to expand, radically transforming our experience of both public and virtual spaces,” wrote Jeffrey Rosen, a law professor at George Washington University.
Rosen pointed out that technologies such as Facebook’s facial-recognition tool could be used by law enforcement to help identify criminals. Rosen also referenced a 2008 comment from a Google executive saying that, within a few years, public agencies and private companies could be asking Google to post live feeds from public and private surveillance cameras all around the world.
“If the feeds were linked and archived, anyone with a Web browser would be able to click on a picture of anyone on any monitored street and follow his movements,” Rosen wrote in The New York Times piece.
These news items were among a handful reporting on online data security regulations in the past week. Here are some others:
- The Federal Trade Commission announced it is seeking public comment on proposed amendments to the Children’s Online Privacy Protection Rule, which gives parents control over what personal information websites may collect from children under 13. The amendments are aimed at keeping pace with new technology and devices that give children Internet access.
- Connecticut Attorney General George Jepsen announced the creation of a task force to investigate Internet privacy and data breaches while educating the public and businesses about data protection.
- On Thursday, the House subcommittee on Commerce, Manufacturing, and Trade held the first of a series of hearings to address online privacy. The hearing examined the European Union’s privacy and data collection regulations and how they have affected the Internet economy. Some have expressed concern that limiting the tracking of Internet users (as is done in the EU) could dramatically hurt online marketing effectiveness.
Federal online privacy concerns and the increased government involvement in online data security may be warranted, at least according to a new PricewaterhouseCoopers survey of 9,600 security executives. The survey found that 43% of global companies think they have an effective information security strategy in place and are proactively executing their plans. However, only 16% of respondents say their organizations are prepared and have security policies that are able to confront an advanced persistent threat attack, creating more online data security concerns.
It appears that most people with a stake in the game are at least aware of the severity of online security threats. Perhaps a combination of legal regulations and private efforts surrounding online data security could have the movement heading in the right direction.
September 12, 2011 8:21 PM
Posted by: Ben Cole
Dodd-Frank Wall Street Reform and Consumer Protection Act
The rollout of regulations under the Dodd-Frank Wall Street Reform and Consumer Protection Act has been pushed back until at least early 2012, according to the Commodity Futures Trading Commission (CFTC). The delay marks the second time the CFTC has put the brakes on new rules governing the over-the-counter derivatives market.
In June, the CFTC said the rules would be finalized by the end of 2011 — which already would have been six months past the original deadline.
The New York Times’ Dealbook called the derivatives rules delay “another setback for the sweeping overhaul passed in the aftermath of the financial crisis.” But CFTC Chairman Gary Gensler said the federal regulator is “not against a clock” to implement the Dodd-Frank regulations.
“No doubt, as this is a human endeavor, there will likely be changes to this outline down the road,” Gensler said at a public meeting Thursday. “We also will continue to reach out to other regulators, both here and abroad, for their input as we consider the many thousands of comments on these rules.”
To date, the CFTC has finished 12 final rules under Dodd-Frank, and the agency will host a full schedule of public meetings this fall. Compliance represents a “major step” in the CFTC’s efforts to make financial reform a reality and to protect the American taxpayer, Gensler added.
“We are looking to consider external and internal business conduct rules related to risk management, supervision, conflicts of interest, record-keeping and chief compliance officers,” Gensler said.
Gensler also said he supports a proposal to establish schedules to phase in compliance with previously proposed requirements, including one surrounding the swap trading relationship documentation. The proposal would provide greater clarity to swap dealers and major swap participants regarding the time frame for compliance, as well as give them an adequate amount of time to comply, he said.
As SearchCompliance.com Executive Editor Chris Gonsalves pointed out earlier this summer, delays such as the CFTC’s recent announcement could be only the tip of Dodd-Frank’s iceberg of problems. I understand federal regulators want to get things right under Dodd-Frank regulations, but a failure to implement hard and fast compliance rules in a timely manner could end up only further alienating consumers already mired in the current economic malaise. Stay tuned.