Ship captains have long started their days by initialing log entries. As a former senior security executive at a financial services firm with $500 billion in assets under management and over 20,000 employees, my day would start similarly. Each morning, I’d take responsibility for reviewing lists of accounts with privileged access to high-risk data.
In the world of security access, “privilege” really means the ability to “write” to, or alter, a database. It also includes the ability to alter the audit trail documenting who has that “write” access. “High-risk” data includes, for example, customer balances and transaction values. This morning ritual of personally reviewing privileged access should be part of a compliance program before you attempt database logging. Both are fundamental controls that everyone should have in place: Reports documenting the identities that hold privileged access need to be designed and implemented, and operational procedures for reviewing and following up on those reports need to be put in place.
Every morning, automated reports would appear in my inbox based on tightly defined criteria. I reviewed them, printed them, signed them, and had them filed. Auditors checked these randomly several times a year. Once a week, I reviewed similar reports signed by my subordinates, my VPs, reflecting use of emergency IDs, temporary IDs, vendor IDs, and privileged transactions. In other words, even before the Sarbanes-Oxley Act (SOX) required senior executives to take a more proactive role in security, I was starting my business day the same way, monitoring the list of those with the keys to the company’s crown jewels, so to speak.
My daily morning executive-level review of high-risk access should tell you a few things:
- Even at an enormous firm, the number of privileged IDs with access to high-risk data should be short enough for a busy executive to personally review
- It is both feasible and reasonable for senior executives to personally review this information and record that they have done so
- Anyone can expect this kind of review may be taking place in any major organization handling high-risk data, although it is not as universal as it should be
There are no specific standards or frameworks telling you how to create these reports or what to include. Don’t waste your time on a fool’s errand searching for detailed technical guidelines. COBIT and SOX frameworks indicate only that this type of review should be defined by each organization and put into place. Whether it is daily, weekly or monthly, and what exactly it includes, will be up to each organization’s compliance officer and CISO, depending on the business and its risks.
Here are some general considerations for specifying these reports:
- The number of individuals with standing write access to this data should be zero. If someone needs regular access to unlock or fix operational issues, you should know those people by name very well, and they should number no more than three.
- Revoke privileges after resolution. Anyone who was granted write access to resolve an issue should have had the privilege revoked once the issue was resolved. Thus, the only names showing up on your report should be individuals still resolving issues that cross the time frame of the report run, which should be timed around 3 a.m. every day.
- Watch the audit switches. Don’t forget to include in your review the identities that have the ability to turn “audit” on and off for each database or account. Unless you include this privilege, individuals can turn “audit” off prior to access and turn it back on immediately afterwards, and you will have no record of any change. Which means:
- Include all changes to “audit” status in the prior 24 hours in the privileged transaction report: Was audit turned on or off?
- Review emergency access for IDs. Did anyone check out an emergency ID with high privileges? Was it checked back in? Does it correspond to a change management ticket reflecting a valid reason for the use of the emergency ID?
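The report criteria above can be sketched in code. This is a minimal illustration, not a production implementation: the `grants` and `audit_toggles` tables and all identity names are hypothetical, and SQLite stands in for whatever database your shop actually runs.

```python
import sqlite3
from datetime import datetime, timedelta

def daily_privileged_access_report(conn, as_of):
    """Identities holding unrevoked write access, plus all audit
    on/off changes in the prior 24 hours (per the criteria above)."""
    cutoff = (as_of - timedelta(hours=24)).isoformat(" ")
    write_access = conn.execute(
        "SELECT identity, db_name FROM grants "
        "WHERE privilege = 'write' AND revoked_at IS NULL "
        "ORDER BY identity"
    ).fetchall()
    audit_changes = conn.execute(
        "SELECT identity, db_name, new_state FROM audit_toggles "
        "WHERE changed_at >= ? ORDER BY changed_at", (cutoff,)
    ).fetchall()
    return {"write_access": write_access, "audit_changes": audit_changes}

# Demo with hypothetical data
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE grants (identity TEXT, db_name TEXT, privilege TEXT,
                     granted_at TEXT, revoked_at TEXT);
CREATE TABLE audit_toggles (identity TEXT, db_name TEXT,
                            new_state TEXT, changed_at TEXT);
""")
now = datetime(2009, 4, 20, 3, 0)  # report timed around 3 a.m.
conn.execute("INSERT INTO grants VALUES "
             "('dba_smith', 'custbal', 'write', '2009-04-19 22:00', NULL)")
conn.execute("INSERT INTO grants VALUES "  # revoked grant: should not appear
             "('dba_jones', 'custbal', 'write', '2009-04-01 09:00', '2009-04-01 17:00')")
conn.execute("INSERT INTO audit_toggles VALUES "
             "('dba_smith', 'custbal', 'off', '2009-04-19 21:55')")
report = daily_privileged_access_report(conn, now)
```

In practice the query output would be formatted, emailed to the reviewing executive and archived for the auditors, but the filtering logic is the part that must be tightly defined.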
Please feel free to comment or write to firstname.lastname@example.org with any questions on these types of controls.
Great article [“Panels describe risks of noncompliance with Mass. data protection law”]. Numerous thought-provoking statements in this article and in the legislation itself. My first thought is that this regulation shouldn’t be so shocking, surprising and difficult to comply with. It’s all about doing the right things, as Rebecca Herold stated.
Information Security Officers, IT professionals and consulting firms have been telling the companies for whom they work to do this for years. But many firms, even those that are highly regulated, have traditionally taken a wait-and-see approach since they can’t seem to find the ROI. Locking down USB ports, encrypting hard drives and encrypting mail that contains sensitive data is just too “inconvenient” for them. I ask them, “What’s your reputational risk worth?”
This legislation goes hand in hand with the Red Flags Identity Theft Prevention rule that went into effect Nov. 1, 2008, for similar types of businesses. After a deeper look, it was determined that there were more than 10 million businesses throughout the country that would need to be examined. That’s nearly 10 million more than the number of examiners in the field to assess them.
While a great deal of the focus for Red Flags is certainly on the banking industry, in terms of governance and enforcement, my car dealer never heard of it. Neither has my attorney friend, who is the compliance officer at the insurance agency that wrote my general liability and errors & omissions policy and also provides my life insurance. They have no such program in place. And what about the gas station that still uses multipart forms to take my credit card information? I better ask the attendant how their efforts are going to comply with MA 201 CMR 17.00 before I fill up.
Legislation is great, if practical, but governance and enforcement is even better. I’d love to hear how the regulators plan to enforce it for those outside the banking sector, which at least makes a strong effort to comply and do the right thing. I also wonder about vendor management. Third-party providers must comply with the regulation by Jan. 1. Thus, it’s incumbent upon those who use third parties to ensure that those controls are in place at those third-party companies.
For the banking industry, the third key point of GLBA 501(b) requires oversight of service providers, meaning that even though you’ve transferred risk by outsourcing a function or process to another company, you’re not relieved of your responsibility to ensure that controls are in place to protect sensitive data and systems. Heartland sound familiar? Hannaford sound familiar? TJX ring a bell? There are many others out there as well, just not as high profile. There’s always a box of tapes with a few hundred thousand customer names, account numbers and SSNs that’s been lost or misplaced or that fell off the truck. Or a dumpster that’s been raided for the sensitive info that employees have haphazardly discarded, despite policy for proper destruction and disposal.
A formal vendor management program is a requirement! And the banking sector has seen tighter and tighter regulatory scrutiny and examiner focus in this specific area over the past year or two, but there’s still a long way to go. There are very specific components to a sound and compliant vendor management program. These include vendor inventory, status tracking, periodic monitoring, due diligence, contract review, risk rating, reporting and policies and procedures. This is a long haul for those not in the heavily regulated banking sector. So, again, it will come to being all about governance and enforcement and the penalties for noncompliance to make this legislation effective.
And my final thought is that Massachusetts should at least be commended for taking a stand. I’ve read countless critiques of the legislation but haven’t seen anyone state in writing that MA should be commended for doing something to try to protect the consumer. Any time you stick your neck out, you’re bound to get slapped.
Let us know what you think about our stories. Email email@example.com.
There is a big difference between being PCI DSS compliant and being “certified” as PCI DSS compliant, says e-commerce expert Evan Schuman of StorefrontBacktalk.com in this edition of the IT Compliance Advisor weekly podcast. Because audit results can sometimes be subjective, the results could mean that some retailers may not really be compliant even though someone says they are, he says.
The PCI DSS specification is under fire for enabling such ambiguity. The House Committee on Emerging Threats, Cybersecurity and Science and Technology recently held a hearing on PCI and concluded that it has been inadequate in stopping credit card transaction data leakage. The administration of PCI DSS by credit card giant Visa is one reason, Schuman says. Find out more in this podcast.
On April 10, 2009, 10,868 Social Security numbers at Penn State Erie, The Behrend College, were exposed in a detected intrusion. Last October’s breach of 17 million records at Deutsche Telekom’s T-Mobile ranks among the largest in history, coming almost two years after the infamous TJX breach. Given the nearly daily reports of data breaches, ensuring data privacy and preventing identity theft is at the top of the compliance project list for security and IT professionals and businesses everywhere.
These incidents have shifted a great deal of focus onto a handful of high-profile IT initiatives.
Two of the most fundamental detection and control strategies, however, are often overlooked:
- Database logging
- Privileged access monitoring, its partner control
Database logging is the practice of creating a record of direct access to high-risk data in high-risk databases. It excludes access through user interfaces, so it accurately filters out client or user access. Instead, it records all identities that directly access the data. This would include database administrators, possibly system administrators and likely anyone else who has been granted write privileges into your database.
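One common way to implement this kind of logging is a database trigger that records every direct write to a high-risk table. Here is a minimal sketch using SQLite as a stand-in; note that SQLite has no user model, so this sketch omits the acting identity, while a production DBMS trigger would also record something like CURRENT_USER, which is the whole point of the control.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance NUMERIC);
CREATE TABLE audit_log (
    logged_at   TEXT DEFAULT CURRENT_TIMESTAMP,
    account_id  INTEGER,
    old_balance NUMERIC,
    new_balance NUMERIC
);
-- Record every direct change to a balance. In a real DBMS this
-- trigger would also capture the acting identity (e.g. CURRENT_USER).
CREATE TRIGGER log_balance_writes AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log (account_id, old_balance, new_balance)
    VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")

# A direct "fix" to a customer balance, of the kind a DBA might make
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")
row = conn.execute(
    "SELECT account_id, old_balance, new_balance FROM audit_log"
).fetchone()
```

The key property is that the log is written by the database itself, not the application, so direct writes that bypass the user interface still leave a trail.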
Are you aware of everyone in your organization who has write access to your high-risk data in your high-risk databases? Do you doubt anyone could possibly have direct access to the banking deposit and withdrawal transactions, or the trading buys and sells, in your Fortune 500 firm’s database?
Let me assure you, they do.
Every firm has individuals to whom this access has been granted. No firm could function without them. Senior business officials often react with outrage and tough talk about firing anyone granting such access to their IT staff. But the reality is that these senior officials should look more closely at both themselves and any budgeting choices that may have denied database upgrades that could have precluded the need for such access. Many institutions have not adequately invested in their applications and database upgrades. As a result, some hapless DBA is often left tasked with daily, high-risk manual database “fixes” to keep business running. The DBA is then blamed if problems crop up as a result.
There are many reasons firms can and do grant write access to IT staff. The primary reasons include:
- Legacy databases that “freeze” daily and have to be manually unlocked.
- New transaction types that aren’t adequately handled by applications, resulting in inaccurate data that require a manual “fix.”
- Access temporarily granted under an emergency change control and never revoked.
Monitoring privileged access is a fundamental compliance practice. Your firm’s repertoire should include a daily automated report that the information security officer (ISO) personally reviews and signs off with his or her initials. Yes, a daily signature. This report alone will likely raise many useful questions. Once access monitoring is addressed, the daily database logging report should similarly be placed in front of the ISO each day for personal review and sign-off.
In my next post, I’ll give specifics on how to build these reports so they actually capture the violation information you need. Nothing worse than the false security of a violation report that does not actually capture the required information. Your auditors will know the difference. So should you.
This is a guest post by Laurence Anker, engagement manager, technology risk management, at Jefferson Wells International Inc.
The only constant in information technology today is change. The changes are broad and rapid across the domains of hardware, system software, application software, databases and data, telecom and networks, to name just a few. How well you manage and control change can be the difference between success and failure. In fact, change management processes present significant and potentially costly risks to organizations. In a recessionary economy where decreases in IT spending and investment, combined with personnel reductions, are a fixture of the landscape, an efficient and effective change management mechanism is more important than ever.
The fact that change management is a critical control does not mean that it needs to be complex. To the contrary, simple, well-designed controls are much more effective, and more likely to be performed consistently, than complex, overengineered ones. Regardless of whether your shop follows ISO, COBIT, ITIL or other guidance to control your change management process, it boils down to initiation, assessment, decision, execution, and tracking and reporting. Let’s look at an example.
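Those phases can be sketched as a simple state machine. This is an illustrative sketch only; the phase names and transition rules are my assumptions, not a mandate from ISO, COBIT or ITIL, and a real shop would right-size them to its own process.

```python
from enum import Enum

class Phase(Enum):
    INITIATED = "initiated"
    ASSESSED = "assessed"
    DECIDED = "decided"
    EXECUTING = "executing"
    CLOSED = "closed"

# Allowed transitions mirroring initiation -> assessment -> decision
# -> execution -> tracking/close. Rejected requests close directly.
TRANSITIONS = {
    Phase.INITIATED: {Phase.ASSESSED},
    Phase.ASSESSED: {Phase.DECIDED},
    Phase.DECIDED: {Phase.EXECUTING, Phase.CLOSED},
    Phase.EXECUTING: {Phase.CLOSED},
    Phase.CLOSED: set(),
}

class ChangeRequest:
    def __init__(self, title, requester):
        self.title = title
        self.requester = requester
        self.phase = Phase.INITIATED
        self.history = [Phase.INITIATED]  # the tracking-and-reporting trail

    def advance(self, new_phase):
        # Refuse any path that skips a phase, e.g. executing an
        # undecided request; this is the control the CCB enforces.
        if new_phase not in TRANSITIONS[self.phase]:
            raise ValueError(f"illegal transition {self.phase} -> {new_phase}")
        self.phase = new_phase
        self.history.append(new_phase)
```

The design choice worth noting is that the history list makes every request reportable after the fact, which is what turns a workflow into a control.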
The client did not have a consistent change management process in place for a major program that utilized 150 resources. With multiple paths to request changes, both formal and informal, the organization was unable to maintain a comprehensive list of all requested changes. In turn, this impacted how their resources were utilizing their time and prioritizing their assignments. To further exacerbate the problem, key individuals supported the production environment and were hijacked for production issues, significantly impeding progress and schedules.
The organization had a rapidly growing backlog of requests, assigned projects were running late, resources were frustrated by the conflicting directions they were receiving, and the business community was unsatisfied with the level of service that IT was delivering.
To staunch the bleeding, the organization undertook a significant shift by establishing a Change Control Board (CCB) to oversee the change request process. While everyone was still allowed to initiate a request, it had to flow through the CCB for approval. The CCB would evaluate the cost, benefit and time estimates, as well as assess the risk to the organization (both by moving forward on the project and rejecting the project), and the potential impact to other projects that are already in process. The decision to approve, reject or postpone the request was now an informed decision based upon sound business logic. Approved projects would be given a budget and assigned the resources to move forward following the organization’s Project Life Cycle through build, test and promotion. To log, track, monitor and report the status of requests, the organization implemented Rational’s ClearQuest.
I will leave you with three key points to think about when instituting a change management process. First, the procedures, tools and formality will need to be “right-sized” for the size and culture of the organization. Second, tools are facilitators, not the solution. Organizations that expect to acquire and implement a tool or a Change Management Database as the silver bullet quickly learn that without the process and procedures that surround the tool, they are no better off at controlling and managing the change within the organization. And third, people are still the keystone to success. Communication and collaboration amongst the constituents throughout the organization are critical to making sure the right people have the right information at the right time to be able to make the right decision.
Laurence Anker has more than 30 years of experience supporting organizations’ IT requirements, addressing audit, control and security objectives, risk identification and mitigation, and business requirements definition. Anker led the insurance industry practice for Ernst & Young’s New York Information Systems Assurance and Advisory Services Group, was a senior manager at KPMG, and served as the EDP audit manager of North American operations for Swiss Reinsurance.
Most visitors to websites arrive and leave relatively anonymously. But as e-commerce evolves, businesses are using the Web to invite in specific users, in order to offer special services to them or participate in a study such as a clinical trial.
Steve Ross, a director in the Security & Privacy practice of Deloitte & Touche LLP, has some thoughts in this IT Compliance Advisor podcast about the privacy and compliance risks associated with bringing in these “vetted” users.
Ross, a former international president of ISACA and IS Security Matters columnist for the ISACA Journal, explains to SearchCompliance.com Executive Editor Scot Petersen what constitutes a vetted user, what are the compliance risks that come with a vetted user, and what are some best practices for ensuring privacy of the vetted user.
April 17 is the deadline for Melissa Hathaway to put on the president’s desk the comprehensive 60-day U.S. cybersecurity review Obama mandated on Feb. 8. That was the day he also invented her current title, “Acting Senior Director for Cyberspace” for the National Security and Homeland Security councils.
Hathaway is a person about whom we will be hearing a lot more, due to the seriousness with which the Oval Office is taking cybersecurity threats. We care because, in addition to new requirements stemming from the soon-to-be-released report, her policies could influence the implementation of the new Massachusetts data protection law and existing data breach regulation. Both may have significant compliance effects on your business.
A former consultant with Booz Allen Hamilton, Hathaway has a reputation for concern about privacy. That was not a popular position under the Bush administration, where she had been working until Inauguration Day. Greater concern for privacy is good news, in general. How far she goes in mandating controls over data to ensure privacy will be the big question for organizations that must implement those controls.
Within the Bush administration, she was senior advisor to the director of National Intelligence and cyber coordination executive. She chairs the National Cyber Study Group, a senior-level interagency body that was instrumental in developing the Comprehensive National Cybersecurity Initiative (CNCI), aimed at improving the ability of the country to secure and defend its cyber infrastructure. In January 2008, Hathaway was appointed the director of the Joint Interagency Cyber Task Force, which coordinates and monitors the implementation of the broad portfolio of activities and programs that comprise the CNCI.
Compliance officers and infosec professionals will be especially amused by what Kurt Leafstrand at Clearwell Systems worked up: “Government launches bold new recovery effort.” Here’s the demo:
Kurt and his compatriots put some time into this effort. Here’s the faux press release:
SEEKING NEW AVENUE FOR COST-CUTTING, GOVERNMENT LAUNCHES BOLD NEW RECOVERY EFFORT
WASHINGTON — Senior Administration officials today took the wraps off of their latest effort to stabilize the American economy: The nationalization of the electronic discovery industry. According to a senior official who declined to be identified, “Even before the beginning of the current turmoil, everyone acknowledged that electronic discovery costs were out of control. Now, with litigation accelerating and corporate earnings plummeting, something had to be done. Without this action, a significant number of leading American corporations would be in danger of shutting their doors due to the overwhelming burden of e-discovery.”
Effective immediately, all electronic discovery projects are being centralized under a single authority, the National Electronic Record Discovery Institute (NERDI). The Institute will be launching a nationwide electronic discovery portal on April 1, 2009 at www.ediscovery.gov. The site will build upon the recent success of the government’s economic recovery accountability site, www.recovery.gov. Said one Institute official, “Just drop the ‘r’ and insert a ‘dis’, and you get eDiscovery. It really is the next logical step in the government’s efforts to help the country in a time of profound need.”
Industry experts initially expressed skepticism about the government’s ability to make electronically discoverable information available in an efficient, expedient, and secure manner. Early plans had the government using the U.S. Postal Service and the network of I.R.S. tax return servicing centers as the logistical backbone for managing the collection and processing of documents. However, after negotiations with the National Security Agency, this step was eliminated from the process. Instead, all electronically generated information in the United States will be instantly processed and made available through the ediscovery.gov site. Commented an NSA spokesman, “We have all the information anyway; why not make it easily accessible, instead of pretending it’s not here?” As for security, officials stated that “individuals can expect the same level of security and identity protection they’ve come to expect from their financial institutions and credit card companies, along with the additional protection and responsiveness they’ve come to expect from the Federal government.”
Nicely done, folks. We look forward to a briefing from NERDI later today, as we’ve heard a global NERDI initiative may be undertaken in 2010.
This post is the second in a two-part series. The first post, “review policies and standards,” addressed the first step in preparing for the auditors. -Ed.
When we last left our hero and heroine, the lone IT operations manager, he or she was about to get a visit from the compliance auditors. Sound familiar? Only, unlike in that big upcoming squash or tennis match, you’re not sure which rule book they will be using to score the games. Unsure against what standard your IT operation would be judged, I advised:
Step 1: Get ahold of your company’s IT policies and standards.
Step 2: Reality check. Do they represent TODAY’s state of your IT operation?
For example, I pointed to your access control policy. Does it say, I asked, “Terminate access rights for all users within 24 hours of employment termination”? Is that really happening, 365 days a year, I queried? I then pointed out seven common ways the operation can miss that 24-hour window.
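That reality check can itself be automated: cross-reference HR termination records against accounts that still have access and flag anything past the window. This is a hypothetical sketch; the data structures and the 24-hour default are assumptions you would adapt to your own HR feed and identity systems.

```python
from datetime import datetime, timedelta

def overdue_terminations(terminations, active_accounts, now, window_hours=24):
    """Flag user IDs still active past the revocation window.

    terminations: dict mapping user_id -> termination datetime (from HR)
    active_accounts: set of user_ids that still hold access
    """
    window = timedelta(hours=window_hours)
    return sorted(
        uid
        for uid, term_time in terminations.items()
        if uid in active_accounts and now - term_time > window
    )
```

Run it daily before the auditors do; an empty result is the evidence that your policy matches your operation.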
But here’s the good news: If your IT policy said, “Terminate access rights for all users within one week (instead of 24 hours) of employment termination,” you’d get an A on the audit.
So, take Step 3: Revise your IT policies and standards to reflect TODAY’s reality. Don’t let staff companywide get in the habit of tolerating noncompliance with your policies because they are too ambitious in relation to your current compliance level. While you may be trying to set a higher standard to aspire to, there are better ways to do that. Instead, you are just setting yourself up for a BAD AUDIT.
“Sarah, how can you recommend a one-week standard for access termination?” I can just hear you say. The point is, of course I recommend you tighten up your operation and get that one week down to 24 to 48 hours. Just don’t put it in your policies and standards until it is THERE. If you insist on codifying the aspiration before it’s reality, it will only get you an “F” on your audit. And there’s nothing in COBIT dictating the time frame. You can determine your own time frame based on a series of factors. I’ll go over those another time, but they give you more leeway than you’d think. If you’ve followed these steps, take Step 4: Sleep easier at night.
If you have any questions about this strategy, let me know in the comments.
Lesley Stahl’s segment on 60 Minutes on the danger of the Conficker worm launching a massive DDoS attack or other malicious action on April 1 has received widespread attention in the public eye and expressions of doubt from around the blogosphere, particularly in the security community. If you missed Stahl’s segment, there is an excellent demonstration of a hacker compromising and then mirroring her system, along with a discussion of the dangers that a global infection could pose. You can watch the “Is the Internet Infected?” 60 Minutes segment at CBSNews.com.
When asked this morning about the likelihood of the Conficker worm setting off a nasty surprise, SearchSecurity.com’s Rob Westervelt noted both the lack of sourcing for the story and the FUD that has surrounded the worm in the media. Citing independent security experts, Westervelt suggested that patched, protected systems should have nothing to worry about on Wednesday. Robert McMillan of PC World, for instance, feels that fears of a Conficker meltdown are greatly exaggerated.
What can be done, if you are still worried? Eric Ogren wrote at SearchSecurity.com that the Microsoft Conficker worm offers an attack prevention lesson and suggested the standard response to Web security threats: Run AV software and update patches. Microsoft has also provided a resource page for IT administrators, “Help Protect Windows from Conficker.”
Michael Horowitz, over at Computerworld, recommended the following steps to combat the Conficker worm:
- disabling Autorun for protection from infected USB drives
- using the free Windows Malicious Software Removal Tool from Microsoft to scan your PC
- using OpenDNS to prevent the worm from communicating
- employing DropMyRights to run software in restricted mode, protecting Windows XP users
- trying antivirus program AntiVir from Avira or Malwarebytes’ Anti-Malware.
Good luck out there. If concerns over the Conficker infection prove justified, it could be an ugly week in the IT world.
UPDATE: Westervelt also reported that the Conficker flaw has yielded a new tool for detection.
“Security researchers have developed a new tool that can scan the company network and remotely detect machines infected with the Conficker worm.
A proof-of-concept scanner was released by the Honeynet Project, a nonprofit security research organization. The tool is also being made available on many network scanning vendor tools: Tenable (Nessus), McAfee/Foundstone, Nmap, nCircle and Qualys.”
You can download the Honeynet Project’s scanning tool from Honeynet.org.