I’m going to assume that you have some baseline knowledge about the DSS, the 12 areas of coverage, different Tier Levels and other requirements for compliance. If not, visit here and bone up.
There is a lot of pro-and-con debate going on in the blogosphere right now about the “value” of PCI.
And the circle of blame (merchants blaming VISA, VISA blaming the banks, the banks blaming the merchants) is certainly ongoing.
Now that we have card processing hardware being easily hacked, it’s all just getting more interesting.
First, I’d like to say I trained at VISA and passed the QDSP exam. Second, I’ve performed three Tier 1 merchant audits. Third, I happen to like the DSS. It has specifics, as opposed to other “standards” that operate at the 10,000-foot level. For example, one of the requirements is to have a firewall, have rules for the firewall documented, and access to the firewall logged. Nice. Easy to do, easy to test. All the technical standards are based on best practices, and they focus on the credit card data.
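That firewall requirement really is “easy to test,” even with automation. Here’s a minimal sketch of the idea; the rule layout below is invented for illustration (no real vendor exports rules in this exact shape), but the checks mirror the requirement: rules exist, each rule’s purpose is documented, and access is logged.

```python
# Illustrative only: a toy firewall rule export. Real audits would parse
# the device's actual configuration format.
rules = [
    {"id": 1, "action": "allow", "port": 443, "comment": "web storefront", "log": True},
    {"id": 2, "action": "deny",  "port": 23,  "comment": "",               "log": True},
]

def audit_firewall(rules):
    """Flag rules that violate the DSS-style requirement:
    rules must exist, be documented, and be logged."""
    findings = []
    if not rules:
        findings.append("no firewall rules defined")
    for rule in rules:
        if not rule["comment"].strip():
            findings.append("rule %d has no documented purpose" % rule["id"])
        if not rule["log"]:
            findings.append("rule %d is not logged" % rule["id"])
    return findings

print(audit_firewall(rules))  # ['rule 2 has no documented purpose']
```

A test like this gives you a clean pass/fail answer, which is exactly what makes the DSS auditable in a way that 10,000-foot standards are not.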
The enforcement and compliance requirements, on the other hand, have no clothes. (See the Emperor? Doesn’t he look great?) Let’s make it a little more solid:
1.) All Tier 1 merchants should have their compliance audited and signed off on by an outside firm, just like the service providers. Letting merchants sign off on their own security makes me visualize foxes and henhouses.
2.) Outside firms should not be permitted to do any remediation work. Again, foxes and henhouses. It ought to be just like a SOX audit, where the attesting auditor cannot “fix” any problems found.
During my VISA class, I listened to my security vendor classmates press the instructors about “minimum requirements.” They were rather obviously looking for ways to get their clients off the compliance hook. The instructors weren’t pleased.
3.) Outside firms should be penalized if their auditee merchant is breached. It will certainly make them more vigilant when their pocketbooks are involved.
4.) In the race to the bottom, many merchants pick the lowest bid from outside firms assessing compliance. If running a scan and producing a canned report counts as an assessment, I should go back to PC service and support. Both the merchant AND the outside firm should be ashamed of themselves. And the acquiring bank should be slapped for accepting it.
5.) In the standard, you are either compliant or you’re not. TJ Maxx was not compliant. They had a “plan” to upgrade their wireless in the next year or so. Why was that acceptable to VISA and the bank?? Were there any compensating controls? Obviously not, since there was no firewall between the stores and corporate.
6.) Publish the names of the Tier 1 and 2 merchants who are not compliant. (I can hear the screams now.) But implement the previous rules first.
P.S. Compliance Does Not Equal Security. But, as my Maine Yankee father-in-law would say, “It sure beats snowballs.”
In Part 1, I discussed what “synthetic” identity is, and why it is not easily discovered.
The primary problem (in addition to all the other ones!) is the algorithms that allow for variance at the credit reporting agencies. The folks at ID Analytics are promoting a business service called “ID Intelligence.” They’ve created a network and (I think) an enormous database from credit reporting agencies and Fortune that allows them to “see” all the subfiles. They call it the nation’s only real-time, cross-industry compilation of identity information.
So now we have a database to monitor all the other databases…
Most of the current options for addressing identity theft focus on the individual victim. We use credit freezes, fraud reports to the FTC, free credit reports and credit monitoring.
But if “pieces” of my information were stolen, how would I know? My address, perhaps, or my birth date? Or one credit card number?
We don’t have good information about this type of fraud. Most of the statistics we have are taken from the reports of victims. Victims do not always know how the theft happened, or all the places where pieces of their information might have been used. Lending institutions (banks, credit card companies, etc.) are not required to disclose statistics about identity theft. They have not provided this information because it could cause embarrassment and could attract unwanted regulatory attention.
There’s a good paper here about why statistics are so bad and what we could do about it.
Federal Regulations about the term “identity theft” define it as “a fraud committed using the identifying information of another person, subject to such further definition as the [Federal Trade Commission] may prescribe, by regulation.” (These quotes come from the Fair Credit Reporting Act.) But what if different pieces from different people were combined? That’s what we’re talking about here, and it is new territory for regulators.
The FDIC defines it as: “Unlike typical identity theft fraud where a fraudster steals the identity of a real person and uses it to commit fraud, a synthetic identity is a completely fabricated identity that does not correspond to any actual person.”
In synthetic identity theft, the fraudster creates a fabricated identity using some information from a victim’s personal information. For instance, the impostor may use a real Social Security number, but a falsified name and address. Since this synthetic identity is based on some real information, and sometimes supplemented with artfully created credit histories, it can be used to apply for new credit accounts.
If the thief has your bank account number and social security number, for instance, he can reference those accounts to create a new account without ever “touching” your information.
Why does this work? Because credit reporting companies and lending institutions have algorithms that allow for variations in input. So if you “fat-finger” your Social Security number on a credit card application, it will still “find” you. But synthetic ID fraud creates subfiles at the credit bureaus. (The term subfile, says Evan Hendricks, author of “Credit Scores and Credit Reports,” refers to additional credit report information tied to a real consumer’s Social Security number, but someone else’s name.)
Because the identifying information contains some data that’s already linked to a particular consumer, the subfile gets associated with the consumer’s main file, or “A” file. So if someone runs a query “just” on your Social Security number, those “subfiles” will pop up – and your credit rating can tank. But until that query is run, the information remains hidden.
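To make the mechanics concrete, here is a minimal sketch of the kind of tolerant matching described above. The one-digit tolerance, record layout, and return values are my own assumptions for illustration; the bureaus’ real matching algorithms are proprietary and far more involved.

```python
def digit_distance(a: str, b: str) -> int:
    """Count differing positions between two equal-length ID strings."""
    return sum(x != y for x, y in zip(a, b))

# A toy "bureau file": one real consumer record.
bureau_file = {
    "A1": {"ssn": "123456789", "name": "Jane Doe"},
}

def match_consumer(ssn: str, name: str, records: dict, tolerance: int = 1):
    """Tolerant matching: an SSN within `tolerance` digits still 'finds'
    the consumer. A matching SSN paired with a different name is the
    subfile scenario -- a synthetic identity attaching to a real number."""
    for record in records.values():
        if digit_distance(ssn, record["ssn"]) <= tolerance:
            if name.lower() == record["name"].lower():
                return ("main", record)   # fat-fingered but still recognized
            return ("subfile", record)    # synthetic identity lands here
    return ("new", None)
```

A fat-fingered application (`"123456780"` with `"Jane Doe"`) still resolves to the main file, while a correct SSN with the name `"John Smith"` creates a subfile tied to Jane’s number: exactly the hidden-until-queried problem described above.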
Synthetic identity theft is invisible to victim-based tracking because individuals whose information was used may never become aware of the crime. The “fabricated identities” are typically based on a real Social Security number, but with a fake name and address. As a result, because “the combination of the name, address and Social Security number do not correspond to one particular consumer, the fraud is unreported [by a victim to a bank] and often goes undetected…financial losses stemming from synthetic identity fraud are difficult for organizations to label as fraud when the approved account becomes delinquent and eventually charges-off as a loss.”
According to ID Analytics, synthetic fraud is quickly becoming the more common type of identity fraud, surpassing “true-name” identity fraud, which corresponds to actual consumers. In 2005, ID Analytics reported that synthetic identity fraud accounted for 74 percent of the total dollars lost by U.S. businesses to ID fraud and 88 percent of all identity fraud “events” — for example, new account openings and address changes.
“True-name identity fraud was the prevalent identity theft mode about five years ago,” says Steve Coggeshall, chief technology officer of ID Analytics. “Synthetic identity fraud is the dominant mode now.”
Can you tell I got behind on my hardcopy reading? I just caught Rebecca Herold’s fine article in the Computer Security Alert of 2/2008 (a CSI monthly newsletter well worth getting, by the by, for the quality of the articles) concerning one of the aspects of medical identity theft: breach notification.
California is the first state in the nation to include “medical information” AND “health insurance information” in its updated state law on privacy breach notification. Since California was also the first state to implement a privacy breach requirement in state law, we can hope that other states will follow suit in this as well. The updated law, A.B. 1298, took effect in January 2008. Here’s the relevant section:
(e) For purposes of this section, “personal information” means an individual’s first name or first initial and last name in combination with any one or more of the following data elements, when either the name or the data elements are not encrypted:
(1) Social security number.
(2) Driver’s license number or California Identification Card number.
(3) Account number, credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.
(4) Medical information.
(5) Health insurance information.
(f) (1) For purposes of this section, “personal information” does not include publicly available information that is lawfully made available to the general public from federal, state, or local government records.
(2) For purposes of this section, “medical information” means any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional.
(3) For purposes of this section, “health insurance information” means an individual’s health insurance policy number or subscriber identification number, any unique identifier used by a health insurer to identify the individual, or any information in an individual’s application and claims history, including any appeals records.
This law will have an impact on any entity doing business in the state of California, and addresses the fact that HIPAA regulations contain no requirement for breach notification. “Ooops, we lost your medical information, but whew, we don’t have to tell you!”
These regulations also affect health care technology companies, including companies like Google or Microsoft (think HealthVault) that want to hold your information for you:
This bill would apply the prohibitions of the Confidentiality of Medical Information Act to any business organized for the purpose of maintaining medical information to allow an individual to manage his or her information, or for the treatment or diagnosis of the individual.
You can read the full legislative act here.
The bill does exempt organizations that encrypt their Personally Identifiable Information. And I suspect this bill will have a bigger impact on health care in terms of compliance.
A recent story in Government Technology magazine educated me on exactly what “medical identity theft” is and what the risks are. Although the article focused on Medicaid and Medicare fraud, the statistics and risks made for scary reading. And it started me thinking about MY medical data.
In a nutshell, medical identity theft involves the use of patient identification numbers and/or physician identification numbers, both used to bill for services and obtain payment.
The FTC estimated, based on overall identity theft statistics, that medical identity theft cases numbered 3 percent of all identity theft cases. That’s about 250,000 cases a year, at a conservative estimate.
The FTC is not responsible for addressing medical identity theft; the Department of Health and Human Services is. Nor can FACTA (the Fair and Accurate Credit Transactions Act) be used to remove fraudulent medical records.
According to the World Privacy Forum and Blue Cross Blue Shield Association, at least 1 percent of fraud is estimated to be medical identity theft: that’s $600 million per year. Ouch.
For individual patients, the theft of their medical identification numbers presents an even more difficult scenario to resolve than “regular” identity theft. Their medical history gets changed, along with erroneous information about allergies, medications and procedures done. With HIPAA protecting medical records, it is much harder to change the records that list the “bad” information.
And imagine trying to get insurance with a false “pre-existing condition” created by fraud. Not to mention dealing with hospitals and other medical organizations trying to collect payment.
Another interesting (and scary) statistic from the WPF:
Cost, on the street, for a stolen Social Security number? $1.
Cost, on the street, for stolen medical ID information? $50.
Medical identity data sitting in our HR databases is more valuable than Social Security numbers. Has it occurred to anyone else besides me that our medical ID numbers are often our Social Security numbers?
Bankrate has noted that since HIPAA has no enforcement mechanism, data security is not a high priority issue for health care facilities. The penalties are there in the legislation, but there is no inspection or reporting mechanism to ensure compliance. We are, in essence, trusting our medical providers and billers to keep our personal information secured.
Given the state of security in the majority of our business networks today, would that give you a warm fuzzy?
Next: “Synthetic” Identity Theft
The year 2007 was a banner one for personal data theft, especially credit card info (think TJMaxx) and individual personal data being lost all over the place. Breaches big and small, the numbers run into the millions. The Identity Theft Resource Center estimates the number of lost or stolen personal information records at 79 million, up from 20 million in 2006.
The bad guys are getting data off of laptops, phishing emails, etc., but those are petty numbers. The real motherlode of data is inside databases.
Where do you think the TJMaxx thieves got their 90 million credit card records? Not from sniffing wireless transactions. Oh no. They got into the network, then into the servers, then into the database(s) holding that data, which were, I betcha, unencrypted. And the only reason they got caught was because the “mules” for the thieves got sloppy about purchasing large amounts of products in stores to exchange for cash. TJMaxx wasn’t watching their databases (or anything else, seemingly).
So when people ask me why I care about database security during an IT Audit, there’s my answer. And the fact that internal data theft is a significant percentage of the overall numbers.
Who has access to your HR, payroll and client information? The temp? The CEO’s secretary? The guy in accounting? If you were losing data, how would you know? Those bad guys don’t want to be found.
Is your payroll database on the same server as the database accessed by your web server? (Saw that one last year.) They’ll get your client data and all your employee information, too.
If I had to choose between the network engineer and the DBA to guard my personal data, I’d be choosing the DBA.
Next: Medical Identity Theft
I finished an IT audit not too long ago with an organization that did not have any policies. They had an employee handbook that contained some declarative statements that employees signed off on during their first week on the job. They are a small company growing into a medium-sized one, and part of their business maturity model was to standardize and document the structure of their organization. Having corporate policies is a critical element in business growth. Why?
Every regulatory requirement and/or compliance standard I’ve seen requires them. SOX, PCI, HIPAA, GLBA, FFIEC, COBIT, ITIL, etc. So in order to grow your business, at some point you will run into this requirement. And as an IT Auditor, I’m required to read them.
So I get to read a lot of policies – and a lot of them are bad.
An article from Anton Chuvakin highlights five basic mistakes, and I’d like to add five more (I like things in tens; you know, ones and zeros!). So here are his five:
1. Not having a policy
2. Not updating the policy
3. Not tracking compliance with the security policy
4. Having a “tech only” policy
5. Having a policy that is large and unwieldy.
These are good points, and it’s a great read, so it got me thinking about the policies I come across and what concerns me when I see them. So here are my five:
6. Having a policy not mandated and approved by the “top of the house”
If no one from upper management has reviewed and approved these policies, they are just your opinion, or your mandate for your particular department. They do not cover the organization as a whole and provide no legal protections (enter the obligatory “I AM NOT A LAWYER” here). If management doesn’t stand behind the policies enough to mandate and promote them, they are toothless, and the employees will figure it out. So will their lawyers.
7. Having a policy that tries to incorporate standards and procedures
Quick, what’s the difference between policies, standards and procedures? (If you’re planning on taking the CISA or CISSP exams, better know this one). Go here for an answer from the FFIEC.
8. Not keeping employees educated and requiring an annual signed confirmation
Putting rules in an employee handbook that gets read (if that) during the first days of employment sends the message that security policy is not terribly important outside of HR – kinda like signing up for direct deposit and health insurance… Keep the policy updated and make sure everyone reads and agrees annually. CYA.
9. Borrowing something you got off the Internet to make the auditors happy
Certainly my personal favorite. You may think you don’t have time to really craft a policy, but if it has been approved by management, you will be held to it in a court of law. Don’t borrow something you can’t possibly do and claim it’s your policy; when that policy is tested, you will most certainly flunk in a particularly public way. Ouch to your career.
10. Not taking ownership of the policy
Leaving security policies up to management, or internal audit… anybody but you, so that you can complain about how terrible it all is and how much work you have to do in order to support it.
Consider that if you craft the policy, you can create a document that will address the needs of your environment. If it’s a realistic policy, you can build a set of standards and procedures you can incorporate into your workload. You can use these to generate statistics for getting more staff to monitor compliance and implement security. If you write it, you own it. Make it yours, make it real, it will be worth the time it takes to make it right.
A comment from Dr. Chuvakin reminded me of how long I’ve been thinking about “checkbox security.” As an auditor, I am certainly familiar with checkboxes; in fact, for my firm, I’ve written a number of them.
When I’m walking a new auditor through an IT audit, having a method for examining the environment is vital. Heck, having a method for pen-testing is vital. But it seems that so many people get caught up in thinking that the method IS the solution. If everything is checked off in the methodology, the environment is secure, right?
No. A thousand times. No. I’m sure TJMaxx had a bunch of checkboxes filled in for somebody. Didn’t do them a darn bit of good.
A few years ago I did an internal pen test for a company and discovered that their web proxy required users to log in via an HTML form each time they went to the Internet. Long story short, Cain and Abel easily cracked that casual hash for me, and I was very shortly inside the network, up to admin level AND the CFO’s password. (Geez, that was fun; but I digress…)
I asked my engagement manager, who was also doing a SOX 404 audit for them, if their SOX audit would have found this issue. No, of course not! Auditors don’t run Cain and Abel! (Maybe they should, eh?)
So where would that have left that company? SOX “compliant,” but still easily broken into by anyone with a simple tool. Not good. So much for checklists, checkboxes, and methodologies. The difference was, the company cared enough to pay for a quality pen test, not just someone coming in to run a scan. They changed their proxy, and now this issue no longer exists. They’re proud of their security, and they should be.
But if we are not thorough and specific, we can miss the obvious “low-hanging fruit.” In my mind, catching that is all an auditor can really hope to do. And even that seems to be a full-time job.
So, for those folks who say, “we’re compliant!!!” it doesn’t mean you are supporting a secure environment. It means you’ve gotten all the little boxes checked in someone’s methodology. It’s a “Gentleman’s C.”
What would an “A” look like? More on that later.
Visa, in conjunction with the US Chamber of Commerce, has published an alert that identifies the leading causes of data breaches. Full details can be found at the Chamber’s website. The five leading causes of card-related breaches are:
1) Storage of mag stripe data
2) Missing or outdated security patches
3) Use of vendor supplied default settings and passwords
4) SQL injection
5) Unnecessary and vulnerable services on servers
Why tear my hair out? Numbers 2, 3, and 5 shouldn’t be on this list. We’ve had how many warnings and regulations and requirements about patches, default settings and unnecessary services? And business wonders why it needs regulatory requirements. Because these bad business practices happen routinely. Because too many business owners don’t want to spend the money to secure their systems.
Just four months ago I did an audit of two online MSSQL databases, only to discover their administrative SA IDs had been left in the default configuration of “no password.” Why do we keep dropping the ball? Crooks are dumb, but they’re not that dumb.
Last year I interviewed the VP of development for an online marketing software product used by a clothing retailer. When I asked him what steps he was taking to address SQL Injection, he replied, “What’s SQL Injection?”
Well, I’ve used up my italics for the day. Sigh.
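For anyone else who’s fuzzy on it: here is the classic illustration, using Python’s built-in sqlite3 as a stand-in database. The table and card numbers are made up; the point is the difference between concatenating user input into SQL and binding it as a parameter.

```python
import sqlite3

# Toy database with fake card numbers, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (customer TEXT, pan TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111-1111-1111-1111')")
conn.execute("INSERT INTO cards VALUES ('bob',   '5500-0000-0000-0004')")

def lookup_vulnerable(customer):
    # String concatenation: attacker-controlled input becomes SQL.
    query = "SELECT pan FROM cards WHERE customer = '%s'" % customer
    return conn.execute(query).fetchall()

def lookup_safe(customer):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT pan FROM cards WHERE customer = ?", (customer,)
    ).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_vulnerable(payload))  # dumps every stored card number
print(lookup_safe(payload))        # returns nothing
```

One changed line of code closes the hole. That’s why “What’s SQL Injection?” from a development VP is an italics-worthy answer.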
But the Chamber website has some really nice papers and templates for those looking to get started with security policies and procedures. Good for them!
One of the junior members on my audit team likes to rag me about how often I harp on patching at various client sites. He started out by calling me “Captain Patch,” but I pointed out that I like “Kernel” much better. Why have just a nickname when you can make a really good pun with it too?
It’s easy to say, “Patch your servers.” Beware of the auditor that lays that one on you and walks off; it’s NOT a one step process. Patches can break things and breaking things in production, especially if you’re running custom software on those servers, can be a disaster of large proportions.
Some years ago at a bank where I was working, our outsourced network team started patching servers over the weekend. Unbeknownst to them, the patch replaced a driver for the disk array on our Compaq servers with one that didn’t work. Install patch… no server. A server won’t boot if it can’t find the disk to load the software!
About 25 servers were “patched” before they realized the problem. Since they were in Texas, a whole lot of us got out of bed in Boston and headed to the Bank to try and discover the cause of massive server failure. That was a very long weekend.
You gotta have a plan! And it needs to be a fast one, because bad guys start reverse-engineering the code the minute the patch is released. And the plan has to test those patches to make sure everything works, before deploying to production.
“We don’t have test boxes.” Of course not. I used development boxes (announcing it first); then, after 24 hours, if nothing had broken, the patch went to the backup production boxes across the continent. If all went well, changes could go into production for highly critical patches in less than 48 hours. Use secondary domain controllers, file servers, database test servers, etc. The most critical server gets patched last, but fast.
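That staging flow can be sketched as a simple schedule calculator. The tier names and the 24-hour soak period are the ones from my own practice, not any standard; adjust both to your environment.

```python
from datetime import datetime, timedelta

def plan_rollout(release, tiers=None, soak_hours=24):
    """Stage a patch through successive tiers, soaking at each one.
    The most critical servers come last, but on a fixed, fast clock."""
    if tiers is None:
        tiers = ["dev", "backup production", "production"]
    schedule, when = [], release
    for tier in tiers:
        schedule.append((tier, when))
        when += timedelta(hours=soak_hours)
    return schedule

plan = plan_rollout(datetime(2008, 3, 1, 9, 0))
# With two 24-hour soaks, production gets the patch at release + 48 hours.
```

The value of writing it down, even this crudely, is that the clock starts at patch release, when the bad guys start reverse-engineering, not whenever somebody gets around to it.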
If you’re running a critical application with outsourced software, write it into the contract with the vendor that they will test patches quickly and update you so that your servers can be patched. If you sign a contract without this requirement, shame on you!
Decided not to apply a patch for Windows Media version whatever? OK, but who made the decision to bypass certain critical ones? You’ve got to document what didn’t get patched and why. Otherwise, you could be the one called on your vacation by a furious boss with a broken server. Or, God forbid, a hacked one.
No excuses. Figure out a plan, draw up a procedure, and save yourself major headaches.