The new PCI (Payment Card Industry) Data Security Standard, version 1.2, came out in October and is worth a look. It adds some updated recommendations (like getting rid of WEP entirely by 2010), and I especially liked some of the following changes:
Compensating controls must be reviewed, documented and validated by an assessor annually. A compensating control worksheet must be completed for each compensating control. These changes emphasize the council’s intent to make it more difficult for merchants to use compensating controls rather than meeting the requirements of the DSS.
I like this; a control is a control – a compensating control is not meant to be a permanent solution.
Monitoring sensitive areas: Version 1.1 required the use of video cameras to monitor sensitive areas. The new version offers the possibility of using other access control mechanisms to monitor access to sensitive areas. This can include card keys or biometric access controls that would provide a date and time stamp upon access to sensitive areas. In addition, the clarification in version 1.2 expands the scope of video monitoring into areas that contain paper files. Many companies maintain storehouses full of paper files, which may now require video monitoring as well.
It’s worth noting that standard keys and keypads do not meet the new requirements, as they do not provide the ability to monitor access to sensitive areas. About time this limited control was tossed. It’s not enough to lock the door; you need to know who is accessing the room, and when.
Version 1.2 provides more detailed requirements when dealing with service providers (including shared hosting providers) that have access to cardholder data. Businesses must maintain a list of all their cardholder service providers and ensure that the service providers are PCI DSS compliant. This includes monitoring their compliance, maintaining a written contract with the service provider stating that they are responsible for the cardholder data, and establishing a vendor review process when selecting service providers in order to perform due diligence. These requirements force businesses to work closely with their providers and be aware of their service providers’ PCI DSS status. Nice.
Information about consumer purchases, habits and history has become a multi-billion dollar treasure trove for businesses to mine and sell to others.
Specialized, targeted information from consumer databases held by banks and other financial institutions is being used to develop business lines. Advertisements, mailings, telemarketing and other targeted ads are the marketing tools of choice to offer services to the targeted market of financially stable consumers. But it’s no accident that identity thieves use the same tools the marketers do. It’s the same information, put to illegal ends. Identity thieves have become experts at following the money.
I recently received a letter in the mail with the words “Important Information!!!” emblazoned on the front (along with the bank’s name and return address). Inside, along with my name and address, was an entirely unsolicited offer to let me cash out the equity in my house via a six figure loan.
What would have happened to that information if it had been intercepted and I had never seen the offer? Or if I had tossed it in the trash, to be opportunely reviewed by whoever took an interest? How long would it have taken me to find out a loan had been taken out on the equity in my house?
The mail has become the favorite hunting ground of small-time identity thieves. One thief can ruin several dozen credit histories and move on before a consumer can react.
Consider the arrival of checks. Those pleasant brown boxes the bank orders for you are distinctive AND contain important nuggets of information, such as address, phone number, bank account and routing number. Some folks have had their driver’s license number printed on their checks. By delivering checks via regular mail, with no safeguards to confirm arrival to the proper party, financial institutions invite theft.
Banks and financial institutions that send unsolicited checks in the mail (the kind used to withdraw from CDs, for instance) are also providing opportunity. Those long white envelopes from business addresses are becoming noticeable by their very anonymous return postal addresses.
Banks looking to expand markets often run credit checks on potential customers in order to offer tailored services (witness my home equity loan offer). But by doing so, banks open themselves to legal risk when data is lost or stolen, and angry consumers demand to know why their information was revealed.
Businesses can run credit checks with any of the “Big Three” credit bureaus (Experian, TransUnion and Equifax) by acquiring a business account and password. Once they log on, all they need to obtain a credit record is a name and Social Security number. That means those access codes are digital gold for would-be thieves who also happen to be employees.
Such temptation sometimes proves too much: in May 2004, 7,300 customers of B.J. Marchese Auto World of Limerick, PA settled a class action suit for $2.45 million. The suit alleged that the dealership’s general manager illegally obtained the victims’ credit reports and used them to take out more than $4 million in unauthorized auto loans for vehicles that were never purchased or leased. The plaintiffs were unaware of the loans and suffered credit damage and invasion of privacy; the charges were both criminal and civil.
It will be difficult for business organizations that depend on credit reviews to resist marketing campaigns that have been profitable in the past. Banks and businesses have been willing to absorb “acceptable levels” of fraud loss as part of the cost of doing business. But this form of fraud becomes extremely expensive when class action lawsuits follow.
If you want to experience pain in the corporate wallet, I invite you to go to the Data Loss Cost Calculator. Plug in some numbers and look at the costs in the different regulatory penalties, attorney fees, investigation costs, etc. I recently completed a SMALL forensics exam that cost the client in the six figures without crisis management/client notifications.
A survey conducted by the Ponemon Institute (you need to give up info to access the study, unfortunately) found that 58% of respondents who had received notification that their personal information had been compromised by a data breach had lost confidence in the company and that 31% planned to cease doing business with the company. The cost of a data breach is estimated at $197.00 per record.
The actual cost to the consumer (you and me) is usually estimated based on identity theft statistics. Not every data breach results in identity theft, but the potential for identity theft exists in every data breach. This is what business is forced to address, and rightly so. We have to endure the inconvenience of changed credit card numbers and other minutiae after a breach. The cost to consumers of actual identity theft is much larger.
Best-case estimates are that recovering from identity theft takes the consumer (you and me) between 25 and 40 hours and costs $5,720.00, according to PrivacyRights.org. But consider also that the victim may be dealing with the trail of the identity theft for 10 years or more. What fun. No wonder they’re suing.
Those of us working in small organizations often think we are somehow “immune” from data theft. It’s kind of like planning for your own funeral – no one wants to think about it. But when it happens, what’s your plan? Are bits and pieces inside your Disaster Recovery Plan and/or your Incident Response Plan? Has your company done an impact analysis?
Keep in mind that many smaller companies do not recover from data breaches; if you lost 31% of your business, would the company survive?
A business impact analysis of the cost of damage recovery should include the following:
• Investigation costs
• Remediation costs
• System updating
• Outside forensic consultant fees
• Downtime related costs:
Loss of productivity
Employee downtime or overtime
• Legal fees, court costs
• Replacement and/or retraining of employees
• Loss of intellectual property
• Possible replacement of equipment
• PR costs to recover reputation
• Regulatory fines
It’s better to plan the funeral and hope you survive the service. Having a plan will keep you out of the unemployment line.
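As a starting point for that impact analysis, here is a minimal sketch of the arithmetic a data-loss cost calculator performs. It uses the $197-per-record Ponemon figure cited earlier; every other line item is a hypothetical placeholder you would replace with your own quotes and estimates.

```python
# Rough breach-impact estimator (a sketch, not a substitute for a real BIA).
# COST_PER_RECORD comes from the Ponemon study; all line items below are
# hypothetical numbers for illustration only.

COST_PER_RECORD = 197.00  # Ponemon estimate, per compromised record

def estimate_breach_cost(records_exposed, line_items):
    """Sum per-record costs plus fixed line-item estimates."""
    direct = records_exposed * COST_PER_RECORD
    overhead = sum(line_items.values())
    return direct + overhead

# Hypothetical figures for a small breach
items = {
    "investigation": 50_000,
    "forensic_consultants": 120_000,  # six figures, as noted above
    "legal_fees": 75_000,
    "pr_and_notification": 40_000,
    "regulatory_fines": 25_000,
}

total = estimate_breach_cost(10_000, items)
print(f"Estimated cost: ${total:,.2f}")  # roughly $2.28 million
```

Even with made-up line items, the exercise makes the point: the per-record cost alone dwarfs most small companies' contingency budgets.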
The core requirements for committing the kind of data theft that leads to identity theft are ability, motivation and opportunity.
Ability means having the skills to do the actions required. Start-up costs for data theft are low: information is readily available, computer equipment can be purchased, leased or rented, and the profit potential is high. Stealing someone’s mail is free.
Thieves can spend many years honing their skills in order to capture large aggregate data for bigger money. By breaking into Internet-facing servers that are misconfigured and not monitored daily, thieves create a springboard for attacks into the heart of the corporate network. Further, it is not the cracker who defaces your website and announces it to his IRC peers that you have to worry about, it’s the cracker who doesn’t want to be seen. The thief wants to be in and out of the corporate databases with the information he/she needs quickly and quietly.
Any kind of personally identifiable information or proprietary institutional information being stolen leaves a business vulnerable to legal, operational, financial and compliance risks. And if the institution’s IT systems and administrative controls are not secure, there are grounds for a successful legal case.
Motivation. Any of a number of events can provide a “reason” to steal information and sell it: a disgruntled, overworked employee seizes on information as a way to receive compensation he feels entitled to; another employee becomes desperate when medical bills overtake her. The common denominator here is that the ability to acquire money pairs itself with a reason, no matter how badly manufactured in the mind of its creator. The reason becomes compelling when there is no oversight.
If the employer does not have controls in place to monitor access to the databases of personally identifiable information, it becomes impossible to prove who did access the information, except in an indirect fashion, such as a process of elimination or admittance of guilt. What does it say about the employer to the victim if such safeguards were not in place? Lawyers point to such lack of safeguards as negligence. Just ask Countrywide how much the illegal access to their customer databases is costing them.
Respondeat superior is the legal doctrine making an employer or principal liable for the wrong of an employee or agent if the wrong was committed within the scope of the employment or agency. This doctrine has been applied to a wide variety of computer crimes, and is likely to be used in a class action suit.
Just as the negligence doctrine could be used to impose liability for inadvertently spreading a virus, an organization may be held liable under the respondeat superior doctrine for an employee’s act of stealing and selling confidential customer information if: (1) the act occurred within the employee’s scope of employment, such as providing access to customer information to its employees; and (2) the employer knew or should have known that the employee was creating copies of confidential data and disseminating the data to inappropriate parties. Did Countrywide know? Nope. The FBI had to tell them.
Some might argue that employers would not know who exactly had stolen the information if it were taken in the course of normal duties, and this is entirely accurate. However, by logging and reporting on who has had access to the information the employer can rule out suspected internal thieves and narrow the focus of investigation. Better yet, the organization will have shown due diligence and sound business practice in addressing the risk of a lack of access controls for confidential data.
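The logging-and-reporting idea above can be sketched in a few lines: record every employee read of a customer record, then answer "who accessed record X, and when?" The log format and all names here are hypothetical; a real deployment would pull from your database audit trail.

```python
# A minimal sketch of access logging for confidential customer data.
# Log format (hypothetical): timestamp, employee, customer_record_id

import csv
from collections import defaultdict
from io import StringIO

LOG = """\
2008-11-01T09:14:00,jsmith,CUST-1001
2008-11-01T09:20:00,mdoe,CUST-1002
2008-11-01T10:02:00,jsmith,CUST-1002
2008-11-02T14:45:00,rlee,CUST-1001
"""

def accesses_by_record(log_text):
    """Index each customer record to the (timestamp, employee) pairs that read it."""
    index = defaultdict(list)
    for ts, who, rec in csv.reader(StringIO(log_text)):
        index[rec].append((ts, who))
    return index

index = accesses_by_record(LOG)
for ts, who in index["CUST-1002"]:
    print(ts, who)
```

With even this much in place, an investigator can rule suspects in or out directly, rather than by process of elimination.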
Opportunity. Having access to information or material that can be exchanged for money is the thief’s primary goal. Proving due diligence in protecting information from outside crackers and monitoring employee access are important pieces of legal protection for our companies.
Customer service and support after the fact of theft will not let business off the legal “hook” if the institutions themselves have given the thieves unmonitored access to the information. The number of class action suits against organizations that have had data breaches is rising rapidly.
The most significant financial impact of identity theft has yet to be examined. I believe that the risks to business and other institutions now include legal, reputation, financial and compliance risks that cannot be transferred.
Victims of identity theft are looking to recoup their financial losses and punish those people or institutions that enable identity theft to happen. The average arrest rate (according to law enforcement) is under 5% of all reported cases. Thieves do not have the resources to repay their victims by the time (or if ever) they are caught. Business does. If business organizations are providing the opportunity for identity theft to occur, they will be sued. We should make it our job to see that we are not among the defendants.
According to the Identity Theft Resource Center (an outfit I happen to respect a lot, because they are very specific about their statistics and their criteria for what a “breach” actually is), as of November 11, 2008 there have been 574 breaches, with a total of 33,593,557 records exposed.
You can download the report at their site. It’s painfully interesting.
Here’s how it breaks down, keeping in mind that we’re not done with 2008 yet:
Number of breaches: 66
Number of records: 17,231,057
Overall % of breaches: 11.5% (2007: 7%)
Overall % of records: 51.3%. The fewest breaches, but the most loss of data. Thieves are not stupid.

Number of breaches: 202. The most breaches. We need to get much stronger here.
Number of records: 5,705,628
Overall % of breaches: 35.2% (2007: 29.3%)
Overall % of records: 17%

Number of breaches: 120
Number of records: 761,303
Overall % of breaches: 20% (2007: 24.7%)
Overall % of records: 2.3%

Number of breaches: 100
Number of records: 2,656,407
Overall % of breaches: 17% (2007: 24.5%)
Overall % of records: 7.9%

Number of breaches: 86
Number of records: 7,239,162
Overall % of breaches: 15% (2007: 14.5%)
Overall % of records: 21.5%
Why do these statistics matter? Because, one way or another, every business and every person is affected.
We all know them. During my Help Desk tech support days, we called them the “Bermuda Triangles.” Everyone in the department dreaded them. If you looked at the Documents and Settings directory, you would see the login names of every single tech. Administrators and tech types bemoan the users who use their CD players as coffee holders and don’t know how to turn their computers on. Rants are all over the Internet discussing the clueless user (“luser”) that plagues Help Desks everywhere.
And somehow, the higher level the employee, seemingly the less technically savvy they are. I’ve lost count of the number of organizations I’ve audited that exempt the C-level executive from having to change his/her password, or disable complexity of passwords, because “they can’t seem to remember it.” And these are the people running the company? Amazing.
So, what is the solution for these folks? For Suzie in Accounting, who calls the Help Desk every 30 days after she changes her password? The call center employee who puts his password on a post-it on his monitor? The executive who calls every time he goes on a road trip because he forgets how to use the VPN?
Fortunately for us, Cisco has released a study that documents our difficulties. It found that 39 percent of IT professionals worldwide were more concerned about the threat from their own employees than the threat from outside hackers. But here’s the significant statistic for data loss:
Of those employees who reported loss or theft of a corporate device, 26 percent experienced more than one incident in the past year.
So, how do we help the “Triangles” of our company? First, we have to find them.
It’s true that every Help Desk has a list, but it’s usually a mental one, someone that “everyone knows.” We need to translate that information into actionable data. Not to punish, but to resolve a recurring cost to the IT department.
Management and business LOVE information that tells them how much something costs. If you can document how much time those folks are costing IT, you can make a business case for getting more user education to those employees, or better systems to resolve consistent problems. After all, it means they get more of IT time spent on projects, rather than Help Desk issues. That could be a big seller.
Use your Help Desk tickets to document and correlate spikes in service times and specific users that call in for the same problem. If someone keeps losing hardware, maybe it’s time to change the corporate Hardware Policy. If users keep forgetting their complex passwords, maybe it’s time for keyfob or RFID logins.
There are always options!
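Turning the "mental list" into actionable data can be as simple as tallying tickets per user and per issue. A minimal sketch follows; the ticket fields and the cost-per-ticket figure are made up for illustration, and a real version would query your ticketing system instead of a hard-coded list.

```python
# Tally Help Desk tickets per (user, issue) and flag repeat callers,
# with a rough cost estimate to put in front of management.

from collections import Counter

# Hypothetical tickets: (user, issue)
tickets = [
    ("suzie", "password reset"),
    ("suzie", "password reset"),
    ("exec1", "VPN help"),
    ("suzie", "password reset"),
    ("bob",   "printer jam"),
    ("exec1", "VPN help"),
]

COST_PER_TICKET = 25  # hypothetical loaded cost per Help Desk call

counts = Counter(tickets)
repeat = {(user, issue): n for (user, issue), n in counts.items() if n >= 2}

for (user, issue), n in sorted(repeat.items()):
    print(f"{user}: {n}x '{issue}' -> est. ${n * COST_PER_TICKET}")
```

The dollar column is the part management reads: it converts "everyone knows Suzie calls a lot" into a line item that justifies training or a better login system.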
I can hear the collective eye-rolling from here. But guess what! New federal regulations are requiring security education from organizations as part of compliance:
SEC regulations for financial institutions http://www.sec.gov/index.htm
NERC regulations for utility organizations http://www.nerc.com/files/RSAW-CIP-004-1-060608.doc
According to a study just finished by Cisco, “Data leakage often results from risky behavior by employees who are unaware that their actions are unsafe. Some of this problem can be attributed to a lack of corporate policy or inadequate communication of corporate policies to employees. In other cases, IT professionals simply expect some degree of professionalism, security awareness, and common sense precautions on the part of employees-and don’t get it.
• 43 percent of IT professionals said they are not educating employees well enough.
• 19 percent of IT professionals said they have not communicated the security policy to employees well enough.”
The SEC regulations affect publicly traded companies, so if you regularly undergo SOX audits, this will definitely be part of the package. PCI has also had a requirement for quite some time. So, in short, you cannot escape. And besides, I suspect there are some things YOU can do to improve the understanding of your users. They are a very important part of YOUR network.
Who does your information security training? Have you taken a look at it lately? Is it any good, or just “CYA” material? See any improvements after training on the part of your user base? If not, maybe it’s time to change it.
How “user-friendly” is your organization/department for employees that want to ask computer-related security questions?
Are chronic problem users tracked, and their managers notified? (I love this idea…)
There is a rising tide of studies confirming that internal data theft and loss are far more costly to business than external attacks. All it takes is one user clicking on one phishing email to compromise company information (even a corporate email list is valuable). A monthly email from you explaining a topic and inviting questions might result in LARGE savings of YOUR time dealing with infections and information compromise.
And hey, you’ll be compliant! Auditors love you!
A very well written article (rather unusual, in USAToday) on corporate espionage and data theft caught my eye today. I’d highly encourage you to take a look, even though it may make you nervous. It made ME nervous, but then, I’m supposed to be.
The article reports security researchers’ findings that cybercrime is shifting from identity theft (that market has become saturated – insert dryly ironic comment of your choice here) to stealing anything thieves can get from corporate networks and selling it at a later date.
Copyrighted material, patents, bids for proposals, financial planning for clients, business plans – if your company holds any of these, they are targets for break-in artists. One PC can yield a treasure trove of corporate email addresses for sending targeted emails with specific payloads.
And because most of us have HTML-enabled email, those messages can carry code never seen by the reader, which is executed when the email is opened – even in the preview window.
(P.S., I know it’s pretty, but PLEASE turn HTML email off).
Consider where all that information is, and who has access to it. How do you know? This is the most common auditing question I ask. These thieves work very hard not to be found.
How could you catch these people?
1. Monitor your outbound firewall traffic – they have to deliver their data somewhere!
2. Block Internet access for servers that don’t need it.
3. Utilize proxy servers for Internet access – for EVERYBODY (don’t exclude IT staff).
4. Utilize internal firewalls and secured subnets.
5. Designate critical servers for host-based intrusion detection agents.
Make them work for it, or better yet, make it impossible.
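The first item on that list is the cheapest place to start. Here is a minimal sketch of outbound-traffic monitoring: flag any internal server talking to the Internet that is not on an approved list. The log format, addresses, and allowlist are all made up for illustration; a real version would parse your firewall's actual export format.

```python
# Flag outbound connections from internal servers that have no business
# talking to the Internet. Log lines (hypothetical): "src_ip dst_ip dst_port"

ALLOWED_OUTBOUND = {"10.0.0.5"}  # e.g. the mail relay; everything else is suspect

log_lines = [
    "10.0.0.5 203.0.113.9 25",     # mail relay, expected
    "10.0.0.12 198.51.100.7 443",  # database server phoning home?
    "10.0.0.12 198.51.100.7 443",
]

def suspicious_outbound(lines, allowed):
    """Return the set of (src, dst, port) tuples from non-allowlisted sources."""
    flagged = set()
    for line in lines:
        src, dst, port = line.split()
        if src not in allowed:
            flagged.add((src, dst, port))
    return flagged

for src, dst, port in sorted(suspicious_outbound(log_lines, ALLOWED_OUTBOUND)):
    print(f"ALERT: {src} -> {dst}:{port}")
```

A database server opening connections to an outside address is exactly the quiet in-and-out behavior described above, and it only shows up if someone is watching the outbound side.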
The word is out in InfoSec circles that a practical attack method against WPA-enabled wireless access points has been announced and will be presented at PacSec in Tokyo this week.
It used to be that the only practical attack against WPA was a dictionary attack against a weak pre-shared key (PSK); with a long, random PSK you could be reasonably assured that you were secure. Now Erik Tews will be presenting his attack method, which uses a combination of protocol weaknesses and cryptographic weaknesses to compromise TKIP encryption. The attack lets the attacker inject seven packets into the network per decrypt window.
There are far-reaching ramifications to this attack, but in short, this presentation means the days of WPA are numbered. Some of the attack code is known to be already available.
The attack focuses on TKIP encryption, and you may think that with AES enabled, you are safe. Not, however, if your router defaults back to TKIP to enable older clients to connect. Not all routers allow you to disable this feature, either. On some equipment AES is called WPA2 and TKIP is WPA. The WPA spec leaves support of CCMP(AES) optional while the WPA2 spec mandates both TKIP and AES capability.
What to do today (and believe me, I’m checking my home router, and will be auditing routers to this effect in the future; best believe that PCI will update their requirements quickly, as well)? Check your APs (access points) as follows:
• Use only AES.
• Disable negotiation down to TKIP from CCMP (AES).
• If you must use TKIP, rekey every 120 seconds.
Interestingly, Tews estimates the attack takes about 15 minutes to crack WPA.
What to do going forward? Plan on upgrading your wireless access points sooner rather than later. It won’t be long before some joker is using this attack to break into businesses.
Per my previous post, it seems there is suddenly a lot of discussion in the security blogosphere about cloud computing and the security (or lack thereof). A number of people have taken note of Microsoft’s entry (Azure) into the data center business. A lot of really good questions are being asked.
How are these environments going to be secured? I have yet to see anything solid provided. Evidently vendors are content to “wait” until businesspeople tell them what they want. What if they never ask? Where is there a baseline for systems? Access controls? Dare I say “secure software development lifecycle?”
For some painful laughter, try reading a poetic critique of cloud computing here from Christopher Hoff.
Follow that up with a dose of reality as to the real origin of “cloud” computing from Reuven Cohen:
I hate to tell you this, it wasn’t Amazon, IBM or even Sun who invented cloud computing. It was criminal technologists, mostly from eastern Europe who did. Looking back to the late 90’s and the use of decentralized “warez” darknets. These original private “clouds” are the first true cloud computing infrastructures seen in the wild. Even way back then the criminal syndicates had developed “service oriented architectures” and federated id systems including advanced encryption. It has taken more then 10 years before we actually started to see this type of sophisticated decentralization to start being adopted by traditional enterprises.
and you begin to see the general take on cloud computing as it is currently being described. I like “thin client” computing. You can put a lot of controls in place that allow a user to have a desktop of their own AND not allow any malware in beyond the next reboot. It makes me nervous to think about some big corporation holding all my data, but banks do it all the time with mainframe applications. That’s where Metavante and Jack Henry, for instance, make their money.
But how do we audit these clouds? It still comes down to WHO has ACCESS to WHAT.