Back from a lovely vacation on a lake (where there are no computers or TVs allowed) I am struck once again by a terrible case of whimsy. Thus the title of this entry, which I truly could not resist.
There is an odd marketing marriage of some “security” terms. I put security in quotes because it’s so hard to identify the real security issues from the marketing of the latest security product. How often have security software vendors come up with a new “issue” or “risk” only to follow that up with the product that will address it?
This problem has been around for a long time. It could be called “industrial espionage,” “data theft,” or “poor data management,” or even a lack of data classification. “Leakage” just sounds newer, and, well, more catchy. It all comes down to good security practices, which are less catchy, but just as effective.
If you know what your confidential data is, where it lives, who has access to it, and when they accessed it, you are halfway to your own "data leakage prevention system." Then implement policy controls over hardware (e.g., external drives, CDs/DVDs) and Internet access. On top of that, a good Information Security Policy, reviewed and signed off by your employees annually, demonstrates your corporate due diligence. The Policy needs to be very clear about what constitutes confidential data and what employees may do with it.
Still, people will send out data in email, won't they? (I'm thinking of doctors, lawyers, and professors.) Good email filtering with appropriate filter keywords can capture a great deal. But ultimately, it also comes down to education. (It's still absolutely amazing what gets put on corporate web servers.)
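A keyword filter of the sort described above can be sketched in a few lines. This is a minimal illustration, not a product; the patterns below are placeholders, and a real list would come out of your own data-classification work:

```python
import re

# Hypothetical patterns -- a real list comes from your data
# classification exercise (account numbers, project names, etc.).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bpatient\s+record\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. SSN format
]

def flag_outbound(message: str) -> bool:
    """Return True if an outbound message should be held for review."""
    return any(p.search(message) for p in CONFIDENTIAL_PATTERNS)
```

A real deployment would hook something like this into the mail gateway and quarantine flagged messages for human review rather than silently dropping them.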
All these activities are a LOT more work, but much more valuable than putting in a system that is fundamentally a detective control rather than a preventive set of practices.
I do, however, prefer “data leakage prevention” to “extrusion prevention system.” What an expression!
I ran across a mention of this tool in a SANS newsbite.
Scrawlr (downloading the latest version requires providing your information)
Scrawlr, developed by the HP Web Security Research Group in coordination with the MSRC, is short for SQL Injector and Crawler. Scrawlr will crawl a website while simultaneously analyzing the parameters of each individual web page for SQL Injection vulnerabilities. Scrawlr is lightning fast and uses our intelligent engine technology to dynamically craft SQL Injection attacks on the fly. It can even provide proof positive results by displaying the type of backend database in use and a list of available table names.
You can also access the 1.0 version at Softpedia if you are reluctant to give HP more marketing fodder. Softpedia does check its offered downloads for malware, which I always appreciate.
Given that SQL injection is the most common vector for data breaches, it might be worthwhile to try it, if your website has fewer than 1,500 pages.
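What Scrawlr hunts for is the classic mistake of concatenating user input into SQL text. A minimal sketch of the flaw and the fix, using Python's sqlite3 purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # BAD: attacker-controlled input becomes part of the SQL itself.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # GOOD: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_vulnerable(payload))  # injection dumps every row
print(find_user_safe(payload))        # parameterized: no match
```

The same pattern (bind parameters instead of string building) applies to every database driver, which is why it's the first thing to check in any web application review.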
Some dozen websites have the words "SAS 70" as part, or all, of their domain name. Given the scheduled departure of the SAS 70 audit by 2011, I commented recently that they must not be having any fun. An anonymous reader ("CPA") wrote in to chastise me, to wit:
Does anyone think that… any of these firms didn’t realize that a change in the standard would eventually come? Seems to be small minded writing IMO.
“Eventually” doesn’t exist in too many business plans.
For many of these firms, their core business comes from SAS 70s, and their page ranking on search engines is critical. The shift from SAS70.com-type names to SSAE16.com-type names has to have a huge impact for such firms. That’s the “fun” part.
I checked out www.ssae16.com. It's "under construction," and owned by a guy in Pennsylvania who bought the domain name in February 2009. I guess he realized a change in the standard would come!
Most amazingly, last week a small regional CPA firm in Maryland told my team that no such standard existed. They changed their minds later.
It is being replaced (of course!) by the ever-so-easy-to-say acronym SSAE 16 (Statement on Standards for Attestation Engagements No. 16, Reporting on Controls at a Service Organization). What a mouthful!
In April of this year, the AICPA (American Institute of Certified Public Accountants) released the new standard, which becomes effective June 15, 2011. Since this new standard still documents the previous year's activities, you'll have a year or so to get ready for the new requirements.
Why the changes?
International business requires a global standard. One reason for the change is the economic landscape has become more global. SAS 70 was a U.S. standard that was often being applied on an international basis. The International Auditing and Assurance Standards Board (IAASB) recognized this growing problem and issued International Standard on Assurance Engagements No. 3402 (ISAE 3402) in December 2009. SSAE 16 is substantially similar to the international standard, including the effective date of June 15, 2011.
SSAE 16 reports will meet the needs of a wider audience. A SAS 70 is designed to be an auditor-to-auditor communication. With an increased awareness and emphasis on controls and control assertions (because of Sarbanes-Oxley), more companies are requesting SAS 70 reports. Regulators, government agencies, internal Audit Boards, and end users of financial reports are the new audience for service organization reports such as SSAE 16 and ISAE 3402.
That being said, there are some changes in the requirements that will impact everyone who has an annual SAS 70, as well as any company that must require a report from their third party vendors. So, either way, read on:
1. Under a SAS 70, a service organization is responsible for providing a description of controls. There is no guidance on what needs to be included in that description, so it could be limited to the relevant aspects of the internal control framework. Under SSAE 16, a service organization provides a "description of its system" as designed and implemented. While the term "system" has many different definitions, a common and useful one is "the controls, procedures, people, software, data, and infrastructure organized to achieve a specific objective."
2. Also, under SSAE 16, the description must include significant changes to the system that occurred during the audit period (Type 2 audit). Service auditors will now opine on the implementation of the description for the audit period.
3. SSAE 16 introduces the concept of “suitable criteria”. Management is responsible for specifying the criteria used to prepare its system description. The service auditor is responsible for using suitable criteria for assessing that management’s description is fairly presented.
4. Under SSAE 16, management must identify the risks that threaten the achievement of the stated control objectives and assess whether the identified controls sufficiently address the risks. In essence, management is responsible for ensuring controls are in place to address each risk. The risk assessment is not included in the report, but management must assert that it is effective.
5. Management will now be required to prepare a written assertion that will be included as a required component of the SSAE 16 report. Management will assert to its responsibilities, the fair presentation of the description of the system, the suitability of the design of the controls, and, in the case of a Type 2 report, the operating effectiveness of the controls. The assertion needs to be independent of the work of the service auditor. Subservice organizations must also provide an assertion when the inclusive method is used.
There are cosmetic changes as well, but those won’t impact anyone but the report writer.
P.S. All those companies that have "SAS 70" in their company name and web presence must not be having ANY fun.
And no, there is no “certification” for the SSAE 16 either.
After a nice vacation in the north woods of Maine, I returned to the excitement of my first “cloud computing” audit event.
In doing a SAS 70 for a client, I discovered that they had outsourced a new application. No news there. When data is hosted by the provider, along with the application, all well and good. The AUDIT part has to do with what data the provider is storing.
This often results in my reading a SAS 70 from the third-party provider verifying that controls are in place over their general environment. The environment should include the systems (e.g., servers, routers, firewalls) that directly store the data, along with the application, of course. I've read at least one SAS 70 that only tested the office environment, not the production network. That was a finding…needless to say.
The new issue was that the provider is using a "cloud computing" model. OK. I requested policies, procedures, documentation, anything I could get.
I got four documents, all generic. “Your data is secure with us! We use SSL!” I’m trying to dig out the contract for the provider, but that’s not necessarily going to give me anything I need.
If you do a Google search on auditing cloud computing, you get over a million results. When I refined it to "auditing 'cloud computing'" (the quotes eliminate results that match only "cloud" or "computing" on their own), the results narrowed to 449,000. Much better.
Problem #1: Every vendor has a slightly different version of what “cloud computing” actually is. That’s marketing; so how do you choose the right vendor? It makes for a tedious review process with no hard parameters.
Problem #2: When they were designing this concept, nobody cared about security. Developers and marketers hate security – it slows down “time to market.” As a result, the concept of securely managing confidential data was never addressed until after the product was released. Thus it became a “client problem” that they could market another product to address, or just ignore it.
Problem #3: What assurances can be obtained that the data is monitored and secured?
In any case, I’ve gotten no solid information yet. The client could always send us out to Texas to do a specific assurance audit, but somehow, I don’t think that’s going to happen.
Well, it finally happened: I got asked to audit information that is stored in a cloud by a third-party vendor.
I've acquired the controls, such as password policies, presented in a browser to my client. Several questions came immediately to mind:
1. Given that web browsers are still fundamentally insecure, how does the vendor address such issues? SSL is likely the easy answer here. Let’s hope so. Is data transmitted in the clear after the login? Let’s hope not.
2. Given the prevalence of phishing Trojans, how are vendors and clients going to address illegal and invisible capture of credentials? Do we add this to the VMware profile? That may be the only option. With VMware Player being free, one image can go on a lot of desktops (M$ may not like this much).
3. Where's the confidential data sitting? Is it encrypted? Who has the keys? How is the vendor managing employee access?
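On the password-policy point, at least some of the vendor's asserted controls can be spot-checked mechanically. A minimal sketch; the specific rules below are hypothetical, not any vendor's actual policy:

```python
import re

def meets_policy(password, min_length=12, required_classes=3):
    """Check a password against a hypothetical complexity policy:
    a minimum length plus at least `required_classes` of the four
    character classes (lowercase, uppercase, digit, symbol)."""
    if len(password) < min_length:
        return False
    classes = [
        re.search(r"[a-z]", password),
        re.search(r"[A-Z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^a-zA-Z0-9]", password),
    ]
    return sum(1 for c in classes if c) >= required_classes
```

Running the vendor's stated rules through a checker like this against a handful of test credentials tells you quickly whether the policy in the browser matches the policy actually enforced.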
I’ll keep you posted.
I was supposed to title this entry “anti-malware and registry hunting,” but perhaps I should just call it: “Ate My Lunch. All of it.”
After running GMER, Malwarebytes, and Symantec in both Safe Mode and the fully booted OS, I felt hopeful. All three products had found different things and cleaned them out in the course of a dozen reboots (at least).
My apps started acting a little odd, so I remained watchful. Then IE couldn’t connect anymore, even though Firefox was working fine. When I ran a diagnostic, I discovered that IE was trying to connect to Hotmail. Oops.
I gave up. Off to the IT guys for a re-format. They tell me it happens about once a week.
I did manage to capture some .dll files, just out of curiosity. I can look at them in VMware to see what I can find out, if anything. But I do note that the version I "acquired" was much more virulent than the references I saw on the web: more Trojans installed, more registry entries, and attempts to send off email/spam.
Lesson learned – just reformat.
Last week I came up against a piece of malware that is still “eating my lunch.” And I don’t know where I got it.
I was researching a DNS problem I have, going through Google and reviewing various topics. So I can tell you somewhat where I went, but I got too busy too fast to identify the website culprit.
My screen started showing pop-ups about how I had a nasty virus and needed to install a product. The thing goes by the name "Anti virus-AR," but it could have any name, all things considered.
It began with pop-up alerts and install screens that I could not shut down. It disabled Task Manager, told me any command I tried to run was infected, could not be closed, and redirected my browser. And it started to send out spam in the bargain. My Norton/Symantec antivirus crashed, or was shut down.
After reboot, it did it all again (of course). It also shut down my antivirus functions entirely. Most sites list it as “fearware,” designed to get you to their website and collect your credit card information for buying their fake product. From what I could see, it’s a bonus if they get that.
In the interests of education, I decided to try and eradicate it, instead of going to our IT Guys, who have already ragged me endlessly. It has been a painful education so far.
None of the descriptions of this particular specimen that I've read indicate how virulent the install is. This is much MORE than just fearware. It's a multiple-trojan spambot.
This ##$$%%*** changed at least 50 files of all types, wrote over a dozen changes to my Registry, and installed over a dozen pieces of Trojan software.
Six hours later, I’m STILL not sure I’ve gotten it all out.
Part 2: Anti-malware and registry hunting
We get a lot of information about what security issues are important from various sources on the Internet. Most of them we know about from one source or another.
But here’s one that jumped right out at me:
According to the Privacy Rights Clearinghouse, among the top 10 data breaches in 2009, 93 percent of compromised records were stolen as a result of malicious or criminal attacks against Web applications and databases.
This tells us where we are still vulnerable – web-facing applications, and the databases they talk to. For many medium to large organizations, keeping up with maintaining web applications through OS patches, application upgrades and database patches is more than a full time job.
It’s time to focus on those applications, and the people who develop them. In the “rush to market” mindset, security is a very low priority. This is where the problem begins. Sooner or later, customers are going to take their money elsewhere. But right now, companies are still content to put up applications without adequate testing.
It’s a matter of where the budget goes, isn’t it?
“Most of the largest and recent data breaches to date have been a result of attacks against Web applications,” explained Jeremiah Grossman, WhiteHat founder and CTO. “To address today’s real cyber threats, companies must shift their security strategy – and budgets – from being predominately infrastructure-based and prioritize the data and applications directly.”
Time to do some redirection – looked at your web-facing apps lately? Checked your databases? How many applications are still using an ID that gives way too much access by default?
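That last question lends itself to a quick triage script. The account names and grants below are made up for illustration; in practice you would pull them from SHOW GRANTS, the pg_roles catalog, or your DBA:

```python
# Privileges a typical application login actually needs.
ALLOWED = {"SELECT", "INSERT", "UPDATE", "DELETE"}

# Hypothetical application accounts and their granted privileges.
app_accounts = {
    "webapp_ro":  {"SELECT"},
    "webapp_rw":  {"SELECT", "INSERT", "UPDATE", "DELETE"},
    "legacy_app": {"SELECT", "INSERT", "UPDATE", "DELETE",
                   "DROP", "GRANT OPTION", "FILE"},
}

def over_privileged(accounts):
    """Map each account to the privileges it holds beyond ALLOWED."""
    return {name: sorted(privs - ALLOWED)
            for name, privs in accounts.items()
            if privs - ALLOWED}

print(over_privileged(app_accounts))  # flags legacy_app only
```

Even a crude pass like this surfaces the "legacy" account nobody has looked at in years, which is usually where the excess access hides.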
Nobody “likes” government regulations. But imagine what it would be like to live without them. What if there were no banking regulations – who would check to see if my money was safe? The bank?
I’ve worked in banks. The answer would be “no.” Not without oversight. Banks have internal auditors, but a fresh eye can often see something significant. The external regulators come in and check things periodically.
Nobody likes being audited. Auditors look for what's wrong, and report on that, rather than what's right. But I can count on my fingers the number of organizations I've audited that are working hard, and have buy-in from the top, to exceed the "Gentleman's C" of simply being compliant.
Far more organizations seem to be simply covering their tracks or ignoring the issue of protecting the data they’ve acquired due to “costs of implementation.”
Since we seem, as a country, unable to pass a national data-protection law with teeth, individual states are now passing their own.
The resulting publicity from frequent data breaches has both good and bad elements: on the one hand it highlights bad company practices; on the other hand, drowning in data breach reports builds in a certain level of public “overkill.”
The far better strategy, in terms of cost and performance, is to acquire best practices and implement them. It’s been proven over and over, but it seems that asking companies to police themselves isn’t working. At all.
Is that good? Not sure, but "it beats snowballs in summer," as my father-in-law likes to say. Use free tools, make nice with your auditors, use their input to get issues out in front of management, and don't resort to FUD (that's the vendors' job).