So you have this report from the company you’ve outsourced a critical financial service to, and it looks like a lot of boilerplate with a chart of sorts at the end. What are all those sections for, and why should you care?
First, confirm that the firm that performed the report is a certified public accounting (CPA) firm. A CPA firm is the only legal entity permitted to perform a SAS 70 audit, whether Type 1 or Type 2. Other firms can perform SAS 70 “readiness assessments,” but not the SAS 70 exam itself.
The first page can tell you whether it is a Type 1 or Type 2 audit. The subtitle:
Report on Controls Placed in Operation
and Tests of Operating Effectiveness
Prepared in Accordance with
Statement on Auditing Standards No. 70
indicates a Type 2 by virtue of the statement “Tests of Operating Effectiveness.” If you’ve read a previous column, you know that a Type 2 looks at controls and tests those controls. That’s a Good Thing.
The next thing you should see on the first page is the period over which the controls were tested. The date range is commonly a year, but it can also cover a six- or nine-month period.
This means the auditors have tested controls over that time period to see if they were actually in place and effective.
Consider how long ago that date range ended. Some organizations will attempt to use a SAS 70 report that is two or three years old. Regrettably, some auditors take four to six months to issue a report, which can mean that what you’re looking at has limited value. The longer the time since the actual test of controls, the less value the report has, because it cannot speak to the current state of controls.
When I do an audit and request that my client give me SAS 70 reports from his/her critical financial vendors, I am often amazed (or appalled) at what I get to read.
My team performs about 20-25 SAS 70 Type IIs every year, and maybe 2 SAS 70 Type I exams. Why the big difference? Type II exams actually test the controls the service organization says it has in place. In a Type I, all we test is that the company says it has controls and that those controls appear adequate. BIG difference. It’s also a big difference in price for the service organization, so companies try to get by with a Type I if they can.
Sometimes a service company will start with a Type I, planning to go to a Type II. I’m inclined to recommend getting a SAS 70 readiness assessment, then completing a Type II – it saves money and makes clients happier. More on this later.
Also, Type I exams only look at what control procedures are in place at the time the service auditor comes to visit (called “Point in Time,” appropriately enough). They can throw out the controls the day after that. So this type of SAS 70 has limited value to clients (your company).
For Type II exams, we test over a period of time, say the previous six months, nine months, or year, to ensure that the controls were in place and effective. The downside is that the testing looks backward: if the controls fall apart three months after the test, I can’t report that until (or unless) I come back for the next exam. But seeing that the controls were in place and effective over the previous year gives considerably more assurance than a point-in-time snapshot.
SAS 70 exams must be signed off on by a certified public accountant, even if CISAs are doing the testing on site. Make sure the company that did the exam for your outsourced service is exactly that; otherwise, the report will not stand up in a court of law.
I have seen proposals (just two weeks ago, from a very big service company, as a matter of fact) that announced they were “doing” a SAS 70 as part of their security; number one, they can’t “do” one, and number two, a SAS 70 isn’t a “security” exam.
It’s an exam to provide reasonable assurance to the client company’s internal financial auditors.
So, when reading the report, you’ll want to pay attention to the sections that describe what they’re doing to protect YOUR data. If your company is using a specific application over the web, what are they doing to provide safeguards for your data on that web server or database?
A little over a year ago I reviewed a report on exactly this issue; the report tested the office Windows Domain for good control practices but never addressed any controls over their application web server: a Linux box. (Scary, isn’t it?)
There are generally several sections to a SAS 70 report, and it’s worth knowing what to look for in each. We’ll touch on that next.
There seems to be a lot of misinformation about what a SAS 70 report is – just today I came across a post that referenced being “SAS 70 compliant.” There is no such thing. There is no pass/fail aspect to a SAS 70, because the Control Objectives and Control Procedures are designed by the client. It’s hard to flunk a test you designed for yourself (although I’ve seen lots of companies do it).
A Statement on Auditing Standards No. 70 report is used exclusively by service organizations that provide a critical financial service to their client businesses.
For instance, if your company outsources health care management to another company, your company will want a SAS 70 report from the health care management company. Why? Because your internal financial auditors are going to demand you get one. (For starters, it’s also good to know your health care management company takes good care of your money and personal health information.) Health care management costs a lot of money and can have a big impact on your company should the provider not have good practices in place.
SOX regulations require that companies that outsource services that provide a critical financial function have a SAS 70 from that company. Banks are required by the FDIC to have SAS 70s from any service that provides a critical financial function.
So, your internal financial auditor is asking because he/she must meet regulatory requirements. Any time your company outsources a service that is deemed a critical financial service to the company, they should be asking for a SAS 70. And not just any old SAS 70.
In the course of many audits and pentests, I can’t tell you how many times I have found flaws and openings based on bad development practices. It’s downright painful. And yet software keeps coming out with the same problems. I know WHY this is happening, but I can’t stop it. YOU can.
Have you ever been in the position of having a software vendor say: “We’re not going to test that patch yet, you’ll have to wait for the next software release from us. If you patch it, we won’t support it.”
Or finding a security flaw in the application, reporting it to the vendor, and having them say they will charge your company to fix it as a “feature request.”
Or examining roles and rights in the database, and finding out everyone is sysadmin. Or better yet, the developer hardcoded his ID into the application.
I bet you have, and you know I have. Once the software is installed and in production, they have you over a barrel and they know it. Time to build a better barrel.
Time after time, I’ve found software applications that don’t secure the application user inside the database, giving that user rights to EVERYTHING. Why? Because it’s easier to code. You don’t have to spend time finding out what broke, and fixing it, when you lock user rights down. Some applications hardcode usernames and passwords right into the software so that they can never be changed (unless, of course, you pay for an upgrade). Even worse, I’ve seen the ID hardcoded with a blank password. Why? It’s fast, cheap and easy.
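The fix is not complicated. As a minimal Python sketch (illustrative only – the variable and environment names are my own invention, not any particular vendor’s code), here is the difference between baking credentials into the application and letting operations staff rotate them:

```python
import os

# BAD: credentials baked into the source. They ship with every copy of
# the application and can never be rotated without a new release.
HARDCODED_USER = "appadmin"
HARDCODED_PASSWORD = ""  # blank password -- seen in the wild


def connect_hardcoded():
    """Return the baked-in credentials (the anti-pattern)."""
    return (HARDCODED_USER, HARDCODED_PASSWORD)


# BETTER: pull credentials from the environment (or a secrets manager),
# so they can be changed without touching the code.
def connect_from_environment():
    """Return credentials supplied by the deployment environment."""
    user = os.environ.get("APP_DB_USER")
    password = os.environ.get("APP_DB_PASSWORD")
    if not user or not password:
        raise RuntimeError("database credentials are not configured")
    return (user, password)
```

The second version costs the developer a few extra minutes and an error path; the first costs the customer an upgrade fee every time the password needs to change.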
How do YOU change it? Two ways:
First, raise management awareness so that you are at the table with the software salespeople to ask some hard questions. Is security part of their SDLC (Software Development Life Cycle)? Management can often be “wowed” by a product without ever looking under the hood. Ask how the product is secured, especially since it will probably be holding important data. Don’t be wowed by application-level controls – get some hard answers on how the data is accessed.
Second, be there at contract time. This is the most important step. Make sure it is written into the contract that the vendor will fix all security flaws found in the product within 30 days. Make sure they are responsible for testing OS patches quickly and reporting to you whether it is OK to patch. Pick a timeline you can live with. After all, you’re paying them for a service.
If not, you’ll have to live with buggy code and I’ll have to audit it. We’re in this together.
From Craig Wright comes this riveting post:
I have a Jura F90 Coffee maker with the Jura Internet Connection Kit. The idea is to:
“Enable the Jura Impressa F90 to communicate with the Internet, via a PC.
Download parameters to configure your espresso machine to your own personal taste.
If there’s a problem, the engineers can run diagnostic tests and advise on the solution without your machine ever leaving the kitchen.”
Guess what – it cannot be patched, as far as I can tell. It also has a few software vulnerabilities.
Fun things you can do with a Jura coffee maker:
1. Change the preset coffee settings (make weak or strong coffee)
2. Change the amount of water per cup (say 300ml for a short black) and make a puddle
3. Break it by engineering settings that are not compatible (and making it require a service call from the Internet!)
Craig goes on to reverse engineer the software, with predictable results: Coding with no security. The details are painful.
The connectivity kit for the coffee machine installs software that uses the connectivity of the PC it is running on to connect the coffee machine to the Internet. This allows a remote coffee machine “engineer” to diagnose any problems and to remotely do a preliminary coffee service. Be still my heart – a remote coffee machine ENGINEER. (A NEW acronym: RCME)
It seems the software allows the “RCME” (can you say “attacker?”) to gain access to the Windows system it is running on at the level of the user. For most of us, that would be administrator.
Compromise by Coffee. Whoo HOO. Can’t wait to see this come up in an audit.
And you can buy it for only $1798.00 at Amazon.
What’s surprising is that this thing has been on the market since September 2006, and it seems to have just now hit the press.
And Jura’s response?
“Jura is well aware of these articles which it clearly qualifies as misinformation.” So Jura says security researchers are wrong. A coffee maker company knows best! OOOKay.
“The internet Connectivity Kit which can optionally be acquired for only one device (IMPRESSA F90/F9)…” And this makes insecure software better how?
“…will at no times connect the coffee machine to the world wide web.” Except the software allows a remote coffee machine ENGINEER to access the machine from the web. OOOKay, again, this is secure how?
“Its settings can therefore only be changed by the machine’s rightful owner.” And if a remote coffee machine ENGINEER is allowed to run diagnostics on the machine – is this statement accurate? What else can the remote coffee machine ENGINEER do while he/she is running those diagnostics?
I’m feeling a caffeine buzz already. Is this a high risk vulnerability? No. Is it a good idea? NO.
The study from Verizon had some interesting (and scary) information about the growing worldwide market for stolen data. For example, attacks from Asia, particularly China and Vietnam, often involve application exploits leading to data compromise. Folks over there know coding, know how to automate attacks, and have the motive of acquiring confidential information to use.
Defacements frequently originate from the Middle East – no surprise, given the hotheads there.
Internet protocol (IP) addresses from Eastern Europe and Russia are commonly associated with the compromise of point-of-sale systems. (Can you say “Hannaford?”)
Those folks are in it for the money.
One area not heavily covered in the report is that banking hacks often originate from South America – attackers there are looking for the really BIG money.
The retail and food and beverage industries account for more than 50% of the cases studied. Small and medium-sized businesses are still struggling to keep up with data security, especially where credit card information is concerned. Eighty percent of the data stolen was payment card information.
The other common loss in small companies is theft of employee and client personal information, often found in HR/payroll databases and client GL (General Ledger) data. With little or no segregation of duties, providing oversight into who accesses that information is very difficult. The second most common type of data stolen (32%) was PII – personally identifiable information.
That helps account for why so many businesses (70%) had breaches that were discovered by an outside party.
Here are some of Verizon’s recommendations for the enterprise:
# Align process with policy. In 59 percent of data breaches, the organization had security policies and procedures established for the system, but these measures were never implemented. Implement, implement, implement.
# Create a data retention plan. With 66 percent of all breaches involving data that a company did not even know was on their system, it’s critical that an organization knows where data flows and where it resides. Identify data and prioritize its risk to the organization.
# Control data with transaction zones. Investigators concluded that network segmentation can help prevent, or at least partially mitigate, an attack. In other words, wall off data when and where appropriate.
# Monitor event logs. Evidence of events leading up to 82 percent of data breaches was available to the organization prior to actual compromise. Data logs should be continually and systemically monitored and responded to when events are discovered.
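On the data retention point: you cannot protect payment card data you don’t know you have. As a minimal, hypothetical sketch of data discovery (the pattern and function names are mine, and a real scanner needs far more than this), here is the classic approach of matching digit runs in text and validating them with the Luhn checksum that payment card numbers use:

```python
import re


def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


# Candidate card numbers: 13-16 digits, optionally separated
# by single spaces or dashes.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")


def find_candidate_pans(text: str) -> list:
    """Scan free text for digit runs that pass the Luhn check."""
    hits = []
    for match in PAN_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Point a loop like this at file shares and databases, and you will often find cardholder data sitting where nobody remembers putting it – exactly the 66 percent problem the report describes.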
I know I’m an IT Auditor, and we should eat acronyms for breakfast, but it seems as if the focus on “achieving compliance” has brought out the worst in us. “We’re Compliant!” has become the holy grail of corporate management, and IT has jumped on the bandwagon because they can get funding for security products that way.
Round it off with the security vendors changing their market strategy to mindlessly follow this trend and you have an endlessly generated collection of “marketspeak.” Anton Chuvakin has jumped in to promote “GRC,” Governance, Risk, and Compliance. After that he used “IT GRC,” “Unified GRC,” and who knows what vendor will jump in with another riff off of that.
The latest one? “We have to get DLP.” (Data Leak Prevention) Please. Dr. Chuvakin redeems himself on this one, calling it by its true name: “content monitoring and filtering.”
How about “SaaS?” Cute lettering, isn’t it? Can you say: “Thin client?” along with “cost more?” Sigh. Until we can build enterprise software that incorporates security into the development lifecycle and patch our servers yesterday, getting the next new security product is water over the dam. The real thin client/virtual desktop is something I’ve seen in action, and I think it’s a pretty nifty idea. But SaaS is death by nickels and dimes.
Using the phrase “The Cloud” for the Internet is something else I find annoying. It’s incentivizing me, if you get my drift.
And “Web 2.0.” What the heck was Web 1.0 and why do we need 2.0? We can’t even agree on what “2.0” is.
Or “IPS.” Intrusion “Prevention” that we had to turn off because it was stopping so much legitimate traffic….yup, that was preventing intrusion all right.
I hope I’m not turning into Dvorak (the classic Internet curmudgeon), but I can certainly get cranky with all this nonsense.
Let’s hear YOUR favorites.
A Boston Globe article caught my eye. Although it’s not news to me (or probably you), here is more than anecdotal evidence that many medium and small businesses are still not making inroads into security issues.
The article reports on a study performed by Verizon Communications analyzing 500 data breaches since 2004, with a total of over 230 million compromised records. Also included are five of the biggest breaches ever reported.
In 63 percent of cases, at least two months went by before the breach was discovered. In 70 percent of cases, a third party discovered the breach and contacted the organization. That’s seventy percent of hacked businesses that did not know they had been broken into.
It’s a report that is well worth reading, unlike many vendor-based papers, and it provides some deeply interesting points to consider. I’ve added my own conclusions after each finding:
“# Most data breaches investigated were caused by external sources. Thirty-nine percent of breaches were attributed to business partners, a number that rose five-fold during the course of the period studied.”
Segment and monitor your vendor and third-party access points.
“# Most breaches resulted from a combination of events rather than a single action. Sixty-two percent of breaches were attributed to significant internal errors that either directly or indirectly contributed to a breach.”
Control and monitor user access rights.
“# Of those breaches caused by hacking, 39 percent were aimed at the application or software layer. Attacks to the application, software and services layer were much more commonplace than operating system platform exploits, which made up 23 percent.”
Ensure the software your company purchases has a strong security portion of their SDLC (Software Development Life Cycle) and a commitment to test and report/fix OS patches in a timely manner.
“# Fewer than 25 percent of attacks took advantage of a known or unknown vulnerability. Significantly, 90 percent of known vulnerabilities exploited had patches available for at least six months prior to the breach.”
(BIG no brainer) Patch your servers, especially those facing the Internet and database servers, quickly.
“# Only 18 percent of breaches were attributed to insiders (although when the culprit was an insider, the consequences of the breach were generally greater, exceeding the size of external breaches by ten to one)…In the case of insider attacks, IT administrators were by far the biggest culprits, accounting for 50 percent of attacks.”
Monitor your users with administrative access. Insiders still carry the highest risk.
A few days ago I went with my partner to the local drugstore (all the big chains have these machines) to print out a jpeg to send with a card for Father’s Day. The picture was on a thumb drive for easy transport, and I was along to provide technical support (I try to at least appear useful).
Imagine my HORROR when, after plugging in the drive as the machine requested, I saw the machine begin reading everything on the thumb drive, including financial spreadsheets, letters, family photos and lots of confidential stuff. Turns out she was using the same thumb drive she backs up all her critical documents with to transport the photo to the drugstore.
Needless to say, it was too late to recall, and my poor partner could only say, “I didn’t know!” at my yelp of despair. We printed the photo and left, with me mumbling under my breath about what a good column THAT was going to make.
So, how long before some poor minimum wage guy working behind the counter and hacking on weekends says, “Hmmm. Look at all that interesting data along with all those dumb pictures.” There is no warning or indicator on the machines that we should think about what we’re giving away on those thumb drives along with pictures of junior and his new fishing rod. Perhaps they’re assuming we know better. (ROTFL)
More likely, it has not occurred to the designers or the drugstore management that those machines should only be reading .jpeg, .tiff, .bmp, .raw and other image files, not ALL files. Although the information was not printed, it was acquired. Even if there is no hard drive (which I highly doubt), the files would remain in memory. Where is all that information sitting? Who has access to it? Am I nervous? You betcha.
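Filtering by file type is not hard. Here is a minimal sketch of what such a kiosk could do (illustrative only – the extension list and function name are my own) to pick up only image files from a mounted drive and ignore everything else the customer brought along:

```python
from pathlib import Path

# Extensions a photo kiosk has any business reading; everything else on
# the customer's thumb drive should never be touched.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".tif", ".tiff", ".bmp", ".raw", ".png"}


def printable_images(drive_root: str) -> list:
    """Walk a mounted drive and return only image files, skipping
    spreadsheets, documents, and any other non-image content."""
    images = []
    for path in Path(drive_root).rglob("*"):
        if path.is_file() and path.suffix.lower() in IMAGE_EXTENSIONS:
            images.append(path)
    return sorted(images)
```

A few lines of filtering would keep the financial spreadsheets and HR letters off the kiosk entirely, instead of relying on the machine (and whoever services it) to handle data it should never have read.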
I can only wonder how long it will be before we get something in the news about these machines.
I noticed a recent post on the boards questioning the value of SAS 70 Reports. Given that I do about 15 a year, I thought I’d venture an answer to that question.
First, it’s important to understand what a SAS 70 is NOT:
It’s not a checklist;
It’s not a certification;
It’s not a security assessment;
In fact, it doesn’t do a thing for your network security, except, perhaps, inadvertently. It does not directly attest to the quality of your network security, either; that’s not its function.
And only a certified public accounting firm can do one, because a certified public accountant must sign off on the report.
So what CAN such a report do for your organization, and why? Are your customers constantly asking for one? Are you losing business because you don’t have one?