Data vs. perimeter vs. network security

A short time ago, author Wes Noonan wrote some tips about deperimeterization. He explained how security is always pitted against business needs: perimeters have become porous because businesses require SMTP, HTTP and VPN traffic to pass through the firewall. He then offered techniques for keeping data safe in spite of the activity at your perimeter.

I realize you have a variety of options when it comes to choosing a Windows line of defense, but I'm trying to get a sense of how many people actually lock down Windows at the data level. Do you invest most of your protection efforts at the data, perimeter or network level? What measures do you take to keep your Windows data secure even if the perimeter is compromised? Do you have data protection plans or products in place?

Another issue is that networks and applications are often treated as separate entities that never interact. This may be because they have different people maintaining them, unique security policies, etc. Is this the case in your shop?

I'm collecting this information for possible technical tips or a trends article on

Thanks for your time and attention. I hope to hear from you soon.

Best regards,
Robyn Lorusso


Ah! A subject near and dear to my heart. Since this is likely to be lengthy, I’ll reply on separate occasions for each general subject.

First of all, perimeters get porous only partly because of business demands. What is more common is that few organizations have a formal firewall exception and review policy – and those that do often fail to follow up on it.

To wit: one place I worked had a Check Point FireWall-1 with over 200 rules. To cope with the increasing load on the firewall, the initial attempt was to shut down logging on most rules. When I noticed this, I suggested that we (OK, the firewall administrator and I) go through all of the rules and analyze each one for current need, duplication, and reality (rules that applied to long-gone objects). This was an iterative process – too hard to do in one fell swoop – but the result was to cut the number of rules in half.
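The kind of analysis described above can be partially automated. Here is a minimal sketch (my own illustration, not tooling from that site), assuming rules are represented as hypothetical (source, destination, service, action) tuples; it flags exact duplicates and rules shadowed by an earlier any/any rule for the same service:

```python
from collections import Counter

def find_duplicates(rules):
    """Return rules that appear more than once in the ruleset."""
    counts = Counter(rules)
    return [r for r, n in counts.items() if n > 1]

def find_shadowed(rules):
    """Return rules made redundant by an earlier any/any rule
    for the same (service, action) pair."""
    shadowed = []
    covered = set()  # (service, action) pairs already covered by an any/any rule
    for src, dst, svc, action in rules:
        if (svc, action) in covered:
            shadowed.append((src, dst, svc, action))
        elif src == "any" and dst == "any":
            covered.add((svc, action))
    return shadowed

# Hypothetical example ruleset.
ruleset = [
    ("any", "any", "ssh", "accept"),         # one broad outbound SSH rule
    ("10.0.0.5", "mail", "smtp", "accept"),
    ("10.0.1.7", "web1", "ssh", "accept"),   # shadowed by the any/any SSH rule
    ("10.0.0.5", "mail", "smtp", "accept"),  # exact duplicate
]

print(find_duplicates(ruleset))  # the duplicated SMTP rule
print(find_shadowed(ruleset))    # the shadowed SSH rule
```

A real cleanup still needs a human pass (a "shadowed" rule may exist deliberately for logging), but a script like this narrows 200 rules down to the ones worth arguing about.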

The other part of this was the lack of a formal policy on who gets to specify rules and what their expected lifetime might be. I’ve gotten management at a number of places to abide by (and support) the following policy:
1) All requests for a firewall “hole” must be accompanied by a business justification, and be approved by a director-level manager or above – who will be listed as the responsible business owner for that rule.
2) All requests must include the technical owner’s name and phone number.
3) All requests must include an estimated closure date for the rule.
4) Rule implementation must include the business/technical owners’ names and the expiration date of the rule.
5) Rule implementation must include a search for relevant similar rules, so as to group similar functions under the same rule (example: one rule for outbound SSH that all users fit under).
6) There must be a semi-annual or annual formal review of all rules – supported by, and participated in by, management.
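To show how little machinery the review in point 6 actually needs, here is a sketch under my own assumptions (the record fields and names are hypothetical, not from any particular product): each rule record carries the owner and closure-date metadata the policy requires, and the audit flags anything expired or incomplete.

```python
from datetime import date

# Hypothetical rule records following the policy above.
rules = [
    {"id": 1, "owner": "J. Director", "tech": "A. Admin", "expires": date(2005, 6, 30)},
    {"id": 2, "owner": "K. Director", "tech": "B. Admin", "expires": date(2007, 1, 15)},
    {"id": 3, "owner": None,          "tech": "C. Admin", "expires": None},
]

def audit(rules, today):
    """Flag rules that are expired or missing required metadata."""
    findings = []
    for r in rules:
        if r["owner"] is None or r["expires"] is None:
            findings.append((r["id"], "missing owner or expiration"))
        elif r["expires"] < today:
            findings.append((r["id"], "expired"))
    return findings

for rule_id, problem in audit(rules, date(2006, 1, 1)):
    print(f"rule {rule_id}: {problem}")
```

The hard part of the review is not the script – it is getting management to sit in the meeting where each flagged rule's business owner defends keeping it open.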


  • Astronomer
    My experience is that policies vary widely depending on the organization. In the Intel early-access services lab we tightly controlled what could go where. The only thing we had open that bothered me was FTP to the shared net. When ISS came in to evaluate our setup, they were surprised that we knew precisely what we were allowing. They said it was common for them to set up an environment like ours and watch security gradually deteriorate until the net was mostly open within a few years.

    The situation is entirely different at the college where I work now. They didn't have a firewall before I came, and I have characterized my firewall ruleset as Swiss cheese. On the other hand, I try to limit as tightly as possible what each rule allows. For example, one client uses NetMeeting to a server in the state government; I opened all of the required ports between just those two specific addresses. Until we had an FTP proxy, I refused to open the required ports except to specific external addresses. When we got the proxy running, those rules were all removed.

    We are in the process of moving public services to a DMZ. All incoming email and proxied web access goes through a virus/spam scanner. Given my choice, I would use a different brand of virus scanner on the clients and servers. On the inside, I plan on partitioning the network so each group can reach the central servers and the Internet but not each other. Given our budget, we will have all of the servers on a single subnet and limit access with Active Directory. This clearly isn't optimal, but it is the best we can do under current budget constraints. As best we can, we limit directory and file access to just what people need, using Active Directory permissions. rt
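The partitioning goal described in that reply – each group reaches the central servers and the Internet, but not the other groups – can be expressed as a small policy check. This is an illustrative sketch with made-up subnet numbers, not the college's actual addressing:

```python
import ipaddress

# Hypothetical addressing: two group subnets plus one central server subnet.
GROUPS = [ipaddress.ip_network("10.1.0.0/24"), ipaddress.ip_network("10.2.0.0/24")]
SERVERS = ipaddress.ip_network("10.0.0.0/24")

def allowed(src_ip, dst_ip):
    """Return True if src-to-dst traffic fits the partitioning policy."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    src_group = next((g for g in GROUPS if src in g), None)
    if src_group is None:
        return False              # unknown source: deny
    if dst in SERVERS:
        return True               # any group may reach the central servers
    if any(dst in g for g in GROUPS):
        return dst in src_group   # groups may not reach each other
    return True                   # everything else is treated as the Internet

print(allowed("10.1.0.5", "10.0.0.10"))  # group to servers
print(allowed("10.1.0.5", "10.2.0.8"))   # group to another group
print(allowed("10.1.0.5", "8.8.8.8"))    # group to the Internet
```

In practice this logic would live in router ACLs or firewall rules rather than code, but writing it out first makes the intended matrix easy to review before anyone touches the network gear.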
