Is there a limit to the file size used in an ACL under Squid?

15 pts.
Tags:
Linux
We are trying to block myspace from our network. I blocked the six class C ranges owned by them on the firewall, but within a few days students were reaching myspace again through open proxies. To address this, I came up with a way to grab open-proxy lists from the internet and process them into a form usable by Squid. That file is now referenced by an ACL entry in Squid to block the listed IPs. It seemed to work at first, but now students are able to reach open proxies that are on the list. Is there a size limit on how big a file Squid can process? If so, how do I increase it? The file has grown past 50 KB, and the IP that came to my attention as not blocked is near the end of the list. Thanks. rt
ASKED: June 7, 2006  7:21 PM
UPDATED: June 8, 2006  1:51 PM
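The list-processing step described in the question, turning raw proxy-list captures into a clean, deduplicated file of IPs for Squid, could look something like this. This is a hypothetical sketch, not the script actually used; it is regex-based and IPv4-only:

```python
# Hypothetical sketch: normalize raw proxy-list captures into one IP per
# line, sorted and de-duplicated, the form a Squid dst ACL file expects.
import ipaddress
import re

def extract_ips(text):
    """Pull anything that looks like an IPv4 address out of raw text,
    keeping only syntactically valid addresses."""
    candidates = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)
    valid = set()
    for c in candidates:
        try:
            valid.add(ipaddress.IPv4Address(c))
        except ValueError:
            pass  # e.g. 999.1.2.3 matches the regex but is not an IP
    return sorted(valid)

raw = "proxy 10.1.2.3:8080\njunk 999.1.2.3\n10.1.2.3 again\n203.0.113.9"
print("\n".join(str(ip) for ip in extract_ips(raw)))
```

Each output line is a single IP address, which is the format Squid accepts from a quoted file in an ACL.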

Answer Wiki


There is no size limit in Squid on the lists it will accept in its dst or dstdomain ACLs.
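For reference, a file-backed dst ACL looks like this (the path and ACL name here are illustrative, not from the original poster's setup):

```
# squid.conf: the quoted file holds one IP address (or CIDR) per line
acl openproxies dst "/etc/squid/openproxies.txt"
http_access deny openproxies
```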

Discuss This Question: 7 Replies

 
  • Astronomer
    Thanks for blowing that theory out of the water. Any suggestions on what actually is going wrong? rt
    15 points
  • Henriknordstrom
    Not really. Should work. Make sure the IP is formatted correctly, no complaints from "squid -k parse" etc.
    0 points
  • petkoa
    Hi, astronomer. I don't know about any ACL file-size limit in squid, but AFAIK squid reads these files only once, at startup: adding new entries doesn't make them available immediately (it's not like cron/crontab :o(( ). You have to reload the squid configuration (squid -k reconfigure) to activate new IPs in the ACL file. BR, Petko
    3,120 points
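As this reply notes, Squid only reads ACL files at startup. After regenerating the list, the usual sequence (assuming squid is on the PATH) is:

```
squid -k parse        # sanity-check squid.conf and the files it references
squid -k reconfigure  # re-read the configuration without a full restart
```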
  • Preytell
    Depending on what capabilities your firewall/proxy has, you can block the IPs or the domain by name. Easier yet, and with far less overhead, is to set your internal DNS resolver as authoritative for myspace.com and return 127.0.0.1 for all lookups; that way it doesn't matter. This won't help the outside-proxy problem, but that is a behaviour problem best solved with policy and enforcement. Losing access because you cannot follow the rules is a stronger deterrent than denying a list of proxy addresses you happen to know about; there are far too many to block them all. Jerry
    0 points
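The DNS override described above could be sketched for BIND as follows (the zone-file path, serial, and example.local names are assumptions; adapt to your resolver):

```
// named.conf: claim authority for the blocked domain
zone "myspace.com" {
    type master;
    file "/etc/bind/db.blackhole";
};

; db.blackhole: answer 127.0.0.1 for the domain and everything under it
$TTL 3600
@   IN  SOA ns1.example.local. admin.example.local. ( 2006060701 3600 900 604800 3600 )
    IN  NS  ns1.example.local.
@   IN  A   127.0.0.1
*   IN  A   127.0.0.1
```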
  • This213
    Why bother using squid to block them? Use iptables instead and drop any outgoing connections to those IPs. You should already have some rules as such:

        # loopback interface is valid.
        iptables -A OUTPUT -o lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
        # anything outgoing on remote interface is valid
        iptables -A OUTPUT -o $EXTIF -s $EXTIP -d 0.0.0.0/0 -j ACCEPT

    Just put your outbound rules before these, as such:

        # stop any outbound traffic to ip 123.123.123.123
        iptables -A OUTPUT -o $EXTIF -s $EXTIP -d 123.123.123.123 -j DROP
        # loopback interface is valid.
        iptables -A OUTPUT -o lo -s 0.0.0.0/0 -d 0.0.0.0/0 -j ACCEPT
        # anything else outgoing on remote interface is valid
        iptables -A OUTPUT -o $EXTIF -s $EXTIP -d 0.0.0.0/0 -j ACCEPT

    This is just a generalization. I don't know what firewall front end you're using, what OS you're on or if you've even configured your firewall, so...
    0 points
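To combine this iptables approach with a long block list, a small generator could turn a one-address-per-line file into DROP commands. This is a hypothetical helper; the interface name and file name are assumptions:

```python
# Hypothetical: emit one iptables DROP rule per blocked address,
# mirroring the hand-written rule in the reply above.

def drop_rules(ips, ext_if="eth0"):
    """Return iptables commands dropping outbound traffic to each IP."""
    return [
        "iptables -A OUTPUT -o {dev} -d {ip} -j DROP".format(dev=ext_if, ip=ip)
        for ip in ips
    ]

# In practice the list would come from a file, one address per line:
# with open("blocklist.txt") as f:
#     ips = [line.strip() for line in f if line.strip()]
ips = ["123.123.123.123", "203.0.113.9"]
print("\n".join(drop_rules(ips)))
```

Piping the output through `sh` applies the rules; as the reply notes, they still have to come before the catch-all ACCEPT rules.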
  • Astronomer
    I found the problem, and I feel a little foolish. I debugged the configuration on the library proxy; when I configured it on the main proxy I mistyped the path, and I only discovered it while answering your questions. Squid is running on a Windows server, not Linux. This was a political decision; you can only fight so many battles. I didn't get any error messages. Sometimes I really miss unix/linux.

    We have tried to get the instructors and staff to police usage, but this has not been successful to date. They want us to fix the problem.

    If anyone is interested, here is how we did it. One of the instructors wrote a java program that pulls the IPs out of my web captures, then sorts them and removes duplicates. We also use this system to throttle student downloads. Here are the relevant config lines:

        acl openproxy dst "C:/squid/etc/ipaddsort.txt"
        acl day time 07:00-16:00
        http_access deny openproxy
        http_access allow our-src-net
        http_access deny all
        delay_pools 2      # we now have two pools
        delay_class 1 3    # pool 1 is a class 3 pool
        delay_class 2 1    # pool 2 is a class 1 pool
        # allow throttled pool for (our net) AND (during day)
        delay_access 1 allow our-src-net day
        delay_access 1 deny all
        # allow less throttled pool for (our net) AND (NOT daytime)
        delay_access 2 allow our-src-net !day
        delay_access 2 deny all
        delay_parameters 1 300000/300000 -1/-1 7200/256000
        delay_parameters 2 400000/400000   # allow ~3Mbit pipe after hours "changed to 4"

    Ipaddsort.txt is just a text file with an IP on each line. If anyone wants a copy of the java code, let me know. Thanks for all of the responses. rt
    15 points
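A side note on the delay-pool numbers above: delay_parameters takes restore rates in bytes per second, so converting to line speed is just a multiplication by eight (plain arithmetic, not Squid code):

```python
# delay_parameters restore rates are in bytes/second; convert to Mbit/s.
def bytes_per_sec_to_mbit(rate):
    return rate * 8 / 1_000_000

print(bytes_per_sec_to_mbit(300_000))  # daytime pool: 2.4 Mbit/s
print(bytes_per_sec_to_mbit(400_000))  # after-hours pool: 3.2 Mbit/s
```

which lines up with the "~3Mbit pipe after hours" comment in the config.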
  • Astronomer
    We have two layers of firewalls. The outer and, in my opinion, more capable firewall consists of a failover pair running OpenBSD. The inner firewall is a Cisco PIX. In my experience the BSD firewalls are much more flexible and configurable, but everyone else hates the command-line interface. As a consequence, the outgoing rules that change on a regular basis are done on the PIX with a GUI. I went with the squid solution because it was clear there was no way for the PIX to handle this many rules. As usual, open source software is more capable and more flexible. It was fairly easy to teach the techs how to generate new lists each month and implement them. If we were running Linux instead of a PIX I would have checked out how to do it there. Our long-term solution will be a Packeteer; they claim to be able to see the reference to myspace in the packets going to the open proxies and drop them. It's too bad there isn't an open source package for this. Thanks for the suggestion. rt
    15 points
