Recently I’ve been looking more in-depth into Snort. I’ve had it in use for my business for a little while now, but I wanted to see how low-spec a machine I could get it running on. I already had it running fine in a virtual machine with 4GB of RAM, so I worked from that machine. While this proved to be quite interesting (who wouldn’t love to run a Snort sensor on a Raspberry Pi?), it also proved to be a little stressful.
The documentation on how to get Snort running on low-spec’ed machines is for the most part out of date. Most guides say to add ‘low-mem’ to the detection configuration. With Snort 2.9.3, I found this isn’t a working solution, as I kept receiving an error that ‘low-mem’ is not a valid option. Another suggestion was to change the search-method option to something like ‘lowmem-nq’, which can help in conjunction with what I eventually found to be the answer, but you still have to dig a little deeper.
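For reference, the search-method tweak lives in the detection config. On my 2.9.3 install the line looks roughly like this (a sketch, so check your own snort.conf for the exact surrounding options):

```
config detection: search-method lowmem-nq
```

The ‘-nq’ variant disables queued match events, which saves a bit more memory than plain ‘lowmem’.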
What I found I actually had to tweak were the ‘max_*’ settings for the stream5_global preprocessor. When I tried to run Snort, I would always receive an error saying that the flowbits could not be allocated in stream5_global.c, and I spent about an hour digging into what was actually going on. Since then I have also learned there’s a ‘config flowbits_size’ option (commented out by default), but I did not want to mess with that as I’m not sure what it would do.
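For the curious, that option sits commented out in the stock snort.conf and looks something like the line below (the value shown is an assumption based on the default file; I left it untouched):

```
# config flowbits_size: 64
```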
Instead, here’s what my preprocessor looks like on a 4GB virtual machine:
preprocessor stream5_global: track_tcp yes, track_udp yes, track_icmp no, max_tcp 262144, max_udp 131072, max_active_responses 2, min_response_seconds 5
Not having the preprocessor track packets you’re not interested in (e.g. no ICMP if you don’t care about it) will reduce memory usage as well, but what you really have to focus on is the max_* settings. These tell the preprocessor how many sessions it can keep track of at a given time for each protocol, which in turn determines how much memory it allocates up front to handle that workload. I disabled tracking of ICMP and UDP, as my server only permits TCP anyway, and reduced max_tcp to a very small value of 1024 to see if it would run. Lo and behold, it ran without issues and I can monitor traffic just fine!
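Putting those changes together, the low-memory version of that same preprocessor line ends up looking roughly like this (a sketch based on the changes described above, not copied verbatim from my config):

```
preprocessor stream5_global: track_tcp yes, track_udp no, track_icmp no, max_tcp 1024, max_active_responses 2, min_response_seconds 5
```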
max_tcp has to be between 1 and 1048576 (max_tcp, max_udp and max_icmp have different ranges). If you set the value higher than what your VM can handle, you’ll receive an error similar to:
ERROR: snort_stream5_tcp.c(949) Could not initialize tcp session memory pool.
Fatal Error, Quitting..
As TCP likes to keep everything in multiples of 32, I’m a fan of sticking with such multiples, but anything within the range mentioned earlier will work. With 256MB of RAM, I’ve found that the highest max_tcp setting that works (for me at least) is 9999. For a small network that should be just fine, if not overly abundant.
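If you want to find your own ceiling the same way, a rough sketch for a 256MB box (an approximation of my setup, not a verbatim copy) would be:

```
preprocessor stream5_global: track_tcp yes, track_udp no, track_icmp no, max_tcp 9999
```

Running Snort in test mode (snort -T -c snort.conf) is a quick way to trigger the memory pool error, if your value is too high, without putting the sensor live.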
Also, please note that limiting the rules, preprocessors, etc. that are running will reduce the memory footprint as well, so this is by no means the definitive way to get it running, but it’s definitely a step in the right direction.