Wednesday, May 13, 2009

IP Blacklisting for Snort 2.8.4.1 available


After a discussion on the Snort-users mailing list last week regarding using standard Snort rules to implement reputation-based IP blocking in Snort (and how badly the performance sucked), I decided to write some code to do it the "right way". The result is the "iplist" preprocessor, a module that supports IP blacklisting and whitelisting via user-provided lists of known hostile IP addresses.

The internals of the system use the Patricia Trie code from the Snort 3.0 code tree to provide the primary address lookup mechanism. Currently I'm only supporting IPv4 addresses, although the P-Trie code supports IPv6 addressing too.
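To illustrate the idea (not the actual Snort 3.0 P-Trie code, which is considerably more sophisticated), here's a minimal binary-trie sketch in C: each blacklisted prefix marks a node, and a packet's source or destination address matches if any prefix along its bit path is marked. All names here are invented for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy binary trie keyed on IPv4 address bits, most-significant bit
 * first. Inserting a prefix marks the node where it ends; a lookup
 * hits if any marked prefix covers the address. */

typedef struct node {
    struct node *child[2];
    int blocked;                /* set when a blacklisted prefix ends here */
} node_t;

static node_t *node_new(void) {
    return calloc(1, sizeof(node_t));
}

/* Insert a prefix, e.g. 10.0.0.0/8 -> (0x0A000000, 8). */
static void trie_insert(node_t *root, uint32_t addr, int prefix_len) {
    node_t *n = root;
    for (int i = 0; i < prefix_len; i++) {
        int bit = (addr >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = node_new();
        n = n->child[bit];
    }
    n->blocked = 1;
}

/* Return 1 if addr falls inside any blacklisted prefix. */
static int trie_lookup(const node_t *root, uint32_t addr) {
    const node_t *n = root;
    for (int i = 0; i < 32 && n; i++) {
        if (n->blocked)
            return 1;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    return n && n->blocked;
}
```

The win over per-IP Snort rules is that lookup cost is bounded by address width, not list size, which is why thousands of blacklist entries don't degrade performance the way thousands of rules do.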

This patch has been applied against Snort 2.8.4.1 only. I've tested builds on OS X, Ubuntu, and Fedora so far. It requires libdnet (or dumbnet-dev for those of you on Debian-based distros) to build properly. Check the README file that comes with it for instructions on patching it into your codebase. It supports inline blocking and alerting, but not Flexresp-style TCP reset session shootdowns.

Have a look and let me know what features you'd like or bugs you find.

This code is purely EXPERIMENTAL; this is just me spending some of my spare time on a fun coding project, so if your machine sprouts legs and refuses to work until it receives part of the TARP bailout, it's not my fault.

Here's the link:

http://www.snort.org/users/roesch/code/iplist.patch.tgz



12 Comments:

At 7:51 PM, Anonymous Anonymous said...

Awesome, now when will this be available on the SF gear? :O)

 
At 9:16 PM, Blogger Martin Roesch said...

Seeing as I just cranked it out without really talking to anyone about it, I think the answer will be "a while". :) To make it into an enterprise-grade capability will require all the usual documentation and interaction with marketing/sales/engineering to "do it right". This is a usable proof of concept that will help get the feature into the product eventually, but I'm pretty sure it won't be in the current product cycle.

 
At 4:34 AM, Blogger Russell said...

This is great, Marty, but it won't stop me asking for more. :) Many of us use lists of IP:port pairs to detect traffic to known bot controllers. Using the ports cuts down on the false positives greatly, but means one rule per IP. Having a preprocessor handle these would be a boon.
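The request above is easy to sketch: an entry matches only when both the address and the port agree, so benign traffic to a listed IP on another port is ignored. This toy C version uses a linear scan for clarity (a real preprocessor would hang the port list off the trie node found by the address lookup); all names and sizes are invented for illustration.

```c
#include <stdint.h>

/* Each blacklist entry pairs an address with a port; port 0 means
 * "any port", matching the plain address-only behavior. */

#define MAX_ENTRIES 64

struct entry {
    uint32_t addr;      /* host byte order   */
    uint16_t port;      /* 0 = any port      */
};

static struct entry table[MAX_ENTRIES];
static int n_entries;

static void add_entry(uint32_t addr, uint16_t port) {
    if (n_entries >= MAX_ENTRIES)
        return;                     /* toy sketch: silently drop overflow */
    table[n_entries].addr = addr;
    table[n_entries].port = port;
    n_entries++;
}

/* Return 1 if the (addr, port) pair hits a blacklist entry. */
static int is_blocked(uint32_t addr, uint16_t port) {
    for (int i = 0; i < n_entries; i++)
        if (table[i].addr == addr &&
            (table[i].port == 0 || table[i].port == port))
            return 1;
    return 0;
}
```

With the port qualifier, a web server that happens to share an address with a listed IRC controller on port 6667 doesn't trip alerts on its port-80 traffic.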

 
At 7:51 PM, Blogger Martin Roesch said...

OK, I can add port checking as well; it'll take a bit to get it in there as I'm going on vacation for a few days. I'll let you know when the update is available.

 
At 11:35 PM, Blogger Edward said...

Have a great vacation and thanks for the code, it's very much appreciated.

 
At 11:06 PM, Anonymous Anonymous said...

Now all we need is the ability to update these lists in real time and we'll be all set... maybe with the new super powers of Snort 3.0.

 
At 10:35 PM, Blogger Martin Roesch said...

It'll get there eventually....

 
At 6:07 AM, Blogger Edward said...

As it happens, I was wondering about the realtime thing myself only the other day. Apart from a hope that security policy could be more joined up between ISPs, NOCs/data centers, and hosts, for example - I know there's a skeleton there already, but it's not working very well.

In terms of a realtime community model, I pondered how micro-blogging tools (RSS/Atom based) could be implemented - an example being a hash-tagged post on Twitter: http://twitter.com/esdaniel/status/1826879428

'If' we had ISPs, DCs, and hosts sharing the metadata, i.e. these 'alerts', then it's highly likely a points system could be used to log rogue IPs across the ecosystem and apply some common sense as to what would get auto-banned and what would not.

Thus, in the same vein as Twitter - probably using that or Laconica's API to micro-post - we'd follow the security firms' or pros' live alert feeds via automated clients, and when an attack is registered by a trusted source it is almost immediately banned across the digital ecosystem connected to these alerts, with common sense (parameters/thresholds/constraints/lookups) required to handle false positives.

I'm sure this has been done, though I don't know of a current service today; if not, it would be fun to get it done - what do you think?
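The points system floated above can be sketched in a few lines: each trusted feed that reports an IP adds to its score, and the IP is auto-banned only once the score crosses a threshold, which damps single-source false positives. The table layout, names, and threshold value here are all invented for illustration.

```c
#include <stdint.h>

#define MAX_IPS 128
#define BAN_THRESHOLD 3   /* reports from 3 sources => auto-ban */

struct rep { uint32_t addr; int score; };
static struct rep reps[MAX_IPS];
static int n_reps;

/* Record one report of addr from a trusted feed; return its new score. */
static int report(uint32_t addr) {
    for (int i = 0; i < n_reps; i++)
        if (reps[i].addr == addr)
            return ++reps[i].score;
    if (n_reps >= MAX_IPS)
        return 0;                   /* toy sketch: table full, ignore */
    reps[n_reps].addr = addr;
    reps[n_reps].score = 1;
    return reps[n_reps++].score;
}

/* An address is banned once enough independent reports accumulate. */
static int is_banned(uint32_t addr) {
    for (int i = 0; i < n_reps; i++)
        if (reps[i].addr == addr)
            return reps[i].score >= BAN_THRESHOLD;
    return 0;
}
```

The threshold is the "common sense" knob: one noisy feed can't ban anything on its own, but agreement across feeds converts shared alerts into an auto-updating blacklist.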

 
At 10:09 PM, Blogger Martin Roesch said...

Hi Edward,

I think that sounds pretty cool but maybe a better vehicle for the communications is something like an instant messaging mechanism rather than a broadcast medium like Twitter. I'm just thinking about keeping the bad guys from knowing what we know, that sort of thing. Otherwise I think it's a cool idea!

 
At 9:57 AM, Blogger Edward said...

"knowing what we know"

Of course, what we know is valuable when the 'enemy' does not know we know it; one would expect this to give a strategic advantage, though maybe this specific debate would make an interesting article from you in the future?

If we are being attacked, then what do we lose by raising the alarm promptly and, in doing so, alerting our enemy to the fact we've detected them, on the basis they have not yet gained access? I'd expect the alert would a) discourage their morale, as they are no longer able to work covertly, and b) help reinforce other network entry points that may later come under attack from the same enemy (the IP). Forewarned is forearmed, and so on.

Yes, we lose something, but do we not gain more? A bit like the argument for open/transparent security as opposed to closed security or security through obscurity, unless I'm mistaken.

I think if one gave the community the option to openly share real-time(-ish) alerts, we'd see quite a few people collaborate, as well as gain a handy way to further visualize/understand attacks without the need to seek vendor or academic research data for such distributed intelligence - it would be community intelligence.

I wonder whether there's a business case, similar to the Snort subscription, for something that would aggregate, clean, score (think AI rules to remove false positives), and publish the blacklists. What I do know is that we have the tools today to make this a very easy task to achieve; the more challenging aspect will be agreeing and deploying. Hence the idea of a disruptive model that empowers the community to share, and the benefits that brings.

Admittedly, ahead of a bolder move it is probably worthwhile for larger players with budget constraints to see how this could be used on IT estates with multiple physical network locations, and perhaps to encourage their partners to consume this data and publish back theirs - I'll show you mine / you show me yours - as a first step.

I like the way we could loosely couple the community (i.e. just share pertinent log data in realtime) so that we have both the intelligence of the Snort rules and a real-time analysis of the threat activity as an extra layer of protection. The code to hack snort_alert2post could easily include the APIs to facilitate one-time IMs to Jabber, MSN, and/or Skype, as well as to an Atom feed.

Atom handles the asynchronous nature of things better than email, as you're letting the data be pulled rather than pushing it and depending on an active third-party endpoint to receive the message, unless you're running your own Jabber application server infrastructure.

Reminds me of when people were talking back in 2003 about how RSS would displace email; here's a perfect example.

 
At 10:27 AM, Anonymous Anonymous said...

It looks like I've already hit the limit.
I tried converting all the IP addresses in the emergingthreats BLOCK list to this preprocessor.

But now it complains:
VAR/RULE too long
:(

and I can't get all the rules to work...

 
At 10:53 AM, Blogger Mark Linton said...

I think if you're going to distribute the blocklist information publicly, then Twitter or the like would be ideal. The bad guys are going to find out one way or another, and Twitter makes it so easy to blast out updates.

 
