Monday, June 08, 2009

IP Blacklisting Version 2 for Snort available

I found myself with 9 hours to kill on an airplane ride this weekend, so I coded up the two most frequently requested features for the original IP Blacklisting patch I wrote. The first new feature is the ability to associate a name with a blacklist and have that name included in the events that Snort outputs. The second is the ability to load blacklists from external files so that very large blacklists can be maintained without having to modify the snort.conf file.

Both of these features are now available in version 2 of the patch. Direct loading of the IP address lists from the snort.conf preprocessor directive is no longer supported; you have to use the external files.

Here is a sample directive for snort.conf:

preprocessor iplist: blacklist dshield /etc/snort/dshield.blacklist \
blacklist sourcefire /etc/snort/sourcefire.blacklist \
whitelist /etc/snort/default.whitelist

And here is a sample blacklist file:

# This is a blacklist file, there are many like it but this one is mine
# Comments are supported # I can do inline comments too and put
# multiple CIDR blocks on one line # Whatever you like
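
The patch does this parsing itself in C, but as a rough sketch of what handling this file format involves (hash comments, inline comments, multiple CIDR blocks per line), here's a minimal parser. The function name, the behavior on malformed input, and the sample addresses are my own illustration, not the patch's actual code:

```python
import ipaddress

def parse_blacklist(text):
    """Parse a blacklist file: '#' starts a comment (inline comments
    allowed), and multiple CIDR blocks may appear on one line."""
    networks = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comment, keep the rest
        for token in line.split():            # multiple blocks per line
            networks.append(ipaddress.ip_network(token, strict=False))
    return networks

sample = """
# This is a blacklist file, there are many like it but this one is mine
192.0.2.0/24                      # inline comments work too
198.51.100.0/25 203.0.113.7/32    # multiple CIDR blocks on one line
"""
nets = parse_blacklist(sample)
```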

As per usual, bug reports and feature requests can be sent directly to me. I still haven't done any performance testing of this code so your mileage may vary. I'd be interested to hear of any comparisons of the performance of this code vs the Emerging Threats blacklist.

Tested on Ubuntu, Fedora and OS X only so far.

You can get the patch here:


Wednesday, May 13, 2009

IP Blacklisting for Snort available

After a discussion on the Snort-users mailing list last week regarding using standard Snort rules to implement Reputation-based IP blocking in Snort (and how badly the performance sucked) I decided to write some code to do it the "right way". The result is the "iplist" preprocessor, a module that supports IP Blacklisting and whitelisting via user-provided lists of known hostile IP addresses.

The internals of the system use the Patricia Trie code from the Snort 3.0 code tree to provide the primary address lookup mechanism. Currently I'm only supporting IPv4 addresses although the P-Trie code supports IPv6 addressing too.
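
The point of the Patricia trie is fast longest-prefix matching over CIDR blocks. As a sketch of the underlying idea only (the real patch uses the P-Trie code from the Snort 3.0 tree, not this), a naive binary trie keyed on address bits looks like:

```python
class Node:
    __slots__ = ("children", "value")
    def __init__(self):
        self.children = [None, None]
        self.value = None          # set on nodes that terminate a prefix

class IPv4Trie:
    """Minimal binary trie doing longest-prefix match on IPv4 addresses.
    Illustrative only; a Patricia trie compresses single-child chains."""
    def __init__(self):
        self.root = Node()

    @staticmethod
    def _bits(addr):
        a, b, c, d = (int(x) for x in addr.split("."))
        return (a << 24) | (b << 16) | (c << 8) | d

    def insert(self, cidr, value):
        addr, plen = cidr.split("/")
        n, bits = self.root, self._bits(addr)
        for i in range(int(plen)):             # descend one node per prefix bit
            bit = (bits >> (31 - i)) & 1
            if n.children[bit] is None:
                n.children[bit] = Node()
            n = n.children[bit]
        n.value = value

    def lookup(self, addr):
        """Return the value of the longest matching prefix, or None."""
        n, bits, best = self.root, self._bits(addr), None
        for i in range(32):
            if n.value is not None:            # remember deepest match so far
                best = n.value
            bit = (bits >> (31 - i)) & 1
            n = n.children[bit]
            if n is None:
                return best
        return n.value if n.value is not None else best
```

A lookup walks at most 32 nodes, so match time is bounded by address width rather than list length, which is why this beats evaluating thousands of individual rules.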

This patch has been applied against Snort only. I've tested builds on OS X, Ubuntu and Fedora so far. It requires libdnet (or dumbnet-dev for those of you on Debian-based distros) to build properly. Check the README file that comes with it for instructions on patching it into your codebase. It supports inline blocking and alerting but not Flexresp-style TCP reset session shootdowns.

Have a look and let me know what features you'd like or bugs you find.

This code is purely EXPERIMENTAL; this is just me spending some of my spare time on a fun coding project, so if your machine sprouts legs and refuses to work until it receives part of the TARP bailout it's not my fault.

Here's the link:


Sunday, April 19, 2009

RSA 2009

I'll be out at RSA in San Francisco all this week. I'm speaking at the America's Growth Capital conference tomorrow at 2:30 on "Snort and The Future of Real-Time Adaptive Network Security". Wednesday I'll be doing a Peer2Peer talk at RSA on "The Future of Snort", primarily focusing on Snort 3.0 and upcoming changes to

For those of you attending RSA I hope to meet you out there!


Thursday, April 02, 2009

Snort 3.0 Beta 3 Released

It's been quite a while since the last Snort 3.0 beta and yesterday we released Beta 3. The reason that it's taken so long to get out the door is that we decided to start doing performance analysis of the Snort 2.8.x analytic engine that was ported over to run on top of SnortSP and the results were... interesting.

When I started developing and designing the Snort 3.0 architecture, one of the assumptions I based my design around was that multi-core computing environments were going to be the norm rather than the exception in the platforms we'd target moving forward. With that in mind, and knowing the typical packet analysis load we place on machines with Sourcefire's applications, I was looking to utilize CPU cycles more efficiently by performing the common processing that happens for every packet (acquire/decode/flow/stream/etc) once and then spreading the analysis (Snort/RNA/etc) across the cores in separate threads. This would allow us to perform parallel processing across analytic engines while only having to perform the common processing tasks they all share once.
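
The fan-out idea above can be sketched as a single decode stage feeding per-engine queues. This is only my illustration of the shape of the design, not SnortSP's API; the names and the stand-in "decode" step are invented:

```python
import queue
import threading

def run_pipeline(packets, analyzers):
    """Do the common per-packet work once, then hand the decoded result
    to every analyzer thread via its own queue (sketch of the fan-out
    architecture; names here are illustrative)."""
    qs = [queue.Queue() for _ in analyzers]
    results = [[] for _ in analyzers]

    def worker(q, analyze, out):
        while True:
            pkt = q.get()
            if pkt is None:            # sentinel: shut down this engine
                break
            out.append(analyze(pkt))

    threads = [threading.Thread(target=worker, args=(q, a, r))
               for q, a, r in zip(qs, analyzers, results)]
    for t in threads:
        t.start()

    for raw in packets:
        decoded = raw.upper()          # stand-in for acquire/decode/flow/stream
        for q in qs:
            q.put(decoded)             # common work done once, fanned out

    for q in qs:
        q.put(None)
    for t in threads:
        t.join()
    return results
```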


This is great in theory and it seemed like we'd see some real performance gains, but when we started looking at the Snort 2.8.3.x analytic engine that we put out in the original Beta release we saw that performance was not where we wanted it to be relative to the monolithic Snort 2.8.3.x architecture. Initially we thought that the performance discrepancies were due to inefficiencies that were a byproduct of porting the monolithic Snort 2 packet processing engine to a threaded environment, including the incorporation of things like thread-local storage (TLS). Lots and lots of analysis was done over several months, tons of optimizations were made by Russ Combs and Ron Dempster, and Snort's run-time processing cycle was studied at length, but at the end of the day the performance still wasn't where we wanted it.

Eventually we arrived at the conclusion that the performance issues we were seeing stemmed from the way that modern Intel CPUs use and synchronize cache memory. If you'd like some more in-depth discussion of the Intel CPU caching architecture, take a look at Gustavo's site and check out the articles on caching. The performance hangup we ran into really manifested itself as we tried to distribute traffic across cores on multiple physical dies: the overhead incurred by the data transfers and cache coherency operations required by the cores was costing us lots of CPU cycles.

What has become apparent from performing our analysis and extensive experimentation is that data spreading across the current generation of multi-core Intel CPUs is not something that works well for real-time applications like IPS. Intel CPUs really seem to favor the more traditional load balancing approach that's been used successfully with the Snort 2.x code base for years where independent processes are locked to separate cores and flows are unique to each respective process.
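
The core of that traditional approach is hashing each flow to a fixed worker so that all packets of a connection land on the same process (which is in turn locked to a core). A minimal sketch, with the symmetric-hash detail as my own addition since real balancers need both directions of a flow to map to the same worker:

```python
def flow_worker(flow_tuple, n_workers):
    """Classic flow load balancing: hash the 5-tuple so every packet of
    a flow is handled by the same worker process. Sketch only; not the
    actual Snort 2.x balancer code."""
    src, dst, sport, dport, proto = flow_tuple
    # frozenset makes the hash symmetric, so A->B and B->A packets of
    # the same connection pick the same worker
    key = (frozenset([(src, sport), (dst, dport)]), proto)
    return hash(key) % n_workers
```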

While we were exploring the performance envelope of the Snort 3 code base we looked at a number of different data distribution architectures for moving data from the data source subsystem of SnortSP to the analytic engines (i.e. the Snort 2.8.3 engine module). A model that we've found to work well is what we've come to call the "stacks model". The stacks model works a lot like the Snort 2.x preprocessor subsystem but on a somewhat larger scale. Instead of running several analytic engines in separate threads with each thread locked to a CPU, the stacks model runs the engines in one thread and calls them sequentially, passing the packet stream from engine to engine. The stacks model is included in Beta 3 as a compile-time option for running the system.
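
In outline, that sequential chaining looks like the sketch below. The drop-on-False convention is my own illustration of how a chained engine could veto a packet, not the actual SnortSP interface:

```python
def run_stack(engines, packets):
    """'Stacks model' sketch: all engines run in one thread and see each
    packet in sequence, like a larger-scale version of the Snort 2.x
    preprocessor chain. An engine returning False drops the packet from
    the rest of the stack (illustrative convention)."""
    passed = []
    for pkt in packets:
        # call the engines sequentially on each packet
        if all(engine(pkt) for engine in engines):
            passed.append(pkt)
    return passed
```

The trade-off versus the threaded model is that one core does all the analytic work per packet, but nothing ever crosses a cache boundary between dies.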


One model that we haven't benchmarked extensively is what I'll call a "single core multithreaded model" where we run one thread per analytic engine but lock them all to the same CPU core, eliminating the cache coherency and sync overhead while paying the price of heavier loading on the individual CPU cores. This will be an area of further research down the road.

We are planning on standing up a public CVS server in the near future to host the Snort 3 code so that we can foster better interaction with the open source community.

The next big hurdles to get over with Snort 3 are development of the TCP stream management subsystem and the Snort 3.0 analytic engine module. Stay tuned for more (and more regular) releases as we get rolling on these subsystems.


Monday, February 16, 2009

Saving the data on an iPhone in Recovery Mode

I'm posting this because I couldn't find this information succinctly presented anywhere on the web, and I wanted people who have the same problem to have a place to go for answers.

Here's the scenario. A loved one drops their 1st gen iPhone into a body of water (sink/tub/toilet/lake) and, effectively, bricks it. When it dries out enough to boot back up it goes into Recovery Mode and asks you to plug it into iTunes so it can be reimaged and reset to the factory defaults, erasing all the data on the phone in the process. Normally this is no big deal, assuming of course the phone has been sync'd recently.

What if it hasn't? What if there's precious data on there like pictures of your children that your wife took and loves dearly that she never backed up off the phone for whatever reason?

You, my friend, have a problem.

If you search the web for "iPhone recovery mode save data" or whatever you're going to get back a pile of results that basically say stuff like "take it to the Apple Store and let a Genius look at it". When you do that, they'll tell you you're hosed, but of course they'll be cool about it and you'll have the opportunity to buy some stuff while you wait in line to talk to them.

You can also call a data recovery service. They can bust open your iPhone and try to extract the media from its memory for the low low price of ~$900+. Generally speaking, that's not going to be an option for almost anyone.

Shouldn't a hacker have had this problem and solved it by now?

As a matter of fact, at least one has. If you find yourself in this situation and you want to be all heroic and stuff, go get yourself ZiPhone. Among the tricks it can perform on your iPhone is the extremely handy one of taking it out of Recovery Mode and back to Normal Mode. Just launch ZiPhone and select the "Reboot in Normal Mode" option from the menu bar. Once you've done that you can sync the phone as normal and save off all your data. Then you can turn to your wife and receive your accolades and just deserts.

Anyway, I hope someone finds this post more helpful than the usual pile of useless results Google returns on the topic.


MacBook Pro and the slow-motion beachball of death crash

I've been using 15" Mac laptops since about 2002 and I love them dearly. As general purpose computing platforms for all of the various things I have to do (coding, presentations, communications, etc) they are by far the best blend of power and functionality I've ever had. Having said that, I've been pulling my hair out lately because my latest machine, a roughly 1-year-old 2.4GHz machine with 4GB of RAM and a hand-upgraded hard drive, has been crashing.

Generally what's happening is the machine will operate fine for some period of time, several hours to several days. At some point I'll have 2-7 apps open and one of them, usually Firefox, will beachball. When this happens I can switch away to another app, but within 30 seconds or so that app will go down too, then the menu bar becomes unresponsive, I can't launch any new apps, and that's all she wrote. It's reboot time.

This has been going on for about a month now and I've tried a variety of solutions in terms of running repairs and all that jazz with no luck. At this point I'm ready to throw in the towel. In addition to being a machine that I write code on this is also a machine that I use to do presentations in front of rooms full of people. A lockup during a presentation would be an unacceptable embarrassment so tomorrow I'll be taking this machine back to Sourcefire IT and getting a new MBP. Hopefully they can figure out what the problem is or send it to Apple and let them fix it.


Tuesday, November 25, 2008

Daemonlogger 1.2.1 Released (and, oh yeah, 1.2.0)

Daemonlogger 1.2.1 is available at its usual place. This release is a cleanup that allows compilation to work properly on systems that don't support the BSD TAILQ macros (like some Linux distributions).

I also neglected to announce the release of Daemonlogger 1.2.0. That release changed the default pruning mechanism for ringbuffer mode back to the original default of pruning the oldest file in the logging directory, and added per-run pruning via the -z switch at the command line. It also fixed a bug where size-based rollovers didn't work properly.
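
The default "prune the oldest file" behavior is simple enough to sketch. This is my illustration of the described behavior, not Daemonlogger's actual C code:

```python
import os

def prune_oldest(log_dir):
    """Default ringbuffer pruning: remove the oldest file (by mtime) in
    the logging directory, making room for the next capture file.
    Returns the path removed, or None if the directory was empty."""
    files = [os.path.join(log_dir, f) for f in os.listdir(log_dir)]
    files = [f for f in files if os.path.isfile(f)]
    if not files:
        return None
    oldest = min(files, key=os.path.getmtime)   # smallest mtime = oldest
    os.remove(oldest)
    return oldest
```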

Get Daemonlogger here.


Friday, September 12, 2008

And now, some astronomy

We had some beautiful clear skies here over Labor Day weekend, so I took advantage of them and managed to get my telescope out 3 nights in a row to do some astrophotography. The results are my best yet but certainly just "decent" by advanced amateur standards these days. I'm getting better at it but I've still got a long way to go.

Anyway, here are the pictures. For those of you interested in such things, the gear used to collect the photons was a TEC-140 Apochromatic Refractor, an SBIG ST-10XME CCD camera, and an Astro-Physics 900GTO mount. I'm also using a Starlight Instruments Digital Feather Touch focusing system.

