tag:blogger.com,1999:blog-316724722024-03-21T18:25:36.377-04:00Security SauceEvangelism and thoughts on security, platforms, programming and other geekery.Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.comBlogger31125tag:blogger.com,1999:blog-31672472.post-26618031441406319022009-06-08T11:46:00.002-04:002009-06-08T12:15:46.232-04:00IP Blacklisting Version 2 for Snort 2.8.4.1 available<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2KrNqh4VaVSPQoz8n3EIxKqkl1uji1_ytpNjjzrnOugTONxuxWzK_xY_fG7jYihm_dOWrKYFyu1Ljt9FUfepRpWCuP_iYwrKnWKTgM3Unqfz-fbCnbgDVF7oGKcwxDaYSkWlmVg/s1600-h/blacklist.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 200px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2KrNqh4VaVSPQoz8n3EIxKqkl1uji1_ytpNjjzrnOugTONxuxWzK_xY_fG7jYihm_dOWrKYFyu1Ljt9FUfepRpWCuP_iYwrKnWKTgM3Unqfz-fbCnbgDVF7oGKcwxDaYSkWlmVg/s200/blacklist.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5344984946587481154" /></a><br />I found myself with 9 hours to kill on an airplane ride this weekend, so I coded up the two most frequently requested features for the original IP Blacklisting patch I wrote. The first new feature is the ability to associate a name with a blacklist and have that name included in the events that Snort outputs. The second is the ability to load blacklists from external files, so that very large blacklists can be maintained without having to modify the snort.conf file.<br /><br />Both of these features are now available in version 2 of the patch. Direct loading of the IP address lists from the snort.conf preprocessor directive is no longer supported; you have to use the external files. 
<br /><br />Here is a sample directive for snort.conf:<br /><br /><pre><br />preprocessor iplist: blacklist dshield /etc/snort/dshield.blacklist \<br /> blacklist sourcefire /etc/snort/sourcefire.blacklist \<br /> whitelist /etc/snort/default.whitelist<br /></pre><br /><br />And here is a sample blacklist file:<br /><pre><br /># This is a blacklist file, there are many like it but this one is mine<br /># Comments are supported<br />10.1.1.0/24 192.168.0.0/16 # I can do inline comments too and put<br /> # multiple CIDR blocks on one line<br />172.16.16.17/32<br />172.16.15.14/32 # Whatever you like<br /></pre><br /><br />As per usual, bug reports and feature requests can be sent directly to me. I still haven't done any performance testing of this code so your mileage may vary. I'd be interested to hear of any comparisons of the performance of this code vs the Emerging Threats blacklist.<br /><br />Tested on Ubuntu, Fedora and OS X only so far.<br /><br />You can get the patch here:<br /><br /><a href="http://www.snort.org/users/roesch/code/iplist.patch.v2.tgz">http://www.snort.org/users/roesch/code/iplist.patch.v2.tgz</a><br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/cybersecurity" rel="tag">cybersecurity</a>, <a href="http://technorati.com/tag/open%20source" rel="tag">open source</a>, <a href="http://technorati.com/tag/snort" rel="tag">snort</a>, <a href="http://technorati.com/tag/sourcefire" rel="tag">sourcefire</a>, <a href="http://technorati.com/tag/tools" rel="tag">tools</a><br /></p><br /><!-- Technorati Tags End --><br />Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com10tag:blogger.com,1999:blog-31672472.post-54146342386542485052009-05-13T15:45:00.004-04:002009-05-13T15:48:18.874-04:00IP Blacklisting for Snort 2.8.4.1 available<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuUwuUS7lSPFUYb292mYbvRH4FJQD_vTSuw_QvW5qaP0jvzGpyQn1TxaA9MER0MDL-8PBaVItocfHcovdrFFPA_bCDKLJrS4-i61o_GpCTOCPd7tiYekPoimuHrK5ZSblc3F4QNw/s1600-h/blacklist.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 150px; height: 200px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhuUwuUS7lSPFUYb292mYbvRH4FJQD_vTSuw_QvW5qaP0jvzGpyQn1TxaA9MER0MDL-8PBaVItocfHcovdrFFPA_bCDKLJrS4-i61o_GpCTOCPd7tiYekPoimuHrK5ZSblc3F4QNw/s200/blacklist.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5335397495554955458" /></a><br />After a discussion on the Snort-users mailing list last week regarding using standard Snort rules to implement Reputation-based IP blocking in Snort (and how badly the performance sucked) I decided to write some code to do it the "right way". The result is the "iplist" preprocessor, a module that supports IP Blacklisting and whitelisting via user-provided lists of known hostile IP addresses.<br /><br />The internals of the system use the Patricia Trie code from the Snort 3.0 code tree to provide the primary address lookup mechanism. Currently I'm only supporting IPv4 addresses although the P-Trie code supports IPv6 addressing too. <br /><br />This patch has been applied against Snort 2.8.4.1 only. I've tested<br />builds on OS X, Ubuntu and Fedora so far. It requires libdnet (or<br />dumbnet-dev for those of you on Debian-based distros) to build<br />properly. Check the README file that comes with it for instructions<br />on patching it into your codebase. 
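<br /><br />For intuition about what each lookup does under the hood: the P-trie answers the question "does this address fall inside any listed CIDR block?" efficiently across thousands of blocks. The single-block version of that masked comparison looks like this (an illustrative sketch of the concept only, not the Patricia trie code):<br /><br />

```c
#include <stdint.h>

/* Hypothetical sketch: report whether IPv4 address 'addr' falls
 * inside network 'net' with the given prefix length. Addresses are
 * host-byte-order uint32_t values. The real preprocessor answers
 * this with a Patricia trie so many blocks can be checked in a
 * single walk down the tree rather than one compare per block. */
int ip_in_cidr(uint32_t addr, uint32_t net, int prefixlen)
{
    uint32_t mask;

    if (prefixlen <= 0)
        return 1;               /* 0.0.0.0/0 matches everything */
    if (prefixlen >= 32)
        return addr == net;     /* host route: exact match */

    mask = 0xFFFFFFFFu << (32 - prefixlen);
    return (addr & mask) == (net & mask);
}
```

<br /><br />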
It supports inline blocking and alerting but not Flexresp-style TCP reset session shootdowns.<br /><br />Have a look and let me know what features you'd like or bugs you find.<br /><br />This code is purely EXPERIMENTAL; this is just me spending some of my spare time on a fun coding project, so if your machine sprouts legs and refuses to work until it receives part of the TARP bailout it's not my fault.<br /><br />Here's the link:<br /><br /><a href="http://www.snort.org/users/roesch/code/iplist.patch.tgz">http://www.snort.org/users/roesch/code/iplist.patch.tgz</a><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/open%20source" rel="tag">open source</a>, <a href="http://technorati.com/tag/sourcefire" rel="tag">sourcefire</a>, <a href="http://technorati.com/tag/tools" rel="tag">tools</a>, <a href="http://technorati.com/tag/snort" rel="tag">snort</a>, <a href="http://technorati.com/tag/cybersecurity" rel="tag">cybersecurity</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com12tag:blogger.com,1999:blog-31672472.post-77354615314175297612009-04-19T14:00:00.001-04:002009-04-19T14:00:01.449-04:00RSA 2009I'll be out at RSA in San Francisco all this week. I'm speaking at the America's Growth Capital conference tomorrow at 2:30 on "Snort and The Future of Real-Time Adaptive Network Security". 
Wednesday I'll be doing a Peer2Peer talk at RSA on "The Future of Snort", primarily focusing on Snort 3.0 and upcoming changes to snort.org.<br /><br />For those of you attending RSA I hope to meet you out there!<br />Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com0tag:blogger.com,1999:blog-31672472.post-5367186106908383272009-04-02T16:24:00.002-04:002009-04-02T16:26:36.679-04:00Snort 3.0 Beta 3 Released<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL0T4Eu4mtroIZ8XeoipZ2nBREBzpOL-r1S_yRYo-xzHbJFwjYE280T61MHckG-ezJUoGLrPvPGoh8_tiW3Q3PzyOzm5qy235dHNWB-HqRWbPep9uuxyMRm10KAG6weFdNBaeKOQ/s1600-h/snort_saved_my_bacon.gif"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 200px; height: 164px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiL0T4Eu4mtroIZ8XeoipZ2nBREBzpOL-r1S_yRYo-xzHbJFwjYE280T61MHckG-ezJUoGLrPvPGoh8_tiW3Q3PzyOzm5qy235dHNWB-HqRWbPep9uuxyMRm10KAG6weFdNBaeKOQ/s200/snort_saved_my_bacon.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5320193123952696354" /></a><br />It's been quite a while since the last Snort 3.0 beta and <a href="http://www.snort.org/dl/snortsp/">yesterday we released Beta 3</a>. The reason that it's taken so long to get out the door is that we decided to start doing performance analysis of the Snort 2.8.x analytic engine that was ported over to run on top of SnortSP and the results were... interesting.<br /><br />When I started developing and designing the Snort 3.0 architecture one of the assumptions that I based my design around was that multi-core computing environments were going to be the norm rather than the exception in the platforms we'd target moving forward. 
With that in mind and knowing the typical packet analysis load we place on machines with <a href="http://www.sourcefire.com">Sourcefire's</a> applications I was looking to utilize CPU cycles more efficiently by performing common processing that happened for every packet (acquire/decode/flow/stream/etc) once and then spreading the analysis (Snort/RNA/etc) across the cores in separate threads. This would allow us to perform parallel processing across analytic engines while only having to perform the common processing tasks they all have to do once.<br /><br /><img src="http://lh4.ggpht.com/_zfioUiQd7Ug/SdTkjXJO8SI/AAAAAAAAAbM/PCYoCORXXb4/Snort_3_Arch_Threaded.png?imgmax=800" alt="Snort_3_Arch_Threaded.png" border="0" width="586" height="449" /><br /><br />This is great in theory and seemed like we'd see some real performance gains but when we started looking at the Snort 2.8.3.x analytic engine that we put out in the original Beta release we saw that performance was not where we wanted it to be relative to the monolithic Snort 2.8.3.x architecture. Initially we thought that the performance discrepancies were due to inefficiencies that were a byproduct of porting the monolithic Snort 2 packet processing engine to run in a threaded environment due to the incorporation of things like thread-local storage (TLS). Lots and lots of analysis was done over several months, tons of optimizations were made by Russ Combs and Ron Dempster, Snort's run-time processing cycle was studied at length and at the end of the day the performance still wasn't where we wanted it.<br /><br />Eventually we arrived at the conclusion that the performance issues we were seeing were stemming from the way that modern Intel CPUs use and synchronize cache memory. 
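<br /><br />The small-scale version of the problem we kept hitting is classic false sharing: two cores writing different variables that happen to live in the same cache line force that line to bounce between the cores on every write. The usual mitigation is to pad per-thread state out to cache-line boundaries, as in this generic sketch (an illustration of the general technique, not SnortSP code; the 64-byte line size is an assumption typical of the Intel CPUs discussed):<br /><br />

```c
#include <stddef.h>

#define CACHE_LINE 64  /* assumed line size, typical for Intel CPUs */

/* Unpadded: two counters written by two different threads can land
 * in the same 64-byte cache line, so every increment forces the
 * line to ping-pong between cores (false sharing). */
struct counters_shared {
    unsigned long a;   /* written by thread on core 0 */
    unsigned long b;   /* written by thread on core 1 */
};

/* Padded: each counter owns a full cache line, so each core can
 * keep its line exclusively without coherency traffic. */
struct counters_padded {
    unsigned long a;
    char pad_a[CACHE_LINE - sizeof(unsigned long)];
    unsigned long b;
    char pad_b[CACHE_LINE - sizeof(unsigned long)];
};
```

Padding trades a little memory for a lot of coherency traffic, but as described below it only helps so much once traffic is being spread across cores on separate physical dies.<br /><br />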
If you'd like to see some more in-depth discussion of the Intel CPU caching architecture you should take a look at <a href="http://duartes.org/gustavo/blog/">Gustavo's site</a> and check out the <a href="http://duartes.org/gustavo/blog/post/getting-physical-with-memory">articles</a> on <a href="http://duartes.org/gustavo/blog/post/intel-cpu-caches">caching</a>. The performance hangup we ran into really manifested itself as we tried to distribute traffic across cores on multiple physical dies: the overhead incurred by the data transfers and cache coherency operations required by the cores was costing us lots of CPU cycles.<br /><br />What has become apparent from our analysis and extensive experimentation is that spreading data across the current generation of multi-core Intel CPUs does not work well for real-time applications like IPS. Intel CPUs really seem to favor the more traditional load balancing approach that's been used successfully with the Snort 2.x code base for years, where independent processes are locked to separate cores and flows are unique to each respective process. <br /><br />While we were exploring the performance envelope of the Snort 3 code base we looked at a number of different data distribution architectures to move data from the data source subsystem of SnortSP to the analytic engines (i.e. the Snort 2.8.3 engine module). A model that we've found to work well is what we've come to call the "stacks model". The stacks model works a lot like the Snort 2.x preprocessor subsystem but on a somewhat larger scale. Instead of running several analytic engines in separate threads with each thread locked to a CPU, the stacks model runs the engines in a single thread and calls them sequentially, passing the packet stream from engine to engine. 
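<br /><br />A minimal sketch of that sequential dispatch idea (hypothetical code with invented names, not SnortSP's actual API): engines register into an ordered array, and each packet is walked through every engine in one thread, so no cross-core hand-off of packet data ever occurs.<br /><br />

```c
#include <stddef.h>

/* Hypothetical sketch of the "stacks model": analytic engines are
 * registered in order and each packet is passed through them
 * sequentially in a single thread, instead of being fanned out to
 * engine threads pinned to other cores. */

typedef int (*engine_fn)(const unsigned char *pkt, size_t len);

#define MAX_ENGINES 8

struct stack {
    engine_fn engines[MAX_ENGINES];
    int count;
};

int stack_add(struct stack *s, engine_fn fn)
{
    if (s->count >= MAX_ENGINES)
        return -1;
    s->engines[s->count++] = fn;
    return 0;
}

/* Run every engine against one packet; returns how many ran. */
int stack_dispatch(struct stack *s, const unsigned char *pkt, size_t len)
{
    int i;
    for (i = 0; i < s->count; i++)
        s->engines[i](pkt, len);
    return s->count;
}

/* Two demo "engines" that just count invocations. */
int stack_demo_calls = 0;
int demo_engine_a(const unsigned char *pkt, size_t len)
{ (void)pkt; (void)len; stack_demo_calls++; return 0; }
int demo_engine_b(const unsigned char *pkt, size_t len)
{ (void)pkt; (void)len; stack_demo_calls++; return 0; }
```

The trade-off is exactly the one described here: the single thread carries the full analytic load, but the packet data stays hot in one core's cache.<br /><br />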
The stacks model is included in Beta 3 and is a compile-time option for running the system.<br /><br /><img src="http://lh3.ggpht.com/_zfioUiQd7Ug/SdTkfmGUDdI/AAAAAAAAAbI/xthcJKxoUfI/Snort_3_Arch_Stacked.png?imgmax=800" alt="Snort_3_Arch_Stacked.png" border="0" width="583" height="526" /><br /><br />One model that we haven't benchmarked extensively is what I'll call a "single core multithreaded model" where we run one thread per analytic engine but lock them all to the same CPU core, eliminating the cache coherency and sync overhead while paying the price of heavier loading on the individual CPU cores. This will be an area of further research down the road.<br /><br />We are planning on standing up a public CVS server in the near future to host the Snort 3 code so that we can foster better interaction with the open source community.<br /><br />The next big hurdles to get over with Snort 3 are development of the TCP stream management subsystem and the Snort 3.0 analytic engine module. Stay tuned for more (and more regular) releases as we get rolling on these subsystems.Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com3tag:blogger.com,1999:blog-31672472.post-67251873932722482002009-02-16T21:35:00.002-05:002009-02-16T21:44:53.789-05:00Saving the data on an iPhone in Recovery Mode<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhprjMWwtIQSc1_ktQHaOr591-nMSPoH_ic6VOsAESNQ1qrMf1l_nq4QqVqbD-6vKz9oyHZ1GKbbIqdeHFml4to2bcCj3XcqIk4zmzMetpnS0SM8VpWiMJ1dN8n_t8MzDAGeEJqig/s1600-h/ziphone_resize.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 134px; height: 200px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhprjMWwtIQSc1_ktQHaOr591-nMSPoH_ic6VOsAESNQ1qrMf1l_nq4QqVqbD-6vKz9oyHZ1GKbbIqdeHFml4to2bcCj3XcqIk4zmzMetpnS0SM8VpWiMJ1dN8n_t8MzDAGeEJqig/s200/ziphone_resize.jpg" border="0" 
alt=""id="BLOGGER_PHOTO_ID_5303591701665524834" /></a><br />I'm posting this because I couldn't find this information succinctly produced in any corner of the web and I wanted people who had the same problem to have a place to go to find answers.<br /><br />Here's the scenario. A loved one drops their 1st gen iPhone into a body of water (sink/tub/toilet/lake) and, effectively, bricks it. When it dries out enough to boot back up it goes into Recovery Mode and asks you to plug it into iTunes so it can be reimaged and reset to the factory defaults erasing all the data on the phone in the process. Normally this is no big deal, assuming of course the phone has been sync'd recently. <br /><br />What if it hasn't? What if there's precious data on there like pictures of your children that your wife took and loves dearly that she never backed up off the phone for whatever reason?<br /><br />You, my friend, have a problem.<br /><br />If you search the web for "iPhone recovery mode save data" or whatever you're going to get back a pile of results that basically say stuff like "take it to the Apple Store and let a Genius look at it". When you do that, they'll tell you you're hosed, but of course they'll be cool about it and you'll have the opportunity to buy some stuff while you wait in line to talk to them.<br /><br />You can also call a data recovery service. They can bust open your iPhone and try to extract the media from its memory for the low low price of ~$900+. Generally speaking, that's not going to be an option for almost anyone.<br /><br />Shouldn't a hacker have had this problem and solved it by now?<br /><br />As a matter of fact, at least one has. If you find yourself in this situation and you want to be all heroic and stuff then go get yourself <a href="http://download.ziphone.org/">ZiPhone</a>. Among the tricks it can perform on your iPhone is the extremely handy trick of taking it out of Recovery Mode and back to Normal Mode. 
Just launch ZiPhone and select the "Reboot in Normal Mode" option from the menu bar. Once you've done that you can just sync the phone as normal and save off all your data. Once you've done <em>that</em> you can then turn to your wife and receive your accolades and just desserts. <br /><br />Anyway, I hope someone finds this post more helpful than the usual pile of useless results Google returns on the topic.Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com44tag:blogger.com,1999:blog-31672472.post-50798942831938222632009-02-16T21:07:00.002-05:002009-02-16T21:43:58.386-05:00MacBook Pro and the slow-motion beachball of death crash<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxrfMKsqPS2YxiieYgtRUJBI23Er-_10qS4wA9N9yBXNMrOYV_sCqyzm5IX0Ychrc0I1GNXI_a3i_NQGrhbUBvpazykEDFl4tvjgpukkyamjOPbYoLrSdwWNzQ1lwmZH8QzW3WKA/s1600-h/beachball_of_death.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 200px; height: 191px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxrfMKsqPS2YxiieYgtRUJBI23Er-_10qS4wA9N9yBXNMrOYV_sCqyzm5IX0Ychrc0I1GNXI_a3i_NQGrhbUBvpazykEDFl4tvjgpukkyamjOPbYoLrSdwWNzQ1lwmZH8QzW3WKA/s200/beachball_of_death.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5303591532198394834" /></a><br />I've been using 15" Mac laptops since about 2002 and I love them dearly. As general purpose computing platforms for all of the various things I have to do (coding, presentations, communications, etc) they are by far the best blend of power and functionality I've ever had. Having said that, I've been pulling my hair out lately because my latest machine, a ~1 year old 2.4GHz machine with 4GB of RAM and a hand upgraded hard drive, has been crashing.<br /><br />Generally what's happening is the machine will operate fine for some period of time, several hours to several days. 
At some point I'll have 2-7 apps open and one of them, usually Firefox, will beachball. When this happens I can switch away to another app but then within 30 seconds or so that app will go down, then the menu bar becomes unresponsive, I can't launch any new apps and that's all she wrote. It's reboot time.<br /><br />This has been going on for about a month now and I've tried a variety of solutions in terms of running repairs and all that jazz with no luck. At this point I'm ready to throw in the towel. In addition to being a machine that I write code on this is also a machine that I use to do presentations in front of rooms full of people. A lockup during a presentation would be an unacceptable embarrassment so tomorrow I'll be taking this machine back to Sourcefire IT and getting a new MBP. Hopefully they can figure out what the problem is or send it to Apple and let them fix it.Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com13tag:blogger.com,1999:blog-31672472.post-22253859447227638612008-11-25T15:57:00.002-05:002008-11-25T15:58:29.393-05:00Daemonlogger 1.2.1 Released (and, oh yeah, 1.2.0)<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIaa9jxCLq8p9VRedCaDv4VPHbx304GzlEf9nhIZd1xrIwZoHfBNL5Dob7zyvHesJEQuofDmpo535mja6FXoPdvJoTG_7CwmQaHeLvXEeqoLezxS2FMkgKfDaySq02oxDIH2DrHA/s1600-h/daemon_logger_2.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;width: 200px; height: 172px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIaa9jxCLq8p9VRedCaDv4VPHbx304GzlEf9nhIZd1xrIwZoHfBNL5Dob7zyvHesJEQuofDmpo535mja6FXoPdvJoTG_7CwmQaHeLvXEeqoLezxS2FMkgKfDaySq02oxDIH2DrHA/s200/daemon_logger_2.png" border="0" alt=""id="BLOGGER_PHOTO_ID_5272702344917570034" /></a><br />Daemonlogger 1.2.1 is available at its <a href="http://www.snort.org/users/roesch/Site/Daemonlogger/Daemonlogger.html">usual place</a>. 
This release is a cleanup that allows compilation to work properly on systems which don't support the BSD TAILQ macros (like some Linux distributions). <br /><br />I also neglected to announce the release of Daemonlogger 1.2.0. That release changed the default pruning mechanism for ringbuffer mode back to the original default of pruning the oldest file in the logging directory and allows you to do per-run pruning by setting the -z switch at the command line. It also repaired a bug with size-based rollovers not working properly.<br /><br /><a href="http://www.snort.org/users/roesch/Site/Daemonlogger/Daemonlogger.html">Get Daemonlogger here.</a>Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com2tag:blogger.com,1999:blog-31672472.post-60550169180119391862008-09-12T12:38:00.002-04:002008-09-12T12:39:26.708-04:00And now, some astronomyWe had some beautiful clear skies here over Labor Day weekend so I took advantage of it and managed to get my telescope out 3 nights in a row to do some astrophotography. The results are my best yet but certainly just "decent" by advanced amateur standards these days. I'm getting better at it but I've still got a long way to go.<br /><br />Anyway, here are the pictures. For those of you interested in such things, the gear used to collect the photons was a <a href="http://www.telescopengineering.com/">TEC-140 Apochromatic Refractor</a>, <a href="http://www.sbig.com/sbwhtmls/st10.htm">SBIG ST-10XME</a> CCD camera, and an <a href="http://www.astro-physics.com/products/mounts/900gto/900gto.htm">Astro-Physics 900GTO</a> mount. 
I'm also using a <a href="http://www.starlightinstruments.com/index.php">Starlight Instruments</a> Digital Feather Touch focusing system.<br /><br />Enjoy!<br /><br /><embed type="application/x-shockwave-flash" src="http://picasaweb.google.com/s/c/bin/slideshow.swf" width="600" height="400" flashvars="host=picasaweb.google.com&RGB=0x000000&feed=http%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fapi%2Fuser%2Fmroesch0%2Falbumid%2F5245169229382919361%3Fkind%3Dphoto%26alt%3Drss" pluginspage="http://www.macromedia.com/go/getflashplayer"></embed>Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com2tag:blogger.com,1999:blog-31672472.post-67291689825384080292008-08-10T04:06:00.003-04:002008-08-10T04:08:49.923-04:00Snort 3.0 Architecture Series Part 3: The command shell<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcU7YgBMYKhGjazFAeFi00lb2igXvRgfauJEqNNxPPtCCQz-jewzVBcoDfFjfpQaS94ZSQ3KLCoSuZc-VSjMVXIGXCoISceS76YGF7y6MO46Z1amm5kd-SkyWNNlIPbhgJMFQfXA/s1600-h/snort_saved_my_bacon.gif"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcU7YgBMYKhGjazFAeFi00lb2igXvRgfauJEqNNxPPtCCQz-jewzVBcoDfFjfpQaS94ZSQ3KLCoSuZc-VSjMVXIGXCoISceS76YGF7y6MO46Z1amm5kd-SkyWNNlIPbhgJMFQfXA/s200/snort_saved_my_bacon.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5232797888193593570" /></a><br />One of the biggest user-facing changes in the Snort 3 architecture is the inclusion of a user shell interface to interact with the system. Up until now everything has been controlled strictly via the command line interface at startup time and via signals sent to the process. 
There are many reasons for going with a command shell that I detailed in the first part of this series but basically since Snort is designed to be able to run continuously now it was essential to have a way to interact with it.<br /><br />Upon starting SnortSP you are sent directly to the command shell prompt.<br /><br /><pre>[+] Loaded pcap DAQ<br />[+] Loaded file DAQ<br />[+] Loaded afpacket DAQ<br />[*] DAQ Modules Loaded...<br />[*] Loading decoder modules<br />[+] Loaded ethernet<br />[+] Loaded null<br />[+] Loaded arp<br />[+] Loaded ip<br />[+] Loaded tcp<br />[+] Loaded udp<br />[+] Loaded icmp<br />[+] Loaded icmp6<br />[+] Loaded gre<br />[+] Loaded mpls<br />[+] Loaded 8021q<br />[+] Loaded ipv6<br />[+] Loaded ppp<br />[+] Loaded pppoe<br />[+] Loaded gtp<br />[+] Loaded raw<br />[*] Decoder initialized...<br />[*] Flow manager initialized...<br />[*] Data source subsystem loaded<br />[*] Engine manager initialized<br />Control thread running - 3082939280 (18555)<br />[*] Loading command interface<br />[!] Loading SnortSP command metatable<br />[!] Loading data source command metatable<br />[!] Loading engine command metatable<br />[!] Loading output command metatable<br />[!] Loading analyzer command metatable<br />Executing etc/snort.lua<br /> ,,_ -*> SnortSP! <*-<br /> o" )~ Version 3.0.0b2 (Build 9) [BETA]<br /> '''' By Martin Roesch & The Snort Team: http://www.snort.org/team.html<br /> (C) Copyright 2008 Sourcefire Inc.<br />snort></pre><br /><br />If this is your first time running Snort you'll probably want to get help. Every subsystem in the Snort 3 architecture has its own help function available as do the engine modules, if you ever get lost working with a module just invoke its object help function. 
For example:<br /><br /><pre>snort> ssp.help()<br />[*] SnortSP Commands:<br /> help()<br /> set_log_level( [debug|info|notice|warn|error|critical] )<br /> shutdown()<br /> Available subsystems within SnortSP have their own help() methods:<br /> dsrc - Data Source<br /> eng - Dispatcher/Engine<br /> analyzer - Analytics Modules<br /> output - Output Modules<br /> For example: dsrc.help() will call the Data Source help function</pre><br /><br />As you can see, the top level module for SnortSP is called "ssp" and you can invoke its help function by calling "ssp.help()". If you want to find out the data source subsystem's available functions, simply call "dsrc.help()" and so on. <br /><br />One cool thing: if your system has Readline support available, the Lua interpreter will pick it up automatically and you'll have standard shell functionality within SnortSP such as command history and command line editing.<br /><br />Under the covers a few things are happening. When a command is invoked in SnortSP, a series of lookups is performed by the code in the src/platform/lua_interface.c file. Lua is really handy for wrapping C function calls; it's one of the initial reasons I went with it for Snort. 
Let's take a look at some of the simple functionality in the lua_interface.c file for the ssp.* commands.<br /><br /><pre><br />static int set_log_level_wrap(lua_State *L) {<br /> int level;<br /><br /> if (lua_isnumber(L, 1))<br /> {<br /> level = lua_tointeger(L, 1);<br /> }<br /> else<br /> {<br /> const char *name = (char *) luaL_checkstring(L, 1);<br /> if (name ==NULL) return 0;<br /><br /> if (strcasecmp(name, "debug") == 0) level = S_LOG_DEBUG;<br /> else if (strcasecmp(name, "info") == 0) level = S_LOG_INFO;<br /> else if (strcasecmp(name, "notice") == 0) level = S_LOG_NOTICE;<br /> else if (strcasecmp(name, "warn") == 0) level = S_LOG_WARN;<br /> else if (strcasecmp(name, "error") == 0) level = S_LOG_ERROR;<br /> else if (strcasecmp(name, "critical") == 0) level = S_LOG_CRITICAL;<br /> else return 0;<br /> }<br /> log_set_current_log_level(level);<br /> level = log_get_current_log_level();<br /> S_INFO("Log level set to %s", log_level_to_string(level));<br /> return 0;<br />}<br /><br />static int shutdown_wrap(lua_State *L) {<br /> stop_processing = 1;<br /> return 0;<br />}<br /><br />static int platform_help_wrap(lua_State *L) {<br /> printf("[*] "PLATFORM_NAME" Commands:\n"<br /> " help()\n"<br /> " set_log_level( [debug|info|notice|warn|error|critical] )\n"<br /> " shutdown()\n"<br /> " Available subsystems within "PLATFORM_NAME" have their own help() methods:\n"<br /> " dsrc - Data Source\n"<br /> " eng - Dispatcher/Engine\n"<br /> " analyzer - Analytics Modules\n"<br /> " output - Output Modules\n"<br /> " For example: dsrc.help() will call the Data Source help function\n");<br /> return 0;<br />}<br /><br />static void platform_set_info(lua_State *L) {<br /> lua_pushliteral (L, "_COPYRIGHT");<br /> lua_pushliteral (L, "Copyright (C) 2008 Sourcefire Inc.");<br /> lua_settable (L, -3);<br /> lua_pushliteral (L, "_DESCRIPTION");<br /> lua_pushliteral (L, "Network Intrusion Prevention System command interface");<br /> lua_settable (L, -3);<br /> 
lua_pushliteral (L, "_VERSION");<br /> lua_pushliteral (L, PLATFORM_NAME" 0.1");<br /> lua_settable (L, -3);<br />}<br /><br />static const struct luaL_reg platformlib[] = {<br /> {"help", platform_help_wrap},<br /> {"shutdown", shutdown_wrap},<br /> {"set_log_level", set_log_level_wrap},<br /> {NULL, NULL},<br />};<br /><br />static int platform_dir_create_meta (lua_State *L) {<br /> luaL_newmetatable (L, METATABLE);<br /> /* set its __gc field */<br /> lua_pushstring (L, "__gc");<br /> lua_settable (L, -2);<br /> return 1;<br />}<br /><br />static int luaopen_platform(lua_State *L) {<br /> platform_dir_create_meta(L);<br /> S_INFO("[!] Loading "PLATFORM_NAME" command metatable");<br /> luaL_openlib(L, PLATFORM_PROMPT, platformlib, 0);<br /> platform_set_info(L);<br /> return 1;<br />}</pre><br /><br />This code sets up a "metatable" in Lua which allows us to call functions from its root with dot notation. The call to bind the root name is a little hidden in this case, it's the luaL_openlib() call in the last function above. We ended up using the PLATFORM_PROMPT substitution due to our habit of renaming the whole project periodically in its earlier days....<br /><br />The array definition at the top of the code listing shows how the keywords are being mapped to the function calls themselves. When you put it all together, if you call "PLATFORM_PROMPT.help()" it maps that back to calling the platform_help_wrap() function. If you're familiar with the internals of Snort 1.x-2x you'll know that this mapping of keywords in the interpreter to function calls is very similar to how Snort's parser worked so it was a very comfortable transition for me. Add in the validation and scripting that you can get from lua and you can do some really cool stuff!<br /><br />Let's take a look at a sample lua function scripting the SnortSP architecture to demonstrate the power of this arrangement. 
One of the things I wanted to do with SnortSP in its early days was to test out the decoders to make sure there were no huge flaws in any of them that would lead to a crash; as you know, Snort's decoders are "critical infrastructure" that must be fast and crash free, since the rest of the system is counting on them. The function I wrote allows a user to point Snort at a directory of pcap files and process each one sequentially until every file in the directory has been processed. <br /><br />In the old days we would have had to start a new instance of Snort (2.x) for each file we wanted to process and take all the time to load and shut down. In SnortSP you could (in theory) load a detection configuration and then process each file, all without having to restart. The way the function is set up right now it just processes the file and prints the packet dump to the screen. Here's a listing.<br /><br /><pre>require "lfs" -- load filesystem functions<br /><br />function rundir (path, mask) -- args are path to pcap files, filename mask<br /> once = 0 -- initialize the data source once only<br /> for file in lfs.dir(path) do -- grab the path<br /> if string.match(file, mask) ~= nil then -- find files that match the mask<br /> print("Processing File "..path..'/'..file)<br /> if(once == 0) then -- set up the dsrc params once only<br /> once = 1<br /> dsrc1 = {name="src1",<br /> type="pcap", <br /> intf="file", <br /> filename=path..'/'..file,<br /> flags=1,<br /> snaplen=0,<br /> maxflows=16384,<br /> maxidle=10,<br /> flow_memcap=10000000,<br /> display="max"}<br /> dsrc.new(dsrc1) -- instantiate the dsrc once only<br /> eng.new("e1") -- instantiate the dispatcher once only<br /> eng.link({engine="e1", source="src1"}) -- link dsrc to dispatcher<br /> print("Starting engine")<br /> eng.start("e1") -- run it<br /> else<br /> eng.run_file("e1", path..'/'..file) -- just run the file now<br /> end<br /> end<br /> end<br /> if(once == 1) then -- process the last file too<br /> eng.run_file("e1", "")<br
/> end<br />end</pre><br /><br />The first thing happening here is the inclusion of a Lua library called "lfs". That's necessary for the filesystem interface; Lua doesn't ship with one natively. (I know.) After that comes the function definition, which takes two arguments: the path to the directory containing the pcap files and the filename mask used to select the right files from the directory. On the first iteration through the loop the config data structures and engine objects are instantiated and configured and the first file is processed; after that it just processes the rest of the files in the directory.<br /><br />This is pretty cool! Practically speaking, you can script the startup and shutdown of Snort and script the operation of all the different system objects as well. For example, you can make external system calls to look up runtime parameters like interface configuration or available memory and use that information to tune the engine's runtime configuration. You can also script tests of new software modules or even get into crazier stuff like automatically turning engine modules on and off at certain times of day, or pretty much anything else you can imagine.<br /><br />This is just a brief tour of the functionality of the command line; there's more that can be done! I'll leave it to interested readers to explore the SnortSP command shell, and I'd love to hear about interesting things you do with it.<br /><br />In the next part of the series I'll start writing about the data source subsystem and its components.Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com8tag:blogger.com,1999:blog-31672472.post-51112010394861025512008-08-07T16:45:00.002-04:002008-08-07T16:48:04.527-04:00Snort 3.0 Architecture Series Part 2: Changes and BetasThings have changed a bit in the Snort 3.0 world since my last post, so I thought I'd provide an update as a foundation for moving forward with this "series".
I promise it'll be more than one article!<br /><br />In Part 1 I discussed the architecture of the Snort 3.0 technology, and since then there have been some changes. The largest change has been organizational in nature. We've decided to name the core system framework apart from the overall project since you can do more than just Snort-style intrusion detection with it. So, as a result, from now on we'll be calling the software framework SnortSP (the Snort Security Platform) and the engines will be named separately. The overall architectural umbrella that this all lives under is still going to be called the "Snort 3 Architecture" and it will consist of different software components, chief among them SnortSP and the engine modules that utilize it.<br /><br />Here's a handy reference diagram:<br /><br /><div style="text-align:center;"><img src="http://lh6.ggpht.com/mroesch0/SJtd4NoOa3I/AAAAAAAAAE8/KqeDocK3nGE/SnortSP%20engine%20block%20diagram.jpg?imgmax=800" alt="SnortSP engine block diagram.jpg" border="0" width="541" height="652" /></div><br /><br />Ok, now that that's out of the way, let's talk about the beta. On June 30th we released the initial open source beta of SnortSP & the Snort 2.8.2 Engine. It's located at <a href="http://www.snort.org/dl/snortsp">http://www.snort.org/dl/snortsp</a>.
To date we have done three releases of the code base, with progressive versions nailing down loose ends and fixing compilation issues and the like.<br /><br />We would love any feedback people have on the betas. If you're a Snort fan you should definitely check it out and start getting your feet wet; this is the future of Snort!<br /><br />For my next post I'll be spending some time talking about the SnortSP command shell and some neat stuff you can do with it!<br /><br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/IDS" rel="tag">IDS</a>, <a href="http://technorati.com/tag/IPS" rel="tag">IPS</a>, <a href="http://technorati.com/tag/open%20source" rel="tag">open source</a>, <a href="http://technorati.com/tag/snort" rel="tag">snort</a>, <a href="http://technorati.com/tag/sourcefire" rel="tag">sourcefire</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com1tag:blogger.com,1999:blog-31672472.post-32959529918249653712008-08-07T14:09:00.004-04:002008-08-07T16:50:15.403-04:00Daemonlogger 1.1 Released!<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIT7VGhaKnGEKitaGfnYvgDAgpGFg_u_w4gUQYTO61R_EaVv4iyMKWwPJKQIMdWFycy10muslgfd_kRSpWz1e5mpC7UrCoTZ7gl1EaGPsAYeDtNNhCNqqt7hOsnZqLaeagDtZKQw/s1600-h/daemon_logger_2.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIT7VGhaKnGEKitaGfnYvgDAgpGFg_u_w4gUQYTO61R_EaVv4iyMKWwPJKQIMdWFycy10muslgfd_kRSpWz1e5mpC7UrCoTZ7gl1EaGPsAYeDtNNhCNqqt7hOsnZqLaeagDtZKQw/s200/daemon_logger_2.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5231840342452164642" /></a><br />Daemonlogger 1.1 is <a href="http://www.snort.org/users/roesch/code/daemonlogger-1.1.0.tar.gz">available</a> on my <a href="http://www.snort.org/users/roesch">personal
site</a> for those of you interested in packet logging and network tapping. New features include:<br /><br /><li>Rollover size command line shortcuts (e.g. "-s 1M" vs "-s 1048576")</li><br /><li>Disk utilization-based ringbuffer rollovers. For example, you can now tell Daemonlogger to write pcap files until the disk is 90% full and then "eat its tail" by deleting the oldest pcap file in the logging directory.</li><br /><br />I also fixed a bug, found by Wesley Shields, that caused Daemonlogger to prune files collected by previous runs of the software that were in its logging directory. This should also make it safe to have multiple instances of Daemonlogger write to the same logging directory without interfering with one another.<br /><br />Enjoy!<br /><br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/open%20source" rel="tag">open source</a>, <a href="http://technorati.com/tag/sourcefire" rel="tag">sourcefire</a>, <a href="http://technorati.com/tag/tools" rel="tag">tools</a>, <a href="http://technorati.com/tag/daemonlogger" rel="tag">daemonlogger</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com0tag:blogger.com,1999:blog-31672472.post-16120110416581174572008-03-26T12:04:00.003-04:002008-03-26T12:16:08.517-04:00@ CanSecWest<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh415s1TnmWUfE6N1dqomNPQ41YY2wF_WtGzpzR3efFmw4Wuho4yb4a6DJEm9_t5yLWoanr1aaLa48VyAEcCw4EOeRu6_j_edJFYKj-jeANuCl0IMqgCdtnsJqoMJZItjw4upLioQ/s1600-h/square_logo_cansec.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh415s1TnmWUfE6N1dqomNPQ41YY2wF_WtGzpzR3efFmw4Wuho4yb4a6DJEm9_t5yLWoanr1aaLa48VyAEcCw4EOeRu6_j_edJFYKj-jeANuCl0IMqgCdtnsJqoMJZItjw4upLioQ/s200/square_logo_cansec.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5182082411570199042" /></a><br />I'm at the <a href="http://www.cansecwest.com">CanSecWest</a> conference speaking this week. If you're up here in (not so sunny) Vancouver I'll see you at the Con!Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com2tag:blogger.com,1999:blog-31672472.post-58765588196644221172007-11-08T14:41:00.001-05:002007-11-08T14:57:15.470-05:00Snort 3.0 Architecture Series Part 1: Overview<a href="http://www.snort.org">Snort</a> 3.0 is the next generation Snort engine that is currently under active development at <a href="http://www.sourcefire.com">Sourcefire</a>. I have been acting as lead architect as well as a contributing developer on the project for many months now. As one of the people driving development of the system, I thought it would be worthwhile to start talking about what we're building because I know a lot of people are interested in learning more about this next generation Snort engine.<br /><br />Snort is 9 years old this month and has a lot of miles under its belt. It's one of the most widely deployed network security technologies in the world and is therefore one of the most highly exposed (in terms of live network packets processed) and well-tested IDP code bases available today. Snort 3.0 is a huge undertaking, but I feel it's a worthwhile effort to achieve some of the long-term goals that we have for the engine.
I believe that ultimately our users will benefit tremendously from the design of the new engine and that it will be a platform that will work well for at least the next 9 years.<br /><br />There are several goals associated with Snort 3.0's development:<br /><br />0) Rewrite the core frameworks for Snort from the ground up to clean out code base cruft and leverage external libraries where possible to contain the scope of the rewrite. This will also allow us to flush unused features and effectively reduce the size and complexity of the code base, making it easier to extend and ultimately providing the security benefits of a smaller code base.<br /><br />1) Build a "contextually aware engine", one that has the ability to understand what it's defending, built around the concept of network context. Network context is essentially data about the environment that is being defended by Snort: the composition of the hosts in the network as well as the composition of the local network itself. This is important in Snort 3 in order to:<br /><ul><li>Reduce/simplify/eliminate tuning as much as possible by leveraging network context.</li><br /> <li>Generate event priorities based on network context.</li><br /> <li>Address network and transport layer evasion by leveraging network context.</li></ul><br />2) Abstract and compartmentalize Snort's subsystems to make components "separable".<br /><ul><li>Compartmentalize common functionality that any network traffic analysis application would need to enable Snort to be a more effective platform for building arbitrary network traffic analyzers.</li></ul><br />3) Improve support for protocol encapsulation within the overall engine architecture to make handling things like enterprise/WAN protocols and IPv6 more natural.<br /><br />4) Add an interactive shell to the system so that it may be more fully orchestrated at runtime.<br /><br />5) Multithread the engine to take better advantage of multi-core platforms that are standard today.<br
/><ul><li>Make the engine parallelizable so that multiple analytics threads may run simultaneously on the same traffic.</li></ul><br />6) Normalize Snort's language so that it's easier to read and write.<br /><br /><div style="text-align:center;"><img src="http://lh5.google.com/mroesch0/RzNjC2GWidI/AAAAAAAAACk/teNRi8V-pNs/Snort3BlockDiagram.png?imgmax=800" alt="Snort3BlockDiagram.png" border="0" width="526" height="561" /></div><br /><br />As a result of these goals the engine architecture has a number of major discrete software components.<br /><br />1) Data Source. The Data Source component encapsulates common functionality required by any network traffic analyzer, functions that will have to be performed prior to running almost any analysis task. The data source incorporates a number of components including:<br /><ul><li>Data Acquisition (DAQ) - The DAQ provides an interface between the rest of the engine and the host OS packet facilities. This is where we get packets from the underlying hardware and where we talk to that hardware regarding the disposition of those packets. The DAQ subsystem allows Snort 3.0 to incorporate arbitrary external packet interfaces, including things like libpcap, IPQ and divert sockets.</li><br /><li>Decoder - The Decoder performs basically the same tasks it did in Snort 2.x: validate the packets, detect protocol anomalies and provide a referential structure for the rest of the program to operate upon.</li><br /><li>Flow Manager - The Flow Manager provides services for tracking conversations between endpoints on the network. In Snort 3.0 it also contains features for "fastpathing" traffic, allowing it to pass straight through the engine in the event that the analytics modules have decided they're no longer interested in a particular flow. Snort 3.0 also includes a mechanism called "flow slots" that subsystems can use to store stateful "flow local" information.
This is the place that things like flowbits will get stored in Snort 3.0.</li><br /><li>IP Defragmenter - This module provides services for putting IPv4 and IPv6 packets back together and will include mechanisms to allow for target-based fragment reassembly.</li><br /><li>TCP Stream Reassembler - As with the IP Defragmenter, provides target-based services for reassembling TCP segments into normalized streams and presenting them to the underlying analytics.</li><br /><li>Data Source API - An abstraction API between the facilities provided by the data source and the rest of the Snort 3.0 software framework. This API exists so that the rest of Snort 3.0 can work without caring whether the Data Source is implemented as hardware or software.</li></ul><br />2) Action System. The Action System handles event queuing, notification and logging when the system fires events. The supported output types in Snort 3.0 will be text (console), syslog and Unified 2, a serialized binary stream format.<br /><br />3) Attribute Management System (AMS). The AMS will store network contextual data about the operational environment being defended by a particular Snort instance. This subsystem will be addressable continuously at runtime and provide interactive interfaces to the command shell as well as to analytics modules that can leverage its data. The inclusion of the AMS is what will make the goals listed under item 1 above attainable.<br /><br />4) Analytics System. The Analytics System is where Snort detection engine threads will be located. The idea in Snort 3.0 is to put all detection logic in analytics modules that run as separate threads; all the other code exists to support the functions of the Analytics System. Multiple threads may operate on the data coming from a dispatcher instance simultaneously.
The Analytics System is structured so that all interaction between the analytics modules and the rest of the Snort 3.0 framework is brokered by an API called the "Snort Abstraction Layer" (SAL). Note that arbitrary functions may be performed by analytics modules in a given runtime instance. For example, the Sourcefire edition of this engine is going to include RNA's functionality as an analytics module running side-by-side with Snort 3.0.<br /><br />5) Dispatcher. The Dispatcher exists to coordinate information flow between the different components of Snort 3.0 and to manage traffic queuing and disposition across analytics threads. It also ties together all of the objects in a runtime instance of a Snort engine, uniting the data source, analytics, action system and attribute manager into a single manageable entity for the purposes of process and thread management from the command shell.<br /><br />6) Snortd and the command shell. Snortd is the daemon process that provides marshaling services for the objects that are instantiated in a particular framework instance. The command shell runs in a thread attached to snortd. It provides interactive object management services for the different software modules, runtime management of the process and its threads, health management and a full scripting language for Snort 3.0. The shell runs the <a href="http://www.lua.org">Lua</a> scripting language, a lightweight embeddable scripting language that is fast and portable as well as being very nice for implementing Domain Specific Languages. If Snort's parser wasn't one of your favorite features in the past you should definitely like this change! For those of you wondering if Snort 3.0 will handle Snort's existing rules language, of course it will.
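<br /><br />To make the object model above concrete, here's a rough Lua sketch of wiring a data source to an engine from the command shell. The dsrc/eng calls mirror the pcap-directory example posted elsewhere on this blog; every parameter value shown here is purely illustrative, not a reference configuration.<br /><br /><pre><br />-- Hypothetical SnortSP shell session; all parameter values are illustrative.<br />dsrc1 = {name="src1",          -- data source object name<br />         type="pcap",          -- packet acquisition via libpcap<br />         intf="eth0",          -- a live interface instead of a file<br />         snaplen=0,            -- whole-packet capture<br />         maxflows=16384,       -- flow manager sizing<br />         flow_memcap=10000000}<br />dsrc.new(dsrc1)                -- instantiate the data source<br />eng.new("e1")                  -- instantiate a dispatcher/engine<br />eng.link({engine="e1", source="src1"})  -- tie the source to the engine<br />eng.start("e1")                -- start the analytics running<br /></pre>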
We're not planning on throwing out 9 years of accumulated detection functionality!<br /><br />The analytics modules that are under development right now (that I can discuss) include a Snort 2.x detection engine implementation, RNA (for Sourcefire implementations) and a <a href="http://www.lua.org">Lua</a> traffic analysis module for users who are in environments where a scripting interface to traffic analysis would be very useful.<br /><br />Over the coming days and weeks I am planning to post a subsystem-by-subsystem design overview of the engine components so that users may familiarize themselves with the system as we prepare to release additional alpha code snapshots on our way to a Snort 3.0 beta!<br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/intrusion%20detection" rel="tag">intrusion detection</a>, <a href="http://technorati.com/tag/intrusion%20prevention" rel="tag">intrusion prevention</a>, <a href="http://technorati.com/tag/Open%20Source" rel="tag">Open Source</a>, <a href="http://technorati.com/tag/programming" rel="tag">programming</a>, <a href="http://technorati.com/tag/Snort%203.0" rel="tag">Snort 3.0</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com13tag:blogger.com,1999:blog-31672472.post-44079560569092377062007-11-05T21:24:00.001-05:002007-11-05T21:27:33.291-05:00Daemonlogger 1.0 released<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7gNAp4-sUvX7vzQwa7-xAiELQM1q7QT7KuPxWY46YP5brvA50NBPMpA11QmIeHjyONHBjwZoscy0Ct4AzdUYx0z1eykBZV7Pu0gRycj7riHAKV5FIozxNfIRoliL16Dtq8ow7BA/s1600-h/daemon_logger_2.png"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7gNAp4-sUvX7vzQwa7-xAiELQM1q7QT7KuPxWY46YP5brvA50NBPMpA11QmIeHjyONHBjwZoscy0Ct4AzdUYx0z1eykBZV7Pu0gRycj7riHAKV5FIozxNfIRoliL16Dtq8ow7BA/s200/daemon_logger_2.png" border="0" alt="" id="BLOGGER_PHOTO_ID_5129548334238943330" /></a><br /><a href="http://www.snort.org/users/roesch/Site/Daemonlogger/Daemonlogger.html">Daemonlogger 1.0</a> is available on my <a href="http://www.snort.org/users/roesch">user page</a> on snort.org. It's got a couple of new features but nothing major; if you're a Daemonlogger fan it's definitely worth a download!Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com6tag:blogger.com,1999:blog-31672472.post-74015886368990912262007-07-27T10:53:00.001-04:002007-07-27T10:55:36.807-04:00Heading to BlackHat<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXLYdva6BaKlkPSOuHjexAaflPdg4chyphenhyphen_UFnQa8HvIEjbexWxmutEnRpqqJNmUEaigaanN-oDftnlVn-VOn4BGujLGX5nU82wziFYhAc5RdKYPSjC1uEYOiqp66SaP_uiyR4YQ4w/s1600-h/bh.gif"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXLYdva6BaKlkPSOuHjexAaflPdg4chyphenhyphen_UFnQa8HvIEjbexWxmutEnRpqqJNmUEaigaanN-oDftnlVn-VOn4BGujLGX5nU82wziFYhAc5RdKYPSjC1uEYOiqp66SaP_uiyR4YQ4w/s200/bh.gif" border="0" alt="" id="BLOGGER_PHOTO_ID_5091890054984789090" /></a><br />I decided at the last minute to head out to <a href="http://www.blackhat.com">BlackHat</a>.
See you (who are going) there!<br /><br /><br /><br /><br /><br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/BlackHat" rel="tag">BlackHat</a>, <a href="http://technorati.com/tag/Travel" rel="tag">Travel</a>, <a href="http://technorati.com/tag/Conferences" rel="tag">Conferences</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com0tag:blogger.com,1999:blog-31672472.post-61994619273685306882007-07-23T16:53:00.001-04:002007-07-23T16:57:29.047-04:00Snort License Q&A<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBW4UtT4hjBjM0c-jV5cNMmoIE2kUBxx_s0aeaNIFpI72b6Vg3grltCGBHImHwqrYn3OYNpZh_I5wJB8OF71YyjytTy09CD4m4mr4n-yNb853SLHyOUc8d8Vi6LPNQ4TD6dTyfOw/s1600-h/questions.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBW4UtT4hjBjM0c-jV5cNMmoIE2kUBxx_s0aeaNIFpI72b6Vg3grltCGBHImHwqrYn3OYNpZh_I5wJB8OF71YyjytTy09CD4m4mr4n-yNb853SLHyOUc8d8Vi6LPNQ4TD6dTyfOw/s200/questions.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5090499331689442386" /></a><br />This is a repost of a message I sent out on snort-users last week. For the sake of continuity I thought it'd be a good idea to post it here too.<br /><br /><h3>General</h3><br /><br />Q1. Do these licensing updates change Sourcefire's commitment to open source?<br />A. No, Sourcefire remains committed to open source. Snort will always remain an open source product - period.<br /><br /><h3>Snort 2.x licensing questions</h3><br /><br />Q2. What are Sourcefire's issues with GPL v3?<br />A. Simply stated, similar to Linus Torvalds' stance: GPL v3 is not the license we chose. Without a complete legal review and opinion of the entire work we can't comment on the specifics. We want to complete due diligence on the license and make an informed decision.
We will publish our opinion when it's ready.<br /><br />Q3. What is the practical impact to end users of the GPL v2 lock?<br />A. None. The lock provides us time to review GPL v3 and make an informed decision. End users are free to use, modify and redistribute Snort under GPL v2.<br /><br />Q4. Is it within Sourcefire's right to change the language in the source code preamble comments to lock the license at version 2 of the GPL?<br />A. The new language that we incorporated for the 2.7.x release changes a notification provision that applies to the GPL, IT DID NOT CHANGE THE GPL. This is a permissible change because it's modifying the suggested language for header preambles in Snort 2.7.x, not the license itself. If you read the GPL you'll see that this language is suggested in the section that comes AFTER the Terms and Conditions of the license. The new language follows one of these suggestions and specifies which version we want our licensees to follow.<br /><br />Q5. Is Sourcefire addressing the concerns raised by Victor and Will from the Snort-inline project?<br />A. Yes, we made some mistakes and have corrected them. Today's release of 2.7 addresses the issues raised by Will and Victor. If you have concerns regarding the headers or copyrights on code that you've contributed, let us know and we'll take care of it.<br /><br />Q6. Do the GPL v2 derivative works clarifications used in the Snort 3.0-alpha code base apply to the 2.x releases of Snort?<br />A. No, these clarifications apply only to Snort 3.0.<br /><br />Q7. Does the "assumptive assignment" clause from Snort 3.0 apply to the 2.6/2.7 releases of Snort?<br />A. No, the assignment provisions in the Snort 3.0 license do not apply to past contributions. Sourcefire is in no way attempting to take ownership of the copyrights of past contributors.<br /><br /><h3>Snort 3.0 Licensing Questions</h3><br /><br />Q8. Will Snort 3.0 be licensed under GPL (currently v2 only)?<br />A. Yes.<br /><br />Q9.
Is Sourcefire claiming ownership of all contributed code?<br />A. No. The assignment clause in 3.0 will maintain your ownership of copyrights. It is simply a licensing agreement granting us the right to modify and relicense the code to 3rd parties.<br /><br />Q10. Does this apply to past contributions?<br />A. No. Snort 3.0 is a completely new code base that is entirely developed and copyrighted by Sourcefire. If we incorporate past contributions to the 2.x code base as work on the Snort 3.0 project continues, they will maintain their original copyright and license.<br /><br />Q11. What if I refuse to accept the terms of the assignment?<br />A. As we said, simply tell us the terms under which you're contributing code and we'll work with you to come to an agreement. If we can't, you're free to maintain it as an external patch under any license you wish.<br /><br />Q12. What is the practical effect of the derivative works clarifications?<br />A. For end users there are none. You are free to use and modify Snort as you do today. For anyone who modifies and redistributes Snort *and* adheres to the terms of the GPL, there are none. You may continue to modify and redistribute Snort as you do today.
The only impact is on organizations that redistribute Snort and fail to adhere to the terms of the GPL.<br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/Snort" rel="tag">Snort</a>, <a href="http://technorati.com/tag/Open%20Source" rel="tag">Open Source</a>, <a href="http://technorati.com/tag/GPL" rel="tag">GPL</a>, <a href="http://technorati.com/tag/licensing" rel="tag">licensing</a>, <a href="http://technorati.com/tag/Snort%203.0" rel="tag">Snort 3.0</a>, <a href="http://technorati.com/tag/intrusion%20detection" rel="tag">intrusion detection</a>, <a href="http://technorati.com/tag/intrusion%20prevention" rel="tag">intrusion prevention</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com0tag:blogger.com,1999:blog-31672472.post-8985237966141897852007-07-18T14:40:00.001-04:002007-07-18T14:43:03.994-04:00What's up with Snort licensingThere have been a lot of questions and speculation about the things we (Sourcefire) have been changing in Snort's licensing recently, and they need to be addressed so that we can clear the air.<br /><br />There are three things that people have been asking questions about or having issues with:<br /><br />1) The GPL v2 lock that we put in place on June 29th.<br />2) "Clarifications" in Snort's license language (Snort 3.0).<br />3) "Clarifications" with regard to assignments of ownership for contributed code (Snort 3.0).<br /><br />Let me address these issues in order.<br /><br />1) GPL v2 lock.<br /><br />Here's what happened. About 3 weeks ago I got a heads up that under GPL v2, a licensee can choose to use GPL v3 if we don’t specify what version of the GPL to use; conceivably we could have people forking and changing the license on us.
Seeing as GPL v3 didn't even "ship" until June 29th, we didn't feel we would be able to make any decision on the language contained in the new version until we'd had some time to perform a formal legal review. It also didn't help that they decided to release on the last day of the quarter. Another contributing factor to the decision for me was that Linus decided to keep the Linux kernel at GPL v2; that in itself was enough to get me to hit the pause button and take some serious time reviewing this new license before making any decision. Linus himself said "I'm not arguing against the GPLv3. I'm arguing that the GPLv3 is wrong for _me_, and it's not the license I ever chose." It's not the license we chose either and we're not moving to it without a conscious decision to do so.<br /><br />If we didn't want the code base moving to the new version, then what could we do? The simplest thing, given the time constraints we were working within, was to change the language in the source file header preambles (and not the license itself), noting that we were specifying Snort at GPL version 2 until we could make a solid and informed decision about how we wanted to treat GPL v3.<br /><br />For those of you with wholly contributed source files where the file headers were changed, many (most/all?) of them referred to "the program" as being under an indistinct version number (not just your source files), and so rather than try to track everyone down in the time frame we had to work with, *I* made a unilateral decision to just move forward with it and we'd clean up the mess afterwards. I'm sorry for the "bull in the china shop" routine, but we felt we needed to have this language out there before GPL v3 shipped at noon EDT on June 29th. Clearly some mistakes were made; obviously we shouldn't have changed things like the BSD license on the strl* files and so on, and we'll fix that too. As Victor observed, this was done in something of a hurry.
BTW, we didn't try to "slip it out on a Friday" per the note on some blog; Friday was the deadline and we had to move.<br /><br />Where do we go from here? We're going to examine the language in the new license and decide if we want to move forward with it. This is going to take a while, but we'll make an announcement when we make the final decision. For those of you with wholly authored source files who would like the language for your files changed back to the original (with the provision that the language reflect that you're referring just to your file and not the entirety of the program), just let us (me) know and send us the verbiage you want and we'll make the change. For those of you who object to this sort of thing altogether and would like to maintain your code as an external patch set for Snort instead of in the main source tree, give us the heads up and we'll pull your code from the source trees. Once again, this is with the provision that we may reimplement the capabilities that your code offers as Sourcefire-authored code if it happens to be something that we consider important to the project.<br /><br />If anyone has any other input I'd be happy to hear it. Contrary to what several groups with vested interests seem to be promoting, Sourcefire isn't interested in closing Snort's source code or making this a closed-source project. The community continues to be important to us and we have no plans on that ever changing.<br /><br />2) Snort 3.0 "clarifications" and the GPL<br /><br />There has been a fair amount of opinion put forth by people in the blogging world that Snort 3.0 will no longer be "open source" due to the clarifications that we put in place. This is just plain wrong.<br /><br />Sourcefire produces Snort as an open source project.
My interest, as the guy who started this whole thing and who has worked on and advanced this project for closing in on 9 years now, has always been in how good we can make the technology and how well we can serve the needs of the community. Now that Snort has my company behind it, the priorities really haven't changed, but there's an interesting dynamic out there with companies that are using Snort as a part of their product or service offering. Many of them seem to expect us to work on this technology and improve it continuously so that their offering is cutting edge, but contribute nothing to the project and complain bitterly whenever we do something that might cost them some money to continue to use a best-of-breed technology like this.<br /><br />It's Free as in "Free Speech", not Free as in "Free Money", people! Companies that use Snort as part of a service or product seem to be having a tough time accepting this. The goal of the new licensing language is to define the conditions under which we consider something built on or around Snort to be a derivative work subject to the stipulations of the GPL (i.e. putting the derivative code under the GPL license). Despite all the gnashing of teeth that has resulted from this clarification, what we've really done is take about the most "open" stance you can with a GPL project and put it out there; true open source champions should be applauding us for our position.<br /><br />That didn't happen. Instead we've gotten a litany of grousing from the bloggerati, primarily because we've offered a commercial license for people who don't want to play by the rules of the GPL in their product and service offerings that will (*gasp*!) cost money. If you're licensing technology from Sourcefire (which all of you using the GPL version of Snort are doing) and you don't wish to live under the terms of that license, we're giving you another one to choose from.
If you don't like having world-class security technology available for a fee because it affects your cost structure, that's not my problem. If you want to use it for free then you have to live by the license, but people always seem to interpret the GPL in ways that are optimally advantageous to them (if they don't just take the code directly and bury it in their product). The clarifications we put into Snort 3 are there to get us all on the same page and to make sure that commercial users of the technology understand that we're not a "venture technology" company, giving them technology for free to enable their business models which frequently compete against us in some regard. There's nothing wrong with using Snort as a part of your commercial offering as long as you adhere to its license. If you can't do that then we need to talk.<br /><br />At the same time we've taken many measures to ensure that the end users of the technology are unaffected. Want to integrate Snort or part of Snort into your open source project? No problem, it's free. Want to deploy 100 home-made Snort sensors in your non-profit/enterprise/government organization? Go for it. Want to learn how these systems work at the code level? No problem, it's all there. Want transparency of your security technology and the content that drives it? It's all there, as it should be. Want to have access to the internals to extend or correct or add your own value to the project or just your operational environment? All part of the open source concept, make it happen. Want to fork and make your own IPS project built on the code-base? 
You can do that, just make sure you understand what you're doing in maintaining proper licensing for the forked project and respect our IP.<br /><br />I personally have *always* been the biggest advocate for the users of Snort since the day this company was formed and I always will be.<br /><br />3) Snort 3.0 and IP assignments<br /><br />This is the most controversial provision of the clarifications that we put into the Snort 3.0 license. Basically what it says is:<br /><br />* By sending these changes to Sourcefire or one of the Sourcefire-moderated<br />* mailing lists or forums, you are granting to Sourcefire, Inc. the unlimited,<br />* perpetual, non-exclusive right to reuse, modify, and/or relicense the code.<br />* Snort will always be available Open Source, but this is important<br />* because the inability to relicense code has caused devastating problems for<br />* other Free Software projects (such as KDE and NASM). We also occasionally<br />* relicense the code to third parties as discussed above. If you wish to<br />* specify special license conditions of your contributions, just say so when<br />* you send them.<br /><br />So what does that mean? If you send a patch to the mailing lists or to Sourcefire, or otherwise contribute code to the Snort project, we consider that code and its IP to be "assigned" to us. The reason for doing this should be pretty clear: we don't feel that contributing a 3-line patch to a 200k+ LOC codebase means that the contributor has copyright claims over Snort at that point. In the early years there were many people who contributed (in any way) to Snort but over the years since Sourcefire was incorporated the total contribution by these external contributors has decreased substantially. Since then, Sourcefire has developed more and more of the code, especially the core functionality of the detection engine and preprocessors, not to mention tons of the rules as well. 
I have felt for a long time that we need to have a sense of proportionality about this and we should also have the ability to be flexible with the code base in terms of licensing without needing to approach every contributor individually to get sign-off on any changes that we make. That's why we've put this provision into Snort 3.0.<br /><br />This "assumptive assignment" is exactly what projects like Nmap use. Perhaps we should take the next step and use the FSF's model where contributors to projects like GCC need to sign a legal document explicitly to contribute to the project. The FSF does this because they need to have flexibility but also because they need to get out from under any potential problems that may occur due to someone inappropriately contributing IP from a 3rd party. I don't like that concept because of the overhead associated with interacting with the project; Snort's not a huge project like GCC so I've liked that people can contribute as they see fit. The FSF does take one additional step: they guarantee that the projects that people make assignments to will be available as open source projects in perpetuity. I think that maybe we need to make a statement like that but quite frankly it's always been our position that Snort will always be available as Free Software and we have no intention to change our position ever.<br /><br />I think that the part of this provision that people have had the most trouble with is that we also retain the right to relicense the contributed code under alternative licenses. We have to be able to do that if we're going to offer alternative licenses to Snort; maintaining a "patch free" code branch and a "patch tainted" branch doesn't make any sense to me and probably not to you either. The assignment doesn't mean we're going to "steal" your code and "disappear" it CIA-style. It means that we need to be able to retain the right to offer it under our commercial license. 
The code you contribute will always be available to you and everyone else in the open source code base; we're not going to steal it but we are going to make it available to our commercial users. If you've got a problem with this, don't contribute the code to us; maintain it as an external patch.<br /><br />That's about it. I'm sorry we haven't been as communicative with the OSS community as we probably should be. I personally have a lot of demands on my time, and as the person at SF who's most familiar with the totality of the Snort project I have a lot of input into the process here; I'm also fairly parochial when it comes to communicating concepts like this to the user community. In the future I'll try to be more forthcoming with all of you and I hope you'll continue to be patient with both me and Sourcefire; our hearts really are in the right place with the users of this technology but we also have to be pragmatic about how all of this is going to work given all of the commercial use that Snort sees.<br /><br />We're trying to be pragmatic about these issues, and I hope that people can feel comfortable with the direction that we're taking things. 
I look forward to reading people's responses.<br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/Snort" rel="tag">Snort</a>, <a href="http://technorati.com/tag/Open%20Source" rel="tag">Open Source</a>, <a href="http://technorati.com/tag/GPL" rel="tag">GPL</a>, <a href="http://technorati.com/tag/licensing" rel="tag">licensing</a>, <a href="http://technorati.com/tag/Snort%203.0" rel="tag">Snort 3.0</a>, <a href="http://technorati.com/tag/intrusion%20detection" rel="tag">intrusion detection</a>, <a href="http://technorati.com/tag/intrusion%20prevention" rel="tag">intrusion prevention</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com3tag:blogger.com,1999:blog-31672472.post-22124076576791867592007-05-10T17:19:00.001-04:002007-05-10T17:23:02.186-04:00Snort 3.0 licensing<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5P5UijFGXAxOSCryKq9QIR7tns4gaOJ0qolLpDSPZ5bO80OSCMxZP6BZZLhsQExRd6xvwSm-tS-x5YIjXR2rcbv3GLe6X3LBwC_hwVgENfKLaKvJ70qFbKYiOMinmpuZBNXLl2A/s1600-h/chris_matthews-1.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5P5UijFGXAxOSCryKq9QIR7tns4gaOJ0qolLpDSPZ5bO80OSCMxZP6BZZLhsQExRd6xvwSm-tS-x5YIjXR2rcbv3GLe6X3LBwC_hwVgENfKLaKvJ70qFbKYiOMinmpuZBNXLl2A/s320/chris_matthews-1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5063045414358195618" /></a><br />I've been hearing quite a bit of punditry out there regarding Snort 3.0's licensing language and lots of commentary on What It All Means, so I guess it's time for me to clear the air.<br /><br />If you want to know what Snort 3.0's licensing language is going to be, try reading it. It's available in the first Snort 3.0 pre-alpha release I did last month and we're using the GPL. 
Apparently it was hard to locate because it was in a file called COPYING instead of one called LICENSE. The origin of naming the license file COPYING comes from the FSF as I recall and is typical of most GPL projects. Anyway, to avoid further confusion (and so I can tell people to look at my blog if it comes up!) I'll post the preamble that we added to the COPYING file before the GPL license language in Snort 3.0 right here:<br /><br /><pre><br />/*************** IMPORTANT SNORT LICENSE TERMS ****************************** <br />* <br />* The Snort Network Traffic Analysis Platform ("Snort") software is the <br />* copyrighted work of Sourcefire, Inc. (C) 2007 Sourcefire, Inc. All Rights <br />* Reserved. This program is free software; you may use, redistribute and/or <br />* modify this software only under the terms and conditions of the GNU General<br />* Public License as published by the Free Software Foundation; Version 2 with<br />* the clarifications and exceptions described below. If you wish to embed this <br />* Snort technology into proprietary software, we sell alternative licenses <br />* (contact snort-license@sourcefire.com). <br />* <br />* Note that the GPL requires that any work that contains or is derived from<br />* any GPL licensed work also must be distributed under the GPL. However,<br />* there exists no definition of what is a "derived work." 
To avoid<br />* misunderstandings, we consider an application to constitute a "derivative<br />* work" for the purpose of this license if it does any of the following: <br />* - Integrates source code from Snort.<br />* - Includes Snort copyrighted data files.<br />* - Integrates/includes/aggregates Snort into a proprietary executable<br />* installer, such as those produced by InstallShield.<br />* - Links to a library or executes a program that does any of the above where<br />* the linked output is not available under the GPL.<br />* <br />* The term "Snort" should be taken to also include any portions or<br />* derived works of Snort. This list is not exclusive, but is just<br />* meant to clarify our interpretation of derived works with some common<br />* examples. These restrictions only apply when you actually redistribute<br />* Snort. For example, nothing stops you from writing and selling a<br />* proprietary front-end to Snort. Just distribute it by itself, and<br />* point people to http://www.snort.org/ to download Snort.<br />* <br />* We don't consider these to be added restrictions on top of the GPL, but just<br />* a clarification of how we interpret "derived works" as it applies to our<br />* GPL-licensed Snort product. This is similar to the way Linus Torvalds has<br />* announced his interpretation of how "derived works" applies to Linux kernel<br />* modules. Our interpretation refers only to Snort - we don't speak<br />* for any other GPL products.<br />* <br />* If you have any questions about the GPL licensing restrictions on using<br />* Snort in non-GPL works, we would be happy to help. As mentioned<br />* above, we also offer an alternative license to integrate Snort into<br />* proprietary applications and appliances. These contracts can generally<br />* include a perpetual license as well as providing for priority support and<br />* updates as well as helping to fund the continued development of Snort<br />* technology. 
Please email snort-license@sourcefire.com for further<br />* information.<br />* <br />* If you received these files with a written license agreement or contract<br />* stating terms other than the terms above, then that alternative license<br />* agreement takes precedence over these comments.<br />* <br />* Source is provided to this software because we believe users have a right to<br />* know exactly what a program is going to do before they run it. This also<br />* allows you to audit the software for security holes.<br />* <br />* Source code also allows you to port Snort to new platforms, fix bugs,<br />* and add new features. You are highly encouraged to send your changes to<br />* roesch@sourcefire.com for possible incorporation into the main distribution.<br />* By sending these changes to Sourcefire or one of the Sourcefire-moderated<br />* mailing lists or forums, you are granting to Sourcefire, Inc. the unlimited,<br />* perpetual, non-exclusive right to reuse, modify, and/or relicense the code.<br />* Snort will always be available Open Source, but this is important <br />* because the inability to relicense code has caused devastating problems for<br />* other Free Software projects (such as KDE and NASM). We also occasionally<br />* relicense the code to third parties as discussed above. If you wish to<br />* specify special license conditions of your contributions, just say so when<br />* you send them. <br />* <br />* This program is distributed in the hope that it will be useful, but WITHOUT<br />* ANY WARRANTY; including without limitation any implied warranty of <br />* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General<br />* Public License for more details at http://www.gnu.org/copyleft/gpl.html, <br />* or in the COPYING file included with Snort. <br />* <br />*/<br /></pre><br /><br />There you go.<br /><br />Why did we add this preamble? 
The GPL license is vague in a number of ways as to what constitutes a "derivative product" and there are lots of confused vendors out there who, one way or another, "misinterpret" this language in ways that are very beneficial to themselves. At the same time those vendors rarely, if ever, actually contribute anything to the projects that they use. While it's not a stipulation of the GPL that you must contribute back to the projects that you use as core technologies in your products, it is a stipulation that you have to hew to the license language. That being the case, we took a cue from Nmap and decided to add the preamble to the license language to provide clarity for users as to what we believe constitutes a derivative product so that there's as little confusion as possible. If you're an end user and you're using Snort as your IDS/IPS technology, this has no effect on you. If you're a commercial company that's using Snort as part of a product offering in such a way that you've breached the terms of the GPL license, you have two choices. You can distribute the source code for your product under the GPL or you can seek an alternative license from Sourcefire. <br /><br />As I said before, the template that we used for this language comes from Nmap, one of the most popular and widespread open source security applications on the internet today. As I have said before in many places, Snort 3.0 is open source technology and is distributed under the GPL. Nothing has changed from the Snort 2.x series except for the clarifications to the license and the option to seek an alternate license from Sourcefire. As with Snort 2.x, technology integrators that don't violate the terms of the license can continue as they have before.<br /><br />Regarding forking the code base, that's always an option if you don't like the direction that the project is taking but if the goal is license evasion then you're probably going to be disappointed. 
When you fork a GPL project you can't change the license on the forked code base unless you replace every line of code from the original code base with new code belonging to the group maintaining the fork. But we all know that the purpose of the parties who are discussing a fork doesn't have anything to do with license evasion, right?<br /><br />If you don't like Snort 3.0's license language you can keep using Snort 2.x, you can use one of the other free IDS/IPS engine technologies out there or you can write your own. It's a pretty straightforward process to build one of these things, I did it in my spare time...<br /><br /><br /><!-- Technorati Tags Start --><br /><p>Technorati Tags:<br /><a href="http://technorati.com/tag/Snort" rel="tag">Snort</a>, <a href="http://technorati.com/tag/Open%20Source" rel="tag">Open Source</a>, <a href="http://technorati.com/tag/GPL" rel="tag">GPL</a>, <a href="http://technorati.com/tag/licensing" rel="tag">licensing</a>, <a href="http://technorati.com/tag/Snort%203.0" rel="tag">Snort 3.0</a>, <a href="http://technorati.com/tag/intrusion%20detection" rel="tag">intrusion detection</a>, <a href="http://technorati.com/tag/intrusion%20prevention" rel="tag">intrusion prevention</a><br /></p><br /><!-- Technorati Tags End -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com2tag:blogger.com,1999:blog-31672472.post-88842040892218075932007-01-29T13:24:00.001-05:002007-01-29T13:26:03.499-05:00Thoughts on Alerts<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguKJJ6M6raFwQVZ-zny3sshUTiI-vZSABUJO7NxrxBDH4fRn935ecXc0rXlgOx-fKeZcVnRIgEKUd-CcXQ4fo-6I3VzP4TFkBpqxMgQDRhFHtY-aLyJ_klVIPZsdaO1lSyyk_fIg/s1600-h/logs.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguKJJ6M6raFwQVZ-zny3sshUTiI-vZSABUJO7NxrxBDH4fRn935ecXc0rXlgOx-fKeZcVnRIgEKUd-CcXQ4fo-6I3VzP4TFkBpqxMgQDRhFHtY-aLyJ_klVIPZsdaO1lSyyk_fIg/s320/logs.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5025520349958311250" /></a><br />I've been thinking about my previous post regarding NSM methods and the "log everything" mentality that I believe is unworkable in medium to large environments. Given that I'm a guy who doesn't like to give people "it's impossible" for an answer and I don't like "unsolved" problems, I've been thinking about some of the other things that could be put into events that would make them more useful for NSM-style incident analysis. My thinking on this topic was further bolstered by <a href="http://www.taosecurity.com/">Bejtlich's</a> <a href="http://taosecurity.blogspot.com/2007/01/my-investigative-process-using-nsm.html">recent post</a> on his NSM process.<br /><br />Given that "alertocentrism" is a Bad Thing, what are some of the other things we could do with an engine like Snort that could add value to the events that it generates? I'm not going to recommend logging everything, although you certainly could do that pretty easily. I noticed from the post referenced above that flow analysis seems to constitute a large portion of the time that is spent performing NSM. 
Given that Snort 2.x (and 3.x) already have the ability to log flow information (albeit somewhat limited in stream4), what are the things that we could do to improve alerts?<br /><br />A Snort unified alert typically contains the following information:<br /><br /><ul><li>An Event structure containing<ul><li>generator ID</li><li>Snort ID</li><li>Snort ID revision number</li><li>classification ID</li><li>priority</li><li>event ID</li><li>event reference</li><li>event reference time</li></ul></li><li>Event packet information containing<ul><li>packet timestamp</li><li>source IP</li><li>destination IP</li><li>source port/ICMP code</li><li>destination port/ICMP type</li><li>protocol number</li><li>event flags</li></ul></li></ul>Additionally, flow records from Snort (stream4) look like this:<br /><ul><li>start time</li><li>end time</li><li>server (responder) IP</li><li>client (initiator) IP</li><li>server port</li><li>client port</li><li>server packet count</li><li>client packet count</li><li>server byte count</li><li>client byte count</li></ul>I've been thinking that one thing that could be done pretty easily and would add some value would be to add "point-in-time" flow summary data to Snort events. The idea is to attach the data for the flow that the event occurred on to the event data itself. 
Something like this:<br /><ul><li>Event structure (as above)</li><li>Event packet info (as above)</li><li>"Flow point" information including<ul><li>flow start time</li><li>last packet time</li><li>initiator packet count</li><li>initiator bytes</li><li>responder packet count</li><li>responder bytes</li><li>initiator TCP flag aggregate (if any)</li><li>responder TCP flag aggregate</li><li>last packet originator (initiator/responder)</li><li>alerts on flow (count)</li><li>flow flags (bitmap)</li></ul></li></ul>I think that this kind of information could certainly be useful for putting an event into context within its flow: the analyst could see if there has been bidirectional interaction prior to the event, get a sense for the number of alerts on the flow prior to the current event, etc.<br /><br />There are some other things that could be done along with this. I think that adding in flow point data along with things like post-event packet logging would probably be more useful than what we have today. I know post-event logging is not what you want in a full-blown NSM context but it certainly helps to constrain the data management issue associated with logging every packet and it's better than nothing. 
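To make the idea concrete, here's a rough sketch of how a "flow point" record could be laid out in C; the field names and widths are my own illustrative guesses, not part of any actual Snort unified format:

```c
#include <stdint.h>

/* Hypothetical layout for the proposed "flow point" data that would ride
 * along with a unified event. Field names and widths are illustrative
 * guesses only; this is not a released Snort unified structure. */
typedef struct _FlowPoint
{
    uint32_t flow_start_sec;      /* flow start time (epoch seconds)      */
    uint32_t last_pkt_sec;        /* time of the last packet on the flow  */
    uint32_t initiator_pkts;      /* packets sent by the flow initiator   */
    uint32_t initiator_bytes;     /* bytes sent by the flow initiator     */
    uint32_t responder_pkts;      /* packets sent by the responder        */
    uint32_t responder_bytes;     /* bytes sent by the responder          */
    uint8_t  initiator_tcp_flags; /* OR'd TCP flags seen from initiator   */
    uint8_t  responder_tcp_flags; /* OR'd TCP flags seen from responder   */
    uint8_t  last_pkt_origin;     /* 0 = initiator, 1 = responder         */
    uint8_t  unused;              /* pad to a 4-byte boundary             */
    uint32_t alert_count;         /* alerts generated on this flow so far */
    uint32_t flow_flags;          /* bitmap of flow state flags           */
} FlowPoint;
```

A consumer of unified output could then decode a blob like this right after the event and packet records and get the flow context without making a separate flow query.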
I suppose we could also add things like persistent logging to the system as an option (thinking more in the Snort 3.0 timeframe) to allow continuous logging of selected packet traffic; of course this is a DoS waiting to happen so it'd have to be turned off by default and have some pretty serious constraint logic associated with it (in terms of port/protocol/IP filtering).<br /><br />I'm going to think about this more. Do any NSM-heads have thoughts on the topic?<br /><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/alerts" rel="tag">alerts</a>, <a href="http://www.technorati.com/tag/NSM" rel="tag">NSM</a>, <a href="http://www.technorati.com/tag/programming" rel="tag">programming</a>, <a href="http://www.technorati.com/tag/Snort" rel="tag">Snort</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com1tag:blogger.com,1999:blog-31672472.post-1168975099574702192007-01-16T14:18:00.001-05:002007-01-16T14:19:18.722-05:0010 Pounds of Packets in a 5 Pound Bag<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFqtgOPxI0ASQcG4zjMrvuXF3Zn83n5R267HaQ7mNbN_AAyN5PtxVd4-k2po8JF6vMC3LL2qHBS6frSKUzhlxQsGihsDqcqZIotcRgYf3vXRjbOMFeg1YlZbwtKkhhrjaXD-MAeA/s1600-h/packets.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFqtgOPxI0ASQcG4zjMrvuXF3Zn83n5R267HaQ7mNbN_AAyN5PtxVd4-k2po8JF6vMC3LL2qHBS6frSKUzhlxQsGihsDqcqZIotcRgYf3vXRjbOMFeg1YlZbwtKkhhrjaXD-MAeA/s320/packets.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5020710037169347490" /></a><br /><a href="http://www.taosecurity.com/" title="TaoSecurity">Richard Bejtlich</a> has been <a href="http://taosecurity.blogspot.com/2007/01/hawke-vs-machine.html" title="TaoSecurity">talking</a> <a href="http://taosecurity.blogspot.com/2007/01/and-another-thing-more-nsm-thoughts.html" 
title="TaoSecurity">a</a> <a href="http://taosecurity.blogspot.com/2007/01/operational-traffic-intelligence-system.html" title="TaoSecurity">lot</a> about the difference between Network Security Monitoring (NSM) and "alert-centric" technologies like Snort. His basic premise is that "real" NSM requires more than just IDP alerts and packet logs; it requires event notifications, full packet logs of the entire network and flow data as well. He also quotes me as saying "Richard, I wrote Snort so you don't have to look at packets". This isn't quite right. I think what I actually said was "Did you think about how much data you're going to record if you do that on a high speed network? We wrote IDS so that we wouldn't have to record everything."<br /><br />I get it, I understand what the NSM guys are saying and I really don't disagree with them at all. The problem I have is that if you try to deploy this concept in a large network environment with lots and lots of sensors, you've got some big problems to overcome. Let's look at the problems.<br /><br />1) Flow aggregation - As I see from Richard's latest post on Cisco's MARS product, he wants the raw flow data, not just statistical NetFlow rollups. RNA does that already, as can Snort with the right options turned on. This works fine as long as the network environment is relatively small and you don't try to roll up all of the data for post processing and analysis. If you do aggregate it to a central collector, then you've got the multiplication problem on the aggregation link(s), namely the more traffic the sensors see and the more sensors you have the more they pump up to the collector and have to push into the database that you're using to be able to manage all this information. If you're in an environment where you're aggregating more than a few million flows per hour, that's a ton of data to manage if you figure ~40 bytes per flow record (in binary format). 
That's 200MB of data just for flow data for an hour, almost 5GB per day. 150GB per month. Those also have to get blasted into a database so they can be worked with, so you're going to be translating all that data into SQL insert statements and then pumping it into the local database on your aggregation machine or across the network to a database server (or cluster). That's a lot of processing, a lot of network bandwidth and a lot of disk, not to mention a lot of RAM to maintain the indices in memory for the database. It's not that this isn't doable, but now we're talking about offloading the work across multiple machines at a minimum and that's going to increase your costs dramatically. Overall this isn't a huge problem (the NetFlow analysis/NBA guys do it for a living) but it is a big one in any large enterprise; it takes a lot of work to scale technology to work with it effectively.<br /><br />2) Traffic aggregation - If you thought the flow aggregation problem was fun then start logging all the traffic on your network. Let's take a fairly well utilized modern enterprise network backbone running at a sustained 500Mbps; that's 62.5MBps of data to record on a single sensor. 225GB/hour of packet traffic, 5.4TB per day from a single sensor. All that data is going to need to be rolled up too, unless you're going to spool it into a local database and do distributed queries across the network for packet traces. At that kind of data density your NSM sensor is going to need a NAS device someplace nearby so that the data can be stored; it's going to be really hard to do that on a 1U appliance just due to physical drive space limitations. Once you have all that data, you're going to need to be able to work with it, so it's got to be in a database or it has to be indexed on the filesystem in some logical fashion so that smaller chunks of data can be rapidly located, decoded and presented to the user on demand. 
There are companies that build products to do this; I can't really speak to their effectiveness. I can hook a high-speed collection process like daemonlogger up to a big disk and grab all this data, but once again how much value are you really getting for recording all that data vs the logistical overhead of trying to maintain all that information in a usable fashion for extended periods of time? What's the time horizon of this data? Do I need to keep a week/month/year of this data live in a database for referential purposes? If there's going to be any expectation of success the amount of data that's kept "live" is going to have to have some pragmatic constraints.<br /><br />3) Alert aggregation - This is what IDP vendors spend their time working on getting to their users. We have pretty well established metrics as to what is acceptable in this realm in terms of sustainable event rates, data overload thresholds for analysts, data density and so on. This is the de facto standard in IDP because this is the thing that people are paying for; we've got to generate the events and everyone wants to see them since that's what the technology is supposed to be doing. This is a lot of data to deal with too, and this is the raw information that analysts have to work with. We do a lot at Sourcefire to pare down the number of events analysts have to deal with via our Impact Assessment technology that's enabled by RNA, so it is possible to do effectively in large environments even with less than optimal tuning of the sensor infrastructure. <br /><br />When I started Sourcefire one of the things that I decided to do to get people to want to pay for something that was free (i.e. Snort) was to try solving their data management problems. 
If you look at most of the IDS vendors before Sourcefire was founded, they would sell you IDS sensors and management front-ends but they wouldn't solve the biggest problem that most people would have once they deployed the technology, namely managing the information produced by the sensors. As we all know, IDS can generate immense amounts of data with just alerts and if you want to be able to work with that data it needs to go into a database that has been optimized for the data set. Prior to Sourcefire, you could buy $250k worth of sensors from vendor X and when you deployed the sensor grid you'd call vendor X and ask them how you're going to manage all those alerts. Their answer was typically "go call Oracle, they make a really nice database and we'll sell you professional services if you need help setting it up." This greatly increased the cost and complexity of deployment of the IDS solutions. When Sourcefire started I decided that this was an area where we could add real value, so we built what is now called Defense Center allowing customers to have a plug-n-play appliance that solved their data management problems and provided a path to deploy large infrastructures of our gear quickly. As you can see from our S-1 filing, this was probably a Good Idea.<br /><br />A "real" NSM infrastructure is going to primarily be built around the idea of collecting, moving and storing data and then making it highly available in a variety of presentation formats for users. If you try to do this on a network that's generating lots of traffic across lots of sensors/segments, the likelihood of building a scalable solution that anyone is willing to pay for is vanishingly low. You're going to need hundreds of terabytes of disk, a dedicated out of band management network for moving data, huge database servers AND the management and sensing infrastructure to actually grab the data.<br /><br />Now we want to scale it. 
I know from experience that there are large distributed international enterprises out there that have remote offices sitting on the other side of 128kbps (and below) links. They get really irritated when you saturate that link to pump out a continuous stream of security data. These organizations also have core networks with 10Gbps links that can sustain 2+Gbps of internal traffic for <strong>hours</strong>. That's a couple terabytes per hour of traffic you want to log, give or take, just in the core. Then you have the rest of the enterprise with 100+ sensors deployed that are seeing varying amounts of traffic but say none of them go below 10Mbps typically, so that's another TB of data every hour you've got to collect and forward to a central aggregation point. Then we throw in the flow data (lots of small records to insert) and the event data (more small records to insert) and you've got a data aggregation nightmare. Concentrating this data to a central collector or a load balanced set of collectors will saturate a gigabit line so you're going to either have to figure out how to leave it local on the sensors and perform distributed queries against it or you're going to have to deploy a bunch of additional network gear to absorb the load.<br /><br />The cost of deploying a solution like this will make today's IDP deployments look like rounding error and the amount of time required to sell this into an enterprise will make today's sales cycles look like selling fast food. <br /><br />Then we've got training. I know the binary language of moisture vaporators, Rich knows the binary language of moisture vaporators, lots of Sguil users know it too. The majority of people who deploy these technologies do not. Giving them a complete session log of an FTP transfer is within their conceptual grasp; giving them a fully decoded DCERPC session is probably not. Who is going to make use of this data effectively? 
My personal feeling is that more of the analysis needs to be automated, but that's another topic.<br /><br />One of the comments made to one of Rich's posts said <br /><blockquote>It seems that a lot of these SIM and IDS/IPS systems are really now being sold to small and medium enterprises without any regard to the amount of additional staff time and expertise that will be required to maintain them. Consequently I find that the ones I've used aren't oriented towards making investigation of an incident easier but are there simply to send out more alerts under the premise that more alerts is surely better because we're detecting and stopping more attacks.</blockquote>That's incorrect. They're being sold to extremely large enterprises (Fortune 100), and when they're sold in those environments there's an expectation that they will scale. There is more data that we can get to the users of these systems for sure, but recording everything is an unrealistic expectation given the realities of the large enterprises that these technologies are sold into.<br /><br />Recording everything doesn't scale today but maybe someday it will.
Like after the Singularity.<br /><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/NSM" rel="tag">NSM</a>, <a href="http://www.technorati.com/tag/data management" rel="tag">data management</a>, <a href="http://www.technorati.com/tag/Snort" rel="tag">Snort</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com4tag:blogger.com,1999:blog-31672472.post-1168974863522884122007-01-16T14:14:00.000-05:002007-01-16T14:20:49.175-05:00Upgrading the Apparatus<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6UPa0usTX9tQ0tw-Vlfzj-PQjGzdqVBGv3COTA1nSeCX8ebPso2AF06Aj4ME_Dzx9gE9a4pst3rUIPBRaRdpGrKyLG-0YcOt3Oyofq6Iye2wvebTVrI-qw-RP47zOWyPTiHWMJQ/s1600-h/tytn_141x228.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6UPa0usTX9tQ0tw-Vlfzj-PQjGzdqVBGv3COTA1nSeCX8ebPso2AF06Aj4ME_Dzx9gE9a4pst3rUIPBRaRdpGrKyLG-0YcOt3Oyofq6Iye2wvebTVrI-qw-RP47zOWyPTiHWMJQ/s320/tytn_141x228.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5020710445191240626" /></a><br />I've acquired several new gadgets over the past four months and I thought that people might be interested in my experiences.<br /><br />1) New phone - <a href="http://www.htc.com/product/03-products-1.htm" title="HTC Site">HTC TyTN</a><br />With Sourcefire expanding like it is, I need a phone that I can use anywhere in the world. I've been using a Sony z600 for a few years now and buying prepaid SIM cards for whatever country I'm heading to. The downside of this is that nobody can really call me when I'm abroad and if the SIM card runs out of money then I've got to jump through hoops to get it working again. I solved that problem by moving to Cingular and getting the TyTN. The TyTN is a quad-band GSM phone with tri-band HSDPA data and 802.11, plus Bluetooth. 
It runs Windows Mobile 5 but is still pretty useful despite that. Compared to the Treo 650 it's replacing, it's been reliable and pretty straightforward to use. The Windows paradigm is so-so on the mobile platform; it's certainly got a lot of clicking required to perform complex tasks compared to Palm. That said, it's a good phone and runs the apps that I need on a mobile device (<a href="http://pocketinformant.com/products_info.php?p_id=mail&dir=wm" title="Flexmail 2007">Mail</a>, <a href="http://www.agilemobile.com/agile_messenger.html" title="Agile Messenger">IM</a>, <a href="http://www.pocketputty.net/" title="PocketPutty">SSH</a> and a web browser). Overall I've been about as happy with this phone as I am with any cell phone; the data connectivity is nice and it worked well when I was overseas in December.<br /><br />2) New laptop - <a href="http://www.apple.com/macbookpro/" title="MacBook Pro">MacBook Pro</a><br />I passed on the initial MacBook Pro release and waited very patiently for the Core 2 Duo processors to make their debut in the Apple laptop line and was rewarded with this very nice machine. The Core 2 Duo chip has a few nice new features over the first generation Core Duo chips, including the EM64T instruction set, a larger L2 cache, higher performance and lower power consumption. It turns out it also runs cooler than the Core Duo.<br /><br />Since my laptop is my primary development/presentation/communications/everything machine, I got it maxed out with the 2.33GHz CPU, 3GB of RAM, the 200GB hard drive and the glossy screen option. It's a heck of a lot faster than the PowerBook it's replacing for pretty much everything I do except maybe MS Office since it's run in emulation via Rosetta. It's also great for running <a href="http://www.parallels.com/" title="Parallels">Parallels</a>, the OS X virtualization environment.
I've been running XP under Parallels for various esoteric applications, like running my telescope and CCD cameras for astrophotography, and it works like a champ.<br /><br />This is without a doubt the best laptop I've ever owned: it's fast, stable and, like all Macs, it just works. It's a great development platform, a great travel machine and an all-around nice computer.<br /><br />3) <a href="http://www.novatelwireless.com/products/expresscard/merlin-xu870.html" title="XU870">Novatel Wireless XU870 HSDPA modem</a><br /><br />I used to use a Novatel EV-DO card on Sprint for my mobile internet needs but the MacBook Pro has an ExpressCard/34 slot and there was no EV-DO card available for it. Luckily, since I was switching to Cingular anyway, I found this card. It supports GPRS/EDGE/UMTS/HSDPA data connections up to 3.6 Mbps and uses a standard SIM card for network access. It also has drivers for OS X available, so it's pretty much a winner across the board. I got a second 3G SIM card from Cingular, set it up as a modem in OS X and it was off to the races! I also got international data roaming turned on for the SIM card so it even worked overseas. I've been surprised at how often it's been able to find an HSDPA signal to connect to; it's worked really well everywhere I've tried to use it (including France and the UK).<br /><br />Supposedly there's a firmware upgrade that will be coming in the not-too-distant future that will allow the modem to handle up to 7.2 Mbps over the air.
Once it comes out I'll be sure to post my experiences with it.<br /><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/gadgets" rel="tag">gadgets</a>, <a href="http://www.technorati.com/tag/laptop" rel="tag">laptop</a>, <a href="http://www.technorati.com/tag/mobile" rel="tag">mobile</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com2tag:blogger.com,1999:blog-31672472.post-1162187530796803742006-10-30T00:44:00.000-05:002006-11-03T15:27:22.633-05:00On Amateur Astronomy<a href="http://photos1.blogger.com/blogger/4050/3442/1600/smallscope.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://photos1.blogger.com/blogger/4050/3442/320/smallscope.jpg" border="0" alt="" /></a><br /><p><br />As some of you know, I'm an amateur astronomer. It's a great hobby; I've been involved with it since I was 14 years old, and I really enjoy relaxing under the night sky and commanding a big stack of advanced optics and robotics, not to mention some really cool cameras, to be able to see the heavens and enjoy them more fully than many people do.<br /></p><p><br />When you first buy a telescope, it's a pretty daunting experience. There are different types of optical systems, different types of mounts, automated pointing systems, CCD cameras, eyepieces, film cameras, focusers, dew control electronics, counterweights, pointing aids, and a host of reading material, not to mention a bunch of specialized software to use all the gear together effectively. Now, a lot of the time when people first get into astronomy, the first thing they think to do is just go out and buy a telescope. Seems like a natural thing to do, eh?<br /></p><p><br />When you go to the store to get your telescope, you're likely to be assaulted by a barrage of marketing about all the wonders you will observe with your shiny new telescope.
The pictures on the side of the <a href="http://www.telescope.com/jump.jsp?itemType=PRODUCT&itemID=383" title="60mm refractor">60mm achromatic refractor</a> you're likely to end up with promise you that you'll be able to observe the universe from your back yard and it'll look <a href="http://www.robgendlerastropics.com/M45STLmosaic.html" title="M45">a</a> <a href="http://www.robgendlerastropics.com/M51NM.html" title="M51">lot</a> <a href="http://www.robgendlerastropics.com/M31NMmosaic.html" title="M31">like</a> <a href="http://www.robgendlerastropics.com/M20NM.html" title="M20">this</a>. When you get your shiny new scope home and set it up in the back yard for the first time, you'll soon discover that reality and marketing sometimes are at odds, because what you're really going to see will look a lot more like <a href="http://cerberus.sourcefire.com/~roesch/astro/pix/m51.jpg" title="M51">this</a> (if you're lucky and spent some money on a decent scope and you live under a really dark sky). Chances are that you're not even going to see that because you haven't been versed in using <a href="http://vegas.astronomynv.org/Tutorials/avertedvision.htm" title="averted vision">averted vision</a> to pick out detail in faint objects. Sometimes when I let people look through my telescopes for the first time and I show them something pretty incredible like a galaxy that's 50 million light years away, they barely see anything. It's always easiest to show them something bright and obvious like Jupiter or Saturn, something that requires little skill to find or observe and that will give quick gratification.<br /></p><p><br />In fact, if you're going to get good use out of that telescope, you're going to have to become familiar with a whole host of topics that you may not have thought a whole lot about prior to getting into astronomy. 
Things like celestial coordinate systems, ephemerides, polar alignment techniques, averted vision, calculating the magnification and field of view of an optical system, different object catalogs and their contents, gauging the viewing quality of the sky, amateur meteorology (hauling out 200lbs of gear on a night when a cold front is going to bring rain 2 hours after you get set up is a bummer), not to mention all the topics of astrophotography if you really do want to see sights like those promised on the box in which the telescope came.<br /></p><p><br />When you get right down to it, most people's entry into astronomy is guided by a lot of marketing hype followed shortly by disappointment. Disillusionment, you might call it. Once you understand what the real capabilities of the equipment are and you spend the time to learn how it works, what its limitations are and what the optimal setup is to see the things you're really interested in AND you take the time to learn all the systems and background data on what's up in the sky and when is the best time to see it based on where you are and the local terrain, THEN you will finally understand what you can expect and be happy when you get your gear to reveal the things you were interested in in the first place.<br /></p><p><br />A lot of people don't have the patience or time to devote the energy required to actually get good at this hobby, and a lot of people abandon it for just this reason. There are a lot of hobbies out there that have much more immediate gratification. After all, you can just go to <a href="http://www.stsci.edu/hst/" title="Hubble">STScI</a> and see pretty much everything an amateur would ever want to try to see and more. In fact, in a world where <a href="http://www.ipac.caltech.edu/2mass/" title="2MASS">automatic telescopic surveys</a> of the night sky are happening continuously, why the hell would anyone sign up for this hobby? It's futile, isn't it?
Everything that can be seen will be seen by the professionals, right? Wrong. There is still valuable science being done by amateurs: new comets being discovered, supernovas, <a href="http://www.redspotjr.com/" title="Red Jr">Red Spot Jr</a>. In fact, amateurs still contribute valuable science using the modest tools they have available. The fact of the matter is that the gear is getting so good these days that you can achieve <a href="http://jupiter.cstoneind.com/" title="Christopher Go">incredible results</a> with pretty modest equipment if you know what you're doing.<br /></p><p><br />Some people don't get the appeal of this hobby, some people do, some people really get sucked into it and spend huge amounts of time and effort on it because it's really interesting and it gets them in touch with the universe around them. Some people see a lot of value in that, some people think it's a waste of time and money. At the end of the day it turns out that you get out of astronomy what you put into it.<br /></p><p><br />A lot like another field of endeavor that I know.<br /></p><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/Astronomy" rel="tag">Astronomy</a>, <a href="http://www.technorati.com/tag/Philosophy" rel="tag">Philosophy</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com5tag:blogger.com,1999:blog-31672472.post-1162179298928442642006-10-29T22:28:00.000-05:002006-10-29T22:41:02.750-05:00Sourcefire Files S-1...<a href="http://photos1.blogger.com/blogger/4050/3442/1600/dollars%20pic.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://photos1.blogger.com/blogger/4050/3442/200/dollars%20pic.jpg" border="0" alt="" /></a><br /><p><br />...so we're in what's called the "<a href="http://www.sec.gov/answers/quiet.htm" title="Quiet Period">quiet period</a>".
What that means for me, practically speaking, is that I won't be doing a lot of waxing profound in public for some time. So while there are many "interesting" threads on both <a href="http://www.sourcefire.com" title="Sourcefire">Sourcefire</a> and one of the technologies we develop in progress on various <a href="http://archives.neohapsis.com/archives/dailydave/" title="mailing">mailing</a> <a href="http://linuxbox.org/pipermail/funsec/" title="funsec">lists</a> <a href="http://www.matasano.com/log/" title="matasano">and</a> <a href="http://taosecurity.blogspot.com/" title="taosecurity">blogs</a>, as much as I'd like to, I won't be making any comments on those discussions. <br /></p><p><br />Not even if you buy me beer. I dare you to try!<br /></p><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/Sourcefire" rel="tag">Sourcefire</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com0tag:blogger.com,1999:blog-31672472.post-1159801796586103682006-09-26T16:52:00.000-04:002006-10-03T20:13:25.066-04:00Miracle Weapon in the War on Terror Discovered!<a href="http://photos1.blogger.com/blogger/4050/3442/1600/ziploc.jpg"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://photos1.blogger.com/blogger/4050/3442/320/ziploc.jpg" border="0" alt="" /></a><br /><p><br />I've mentioned before that I spend a lot of time in airports and am a big fan of security theater, but today I had a revelation that was on par with what Neo must have felt like when he woke up in that slime-filled pod. I read yesterday with no small amount of joy that the War on Liquid was being relaxed and that, unlike the day prior, certain liquids were no longer dangerous explosives but instead had been declared innocuous and were super duper ok to bring on aircraft once again.
I was relieved that I'd be able to bring saline solution and toothpaste on board without checking my bags with the attendant 30-60 minute baggage delay on landing or resorting to smuggling 1/2 oz bottles through security in my pockets or discreetly in my carry-ons. <br /></p><p><br />Little did I know that there was a caveat to this major victory: these liquids are only safe for travel if they're contained within a clear plastic 1-quart zip-loc baggie. I discovered this the hard way: I brought my shaving creme too. Now, I had seen the announcements yesterday, including the caveat of putting things in a clear plastic 1-quart zip-loc baggie, but I didn't understand that the container around the liquids was the thing that was actually ensuring the safety of the aircraft and all those aboard.<br /></p><p><br />When I arrived at the checkpoint there was a TSA person there declaring loudly for everyone to hear that the key to salvation was in fact the clear plastic 1-quart zip-loc baggie and that anything inside was ok to travel and anything that wasn't in the bag was contraband and hazardous to the national security of the United States. At this point, I knew that there was going to be trouble because I had brought a 2oz travel can of shaving creme but I DIDN'T HAVE A CLEAR PLASTIC 1-QUART ZIP-LOC BAGGIE WITH ME! Now, not wanting my 2/3oz bottle of saline solution or travel toothpaste I was smuggling in my bag to get confiscated, I came up with a plan. <br /></p><p><br />When I got up to the head of the line I took out my shaving creme and said "I'm sorry, I don't have a clear plastic 1-quart zip-loc baggie with me to contain this deadly shaving creme, do you guys have any?" I may have left out the "deadly shaving creme" part. Anyway, they said "No, that can't go". There were no trash cans handy so I asked what I should do with it and they said to put it in one of the small trays and send it through the X-ray.
For a second I thought that sanity had prevailed and that they were going to apply critical thought to the situation, but I was relieved to see that mindless obedience to the rules won the day. Upon emerging from the machine, one of the TSA people grabbed the tray and said "whose shaving creme is this?". I indicated it was mine and he shook his head and said "that can't go" and promptly chucked it in the garbage. As I was putting my shoes back on and packing away my laptop, he wandered by and I thought to ask him a question.<br /></p><p><br />Me: "Just so I'm clear, if I put that shaving creme in a clear plastic 1-quart zip-loc baggie that would have been fine?"<br /></p><p><br />TSA guy: "Uh, yeah."<br /></p><p><br />I made eye contact with him, shaking my head, and he looked back at me for a couple seconds before he cracked a sheepish grin.<br /></p><p><br />I don't know about you but I'm selling my stock in Halliburton and Lockheed Martin and buying S.C. Johnson as soon as this plane lands.<br /></p><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/Flying" rel="tag">Flying</a>, <a href="http://www.technorati.com/tag/Futility" rel="tag">Futility</a>, <a href="http://www.technorati.com/tag/Meatspace" rel="tag">Meatspace</a>, <a href="http://www.technorati.com/tag/Security" rel="tag">Security</a>, <a href="http://www.technorati.com/tag/Travel" rel="tag">Travel</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com6tag:blogger.com,1999:blog-31672472.post-1156779396557991612006-08-28T11:29:00.000-04:002006-08-29T12:08:21.386-04:00Kawasaki Interviews MySQL CEO<a href="http://photos1.blogger.com/blogger/4050/3442/1600/hippie-tie-dye.0.gif"><img style="float:left; margin:0 10px 10px 0;cursor:pointer; cursor:hand;" src="http://photos1.blogger.com/blogger/4050/3442/320/hippie-tie-dye.0.gif" border="0" alt="" /></a><br /><p><br
/><a href="http://blog.guykawasaki.com/">Guy Kawasaki</a> has an <a href="http://blog.guykawasaki.com/2006/08/ten_questions_w_2.html">interesting interview</a> with the <a href="http://www.mysql.com/company/management.html">CEO of MySQL</a> where they touch on one of my pet theories regarding the value of open source development. From the article:<br /></p><blockquote><br />Question: How do you make money with an Open Source product?<br /><br /><br /><br />Answer: We start by not making money at all— but by making users. The vast community of MySQL users and developers is what drives our business.<br /><br /><br /><br />Then we sell an enterprise offering to those who need to scale and cannot afford to fail. The enterprise offering consists of certified binaries, updates and upgrades, automated DBA services, 7x24 error resolution, etc. You pay by service level and the number of servers. No nonsense, no special math. Enterprise software buyers are tired of complex pricing models (per core, per cpu, per power unit, per user, per whatever the vendor feels like that day)—models that are still in use by the incumbents.<br /><br /><br /><br />At MySQL we LOVE users who never pay us money. They are our evangelists. No marketing could do for us what a passionate MySQL user does when he tells his friends and colleagues about MySQL. Our success is based on having millions of evangelists around the world. 
Of course, they also help us develop the product and fix bugs.<br /><br /><br /><br />And the few times that they say that they hate MySQL, that helps us too because complaints usually contain some good suggestion for improvement.<br /></blockquote><p><br />Making money by creating users, some small percentage of whom will eventually pay you money to solve the hard problems that they run into using your open source technology.<br /></p><p><br />Back around the time that <a href="http://www.nessus.org/">Nessus</a> went to its new licensing model I got quite a few questions from people regarding the open source business model that <a href="http://www.sourcefire.com">Sourcefire</a> operates under and whether they could expect us to follow suit and go to some non-OSI approved license. My response was always "No, we'd be crazy to do that".<br /></p><p><br />The reason I say that is because I believe that the value of an open source technology is not the technology that it implements (beyond a certain point, it has to do something interesting and do it well). The value of an open source technology to the company that develops and supports it is the community that grows around it. It's pretty obvious that the community that grows around your project is your potential customer base; the thing that may not be obvious is that they are also a strong part of your marketing team. My observation is that open source users have a tendency to be evangelistic, and that evangelism can go a long way towards getting your company in the door at their company, as well as at their friends' companies. Additionally, the guys who use open source tools when they're either young with no money (e.g. proverbial college students) or tasked with investigating a technology before getting into a formal deployment (e.g. proverbial IT security guys with hot tasking from on high) start with open source products and will stick with them if they have a good experience.
Guys who learned <a href="http://www.snort.org">Snort</a> in college in 1999-2000 are IT directors/managers/VPs now and having them familiar with and (possibly) fond of the technology is a big deal for us at Sourcefire. Back in 2002, when Sourcefire was ~10 people and we'd lose a deal to open source Snort, my philosophy was always that it was not a big deal: the customer would be back when the problems got sufficiently hard, and they'd think of us first as the place to go for a solution if we continued to deal in an even-handed fashion with the user community and continued to advance the product. <br /></p><p><br />Advancing the product is a big deal too. Some have theorized that doing things like adding a new detection engine to Snort that could do gigabit speeds and then giving it away was a Bad Idea because it allowed our Snort-based competitors to have a more level playing field with which to compete against us. My opinion is that it keeps the ball moving forward and keeps people's eyes on what we're doing instead of letting them get bored and going off to check out some other more rapidly developing OSS technology or a commercial solution. Letting your technology get stagnant is almost as bad as closing the technology; once the community is bored they'll be looking elsewhere for something exciting. One important point to note in this regard (in a product company) is that just because you're releasing advances to the open source community at large doesn't mean that you are required to drive your differentiation from that technology to zero. If you want people to pay for what you do, then having some sort of key differentiation is a must! At Sourcefire we did things like developing a <a href="http://www.sourcefire.com/products/rna.html">complementary technology</a> that allowed us to address one of the toughest problems in the intrusion detection world: false positives.
If you can't maintain differentiation against your open source product or your competitors that use your open source technology, then you've got a problem that you need to get creative around; closing the technology isn't an acceptable answer in my opinion.<br /></p><p><br />Once you've open sourced your technology, you have to approach its continued development as a community-building exercise that works best by advancing the technology and trying to maintain community-friendly policies and programs. If you do this and try to be clueful about interacting with the open source users as the company grows (a whole different topic), then you have the foundation necessary to build a business of substance. That's the principle that I originally built Sourcefire on and so far it has <a href="http://www.sourcefire.com/news/press_releases/pr-29.html">worked</a> <a href="http://www.sourcefire.com/news/press_releases/pr-2.html">pretty</a> <a href="http://www.sourcefire.com/news/press_releases/pr041805.html">well</a>.<br /></p><br /><!-- technorati tags start --><p style="text-align:right;font-size:10px;">Technorati Tags: <a href="http://www.technorati.com/tag/Open Source" rel="tag">Open Source</a>, <a href="http://www.technorati.com/tag/Philosophy" rel="tag">Philosophy</a>, <a href="http://www.technorati.com/tag/Snort" rel="tag">Snort</a>, <a href="http://www.technorati.com/tag/Sourcefire" rel="tag">Sourcefire</a></p><!-- technorati tags end -->Martin Roeschhttp://www.blogger.com/profile/17029362481574933874noreply@blogger.com1