Sunday, October 28, 2012

Hackers For Charity

Hackers For Charity provides basic needs for some of the poorest of the poor in Uganda, and brings technology, training, and access to knowledge that most of us take for granted.
Johnny Long and his family are bringing hope and help to those otherwise forgotten by the world.
You may be familiar with Johnny from his work with Google Hacking, his speaking engagements at many conferences including Def Con and Black Hat, and the plethora of books he's written.
If you'd like to make a difference in the lives of folks who really have no means to help themselves, please go to the Hackers For Charity site and make as generous a donation as possible. Your gift will truly help someone who would have no hope if it weren't for Johnny and those who partner with him by donating. Thank you in advance.

Friday, October 26, 2012

SourceFire Default Setting - Server Flow Depth

If you're running SourceFire, there's a setting in the HTTP Configuration module you'll want to check when doing your tuning and configuration. Under the Configuration section, fifth setting down, you'll find Server Flow Depth. This setting controls how many bytes of HTTP server response data the rules inspect. It's a little more complex than that, as other settings help determine which parts of the data are examined, but that's all well documented. The thing to look at here is the default, which is 500 bytes. Possible values are 1-65,535 to specify a particular number of bytes, or 0 to inspect everything, including data that exceeds 65,535 bytes. 500 is a very low value, even though the docs say the rules usually target the headers or traffic that will be in the first hundred or so bytes of data.
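For reference, the same knob exists in open-source Snort's http_inspect preprocessor as server_flow_depth, which is what the Sourcefire sensors use under the hood. A sketch of what the snort.conf line might look like (the port list here is an assumption for illustration; 0 means inspect the entire response):

```
preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    server_flow_depth 0
```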
I started testing larger values, increasing it first to 5,000 bytes. That's a tenfold increase over the default, but still far smaller than the recommended value of 65,535. The change was startling: we saw an immediate increase in the number of alerts, some from rules that had never fired before. I monitored two of the busiest sensors in the system and saw no noticeable performance hit.
To cut to the chase, I tried values of 10,000, 50,000 and finally the recommended 65,535 bytes. None of those values caused an unacceptable performance hit on the sensors, but each time the volume of alerts rose substantially, exposing traffic that had previously been false negatives.
The amount of tuning needed on the new, heretofore unseen traffic was on par with having added a new segment to monitor. It was amazing, and disconcerting, how much traffic that low default setting had blinded the sensors to.
The moral of the story: check every configuration item carefully and make sure you understand what each one does. IDS is a complex beast, and you might be missing a lot of traffic you should be seeing if you're not careful.

Monday, October 15, 2012

Bash to Check Packet Captures (Again)

To expand on the previous example a little...

To search a little more specifically, say for certain packets from an IP in a certain time frame:

1. Put your file names into a file:

Here's the output of ls -lah:

-rw-r--r--. 1 root root 573M Oct 15 07:42 external3.1350301240

Our file name is in the ninth field (awk splits fields on whitespace by default)

So we list the files, grep for a date, pipe the output into awk to print the ninth field to stdout, and redirect the result to a file called "list":

ls -lah | grep 'Oct 15' | awk '{print $9}' > list
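Parsing ls output works here, but it gets fragile with unusual file names. A sketch of an alternative that matches the modification date directly, assuming GNU find (the -newermt and -printf options are GNU extensions):

```shell
# List regular files in the current directory modified on Oct 15, 2012:
# newer than midnight Oct 15 but not newer than midnight Oct 16.
# -printf '%f\n' prints just the base file name, one per line.
find . -maxdepth 1 -type f -newermt '2012-10-15' ! -newermt '2012-10-16' -printf '%f\n' > list
```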

Use this list of files to search for an IP address and write the matching packets out to pcap files:

for i in $( cat list ); do tcpdump -nn -r "$i" 'host <address>' -w "interesting_$i"; done

Substitute the address you're after in the host filter. Each input gets its own output file because tcpdump's -w truncates an existing file on every run; reusing a single name across the loop would leave you with only the last capture's matches. If you'd rather have one combined file, Wireshark's mergecap can merge the pieces afterwards.

Tuesday, October 9, 2012

A Little More On Spondulas

As I mentioned before, Bart Hooper gave a great presentation on malware site analysis at DerbyCon (I suggest watching the video if you monitor IDS and have to deal with end users accessing malicious sites). In his presentation he demo'd a tool he wrote called Spondulas, a web browser emulator and link parser. It grabs the raw output from the site, performs any needed post-processing, and saves an output file with the links categorized and listed for you. It's a very nice tool that extends the functionality of tools like Malzilla.
Its features (from the tool's Wiki site, found here) are:

  • Support for GET and POST methods
  • Parsing of retrieved pages to extract and categorize links
  • Support for HTTP and HTTPS methods
  • Support for non-standard port numbers
  • Support for the submission of cookies
  • Support for SOCKS5 proxy using TOR
  • Support for pipe-lining (AJAX)
  • Monitor mode to poll a website looking for changes in DNS or body content
  • Input mode to parse local HTML files, e.g., e-mailed forms
  • Automatic conversion of GZIP and Chunked encoding
  • Automatic IP address Look-up
  • Selection or generation of User Agent Strings
  • Automatic creation of an investigation file
You can download either the Python-based Linux version or the Windows version, which comes in 32- and 64-bit flavors. Excellent tool, worth a second mention.

Thursday, October 4, 2012

DerbyCon 2.0 Videos

IronGeek has already gotten the DerbyCon 2012 videos up, just four days after the end of the conference. They're on his site, found here. At the bottom of the page are links to the postings of the videos, which include the archive torrents.
It was an excellent conference, and I'll be busy for quite a while replaying the most interesting talks and catching all the ones I couldn't sit in on.
