Thursday, December 9, 2010

Polymorphic Malware from Noah Schiffman

Good, condensed explanation of metamorphic/polymorphic malware...

The propagation of malicious code dates back to the early days of sneakernet-style transmission of boot-sector viruses via floppy disks. Once the spread of infectious code reached critical levels, the security community counteracted with programs designed to patch, protect, scan and block -- the birth of the antivirus suite. Since then, virus writers and antivirus vendors have worked day and night to outdo each other, which in turn has caused malicious code to evolve at a remarkable rate, creating new injection vectors, evasion techniques and attack payloads.

One of the most innovative and insidious creations of malware propagators has undoubtedly been the advent of metamorphic malware. To understand the concept -- and its cousin, polymorphic malware -- requires a basic understanding of underlying malware encryption techniques. In the simplest of models, an encrypted virus consists of a virus decryption routine (VDR) and an encrypted virus body (EVB). Execution of an infected application enables the VDR to decrypt the EVB, which in turn causes the virus to perform its intended function. In the propagation phase, the virus is re-encrypted and appended onto another host application. A new key is randomly generated with each copy, thus altering the appearance of the code. However, the VDR remains constant and this is its inherent weakness, resulting in detection via signature recognition.
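The key-per-copy idea can be sketched with everyday tools. Here's a minimal toy model, assuming openssl is available, with a benign string standing in for the virus body:

```shell
# Toy model of the encrypted-virus scheme described above (nothing
# malicious here: a benign string stands in for the virus body).
# A fresh random key per "generation" changes the ciphertext's
# appearance, yet decryption always recovers the same body -- and the
# unchanging decryption routine is what signature scanners key on.
body="not_actually_a_virus"
key1=$(openssl rand -hex 16)
key2=$(openssl rand -hex 16)

encrypt() { printf '%s' "$body" | openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$1" -base64 -A; }

c1=$(encrypt "$key1")    # generation 1
c2=$(encrypt "$key2")    # generation 2: same body, new key, new look
[ "$c1" != "$c2" ] && echo "ciphertexts differ between generations"

# The constant "VDR": the same routine every generation, only the key varies
echo "$c1" | openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$key1" -base64 -A
# prints: not_actually_a_virus
```

The hypothetical names (body, key1, encrypt) are just for illustration; the point is that the ciphertext mutates per copy while the decryption step stays fixed.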

Polymorphism, which literally means "many forms," adds another component to the encrypted code -- a mutation engine (ME). The ME essentially can change the code of another program without changing its functionality. For example, an ME can alter the code of a VDR with each replication, while maintaining its ability to decrypt the EVB. The continuous alteration of the VDR is achieved using obfuscation techniques such as junk code insertion, instruction reordering and mathematical contrapositives. However, the preservation of the decrypted virus body is its Achilles' heel, as it provides a form of complex signature.
Consequently, advanced techniques such as generic decryption scanning, negative heuristic analysis and the use of emulation and virtualization technologies have proven to be successful polymorphic detection methods.

Evolving from the deficiencies of polymorphism, metamorphic malware brought virus mutation to the next level. Instead of mutating the EVB and reapplying a cryptographic cover, metamorphism employs the ME to transform the virus itself. Using a disassembly phase, the code is represented as a meta-language that characterizes its end function, disregarding how the code achieves this function. Thus, after analysis, code morphing and reassembling, the end result is new code that bears no resemblance to its original syntax, yet it's functionally the same.

Metamorphic malware's ability to completely rewrite its code -- and change its signature pattern -- with each cycle is evidence of its disturbingly significant power to evade AV techniques. One prototypical model can be observed in the Win32.Metaphor virus (aka Win32.Etap, Win32/Simile).
Acronymically named for Metamorphic Permutating High-Obfuscating Reassembler, this virus first surfaced in 2002, with numerous variants following. Despite its non-destructive payload (various messages were displayed depending on the date), its incorporation of several innovative and advanced metamorphic techniques provided successful propagation and antivirus evasion. The powerful combination of entry point obscuring (EPO), pseudo-code permutation, size shrinking and expansion (the "accordion model" technique), anti-emulation timestamp analysis, advanced infection routines and cross-platform compatibility with Linux created a new class of malware -- a threat level surpassing non-metamorphic code. This changed the enterprise security model, requiring a different strategic perspective for central, perimeter and endpoint security.

While no definitive all-encompassing detection methodologies exist for this continually evolving class of malware, identification is possible.
Metamorphism reveals its inherent weakness in its need for self-analysis.
As an entity, it can analyze its own code; thus, theoretically, it can be analyzed by other programs. Effective methods have been developed using emulation techniques to heuristically examine the post-morphed function of the code. Furthermore, research into methods such as automated replication systems, similarity indices, geometric analysis and tracing emulators continues to grow. Despite the advancements in detection and prevention, virus writers are creating more sophisticated and efficient mutation engines and new obfuscation techniques. Until a method for definitive identification is developed, new forms of metamorphic code will continue to propagate and pose a challenge for the security community.

Protection from any type of metamorphic malware is best addressed by blended threat management platforms using a multi-layered approach.
Antivirus software (updated frequently), remote access restrictions and compliance monitoring should be employed at the server and end-user levels.
Network and personal firewalls should have any unused service ports shut down. Email servers should employ content filters and file scanning.
Finally, any corporate setting should develop, maintain and enforce a well-defined and effective set of security policies. In extreme situations, when dealing with highly sensitive data, extra security measures such as real-time emulation analysis and specialized network segmentation may be considered.

About the author:
Noah Schiffman is a reformed former black-hat hacker who has spent nearly a quarter century penetrating the defenses of Fortune 500 companies. Today he works as an independent IT security consultant specializing in risk assessment, pen testing, cryptography and digital forensics, predictive analysis models, security metrics and corporate security policy. He holds degrees in psychology and mechanical engineering, as well as a doctorate in medicine from the Medical University of South Carolina. Schiffman is based in Charleston, S.C.

Wednesday, November 10, 2010

A Quicker Way

A while back I posted a command I found to strip the port number off the IP address in tcpdump-style notation. Going through my notes, I found a quicker way (I believe this was posted as a reply to a Handlers article on the Internet Storm Center).

The first command was: sed 's/.[^.]*$//' and though it works, of course, it's not real easy to remember on the fly (though we love regex... how much easier our jobs are because of it).

The simpler way is to use the cut command, like this: cut -d. -f1,2,3,4

This simply displays the first four fields delimited by a period, so it drops the last one (the port number). As Lola of Nick Jr. and book fame says, "Easy peasy lemon squeezie!"
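A quick side-by-side check of both approaches on a sample address (any dotted-quad.port value works):

```shell
# Strip the trailing port field from tcpdump-style host.port notation,
# first with the original sed expression, then with cut:
echo "" | sed 's/.[^.]*$//'     # prints
echo "" | cut -d. -f1,2,3,4     # prints
```

cut also accepts a range, so `cut -d. -f1-4` is an equivalent, slightly shorter spelling.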

Friday, October 29, 2010

Security Onion Live CD

Doug Burks has created an IDS Live DVD running Ubuntu, with a number of packages pre-installed, tcpreplay among them.
The .iso can also be installed on a USB flash drive, giving you an IDS-on-a-stick. Very handy.
I'm looking forward to trying it out on the security test box I have at home.

On Doug's page you'll find a download link, a presentation on Security Onion and a FAQ, as well as his posts on network security.

Wednesday, October 27, 2010


Security experts' recommendation that users subscribe to a VPN service isn't practical... even if there weren't the possibility that someone might grab that cookie as it leaves the VPN server for the destination. I just can't see many ordinary users shelling out money for a VPN service, setting it up on their laptops and using it. Whatever the final solution is, it's going to have to be a fix on the service end, and transparent or nearly so, to get most folks to adopt it.
Firesheep's site here.

Friday, October 8, 2010

A Few Handy Built-Ins

Anyone who's worked with Linux knows it's a great operating system (whichever of the many flavors you run). Not only is it stable and secure, but it makes great use of hardware with lower overhead, making it fast as well. Another really nice thing about an open source OS is the constant addition of utilities that make life easier for both the admin of the box and the user.
Here are a few of my favorites for those new to Linux...

watch: watch allows you to re-execute a program over and over and output to the screen. It's very handy for watching a directory for changes, waiting for a service to start, or monitoring connections. For example, let's say you run a service on port 8000 and you want to watch for any connections to that port. You could do that by running "netstat -an | grep 8000", or better yet, "netstat -an | grep 8000 | grep EST". That takes the output of netstat, which shows network connectivity, statistics and such, pipes it through grep to filter out all lines except those that contain 8000 (the port you wish to monitor), then filters out any remaining lines that don't contain uppercase EST. This shows ports in the ESTABLISHED state.
That's great, but what if you were watching for connections over an hour's time span? watch works great for this. watch takes a -n parameter, which is the number of seconds between executions; the default is 2 seconds. If we wanted updates as quickly as possible, we would run:
watch -n 1 'netstat -an | grep 8000 | grep EST'. Every second, watch would rerun the netstat command and show you the results, clearing the screen between each iteration.

lokkit: lokkit is a command line utility (it used to be ncurses GUI based) to modify the iptables firewall. It's very simple to quickly open up a port with lokkit (I'd recommend first making a copy of your iptables file, found in /etc/sysconfig). If you want to open port 21 to all inbound traffic, you'd run "lokkit -p 21:tcp". Viewing your firewall tables by running "iptables -L" should show:
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ftp
You can also disable and enable the firewall, open ports by service name, add trusted interfaces, add custom rules and add and remove modules.
Just make sure you back up before making changes, and be careful modifying iptables remotely, whether using lokkit or manually, as you could lock yourself out when you restart (if you screw up).
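For reference, a rough by-hand equivalent of what lokkit is doing (a sketch assuming a Red Hat-style layout; paths, rule position and the example service are yours to adjust):

```shell
# Back up the current rule file first (Red Hat/Fedora location):
cp /etc/sysconfig/iptables /etc/sysconfig/iptables.bak

# Insert a rule accepting new inbound FTP control connections on 21/tcp:
iptables -I INPUT -p tcp --dport 21 -m state --state NEW -j ACCEPT

# Verify the rule landed where you expected, then persist it:
iptables -L INPUT -n --line-numbers
service iptables save
```

Rule order matters in iptables, so check the listing before saving, especially on a remote box.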

ntsysv: ntsysv is an ncurses GUI that allows you to enable or disable services at startup, the equivalent of using the chkconfig command. chkconfig is more granular, as you can specify the runlevels you wish, but if you're unfamiliar with Linux, ntsysv is helpful till you get up to speed. Just invoke the command with no parameters and you'll be presented with a list of all the available services. Each has a box beside it that can be checked to enable it. Use the arrow keys to scroll down and back up, and hit the space bar to toggle a service on or off.
Yes, there are X apps that do the same thing with a nice GUI, but if you're working on NetSec boxes, a lot of times you won't (or shouldn't) have X installed. Do you really want a GUI, with the myriad of apps it installs that could have security flaws, on your IDS or packet auditor?
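Once you're comfortable, the chkconfig equivalents are quick (the service names here are just examples):

```shell
# Show a service's on/off state for each runlevel:
chkconfig --list sshd

# Turn it on only for runlevels 3 and 5 (multi-user and graphical):
chkconfig --level 35 sshd on

# Turn off something you don't want starting at boot:
chkconfig sendmail off
```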

Tuesday, September 28, 2010

Stripping The Port Off Tcpdump Output

You can use the sed command in Linux to strip the port off tcpdump output, after using awk to pull out the IP addresses. tcpdump appends a decimal point and the port number to both the source and destination addresses, so a trailing .23 designates port 23 on that host. If you wanted to capture all the source addresses on your network, you could do so with something like: tcpdump -nn -i eth0 -q | awk '{print $3}'. We're piping the output of tcpdump to the awk command instead of the screen and telling awk to print the third column. Our output, without awk, would look something like this:
11:59:15.871010 IP > tcp 31
awk prints only the third column, separated by spaces.
To strip off that last field, which is the port number, we can pipe the results of awk through sed, using its search-and-replace function, like this: sed 's/.[^.]*$//'

What we would then have would be just a column of source IP addresses. Pipe it into a text file using the redirection operator, > file1.
Now we can run that file through the sort command, to sort them numerically, and then through the uniq command, to remove duplicates, and pipe that into another filename:
sort file1 | uniq > file2.

So command 1 would be:
tcpdump -nn -i eth0 -q | awk '{print $3}' | sed 's/.[^.]*$//'  > file1 (change the -i parameter to whatever interface you will be monitoring)

And command2 would simply be:

sort file1 | uniq > file2

And file2 can then be searched, run through a script to resolve hostnames, imported into a spreadsheet for reporting, or whatever is needed.
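The post-processing half of the pipeline can be sanity-checked on canned data, with made-up addresses standing in for awk's output:

```shell
# Simulate awk's column of host.port values, then strip the ports,
# sort, and de-duplicate exactly as above:
printf '%s\n' '' '' '' \
  | sed 's/.[^.]*$//' | sort | uniq
# prints:
```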

Thursday, September 23, 2010


One of my very favorite tools is IDABench, a packet auditing tool using Perl and tcpdump (and other libpcap-based tools), based on Shadow. If you're familiar with Shadow, you know its basic function is to capture packets into hourly dump files and give you a Web-based interface to search those packets, as well as a daily summary of source and destination addresses and ports. George Bakos, when he was at ISTS, the Institute for Security Technology Studies at Dartmouth, took Shadow and revamped it with Perl scripts to allow you to use ngrep, tethereal (now Wireshark's tshark) and p0f. What's even better is that IDABench is modular and can be modified to use just about any tool that can read pcap files. It runs on Linux and Apache and is a great tool for the intrusion analyst or team that looks at packets frequently. It hasn't been maintained for a number of years, and as I searched for a download link, I found they all point back to ISTS and the page doesn't exist. That's a real shame; it's a very useful tool. If you're interested in trying it, let me know and I'll get the files to you...

Friday, September 10, 2010

Another Great Tool

There are a number of good packet crafting tools available for Linux distributions, including scapy, nemesis and my favorite, hping.
hping was written by Salvatore Sanfilippo and is now in its third major version (last updated in 2005).
It is a packet crafter, which means it allows you to construct and send packets independent of the TCP/IP stack built into your OS, using raw sockets. You can create TCP (the default), ICMP, UDP or raw IP packets (no higher-level embedded protocol).
hping is a command line tool run inside a Tcl interpreter, so you can make use of all of Tcl's abilities to script your commands.
You can download the tool here.

A basic example of crafting your own packets:

Let's use hping to send an ICMP Address Mask Request. We need to know the ICMP type and code for this, which is type 17, no code. We would construct our command as follows:

hping -1 -C 17 <target>

Here we're telling hping to send an ICMP packet (-1) of Type 17 (-C) to the target address.

Hping will display a line like this showing what operation it's doing:
HPING (eth0 icmp mode set, 28 headers + 0 data bytes

Any reply from the host will be printed to the screen. In this case, the destination address dropped our packets. When we kill the command (Control-C), we'll see the stats:

--- hping statistic ---
15 packets tramitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.0/0.0/0.0 ms

There's a built-in macro for this command, icmp-addr, so we could have just run: hping icmp-addr

Let's use hping to send a ping (ICMP Echo Request, type 8) so we can see the results:

hping -1 -C 8 <target>

HPING (eth0 icmp mode set, 28 headers + 0 data bytes
len=46 ip= ttl=255 id=30150 icmp_seq=0 rtt=0.4 ms
len=46 ip= ttl=255 id=15674 icmp_seq=1 rtt=0.2 ms
len=46 ip= ttl=255 id=46964 icmp_seq=2 rtt=0.2 ms
len=46 ip= ttl=255 id=23097 icmp_seq=3 rtt=0.2 ms
len=46 ip= ttl=255 id=8324 icmp_seq=4 rtt=0.2 ms
len=46 ip= ttl=255 id=7159 icmp_seq=5 rtt=0.2 ms
len=46 ip= ttl=255 id=19765 icmp_seq=6 rtt=0.2 ms
len=46 ip= ttl=255 id=54740 icmp_seq=7 rtt=0.2 ms
len=46 ip= ttl=255 id=30929 icmp_seq=8 rtt=0.2 ms
--- hping statistic ---
9 packets tramitted, 9 packets received, 0% packet loss
round-trip min/avg/max = 0.2/0.2/0.4 ms

Just like using the ping command, we see our response showing the ttl, sequence number and round trip time.

Now let's use hping to send some data in a TCP packet. Suppose we just wrote a very simple IDS signature that looks for the string "evil_string_123" and wanted to test it and make sure it worked.
First, we'd create a text file with the string in it. Let's say we call it packet_data.

Now we could use hping to fire packets with that string at a host that sits behind our IDS, then watch for our signature to fire.

hping -p 22 -S -d 20 -E packet_data <target>

Here we're using a TCP packet (the default, so we don't need to specify) with the SYN flag set (-S), a destination port of 22 (-p), and a data size of 20 (-d) taken from the file packet_data (-E), going to the target host.

Running a sniffer on the box we're using, we should see our string in the packet data, like this:

12:09:17.837181 IP > Flags [S], seq 168566124:168566144, win 512, length 20
        0x0000:  4500 003c 412a 0000 4006 d37c 0a0a 010f  E..
        0x0010:  0a0a 0102 0a0c 0016 0a0c 1d6c 5796 3dbf  ..P........lW.=.
        0x0020:  5002 0200 a8f6 0000 6576 696c 5f73 7472  P.......evil_str
        0x0030:  696e 675f 3132 330a 0000 0000            ing_123.....

Salvatore goes in depth on using the tool, especially the Tcl shell for scripting, here.

Tuesday, August 24, 2010

Sanity Checking Your IDS Config

Tuning an IDS is never a once-and-done proposition. As a matter of fact, an IDS/IPS probably needs more constant maintenance and tuning than just about any other system you'll ever administer. After doing your initial setup and tuning, you'll notice over time the false positive rate creeping up and the white noise getting louder.
A few things you might want to look at on a regular basis, to keep the FP rate down and keep you focused on the EOIs (events of interest) that matter, are:
  1. Protected networks: Have new segments been added recently? If you don't add them to your list of protected networks, all those signatures with a flow of external-to-internal traffic can throw false positives on internal traffic. Review your monitored segments periodically, and look at your events for new internal subnets that may need to be defined.
  2. New signatures: Hopefully, you review your vendor's new signatures before deploying (even if you use automation) to see if they're relevant for your infrastructure. Consider omitting signatures that aren't needed for your environment, or at least not adding them to real-time alerting or decreasing the alert level. If your network is a strict Windows shop running IIS Web servers, do you REALLY need 500 Apache/PHP signatures? Maybe your philosophy is that you want to see ANY malicious traffic directed towards your networks, but you probably don't need real-time alerting on it in any case. How many analysts still get real-time alerting on Code Red?
  3. New servers: As new servers get added, you may see a marked increase in FP alerts. Patching software, anti-virus management servers, web content monitoring and the like do a LOT of talking on the network that could be construed as attacks by the IDS. Make sure you track down your top talkers regularly and adjust your filters as needed.

Tuesday, August 10, 2010

Network Security Dashboards

If you're a graphical person, and like the dashboard approach to an overview of what's going on in NetSec, there are a couple I've found that are pretty nice. The first one is the Talisker Computer Defence Operation Picture site. Andy's had this up quite a while, and there's even a shot of it on the wall at a site owned by the NSA!

I just found the second one, Infocon. I saw a post by the author, Valter Santos, on a listserv, and I guess he's working on a new version of the site. Pretty sweet even in its present incarnation.

If nothing else, throw one of these up on one of your screens at work. Even if you rarely look at it, it's sure to impress folks when they come into your office or cube!

Favorite Tools

New tools come out with amazing regularity. If you're getting started in NetSec, one of the first things you'll find out is that there are tons of tools, and multiple ones to do any task, AND that you'd better learn enough Linux to install, configure and run them, as most of them don't have ports to Windows. With a few exceptions, even the ones that ARE available for Windows rarely run as well.
I have a toolkit (actually two, as I keep Windows and Linux tools separate) that has dozens and dozens of tools in it. Many I've tried out for a day or two, some I use on a semi-regular basis, and some I've never even found the time to install yet. But every network security analyst has, or should have, their core essentials.
If they run a distro on a thumb drive for emergency use, these would undoubtedly be on it. They are probably installed on every test box and personal box they have access to.
Mine are as follows (in no particular ranking or order...)

  1. nmap - You have to have a port scanner, and year in and year out, Fyodor keeps making nmap better to the point that I've never changed. I've tried a bunch of others, but nmap continues to be the most stable and  dependable scanner I've ever tried. (Unicornscan WAS fun, I'll admit... smokin'!)
  2. hping - You also need a packet crafting tool... in this area, I think there are several really good ones, including scapy and Nemesis, but I like hping the best for its simplicity and functionality.
  3. netcat - netcat does just about anything you need it to do as far as sending and receiving packets. It's called the "Swiss Army knife" of network tools for good reason. Need encryption? Get cryptcat instead of it, or in addition to it.
  4. ngrep - Ngrep is a libpcap tool that searches for strings in packets. If you're doing traffic analysis, it's almost indispensable.
  5. dsniff - This suite of tools by Dug Song includes dsniff itself, which searches traffic for logon credentials, as well as tools to sniff for web pages, files, mail and chat. There are also tools to do man-in-the-middle attacks on SSH and HTTPS, as well as a nifty sniping tool to shoot down traffic.
I use many others, but those are the core 5 for my toolkit...

Monday, August 2, 2010

Why I Still Like Fedora

I see/hear the occasional trashing of RedHat/Fedora on the lists and from instructors (one really well respected and favorite instructor of mine said "Friends don't let friends run RedHat" =-) ) and I understand it, in part. The RedHat packages for certain tools that NetSec folks use aren't up to snuff with other distros or an install from source. Old tools like Shadow and IDABench took pains to mention that if you're running RedHat, you should ditch the installed version and get source. But the issues I've found or heard about aren't game changers. I install most of my tools from source anyway. I rarely depend on a package. And RedHat's habit of renumbering interfaces between reboots... well, you ought to have the MAC hardwired in your network scripts anyway. The thing I love about RedHat is that it works. Plain and simple. I've used it for 10 years now (starting with RedHat 6.2) and never had a situation where it wouldn't install (unlike the Debian 5 install I just tried to do, where it couldn't find the factory-installed (common) disk drive in an IBM 8171 ThinkCentre). It's been stable and easy to maintain version after version. I like things that work the way they're supposed to. I've tested many other flavors for desktops/laptops, and always keep coming back to Fedora. I'd like to give Debian another shot. If it could find my hard drive...

Wednesday, July 21, 2010

Too Configurable...

I was working with a semi-popular IDS system, and discovered that TCP checksum checking is turned off by default. That's bad enough, as an IDS that doesn't check (and drop) packets with a bad TCP checksum is vulnerable to an IDS insertion attack (the scenario where the IDS sees a packet that the host will discard). It gets better... it's configurable anywhere from 0 (check all packets) to 255 (check every 255th packet). What good is TCP checksumming if you're not going to do it on every packet, especially if you're going to skip every 10th, 50th or 255th packet? It only takes one packet with a bad TCP checksum to do an insertion attack, so to me the common-sense default here would be ON, without letting the admin tinker with which packets are checked. Yes, I realize the issue of overhead, but if you're going to check them, the only assurance you have against that particular attack is to check them all. Or you could turn it off altogether and enjoy your blissful ignorance.

Monday, July 5, 2010

Winpcap 4.1.2

A new Winpcap is out. Thanks for the pointer, ISC.

Saturday, June 19, 2010


OSSIM is an open source security information management and correlation tool from a company called AlienVault (there's a pro version too). I installed it on two boxes recently, one using the unattended install, the other using the custom install. It's an incredibly easy app to install. You download an ISO from their web site and make an install CD, boot it up and give it some info (the unattended install asks only for a few basic items like the network config info, a name for the box and a few details on how you want the app configured). The default unattended install sets up the server, sensor and database all on one box. Once the app is installed and rebooted, you'll need to set up your monitoring interfaces (the custom install asks which ones to use) and you're off and running. If you want to use Nagios, you will need to configure that as well. You'll have over 30 apps all properly installed, with a nice dashboard to show your status at a glance, and then you can drill down to investigate events, check your network status, see what hosts are detected from the traffic and more. The box does passive vulnerability assessment using Nessus, and runs Snort, arpwatch, p0f, ntop, Osiris and many others.
I see this as being a great teaching tool for new analysts, as it will allow them to work with a lot of tools quickly without the learning curve of getting them all installed and configured properly and working together. The site for the open source version is here.

Thursday, June 17, 2010

Tune It Like a Fiddle

When's the last time you did a comprehensive review of your IDS for further filtering? If you haven't done it in a while, you might be shocked at the false positive creep. New partner circuits get added, new app servers, maybe your company is using a totally new app? Which brings up another point: not only do you need to review what needs to be filtered, you may need to review what needs to be UNFILTERED as well. If your company wasn't using Citrix, for example, the last time you did a review, you may have all those signatures disabled to optimize the performance of your sensors and reduce overhead. If you work for a smaller company, and those decisions are left to your discretion, as opposed to a group that regularly reviews policy, you'll need to stay aware of what platforms and apps your company uses on a regular basis. Doing regular port scans should alert you to new services opened up, and using the OS scan switch can help determine if there are new platforms you need signatures for.
And don't just do this for your external addresses. As they say, most networks are a Tootsie-Pop. Hard on the outside with a soft chewy center. If an attacker pops a perimeter box, he now has a pivot point to attack further in, depending on how in-depth your defenses and detectors are layered. That's why it's important not to put all your eggs in one basket with just perimeter sensors.
You need sensors in front of your most vital assets, like database servers, HR and payroll boxes and anything with confidential info stored on it. That way, if the attacker eludes your perimeter defenses, you have another opportunity to detect (and stop) her. HIDS and log files are your last line of defense. All that good log data isn't worth anything unless you have a process in place to parse it and alert on it.
Review those signatures... not only can you cut down a lot of white noise, you might find out you didn't know what you were missing.

Saturday, June 12, 2010

Honeynet Challenges

New Honeynet Challenges are available on the Honeynet Project site. The June challenge is VoIP based, so if voice is your cup of tea (or honey), slide over and take a whack at it. These really are great for self-training, to sharpen your skills and find out what you know (and more importantly, what you don't...)

Tuesday, May 25, 2010

Google SSL

Google is in beta with an SSL version of search. Another way of ensuring some privacy online in an age where privacy is becoming scarcer and scarcer (Facebook, anyone?). If I were a blogger or a journalist in a nation where the free dissemination of information is illegal (say, China, North Korea, Iran, etc.), I'd use the SSL version tunneled through Tor. There's no such thing as bulletproof privacy, but every little bit helps...

Tuesday, May 11, 2010


I was amazed at how little ECN traffic I see on a network I'm responsible for. One packet in a fifteen minute period?
Wondering if that's the normal experience for others.. is ECN really that under-utilized?
Anyone else look at this? Oooh... there went another. TWO now. Both SMTP based, which makes sense.. busy mail server, timely delivery and all that..

Monday, April 26, 2010

Snort 2.8.6 Released!

Go here for the Storm Center article from Joel Esler (an ISC handler who also works for Sourcefire). Lots of new additions...

Wednesday, March 24, 2010


This seems to be my month to try out new tools (Jim Clausing would be happy with me), and I'm running another new one as I speak. This one is a web vulnerability scanner called skipfish. It runs on Linux, FreeBSD, MacOSX or Windows, so I'm, of course, running it on one of my *nix test boxes (I don't do security tools on Windows if I can help it). Downloaded the tarball, extracted it, and compiled after installing the one dependency the README said I'd probably need, GNU libidn (funny thing, how reading that documentation always seems to make these installs go smoother!)
I'm running it against a NetSec box, so I created an empty dictionary and used -L to disable brute forcing of extensions it finds, which, if I read the docs right, means I'll just get a nice crawl the first time through. Anyway, it's been mentioned on the SANS lists and even posted in the Storm Center diary. That in and of itself is enough of a recommendation to give it a test run if you need a web test tool (maybe you're a pen tester, or you're responsible for hardening/protecting web servers).
Get it here if you're interested...

Friday, March 19, 2010

Cyber Security Act Part 4

The Cybersecurity Act has been reworked again, removing the so-called "kill switch" for the president, which would have allowed him to shut down key infrastructure segments under attack. Instead, the new version requires the White House to work with the private sector to determine critical networks and how they should be protected. Details from the Post found here.

Friday, March 12, 2010


I recently took BotHunter, found here, for a test drive. I fortunately already had a test box with an interface monitoring a segment I could use, so it was simply a matter of download, install, done.
Set up couldn't be any easier. The java-based installer does all the heavy lifting for you, compiling the binaries for BH, installing a customized version of snort, and using rpm to download any dependencies needed. It then prompts you for ranges of your internal networks, any DNS servers, mail servers and the like to add context to it's results.
It has a GUI console if you prefer, but you can also administer and monitor it via the command line. It uses a weighting system, covered in depth in the docs, to produce a score based on the events it's observed from a host. The higher the score, the more likely the box has been popped and is part of a botnet. Anything over 0.8, it flags for your attention.
It's free, of course, from SRI International, though it's not open source and they retain all rights to the software. You can choose to upload your results to their repository, adding to the overall knowledge of botnets and helping fight the good fight, or you can keep your results local if that would be an issue. The installer even helpfully offers to install Tor, if you would like to upload your results anonymously.
I wouldn't recommend uploading in a corporate environment, for obvious reasons, but in other places, like a home network, research lab or NetSec vendor, adding to the overall info helps the community as a whole. Which is why, by the way, you should participate in DShield if you're not already.
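BotHunter's actual dialog-correlation model is SRI's and isn't published in the tool; purely to illustrate the weighted-evidence-plus-threshold idea, here's a hypothetical scorer. The event names, weights and even the 0.8 cutoff's placement are assumptions for the sketch:

```python
# Hypothetical weighted-evidence scorer in the spirit of BotHunter's
# approach: each distinct observed dialog event contributes a weight,
# and a host whose combined score crosses a threshold gets flagged.
# Event names and weights here are invented for illustration.
EVENT_WEIGHTS = {
    "inbound_exploit": 0.3,
    "egg_download": 0.3,
    "cnc_traffic": 0.35,
    "outbound_scan": 0.25,
    "spam_relay": 0.2,
}

FLAG_THRESHOLD = 0.8  # anything over this gets flagged for attention

def score_host(events):
    """Sum the weights of distinct observed events, capped at 1.0."""
    total = sum(EVENT_WEIGHTS.get(e, 0.0) for e in set(events))
    return min(total, 1.0)

def is_flagged(events):
    return score_host(events) > FLAG_THRESHOLD

# Exploit + malware download + C&C chatter scores 0.95 -> flagged
print(is_flagged(["inbound_exploit", "egg_download", "cnc_traffic"]))  # True
print(is_flagged(["outbound_scan"]))  # False
```

The real engine correlates the *sequence* of infection-lifecycle dialog events, not just their presence, so treat this strictly as a toy.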

Mum's The Word

The firing of the State of Pennsylvania's CISO for discussing a breach of the state's online driver exam scheduling system is a sober reminder to never, ever discuss security incidents unless you've been expressly given the OK. In writing. By someone with the authority to give it. Incidents are usually the realm of the company's public relations department, and decisions are made at the C-suite level. Ouch. That little indiscretion cost him what was probably a decent gig. Details here.

Tuesday, March 9, 2010

A Moment's Reflection

I'm coming up on my 10th anniversary in Network Security, my 15th in Information Technology.
I moved, abruptly, from being the head of a desktop support team to NetSec, in a day. Probably not the usual path one takes to security. I think these days most start out in that area from college, or move over from Infrastructure or the Server Team.
There were no information security people on staff when I moved over. None, in any area. No one had any idea what I should do or even where to find out. So I became a generalist in every area, as well as having to build up each new area from the ground up, with no experience, no help and no training. I didn't get to my first training conference (SANS) until 2002, two years into my new duties.
I got IDS off the ground, then moved on to vulnerability testing, anti-virus, content monitoring, and centralized logging. I wrote policy, procedures on hardening servers and applications, did threat research, incident response and even a little end user awareness writing. Probably others I can't recall.
For all the negatives in never getting to specialize in one area (and consequently becoming an SME, at least to your company), I think all the exposure to different tools and technologies helped some too. Even though the "jack of all trades" gig sometimes gets old, it's instilled a confidence in me I'll never lose. I can dive headlong into a new project, even if I know nothing about it at the outset, believing I can get myself up to speed and accomplish what needs to be done. I've done just that many times out of necessity.
That role, for me, is quickly coming to an end. I'll soon be transitioned out of my generalist duties and into a more siloed position. My old company was bought by a new, much larger company, and our migration to the new networks and ways of doing things is in full swing.
That said, if you're just getting started or will be soon, the way I see the industry going, my advice would be to specialize. I don't see how very many companies in the future, except the very small ones, will be able to get by with a generalist like I was. Find out what really interests you, and hit it hard until you've mastered it. You'll make yourself very valuable to a team somewhere, and you'll go to work each day and do what you love and love what you do.
Diversification is great for stock portfolios, in my opinion. For network security people, not so much.

Thursday, February 25, 2010

Packet Fun

Last week I started playing with NetWitness Investigator, a threat analysis app that makes it very easy to sort and drill down into packets when doing analysis. There's a freeware version (limited to 1 GB pcaps and local collections only). You can download it here. NetWitness runs on Windows or Linux, but the Linux build is available only in the commercial version.

So today I took a look at Mu Dynamics' xtractor, a cloud app with similar capabilities. Their demo movie works through a forensics challenge that asks you to answer eight questions about Ann's online activities. It's quite nifty. The movie is here, along with a download link. xtractor runs on Linux distros and starts a Web server; just point your browser at it. They do say Chrome or Firefox work well; IE, not so much...

Thursday, February 11, 2010

Aurora Disinfect Tool

HBGary, one of the companies working on the forensics of the Aurora attacks against Google, Adobe and others, has released what they call an "inoculation shot" for Aurora: a free tool to scan for and remove the malware. The tool can be found on their site here. There's a good write-up on the investigation to date on Dark Reading, found here.

Monday, February 8, 2010

Packet Captures

If you're looking for packet captures to sharpen your analytical skills, the folks behind Wireshark have a nice site, found at

You'll find captures covering all sorts of protocols (over 60), from the mundane to the esoteric (how about a capture of a line of text sent over STANAG 5066 (S5066)?).

There are lots of sites with packet captures of malicious traffic or war games traffic, but it's also always helpful to keep increasing your knowledge of normal traffic too. As the instructors say, if you don't know what normal looks like, how will you recognize the anomaly?

Oh and if you need some sites with challenge or war games type captures, here's a couple I've come across..

Trojaned Mozilla Plugins

Both the Sothink Web Video Downloader 4.0 and Master Filer add-ons for Firefox on Windows have been found to contain Trojans. Details at the Download Blog post found here.
This raises, again, the question of how you verify the safety of all the gadgets and gizmos you install. It's especially an issue with automated updates and installs via the Web browser, like these Firefox add-ons.
The vast majority of end users trust almost everything they come across and click without giving it a thought, despite all the efforts at end-user education. So how do we protect folks against themselves on the Internet?
Even if you manually download every app, checksum it, and run multiple scanners against it, we know it's still possible to get burned. So how do we find a way to protect folks who are willing to click on any link they come across? Or teach them that just because a site is "good" is no guarantee the Bad Guys haven't compromised it and injected malicious code via a script or a Flash ad, or replaced a good version of a file with a piece of malware?

Monday, February 1, 2010

UDP scanning with NMAP

Fyodor has made a major improvement to UDP scanning in the latest release of nmap. Rather than regurgitate the entire write-up by Rob VandenBrink on the Internet Storm Center, found here, let me summarize: for certain UDP services, nmap will now actually exchange traffic with the service, verifying both that the port is open and that the service is actually running. If you don't know why this was an issue in the past (and still is for any services not included in the new nmap), read Rob's diary entry. He does a great job of simplifying the explanation.
As always, the latest version of nmap can be found at Fyodor's site, found here.
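The idea behind the improvement, protocol-specific payloads that coax an open UDP service into replying instead of inferring state from ICMP unreachables, can be sketched in a few lines. This is my own illustration, not nmap's code, and the host/port in the probe function are whatever you'd be testing:

```python
import socket
import struct

def build_dns_query(name, txn_id=0x1234):
    """Build a minimal DNS A-record query: the kind of protocol-aware
    payload that can elicit a reply from an open UDP/53 service."""
    # Header: ID, flags (standard query, recursion desired), QDCOUNT=1
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def udp_service_responds(host, port, payload, timeout=2.0):
    """Send the payload and report whether anything comes back; a reply
    proves the port is open AND the service is really listening."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(payload, (host, port))
        s.recvfrom(4096)
        return True
    except socket.timeout:
        return False  # open-but-silent, filtered, or closed
    finally:
        s.close()
```

A plain empty UDP datagram to port 53 would usually be ignored, which is exactly why the old open|filtered ambiguity existed; the well-formed query gives the service a reason to answer.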

Wednesday, January 13, 2010

Security Blogs

A few security blogs from well known players in NetSec...

Marty Roesch, author of Snort and CTO of Sourcefire, here

Joel Esler of Sourcefire and ISC handler, here

Richard Bejtlich, author, Director of Incident Response for GE and former head of TaoSecurity, here

Tenable Security, here

Dr. Anton Chuvakin, author, security researcher and consultant, here

RaDaJo blog, Raul Siles, David Perez and Jorge Ortiz, here

Joanna Rutkowska, security researcher, here

This is obviously just a small sampling, but the point is, there's an absolute wealth of information out there, provided by very smart and experienced people. Every time you read one of these blogs or a security website, listen to a podcast, participate in a webcast or do some free online training, you're adding to your cumulative knowledge, increasing your value and making yourself a sharper analyst.

Tuesday, January 12, 2010

2009 Data Breaches

The Identity Theft Resource Center released their yearly report on data breaches, found here.
Malicious attacks surpassed human error for the first time in three years. One shocking stat: of the 498 breaches reported, only six (yes, six!) had any kind of encryption or strong security features guarding the data. Companies continue to fall down on basic steps to safeguard their customers' and clients' data, and it doesn't look like it's getting any better...

Monday, January 11, 2010

SANS AppSec 2010 - San Francisco

Send your developers to learn secure coding. The number one way to guard against vulnerabilities is to eliminate them to begin with!

Identifying TCP Retries

When looking at packet dumps, distinguishing TCP retry packets from network scanning is straightforward. Look for these characteristics:

  1. Source ports will remain the same across all packets, as this is the same connection attempt.
  2. The TCP Sequence numbers will also remain the same, for the same reason.
  3. IP ID numbers will increment, because the sending host is creating a new packet each time.
  4. Time stamps will increase in a predictable pattern. This is due to the TCP back-off algorithm, which waits an increasing amount of time before each retransmission attempt. Usually the interval doubles; for example, 3, then 6, then 12 seconds between attempts.
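The four checks above are easy to automate. Given (timestamp, source port, TCP sequence number, IP ID) tuples pulled from a capture for one source/destination pair, a sketch like this (the tuple layout and the 1.5x-2.5x back-off slack are my own assumptions) decides whether the packets look like retries rather than a scan:

```python
def looks_like_tcp_retries(packets):
    """packets: list of (timestamp, src_port, tcp_seq, ip_id) tuples in
    capture order. True if they match the retry pattern: constant source
    port and sequence number, climbing IP ID, roughly doubling gaps."""
    if len(packets) < 3:
        return False  # need at least two gaps to judge the back-off
    times, ports, seqs, ids = zip(*packets)
    if len(set(ports)) != 1 or len(set(seqs)) != 1:
        return False  # separate connection attempts -> likely a scan
    if any(b <= a for a, b in zip(ids, ids[1:])):
        return False  # IP ID should increment with each fresh packet
    gaps = [b - a for a, b in zip(times, times[1:])]
    if any(g <= 0 for g in gaps):
        return False
    # Back-off: each gap roughly double the previous (with some slack)
    return all(1.5 <= later / earlier <= 2.5
               for earlier, later in zip(gaps, gaps[1:]))

# Classic retry pattern: gaps of 3, 6, then 12 seconds
retries = [(0.0, 50123, 1000, 7), (3.0, 50123, 1000, 8),
           (9.0, 50123, 1000, 9), (21.0, 50123, 1000, 10)]
print(looks_like_tcp_retries(retries))  # True
```

A scan, by contrast, would show varying source ports and sequence numbers and near-constant inter-packet timing, and would fail the first two checks immediately.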

Wednesday, January 6, 2010

Linux SysAdmin Newsletter

nixCraft has a nice newsletter for Linux users with answers to common (and not so common) questions posted by users of the site. You can go to their site to sign up. Today's email included questions like how to set port forwarding in Mac OS X, how to turn on SELinux protection in RedHat/CentOS and for the newer users, how to determine which services are enabled at boot time...
