I am trying to count the GET requests hitting my server, using tshark.
I run the following command to filter incoming traffic and capture only GET requests:
/usr/sbin/tshark -b filesize:1024000 -b files:1 \
'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-w samples.pcap -R 'http.request.method == "GET"'
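For reference, the BPF expression above matches packets whose TCP payload length is non-zero: total IP datagram length, minus the IP header length, minus the TCP header length. A quick sanity check of that arithmetic with made-up header byte values (the numbers below are illustrative examples, not captured data):

```shell
# ip[2:2]       = total IP datagram length in bytes
# ip[0] & 0xf   = IP header length in 32-bit words, so <<2 converts to bytes
# tcp[12] & 0xf0 is the TCP data offset nibble in the high bits; >>2 yields bytes
ip_total=60         # example: 60-byte IP datagram
ip_byte0=0x45       # version 4, IHL 5 (5 * 4 = 20 bytes)
tcp_byte12=0x50     # data offset 5 (5 * 4 = 20 bytes)
payload=$(( ip_total - ((ip_byte0 & 0xf) << 2) - ((tcp_byte12 & 0xf0) >> 2) ))
echo "$payload"     # 20 bytes of TCP payload, so this packet passes the filter
```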
As you can see, I told tshark to store the filtered results in a single file named samples.pcap with a maximum size of 1 GB.
The problem is that when I open the pcap file, I see that tshark stored all the traffic there:
3245 172.692247 1.1.1.1 -> 2.2.2.2 HTTP [TCP Retransmission] Continuation or non-HTTP traffic
3246 172.730928 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3247 172.731944 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3248 172.791934 1.1.1.1 -> 2.2.2.2 HTTP GET /services/client/client.php?cnc=13 HTTP/1.1
3249 172.825303 1.1.1.1 -> 2.2.2.2 HTTP HTTP/1.1 200 OK [Unreassembled Packet [incorrect TCP checksum]]
3250 172.826329 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3251 172.826341 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3252 172.826347 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3253 172.826354 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
3254 172.826359 1.1.1.1 -> 2.2.2.2 HTTP Continuation or non-HTTP traffic
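One workaround to consider (a sketch, untested here; interface and file names are placeholders) is to capture with only the BPF filter, then apply the HTTP display filter offline in a second pass. Which second-pass option works depends on the tshark version:

```shell
# Pass 1: raw capture with the BPF filter, autostop at ~1 GB
/usr/sbin/tshark -i eth0 -f 'tcp port 80' -a filesize:1024000 -w raw.pcap

# Pass 2 (older tshark): -R works when reading from a file
tshark -r raw.pcap -R 'http.request.method == "GET"' -w get_only.pcap

# Pass 2 (newer tshark): -R alone is rejected; use -Y or two-pass -2 -R
# tshark -r raw.pcap -Y 'http.request.method == "GET"' -w get_only.pcap
```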
My traffic volume is really high: in 10 minutes the pcap file grows to 950 MB, and it takes about 4 minutes to parse. The interesting thing is that when I run it without storing to a local file (tshark still buffers under /tmp):
/usr/sbin/tshark \
'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R 'http.request.method == "GET"':
3.776587 1.1.1.1 -> 2.2.2.2 HTTP GET /services/client/client.php?cnc=13 HTTP/1.1
4.775624 1.1.1.1 -> 2.2.2.2 HTTP GET /services/client/clsWebClient.php HTTP/1.1
8.804702 1.1.1.1 -> 2.2.2.2 HTTP GET /services/client/client.php?cnc=13 HTTP/1.1
It works, but in this case I end up with several huge (1 GB+) temporary files under /tmp.
Did I miss something?
Thank you
=======================================================
Edit
Lars asked me to add -f:
sudo /usr/sbin/tshark -T fields -e 'http.request.uri contains "cnc=13"' \
-b filesize:1024000 -b files:1 \
-f 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-w samples.pcap
That doesn't help; samples.pcap still stores all the traffic:
74 6.908388 172.20.0.23 -> 89.78.170.96 HTTP Continuation or non-HTTP traffic
75 6.908394 172.20.0.23 -> 89.78.170.96 HTTP Continuation or non-HTTP traffic
This seems to work when you want to combine -w with a BPF packet filter (i.e. what you would put after -f):
tcpdump -nli en1 -w - 'tcp port 80' | tshark -i - -R'http.request.method == "GET"'
(Replacing the initial tcpdump with tshark results in this error on my local system: tshark: Unrecognized libpcap format.)
Saving the result of a read filter (-R) while capturing (or while reading a capture and writing the result back out) hasn't been supported since version 1.4.0 (see: http://ask.wireshark.org/questions/10397/read-filters-arent-supported-when-capturing-and-saving-the-captured-packets ). Presumably pre-1.4.0 versions would allow writing to pcap while limiting output with -b (I haven't tested that).
If you just want the text output of the -R filter (as opposed to pcap output), the command above should be your solution.
To limit your output (i.e. you mention you just want to take a sample), you can use head -c <bytes> at any point in the processing pipeline:
tcpdump -nli en1 -w - 'tcp port 80' | \
tshark -i - -R'http.request.method == "GET"' | \
head -c 1024000 > output.txt
to produce a 1024000-byte text output file called output.txt, or
tcpdump -nli en1 -w - 'tcp port 80' | \
head -c 1024000 | \
tshark -i - -R'http.request.method == "GET"' > output.txt
to process 1024000 bytes of pcap input that was prefiltered for TCP port 80, and write the text output to a file called output.txt.
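Note that head -c cuts the stream at an exact byte count, which is fine for text output but will truncate the last packet mid-record when applied to a pcap stream. A quick illustration of the byte-exact truncation using a synthetic stream (no capture needed; the request line is copied from the question's output):

```shell
# Generate an endless stream of sample request lines, keep exactly 1024 bytes
yes 'GET /services/client/client.php?cnc=13 HTTP/1.1' | head -c 1024 > sample.txt
wc -c < sample.txt   # exactly 1024 bytes, regardless of line boundaries
```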
Well, don't use -w; it saves the raw packet data. Use the shell redirect operator ">" instead to send tshark's text output to a destination file.