One of the tools I am using uses encryption/decryption to send out data over the network. I am modifying the tool and I need to be sure that the data is actually being sent in an encrypted form.
Are Wireshark and tcpdump the right tools for the purpose? At which point during the transfer do they capture the network packets?
To filter packets by protocol, include the protocol name in the tcpdump filter expression, and tcpdump will capture only packets of that protocol. For example, to capture only ICMP packets, simply append icmp to the command: tcpdump icmp.
tcpdump operates at layer 2 and above. It can be used to look at Ethernet, FDDI, PPP, SLIP, Token Ring, and any other link type supported by libpcap, which does all of tcpdump's heavy lifting.
When you run the tcpdump command, it captures all packets on the specified interface until you interrupt it with Ctrl-C. With the -c option you can instead capture a specified number of packets, e.g. tcpdump -c 100.
tcpdump is a packet analyzer that is launched from the command line. It can be used to analyze network traffic by intercepting and displaying packets that are being sent or received by the computer it is running on.
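Since both tcpdump and Wireshark are built on libpcap, you can also script the same check yourself. Below is a minimal sketch using the libpcap API: open an interface, install a capture filter, grab a fixed number of packets (the programmatic equivalent of -c), and hex-dump the bytes so you can see whether they look like plaintext. The device name eth0 and the filter expression "tcp port 4000" are placeholder assumptions; substitute whatever your tool actually uses.

    #include <pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        struct bpf_program fcode;
        struct pcap_pkthdr *hdr;
        const u_char *data;
        int got = 0;

        /* Open the capture device (assumed name!) in promiscuous mode. */
        pcap_t *pd = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (pd == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* Compile and install a BPF filter, as tcpdump does with its
         * command-line expression ("icmp", "tcp port 4000", ...). */
        if (pcap_compile(pd, &fcode, "tcp port 4000", 1, PCAP_NETMASK_UNKNOWN) < 0
            || pcap_setfilter(pd, &fcode) < 0) {
            fprintf(stderr, "filter: %s\n", pcap_geterr(pd));
            return 1;
        }

        /* Capture 5 packets (like -c 5) and hex-dump each one. */
        while (got < 5) {
            int rc = pcap_next_ex(pd, &hdr, &data);
            if (rc < 0)          /* error or end of capture */
                break;
            if (rc == 0)         /* read timeout, no packet yet */
                continue;
            printf("packet %d, %u bytes:\n", ++got, hdr->caplen);
            for (bpf_u_int32 i = 0; i < hdr->caplen; i++)
                printf("%02x%c", data[i], (i + 1) % 16 ? ' ' : '\n');
            printf("\n");
        }

        pcap_close(pd);
        return 0;
    }

Build with gcc -o sniff sniff.c -lpcap and run as root. Encrypted payloads should show up as high-entropy gibberish in the hex dump; readable strings mean the data went out in the clear.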
Short answer: packets are tapped at the very end of the software network stack (e.g., on Linux).
Long answer, digging through the code of tcpdump, libpcap, and Linux kernel 3.12:
Both Wireshark and tcpdump use libpcap. For example, tcpdump installs its filter expression with (http://sources.debian.net/src/tcpdump/4.5.1-2/tcpdump.c#L1472):

if (pcap_setfilter(pd, &fcode) < 0)

which in turn installs a packet filter via the setfilter_op and activate_op operations. There are many implementations of these operations, and I think that on recent Linux kernels PF_PACKET will be used, via pcap_activate_linux
libpcap-1.5.3-2/pcap-linux.c#L1287:
/*
* Current Linux kernels use the protocol family PF_PACKET to
* allow direct access to all packets on the network while
* older kernels had a special socket type SOCK_PACKET to
* implement this feature.
* While this old implementation is kind of obsolete we need
* to be compatible with older kernels for a while so we are
* trying both methods with the newer method preferred.
*/
status = activate_new(handle);
...
activate_new(pcap_t *handle)
...
/*
* Open a socket with protocol family packet. If the
* "any" device was specified, we open a SOCK_DGRAM
* socket for the cooked interface, otherwise we first
* try a SOCK_RAW socket for the raw interface.
*/
sock_fd = is_any_device ?
socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_ALL)) :
socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
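You can reproduce that call outside libpcap. The following minimal sketch (assumptions: Linux, run as root) opens the same PF_PACKET/SOCK_RAW socket and hex-dumps the start of each frame; it is only meant to show where tcpdump's data actually comes from, not to replace it.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>        /* htons */
    #include <linux/if_ether.h>   /* ETH_P_ALL */

    int main(void)
    {
        unsigned char frame[65536];
        int i, n;

        /* The same call libpcap makes in activate_new(): a raw packet
         * socket that sees every frame sent or received by the host. */
        int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) {
            perror("socket(PF_PACKET, SOCK_RAW)");  /* needs root/CAP_NET_RAW */
            return 1;
        }

        for (n = 0; n < 5; n++) {
            ssize_t len = recv(fd, frame, sizeof frame, 0);
            if (len < 0) {
                perror("recv");
                break;
            }
            printf("frame of %zd bytes, first 16:", len);
            for (i = 0; i < 16 && i < len; i++)
                printf(" %02x", frame[i]);
            printf("\n");
        }

        close(fd);
        return 0;
    }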
PF_PACKET is implemented in the kernel, in the file net/packet/af_packet.c. Initialization of a PF_PACKET socket is done in packet_do_bind with the register_prot_hook(sk) function (if the device is in the UP state), which calls dev_add_pack from net/core/dev.c to register the hook:
/**
 * dev_add_pack - add packet handler
 * @pt: packet type declaration
 *
 * Add a protocol handler to the networking stack. The passed &packet_type
 * is linked into kernel lists and may not be freed until it has been
 * removed from the kernel lists.
 *
 * This call does not sleep therefore it can not
 * guarantee all CPU's that are in middle of receiving packets
 * will see the new packet type (until the next received packet).
 */

void dev_add_pack(struct packet_type *pt)
{
	struct list_head *head = ptype_head(pt);

	spin_lock(&ptype_lock);
	list_add_rcu(&pt->list, head);
	spin_unlock(&ptype_lock);
}
I think the pf_packet handler, the tpacket_rcv(...) function, will be registered in ptype_all. Hooks registered in ptype_all are called for outgoing packets from dev_queue_xmit_nit ("Support routine. Sends outgoing frames to any network taps currently in use.") with list_for_each_entry_rcu(ptype, &ptype_all, list) { ... deliver_skb ... }; deliver_skb then calls the hook's func, which is tpacket_rcv for libpcap.
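As an illustration of the same mechanism, a toy kernel module (all names hypothetical, sketched against the 3.12-era API quoted above) can register its own handler in ptype_all through the very same dev_add_pack call:

    #include <linux/module.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/if_ether.h>

    /* Called for every frame, like tpacket_rcv; deliver_skb took a
     * reference on the skb for us, so we must release it here. */
    static int tap_rcv(struct sk_buff *skb, struct net_device *dev,
                       struct packet_type *pt, struct net_device *orig_dev)
    {
        pr_info("tap: %u-byte frame on %s\n", skb->len, dev->name);
        kfree_skb(skb);
        return 0;
    }

    /* ETH_P_ALL makes ptype_head() put us on the ptype_all list. */
    static struct packet_type tap_pt = {
        .type = htons(ETH_P_ALL),
        .func = tap_rcv,
    };

    static int __init tap_init(void)
    {
        dev_add_pack(&tap_pt);
        return 0;
    }

    static void __exit tap_exit(void)
    {
        dev_remove_pack(&tap_pt);
    }

    module_init(tap_init);
    module_exit(tap_exit);
    MODULE_LICENSE("GPL");

Once loaded, tap_rcv fires for every frame in both directions, from the two call sites described next.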
dev_queue_xmit_nit is called from dev_hard_start_xmit (line 2539 in net/core/dev.c), which is, AFAIK, the last stage of device-independent handling of outgoing packets in the Linux networking stack.
The same story holds for incoming packets: the ptype_all-registered hooks are called from __netif_receive_skb_core with the same list_for_each_entry_rcu(ptype, &ptype_all, list) { .. deliver_skb .. } loop. __netif_receive_skb_core is called from __netif_receive_skb at the very beginning of the handling of incoming packets.
The Linux Foundation has a good description of the networking stack (http://www.linuxfoundation.org/collaborate/workgroups/networking/kernel_flow); you can see dev_hard_start_xmit in the image http://www.linuxfoundation.org/images/1/1c/Network_data_flow_through_kernel.png (warning, it is huge) at the left side, just under the legend. netif_receive_skb is inside the rightmost lower square ("net/core/dev.c"), which is fed from an IRQ, then NAPI polling or netif_rx, and the only exit from it is netif_receive_skb.
The picture even shows one of the two pf_packet hooks, the leftmost square under the legend ("net/packet/af_packet.c"), for outgoing packets.
What is your tool? How does it connect to the networking stack? If you can locate the tool in the Network_data_flow picture, you will get the answer. For example, Netfilter is hooked (NF_HOOK) only in ip_rcv (incoming), ip_output (local outgoing), and ip_forward (outgoing from routing), just after netif_receive_skb and just before dev_queue_xmit.
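For contrast, here is what hooking via Netfilter looks like: another hypothetical toy module, sketched against the 3.12 hook signature (it changed in 3.13). It attaches at NF_INET_PRE_ROUTING, which runs inside ip_rcv, i.e. one step above the pf_packet taps that tcpdump uses.

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>
    #include <linux/skbuff.h>
    #include <linux/ip.h>

    /* Runs from NF_HOOK(NF_INET_PRE_ROUTING, ...) in ip_rcv(), i.e. after
     * __netif_receive_skb has already fed the pf_packet taps. */
    static unsigned int pre_routing_hook(unsigned int hooknum,
                                         struct sk_buff *skb,
                                         const struct net_device *in,
                                         const struct net_device *out,
                                         int (*okfn)(struct sk_buff *))
    {
        const struct iphdr *iph = ip_hdr(skb);
        pr_info("ip_rcv hook: src %pI4, proto %u\n", &iph->saddr, iph->protocol);
        return NF_ACCEPT;   /* let the packet continue up the stack */
    }

    static struct nf_hook_ops pre_routing_ops = {
        .hook     = pre_routing_hook,
        .owner    = THIS_MODULE,
        .pf       = NFPROTO_IPV4,
        .hooknum  = NF_INET_PRE_ROUTING,
        .priority = NF_IP_PRI_FIRST,
    };

    static int __init hook_init(void)
    {
        return nf_register_hook(&pre_routing_ops);
    }

    static void __exit hook_exit(void)
    {
        nf_unregister_hook(&pre_routing_ops);
    }

    module_init(hook_init);
    module_exit(hook_exit);
    MODULE_LICENSE("GPL");

The practical upshot for your question: anything encrypted by the application (TLS, or your tool's own cipher) is already ciphertext by the time either hook sees it, so a tcpdump or Wireshark capture is exactly the right place to check.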