IPTables do not block IP with ipset immediately

I have the following iptables rule, using an ipset as the source match, to block attacking IPs. But when I add an attacker's IP to the ipset, I still see continuous access from that IP in my nginx access log. Only after a while, maybe 3~5 minutes, does the IP get blocked.

iptables

~$ sudo iptables -nvL --line-numbers
Chain INPUT (policy ACCEPT 317K packets, 230M bytes)
num   pkts bytes target     prot opt in     out     source               destination
1     106K 6004K DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set Blacklist src

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set Blacklist src

Chain OUTPUT (policy ACCEPT 350K packets, 58M bytes)
num   pkts bytes target     prot opt in     out     source               destination

ipset

~$ sudo ipset -L
Name: Blacklist
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536 timeout 60
Size in memory: 13280
References: 2
Members:
xxx.xxx.xxx.xxx(attacker ip) timeout 0

I don't know why the rule does not take effect immediately, which drives me crazy; it feels like the attacker is laughing at me.

I added the ipset rule to iptables with the -I option, which should keep the rule in the first position. So maybe the INPUT chain's default ACCEPT policy is doing the trick?

Please help me out, thanks so much.

BTW.

I use Nginx + Django/uWSGI to deploy my application, and I use a shell script to analyze the nginx log and add offending IPs to the Blacklist ipset.
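A minimal sketch of such a log-analysis script might look like the following (the log path, request threshold, and ban logic are illustrative assumptions, not the asker's actual script):

```shell
#!/bin/sh
# Hypothetical sketch: ban any IP with more than 200 requests in the
# current nginx access log. Path and threshold are assumptions.
LOG=/var/log/nginx/access.log
THRESHOLD=200

# Count requests per client IP (first field of the combined log format).
awk '{ print $1 }' "$LOG" | sort | uniq -c |
while read count ip; do
    if [ "$count" -gt "$THRESHOLD" ]; then
        # timeout 0 makes the entry permanent, overriding the set's
        # default timeout of 60 seconds; -exist suppresses the error
        # if the IP is already in the set.
        ipset add Blacklist "$ip" timeout 0 -exist
    fi
done
```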

asked Jul 30 '17 by Fogmoon

1 Answer

The reason a firewall rule may have no immediate effect on blocking traffic is often stateful inspection of packets (connection tracking).

It would be inefficient for the firewall to fully analyse every single packet that arrives on the wire. So, for performance reasons, the rules the user creates often apply only to the initial packets that establish a connection (TCP's SYN, SYN+ACK, ACK handshake). The connection is then effectively whitelisted (more precisely, it is the connection-tracking state created under the original rule that is whitelisted) until it is terminated (FIN or RST).
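A typical stateful iptables ruleset illustrates this: an ESTABLISHED,RELATED accept rule placed before the filtering rules means packets belonging to already-tracked connections never reach the rules below it. (Whether such a rule exists on the asker's machine is not shown in the question; this is a standard pattern, not their configuration.)

```shell
# Accept packets belonging to connections conntrack has already
# validated; such packets bypass every rule added after this one.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Later rules, such as an ipset-based DROP, only ever see packets
# that start a *new* connection.
iptables -A INPUT -m set --match-set Blacklist src -j DROP
```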

What likely happens here is that, due to keep-alive connections and pipelining, which nginx handles very well, a single established connection is used to issue and process multiple independent HTTP requests.

So, to fix the issue, you could either disable pipelining and keep-alives in nginx (not a good idea, as it will hurt performance), or drop the existing whitelisted connections, e.g., with something like tcpdrop(8) on *BSD; Linux has equivalent tools as well.
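On Linux, two ways to kill the attacker's already-established connections (the IP below is a documentation-range placeholder; conntrack(8) ships in the conntrack-tools package, and ss -K needs kernel support for socket destruction):

```shell
# Delete the connection-tracking entries originating from the attacker,
# so its subsequent packets are treated as new and hit the DROP rule.
conntrack -D --src 203.0.113.7

# Alternatively, forcibly close the matching TCP sockets
# (requires CONFIG_INET_DIAG_DESTROY enabled in the kernel).
ss -K dst 203.0.113.7
```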

However, if you're simply having an issue with a single client performing too many requests and thereby overloading your backend, then the appropriate course of action may be to rate-limit clients by IP address with nginx's standard limit_req_zone and limit_req directives. (Note, however, that some of your legitimate users may be behind carrier-grade NAT, so tune the limits carefully to ensure false positives won't be an issue.)
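A minimal limit_req configuration might look like this (the zone name, rate, and burst values are illustrative, not recommendations for the asker's workload):

```nginx
# nginx.conf fragment (illustrative values)
http {
    # Track clients by IP in a 10 MB shared zone; allow 10 requests/second.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # Permit short bursts of up to 20 requests before rejecting
            # excess requests (503 by default; see limit_req_status).
            limit_req zone=perip burst=20 nodelay;
        }
    }
}
```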

answered Oct 26 '22 by cnst