On Fri, Mar 07, 2025 at 06:40:31PM +0100, Guido Trentalancia wrote:
> I am not familiar with the application layer tools such as
> NetworkManager.
>
> The point is that the underlying issue does not change with auxiliary
> tools: I believe iptables should not abort setting up all rules, just
> because one or more of them fail to resolve in DNS.

There is consensus amongst Netfilter developers that skipping rules, or
even parts of them, when loading a ruleset is a critical flaw in the
software, because the loaded ruleset's behaviour is no longer
deterministic. The usual security context demands that behaviour is
exactly as requested by the user; a single flipped bit could disable the
whole security concept. "We printed a warning" is no excuse for this.

To implement the desired behaviour, just call iptables individually for
each rule and ignore failures. You could also cache IP addresses:
attempt a fresh lookup during firewall service startup and fall back to
the cached entry if it fails.

My personal take is this: if a DNS reply is not deterministic, neither
is a rule based on it. If it is deterministic, one may as well hard-code
the lookup result.

> As already said, if one or more rules fail then those specific hosts
> are most likely unreachable anyway.

No, it just means DNS has failed. The resulting rules use IP addresses,
and there is no guarantee those addresses are unreachable. You are
making assumptions based on your use-case, but the proposed behaviour
would affect all use-cases (and there is always that special one ... ;).

Cheers,
Phil
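P.S.: The per-rule loading with a DNS cache fallback could be sketched
roughly like this. The host names and cache directory are hypothetical,
and the iptables command is only echoed here for illustration; drop the
echo to actually apply the rules:

```shell
#!/bin/sh
# Sketch: resolve each host at service startup, fall back to a cached
# address if DNS fails, and load rules one at a time so a single
# failure does not abort the rest.

CACHE=${FIREWALL_CACHE_DIR:-/tmp/firewall-hosts}  # hypothetical cache dir
mkdir -p "$CACHE"

add_host_rule() {
    host=$1
    # Try a fresh lookup; getent prints "ADDRESS ..." lines on success.
    addr=$(getent ahostsv4 "$host" | awk '{print $1; exit}')
    if [ -n "$addr" ]; then
        printf '%s\n' "$addr" > "$CACHE/$host"   # refresh the cache
    elif [ -r "$CACHE/$host" ]; then
        addr=$(cat "$CACHE/$host")               # fall back to cached entry
    else
        echo "warning: no address for $host, skipping rule" >&2
        return 0                                 # ignore this failure
    fi
    # One iptables call per rule; echoed here instead of executed.
    echo iptables -A INPUT -s "$addr" -j ACCEPT
}

# A failing lookup with no cached entry only skips that one rule.
for h in host1.example.com host2.example.com; do
    add_host_rule "$h"
done
```

Each invocation either emits one rule or prints a warning and moves on,
so the rest of the ruleset still loads deterministically from whatever
addresses were resolvable or cached.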