Hello and thanks for your feedback.

On Thu, 20/03/2025 at 16.39 +0100, Phil Sutter wrote:
> On Fri, Mar 07, 2025 at 06:40:31PM +0100, Guido Trentalancia wrote:
> > I am not familiar with the application layer tools such as
> > NetworkManager.
> >
> > The point is that the underlying issue does not change with
> > auxiliary tools: I believe iptables should not abort setting up all
> > rules, just because one or more of them fail to resolve in DNS.
>
> There is consensus amongst Netfilter developers that skipping rules
> or even parts of them when loading a ruleset is a critical flaw in
> the software because loaded ruleset behaviour is not deterministic
> anymore.

It's the Internet and DNS connectivity that are inherently
non-deterministic. The Internet is a best-effort network, in other
words non-deterministic, so any network filter for the Internet can
only be best-effort. The patch simply makes the netfilter behaviour
fault-tolerant and adaptive to that inherent principle.

> The usual security context demands that behaviour is exactly as
> requested by the user, any bit flipped could disable the whole
> security concept. "We printed a warning" is not an excuse to this.

Printing a warning and recovering in a best-effort manner from an
otherwise unrecoverable failure is the best a network filter can do.
A partial failure is always better than a total failure.

> In order to implement the desired behaviour, just call iptables
> individually for each rule and ignore failures. You could also cache
> IP addresses, try a new lookup during firewall service startup and
> fall back to the cached entry if it fails.

The former is just an expensive trick. The latter might be an
alternative solution to the problem, but it is more complex than
simply rescheduling the netfilter setup after the network (or DNS)
comes back up: you need quite a lot of C code in order to cache DNS
lookups, while you only need a script to reload the netfilter (see
the sketch at the end of this message).

> My personal take is this: If a DNS reply is not deterministic,
> neither is a rule based on it. If it is, one may well hard-code the
> lookup result.

There is no requirement to be deterministic when the underlying
network is inherently non-deterministic. It has already been
discussed that even when DNS Round-Robin is not being used for a
specific host, the host might still change its IP address on a
slowly-varying timescale: in such a scenario, hostname-based
filtering rules significantly simplify network management and
minimize network downtime.

In any case, hostname-based rules are optional and using them or not
is entirely discretionary: the patch does not force the use of
hostname-based rules, it just makes them fault-tolerant.

> > As already said, if one or more rules fail then those specific
> > hosts are most likely unreachable anyway.
>
> No, it just means DNS has failed. The resulting rules use IP
> addresses and there is no guarantee these are not reachable. You are
> making assumptions based on your use-case, but the proposed
> behaviour will affect all use-cases (and there is always that
> special one ... ;).

If a DNS lookup fails, it can be due to a DNS failure, as you said,
but also to a completely unreachable DNS server or a completely
unreachable network (no connectivity at all): so circumstances might
vary.

In the case of an Internet client such as a workstation, most
connections start with a DNS lookup: you generally don't type IP
addresses into your web browser or ftp client.
So, if the DNS lookup fails, whatever the actual reason (a DNS
failure or a network failure), the client request will also most
probably fail, whether or not the resolved IP address is actually
reachable, because the request is most likely based on a hostname.

Static IP addresses are normally only used for connectivity between
servers: in that case, the code in the patch is not used and things
stay as they are.

> Cheers, Phil

I hope this helps...

Cheers,

Guido
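P.S.: for illustration only, below is a minimal sketch of what I mean
by "a script to reload the netfilter". The hook location
(/etc/network/if-up.d/ as found on Debian-style systems) and the
rules file path are just assumptions for the example; the exact
trigger will depend on the distribution and on whatever brings the
network up.

#!/bin/sh
# Hypothetical hook run whenever a network interface comes up
# (for example from /etc/network/if-up.d/ on Debian-style systems).
# It simply retries the ruleset restore, so that hostname-based rules
# which could not be resolved at boot time get another chance once
# connectivity (and DNS) is back.

RULES=/etc/iptables/iptables.rules    # path is an assumption

# With the proposed patch, rules whose lookup still fails are skipped
# with a warning instead of aborting the whole restore.
iptables-restore < "$RULES" || \
        logger -t iptables "best-effort ruleset reload reported failures"

Whatever mechanism triggers it, the point is that rescheduling the
reload is a few lines of shell rather than new caching code inside
iptables itself.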