Hello,

I'm observing that GCC compilation slows down significantly when the system is starved of RAM. I understand that some slowdown is expected under memory pressure, but I'm trying to determine whether this level of degradation is expected GCC behaviour or is primarily due to Linux kernel memory management.

What I observe:

* During large builds (make -jN), when total RAM is nearly or fully exhausted, compilation slows drastically. The same source file compiles in ~20s when free memory is available, versus 40-60s once we reach the memory threshold (if the compiler is not killed by the OOM killer in the meantime).
* Sometimes the Linux kernel's OOM killer is triggered, mostly by processes other than gcc itself, and gcc is then selected to be killed, e.g.:

[Fri Apr 18 16:14:49 2025] python invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=815
[Fri Apr 18 16:14:49 2025] Memory cgroup out of memory: Killed process 108353 (cc1plus) total-vm:5550480kB, anon-rss:5421224kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:10876kB oom_score_adj:815

* The system has swap disabled.

Questions:

* Is this performance degradation considered normal for GCC under memory pressure?
* Is there any GCC-specific tuning, or are there flags, that can help in memory-constrained environments? (I'm aware of ggc-min-expand and ggc-min-heapsize, but in most cases these reduce memory consumption at the cost of higher user time; a sketch of how I pass them is at the end of this mail.)
* I want to understand whether this observation is purely an outcome of the kernel's memory management, or whether GCC itself is aware that it is nearing the RAM limit and somehow "slows down".

System info:

* GCC version: gcc (GCC) 13.1.0
* OS: AlmaLinux release 9.3

Any insights into whether this is GCC-internal behaviour or more likely tied to Linux's memory management would be appreciated.

BR,
Krystian
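
P.S. For reference, this is roughly how I pass those garbage-collector params when experimenting; the -j count and the numeric values below are only illustrative placeholders, not recommendations:

    # Lower the GC thresholds so cc1plus collects more often and keeps a smaller heap,
    # trading extra user time for lower peak memory per compiler process.
    make -j4 CXXFLAGS="-O2 --param ggc-min-expand=10 --param ggc-min-heapsize=32768"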