On Tue, May 20, 2025 at 2:52 PM Nico Pache <npache@xxxxxxxxxx> wrote:
>
> On Tue, May 20, 2025 at 12:06 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> >
> > Background
> > ----------
> >
> > At my current employer, PDD, we have consistently configured THP to
> > "never" on our production servers due to past incidents caused by its
> > behavior:
> >
> > - Increased memory consumption
> >   THP significantly raises overall memory usage.
> >
> > - Latency spikes
> >   Random latency spikes occur due to more frequent memory compaction
> >   activity triggered by THP.
> >
> > These issues have made sysadmins hesitant to switch to "madvise" or
> > "always" modes.
> >
> > New Motivation
> > --------------
> >
> > We have now identified that certain AI workloads achieve substantial
> > performance gains with THP enabled. However, we've also verified that
> > some workloads see little to no benefit - or are even negatively
> > impacted - by THP.
> >
> > In our Kubernetes environment, we deploy mixed workloads on a single
> > server to maximize resource utilization. Our goal is to selectively
> > enable THP for services that benefit from it while keeping it disabled
> > for others. This approach allows us to incrementally enable THP for
> > additional services and assess how to make it more viable in
> > production.
> >
> > Proposed Solution
> > -----------------
> >
> > For this use case, Johannes suggested introducing a dedicated mode [0].
> > In this new mode, we could implement BPF-based THP adjustment for
> > fine-grained control over tasks or cgroups. If no BPF program is
> > attached, THP remains in "never" mode. This solution elegantly meets
> > our needs while avoiding the complexity of managing BPF alongside
> > other THP modes.
> >
> > A selftest example demonstrates how to enable THP for the current task
> > while keeping it disabled for others.
> >
> > Alternative Proposals
> > ---------------------
> >
> > - Gutierrez's cgroup-based approach [1]
> >   - Proposed adding a new cgroup file to control THP policy.
> >   - However, as Johannes noted, cgroups are designed for hierarchical
> >     resource allocation, not arbitrary policy settings [2].
> >
> > - Usama's per-task THP proposal based on prctl() [3]
> >   - Enabling THP per task via prctl().
> >   - As David pointed out, neither madvise() nor prctl() works in
> >     "never" mode [4], making this solution insufficient for our needs.
>
> Hi Yafang Shao,
>
> I believe you would have to invert your logic: disable THP for the
> processes you don't want using it, and set THP="madvise"|"always". I
> have yet to look over Usama's solution in detail, but I believe this is
> possible based on his cover letter.
>
> I also have an alternative solution proposed here:
> https://lore.kernel.org/lkml/20250515033857.132535-1-npache@xxxxxxxxxx/
>
> It's different in that it doesn't give you granular control per
> process or cgroup, or BPF programmability, but it "may" suit your needs
> by taming THP waste and removing the latency spikes caused by
> page-fault-time THP compactions/allocations.

Thank you for developing this feature. I'll review it carefully.

The challenge we face is that our system administration team doesn't
permit enabling THP globally in production by setting it to "madvise"
or "always". As a result, we can only experiment with your feature on
our test servers at this stage. Therefore, our immediate priority isn't
THP optimization, but rather finding a way to safely enable THP in
production first.
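To make the gap concrete, here is a minimal sketch (not taken from any
of the series above) of the only per-process THP knob in mainline
today, the PR_SET_THP_DISABLE prctl. It is purely an opt-out, so it
only has an effect once the global policy is "madvise" or "always",
i.e. exactly the setting our sysadmins won't allow:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_THP_DISABLE
#define PR_SET_THP_DISABLE 41
#endif

int main(void)
{
        /*
         * Opt the calling process out of THP. Per prctl(2), the
         * setting is inherited across fork() and preserved across
         * execve(), so a container launcher could run this before
         * exec'ing a workload known not to benefit from THP.
         */
        if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0))
                perror("prctl(PR_SET_THP_DISABLE)");
        return 0;
}

There is no corresponding per-process opt-in under "never", which is
the gap the proposed BPF mode is meant to fill.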
The kernel team needs a solution that addresses this fundamental
deployment hurdle before we can consider performance improvements.

--
Regards
Yafang