On 2025-08-08 09:37:26 [+0200], Vlastimil Babka wrote:
> > Given the information I got from here, the comment on the code [1]
> > and an older commit message [2], I suspect CMA somehow influences our
> > problem.
>
> However, kcompactd doesn't perform CMA allocations, only compaction, in a
> mode that does not include ISOLATE_UNEVICTABLE. So this is weird.

As per smaps, the RT task should have all VMAs listed with the "lo" flag.
If mlock() is used, then something like an accidental fork() would remove
it; otherwise it should be there.

At the time of the fault you could add something like

| diff --git a/mm/memory.c b/mm/memory.c
| --- a/mm/memory.c
| +++ b/mm/memory.c
| @@ -4476,6 +4476,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
| 	entry = pte_to_swp_entry(vmf->orig_pte);
| 	if (unlikely(non_swap_entry(entry))) {
| 		if (is_migration_entry(entry)) {
| +
| +			if (!strcmp("tRealtime", current->comm)) {
| +				trace_printk("Migrated: 0x%lx VMA flags: %lx\n",
| +					     vmf->address, vma->vm_flags);
| +			}
| +
| 			migration_entry_wait(vma->vm_mm, vmf->pmd,
| 					     vmf->address);
| 		} else if (is_device_exclusive_entry(entry)) {

to see that the address is gone. Not sure if the PTE flags are of any help
here.

Is it easily possible on the other side (isolate_migratepages(), right?)
to figure out which task a certain address space/page belongs to? That
would show whether a "bad" page is considered for migration.

Sebastian
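On the isolation side, one rough way to answer the "which task does this
page belong to" question would be a reverse-map walk. The helper below is
an untested sketch (not something that exists in the tree): it assumes an
anon folio that the caller has locked, and CONFIG_MEMCG so that mm->owner
is available:

| /* Hypothetical debug helper: print the comm of each mm owner that
|  * maps the given folio.  Untested sketch; the names dbg_note_mapper/
|  * dbg_who_maps are made up for illustration. */
| static bool dbg_note_mapper(struct folio *folio, struct vm_area_struct *vma,
| 			    unsigned long addr, void *arg)
| {
| 	struct task_struct *tsk;
|
| 	rcu_read_lock();
| 	tsk = rcu_dereference(vma->vm_mm->owner);
| 	if (tsk)
| 		trace_printk("folio %p mapped at 0x%lx by %s\n",
| 			     folio, addr, tsk->comm);
| 	rcu_read_unlock();
| 	return true;	/* continue the walk */
| }
|
| static void dbg_who_maps(struct folio *folio)
| {
| 	struct rmap_walk_control rwc = {
| 		.rmap_one = dbg_note_mapper,
| 	};
|
| 	rmap_walk(folio, &rwc);
| }

Calling something like this from the isolation path would need care with
the folio lock, but it would let you match the "Migrated:" trace output
against the task that owned the page when it was picked for migration.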