On 07/08/2025 00.57, Kees Cook wrote:
On Wed, Aug 06, 2025 at 10:18:14PM +0600, Svetlana Parfenova wrote:
Preserve the original ELF e_flags from the executable in the core dump
header instead of relying on compile-time defaults (ELF_CORE_EFLAGS or
the value from the regset view). This ensures that the ABI-specific flags in
the dump file match the actual binary being executed.
Save the e_flags field during ELF binary loading (in load_elf_binary())
into the mm_struct, and later retrieve it during core dump generation
(in fill_note_info()). Use this saved value to populate the e_flags in
the core dump ELF header.
Add a new Kconfig option, CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS, to guard
this behavior. Although motivated by a RISC-V use case, the mechanism is
generic and can be applied to all architectures.
In the general case, is e_flags mismatched? i.e. why hide this behind a
Kconfig? Put another way, if I enabled this Kconfig and dumped core from
some regular x86_64 process, will e_flags be different?
The Kconfig option is currently restricted to the RISC-V architecture
because it is not clear to me whether other architectures need the
actual e_flags value from the ELF header. If this option is disabled,
the core dump always uses a compile-time value for e_flags, regardless
of which method is selected (ELF_CORE_EFLAGS or CORE_DUMP_USE_REGSET),
and that constant does not necessarily reflect the actual e_flags of
the running process (at least on RISC-V), which can vary depending on
how the binary was compiled. So I added a third method of obtaining
e_flags that reflects the real value, gated behind a Kconfig option,
as not all users may need it.
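For illustration, the per-binary variation is easy to see from
userspace. The following sketch is not part of the patch; it only uses
the EF_RISCV_* constants from <elf.h>, assumes an ELF64 file, and keeps
error handling minimal. A soft-float and a double-float build of the
same program will report different values here:

#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
	Elf64_Ehdr ehdr;	/* assumes an ELF64 input file */
	FILE *f;

	if (argc < 2 || !(f = fopen(argv[1], "rb")) ||
	    fread(&ehdr, sizeof(ehdr), 1, f) != 1) {
		fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
		return 1;
	}

	printf("e_flags = 0x%x\n", ehdr.e_flags);
	printf("RVC     = %s\n",
	       (ehdr.e_flags & EF_RISCV_RVC) ? "yes" : "no");
	switch (ehdr.e_flags & EF_RISCV_FLOAT_ABI) {
	case EF_RISCV_FLOAT_ABI_SOFT:   puts("float ABI = soft");   break;
	case EF_RISCV_FLOAT_ABI_SINGLE: puts("float ABI = single"); break;
	case EF_RISCV_FLOAT_ABI_DOUBLE: puts("float ABI = double"); break;
	default:                        puts("float ABI = quad");   break;
	}
	fclose(f);
	return 0;
}

readelf -h shows the same information in its Flags line.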
This change is needed to resolve a debugging issue encountered when
analyzing RISC-V core dumps with GDB. GDB inspects the e_flags field
to determine whether optional register sets, such as those of the
floating-point unit, are supported. Without the correct flags, GDB may
warn about and ignore valid register data:
warning: Unexpected size of section '.reg2/213' in core file.
As a result, floating-point registers are not accessible in the debugger,
even though they were dumped. Preserving the original e_flags enables
GDB and other tools to properly interpret the dump contents.
Signed-off-by: Svetlana Parfenova <svetlana.parfenova@xxxxxxxxxxxxx>
---
fs/Kconfig.binfmt | 9 +++++++++
fs/binfmt_elf.c | 26 ++++++++++++++++++++------
include/linux/mm_types.h | 5 +++++
3 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
index bd2f530e5740..45bed2041542 100644
--- a/fs/Kconfig.binfmt
+++ b/fs/Kconfig.binfmt
@@ -184,4 +184,13 @@ config EXEC_KUNIT_TEST
This builds the exec KUnit tests, which tests boundary conditions
of various aspects of the exec internals.
+config CORE_DUMP_USE_PROCESS_EFLAGS
+ bool "Preserve ELF e_flags from executable in core dumps"
+ depends on BINFMT_ELF && ELF_CORE && RISCV
+ default n
+ help
+ Save the ELF e_flags from the process executable at load time
+ and use it in the core dump header. This ensures the dump reflects
+ the original binary ABI.
+
endmenu
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index caeddccaa1fe..e5e06e11f9fc 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1290,6 +1290,11 @@ static int load_elf_binary(struct linux_binprm *bprm)
mm->end_data = end_data;
mm->start_stack = bprm->p;
+#ifdef CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS
+ /* stash e_flags for use in core dumps */
+ mm->saved_e_flags = elf_ex->e_flags;
+#endif
Is this structure actually lost during ELF load? I thought we preserved
some more of the ELF headers during load...
As far as I can tell, the ELF header itself is not preserved beyond
loading. If there's a mechanism I'm missing that saves it, please let me
know.
+
/**
* DOC: "brk" handling
*
@@ -1804,6 +1809,8 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
struct elf_thread_core_info *t;
struct elf_prpsinfo *psinfo;
struct core_thread *ct;
+ u16 machine;
+ u32 flags;
psinfo = kmalloc(sizeof(*psinfo), GFP_KERNEL);
if (!psinfo)
@@ -1831,17 +1838,24 @@ static int fill_note_info(struct elfhdr *elf, int phdrs,
return 0;
}
- /*
- * Initialize the ELF file header.
- */
- fill_elf_header(elf, phdrs,
- view->e_machine, view->e_flags);
+ machine = view->e_machine;
+ flags = view->e_flags;
#else
view = NULL;
info->thread_notes = 2;
- fill_elf_header(elf, phdrs, ELF_ARCH, ELF_CORE_EFLAGS);
+ machine = ELF_ARCH;
+ flags = ELF_CORE_EFLAGS;
#endif
+#ifdef CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS
+ flags = dump_task->mm->saved_e_flags;
+#endif
This appears to clobber the value from view->e_flags. Is that right? It
feels like this change should only be needed in the default
ELF_CORE_EFLAGS case. How is view->e_flags normally set?
view->e_flags is set at compile time, and view points to a const
struct. Overriding e_flags is intentional in both cases
(ELF_CORE_EFLAGS and CORE_DUMP_USE_REGSET) so that the dump carries
the process's actual e_flags, regardless of which method is selected.
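To make the "compile time" point concrete, a regset view is normally a
static const object along these lines (field names as in
include/linux/regset.h; the RISC-V identifiers below are illustrative
and the real initializer may differ), so nothing reached through
task_user_regset_view() can depend on the task being dumped:

/* Sketch only -- simplified from typical arch ptrace code. */
static const struct user_regset_view riscv_user_native_view = {
	.name		= "riscv",
	.e_machine	= EM_RISCV,
	.regsets	= riscv_user_regset,	/* arch regset array */
	.n		= ARRAY_SIZE(riscv_user_regset),
	/* .e_flags stays at whatever build-time value is set here */
};

const struct user_regset_view *task_user_regset_view(struct task_struct *task)
{
	return &riscv_user_native_view;		/* same view for every task */
}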
+
+ /*
+ * Initialize the ELF file header.
+ */
+ fill_elf_header(elf, phdrs, machine, flags);
+
/*
* Allocate a structure for each thread.
*/
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d6b91e8a66d6..39921b32e4f5 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1098,6 +1098,11 @@ struct mm_struct {
unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#ifdef CONFIG_CORE_DUMP_USE_PROCESS_EFLAGS
+ /* the ABI-related flags from the ELF header. Used for core dump */
+ unsigned long saved_e_flags;
+#endif
+
struct percpu_counter rss_stat[NR_MM_COUNTERS];
struct linux_binfmt *binfmt;
--
2.50.1
-Kees
--
Best regards,
Svetlana Parfenova