On Thu, Jun 26, 2025 at 01:57:59PM +1000, Dave Chinner wrote:
> writeback errors. Because scientists and data analysts that wrote
> programs to chew through large amounts of data didn't care about
> persistence of their data mid-processing. They just wanted what they
> wrote to be there the next time the processing pipeline read it.

That's only going to work if your RAM is as large as your permanent
storage :)

> IOWs, checking for a past writeback IO error is as simple as:
>
> 	if (sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WAIT_BEFORE) < 0) {
> 		/* An unreported writeback error was pending on the file */
> 		wb_err = -errno;
> 		......
> 	}
>
> This does not cause new IO to be issued, it only blocks on writeback
> that is currently in progress, and it has no data integrity
> requirements at all. If the writeback has already been done, all it
> will do is sweep residual errors out to userspace.....

Not quite.  This will still wait for all I/O on the range, and given
that sync_file_range treats a 0 length as the entire file, that might
actually do a significant amount of waiting.

But yes, it's the closest we get right now.
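
For reference, a minimal self-contained sketch of that pattern might
look like the following (the helper name check_wb_error() is made up
for illustration, and _GNU_SOURCE is assumed for the sync_file_range()
prototype in <fcntl.h>):

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <errno.h>

	/* Returns 0 if no pending writeback error, -errno otherwise. */
	static int check_wb_error(int fd)
	{
		/*
		 * offset 0 / nbytes 0 covers the whole file, so this can
		 * block on any writeback currently in flight before it
		 * returns, as noted above.
		 */
		if (sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WAIT_BEFORE) < 0)
			return -errno;
		return 0;
	}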