On Thu, Jun 26, 2025 at 03:25:21AM -0700, Christoph Hellwig wrote:
> On Thu, Jun 26, 2025 at 01:57:59PM +1000, Dave Chinner wrote:
> > writeback errors. Because scientists and data analysts that wrote
> > programs to chew through large amounts of data didn't care about
> > persistence of their data mid-processing. They just wanted what they
> > wrote to be there the next time the processing pipeline read it.
>
> That's only going to work if your RAM is as large as your permanent
> storage :)

No, the old behaviour worked just fine with data sets larger than RAM.

When there was a random writeback error in a big data stream, only those
pages remained dirty and so never got tossed out of RAM. Hence when a
re-read of that file range occurred, the data was already in RAM and the
read succeeded, regardless of the fact that writeback had been failing.

IOWs, the behavioural problems the user is reporting are present because
we got rid of the historic XFS writeback error handling (leave the dirty
pages in RAM and retry again later) and replaced it with the historic
Linux behaviour (toss the data out and mark the mapping with an error).

The result of this change is exactly what the OP is having problems
with: a re-read of a range that had a writeback failure returns zeroes
or garbage, not the original data. If we had kept the original XFS
behaviour, the user applications would handle these flakey writeback
failures just fine...

Put simply: we used to have more robust writeback failure handling than
we do now. That could (and probably should) be considered a
regression...

-Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
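[Editor's note: to make the user-visible difference concrete, here is a
minimal userspace sketch, not from the original mail, of the
write/fsync/re-read pattern being discussed. It assumes the file lives
on a filesystem whose backing device has been set up to fail writes
(for example via a dm-error or dm-flakey target); the path
/mnt/test/datafile is a placeholder, and the program itself does not
create the failure condition.]

/*
 * Illustrative sketch only: write a block, force writeback (which is
 * expected to fail on the bad device), then re-read the same range.
 * With the old XFS behaviour the dirty pages stayed in the page cache,
 * so the re-read returns the written data despite the failed writeback.
 * With the current behaviour the pages may be tossed after the error,
 * and the re-read can return stale or zeroed data from disk.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define TEST_FILE "/mnt/test/datafile"  /* assumed path on the failing device */
#define LEN 4096

int main(void)
{
        char wbuf[LEN], rbuf[LEN];
        int fd = open(TEST_FILE, O_RDWR | O_CREAT, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        memset(wbuf, 'A', LEN);
        if (pwrite(fd, wbuf, LEN, 0) != LEN) {
                perror("pwrite");
                return 1;
        }

        /* Force writeback; the EIO from the failing device surfaces here. */
        if (fsync(fd) < 0)
                fprintf(stderr, "fsync failed: %s\n", strerror(errno));

        /* Re-read the range that just failed writeback and compare. */
        if (pread(fd, rbuf, LEN, 0) != LEN) {
                perror("pread");
                return 1;
        }
        printf("re-read %s the written data\n",
               memcmp(wbuf, rbuf, LEN) == 0 ? "matches" : "does NOT match");

        close(fd);
        return 0;
}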