On Mon, Jul 28, 2025 at 07:23:03PM +0200, Thomas Gleixner wrote:
> On Mon, Jul 28 2025 at 01:24, Chris Li wrote:
> > The liveupdate devices are already initialized by the kernel before the
> > kexec. During the kexec the device is still running. Avoid write to the
> > liveupdate devices during the new kernel boot up.
>
> This change log is way too meager for this kind of change.
>
>   1) You want to explain in detail how this works.
>
>      "initialized by the kernel before the kexec" is as vague as it gets.
>
>   2) Avoid write ....
>
>      Again this lacks any information how this is supposed to work
>      correctly.
>
> >  drivers/pci/ats.c            |  7 ++--
> >  drivers/pci/iov.c            | 58 ++++++++++++++++++------------
> >  drivers/pci/msi/msi.c        | 32 ++++++++++++-----
> >  drivers/pci/msi/pcidev_msi.c |  4 +--
> >  drivers/pci/pci-acpi.c       |  3 ++
> >  drivers/pci/pci.c            | 85 +++++++++++++++++++++++++++++---------------
> >  drivers/pci/pci.h            |  9 ++++-
> >  drivers/pci/pcie/aspm.c      |  7 ++--
> >  drivers/pci/pcie/pme.c       | 11 ++++--
> >  drivers/pci/probe.c          | 43 +++++++++++++++-------
> >  drivers/pci/setup-bus.c      | 10 +++++-
>
> Then you sprinkle this stuff into files, which have completely different
> purposes, without any explanation for the particular instances why they
> are supposed to be correct and how this works.

Yeah, everything needs to be very carefully explained.

For instance, I'm not sure we should be doing *anything* to the MSI. Why
did you think so?

MSI should be fully cleared by the new kernel, and the new VFIO should
re-establish all the MSI routing from scratch as part of adopting the
device. We already accept that any interrupts are lost during the kexec
process, so what reason is there to do anything except start up the new
kernel with MSI fully disabled and cleared? (A sketch of what I'd expect
that to look like is at the end of this mail.)

If otherwise, it should be explained why we can't work this way - and
then explain how the new kernel will adopt the inherited operating MSI
(hint: I doubt it can) without disrupting it.

Same remark for everything. Explain in the commits, and perhaps in a
well placed comment, why anything needs to be done and why exactly we
can't use the cold boot flow for each item. eg:

  "We can't use the cold boot flow for BAR sizing because BAR sizing
   requires changing the BAR register and that will break ongoing P2P
   DMAs."

  "We can't use the cold boot flow for bridge windows because changing
   the bridge windows in any way will break ongoing P2P DMAs."

(though you also need to explain why the cold boot flow would change the
bridge windows)

etc etc.

There is also some complication here as the iommu driver technically
owns some of the PCI state. We really don't want the PCI core to change
it, but we do need the iommu driver to affirm what the in-use state
should be, because it is responsible for cleaning it up.

This may actually require some restructuring of the iommu driver/PCI
core interfaces to switch from an enable/disable language to a "target
state" language. Ie "ATS shall be on and the ATS page size shall be X".
(A rough sketch of that interface shape is also at the end of this
mail.)

This series is very big, so I would probably try to break it up into
smaller chunks. Like you don't need to preserve bridge windows and BARs
if you don't support P2P. You don't need to worry about ATS and PASID if
you don't support those, etc, etc.

Yes, in the end all of it needs to be supported, but going bit by bit
will be easier for people to understand.

Basic VFIO support with a basic IOMMU using basic PCI with no P2P is the
simplest thing you can do, and I think it needs surprisingly little
preservation.
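To make the MSI point concrete, here is a minimal sketch of what I'd
expect the new kernel's cold path to do with an inherited device. The
helper name is made up; it only uses the standard config-space
accessors:

	#include <linux/pci.h>

	/*
	 * Hypothetical helper: quiesce inherited MSI state on a
	 * liveupdate device so the new kernel starts from the same
	 * place as a cold boot. We already accept losing interrupts
	 * across kexec, so just clear the MSI enable bit; VFIO then
	 * rebuilds all the routing from scratch.
	 */
	static void liveupdate_quiesce_msi(struct pci_dev *dev)
	{
		u16 ctrl;

		if (!dev->msi_cap)	/* no MSI capability */
			return;

		pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS,
				     &ctrl);
		ctrl &= ~PCI_MSI_FLAGS_ENABLE;	/* MSI fully disabled */
		pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS,
				      ctrl);
	}

After that, the new VFIO re-establishes the MSI routing exactly as it
would on a cold boot, and nothing in the MSI core needs a liveupdate
special case.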
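And a rough sketch of the "target state" interface shape. None of these
names exist today, this is purely illustrative:

	#include <linux/pci.h>

	/*
	 * Hypothetical: instead of enable/disable calls, the iommu
	 * driver declares the full state the device must be in. The
	 * PCI core either programs that state (cold boot) or affirms
	 * that the preserved device already matches it (liveupdate).
	 * The iommu driver stays responsible for cleaning it up.
	 */
	struct pci_iommu_target_state {
		bool	ats_enabled;	/* "ATS shall be on" */
		u8	ats_page_shift;	/* "ATS page size shall be X" */
		bool	pasid_enabled;
		bool	pri_enabled;
	};

	int pci_set_iommu_target_state(struct pci_dev *pdev,
				       const struct pci_iommu_target_state *st);

The point is that the PCI core can compare the declared target against
what the device is already doing, instead of blindly re-running the
enable sequence.

Jason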