On Tue, 1 Jul 2025, Matthew W Carlis wrote:

Hi Matthew,

I have a few simple style related comments to the patch itself and
questions about this scenario below.

The wording in the shortlog (in subject) sounds a bit clumsy to me,
perhaps change it to something like this:

PCI: Don't use Target Speed quirk if device is not ASM2824

> The pcie_failed_link_retrain() was added due to a behavior observed with
> a very specific set of circumstances which are in a comment above the
> function. The "quirk" is supposed to force the link down to Gen1 in the
> case where LTSSM is stuck in a loop or failing to train etc. The problem
> is that this "quirk" is applied to any bridge & it can often write the
> Gen1 TLS (Target Link Speed) when it should not, leaving the port in
> a state that will result in a device linking up at Gen1 when it should not.
> Incorrect action by pcie_failed_link_retrain() has been observed with a
> variety of different NVMe drives using U.2 connectors & in multiple different
> hardware designs: directly attached to the root port, downstream of a
> PCIe switch (Microchip/Broadcom), with different generations of Intel CPU.
> All of these systems were configured without power controller capability.
> They were also all in compliance with the Async Hot-Plug Reference model in
> PCI Express® Base Specification Revision 6.0 Appendix I. for OS controlled
> DPC Hot-Plug.
> The issue appears to be more likely to hit when using
> OOB PD (out-of-band presence detect), but has also been observed without
> OOB PD support ('DLL State Changed' or 'In-Band PD').
> Powering off or power cycling the slot via an out-of-band power control
> mechanism with OOB PD is extremely likely to hit since the kernel would
> see that slot presence is true. Physical hot-insertion is also extremely
> likely to hit this issue with OOB PD with U.2 drives due to timing
> between presence assertion and the actual power-on/link-up of the NVMe
> drive itself. When the device eventually does power up the TLS would
> have been left forced to Gen1. This is similarly true for the case of
> power cycling or powering off the slot.
> The exact circumstances for when this issue has been hit in a system without
> OOB PD haven't been fully understood due to having fewer reproductions
> as well as having reverted this patch for those configurations.

Paragraphs should be separated with empty lines and started without
spaces as indent.

This description did not answer the key question: why does
pcie_lbms_seen() return true in these cases, which is required for
2.5GT/s to be set for the bridge? Is it a stale indication? Would LBMS
get cleared but the quirk runs too soon to see that? Is this mainly
related to some artificial test that rapidly fires one event after
another (which is known to confuse the quirk)? ...I mean, you say
"extremely likely".

I suppose when the problem occurs and the bridge remains at 2.5GT/s, is
it possible to restore the higher speed using the pcie_cooling device
associated with the bridge / bwctrl? You can find the correct cooling
device with this:

grep -H . /sys/class/thermal/cooling_device*/type | grep PCIe_

...and then write to cur_state.
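
For example, roughly like this (a sketch only: cooling_device4 below is
a made-up index, use whichever device the grep above reports for this
bridge, and my assumption is that state 0 means no speed limitation, so
the port should be allowed to retrain to its highest supported speed):

  # See how far the speed is currently being limited
  cat /sys/class/thermal/cooling_device4/cur_state
  cat /sys/class/thermal/cooling_device4/max_state
  # Lift the limitation entirely and let the link retrain upwards
  echo 0 > /sys/class/thermal/cooling_device4/cur_state

It would be interesting to know whether the link then comes back up at
the expected speed or stays stuck at 2.5GT/s.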
> Signed-off-by: Matthew W Carlis <mattc@xxxxxxxxxxxxxxx>
> ---
>  drivers/pci/quirks.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index d7f4ee634263..39bb0c025119 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -100,6 +100,8 @@ int pcie_failed_link_retrain(struct pci_dev *dev)
>  	};
>  	u16 lnksta, lnkctl2;
>  	int ret = -ENOTTY;

As per the coding style, please add an empty line after the local
variables.

> +	if (!pci_match_id(ids, dev))
> +		return ret;

-- 
 i.