On Tue, Jul 01, 2025 at 11:38:44AM -0500, Bjorn Helgaas wrote:
> > No. The PERST# delay should be handled by the glue drivers since
> > they are the ones controlling the PERST# line. Doing an
> > unconditional wait for both the cases in DWC core, seems wrong to
> > me.
>
> It ends up being a little bit weird that the delay is in the DWC core
> (dw_pcie_wait_for_link()) for ports that support fast links, and in
> the glue drivers otherwise. It would be easier to verify and maintain
> if the delay were always in the DWC core.
>
> If we had a dw_pcie_host_ops callback for PERST# deassertion, the
> delay could be in the DWC core, but I don't know if there's enough
> consistency across drivers for that to be practical.

Currently, there is not much consistency between the glue drivers, so
adding a DWC core API to assert/deassert PERST# sounds like a good idea
to me. The callback could even be supplied a struct gpio_desc pointer.

As I mentioned in my previous email, these drivers:

https://github.com/torvalds/linux/blob/v6.16-rc4/drivers/pci/controller/dwc/pci-imx6.c#L885
https://github.com/torvalds/linux/blob/v6.16-rc4/drivers/pci/controller/dwc/pcie-qcom.c#L294
https://github.com/torvalds/linux/blob/v6.16-rc4/drivers/pci/controller/dwc/pcie-keembay.c#L89

seem to have a 100 ms delay after PERST# has been deasserted, but there
are of course more glue drivers, and many of them will not have a 100 ms
wait _after_ PERST# is deasserted. (All glue drivers seem to have a
delay between asserting and deasserting PERST#.)

Right now, e.g. qcom will have a 100 ms delay both after deasserting
PERST# and after link up. (However, based on the supported link speed,
only one of the delays should be needed.)
However, my main concern is not that qcom waits twice; it is the
drivers that do not have a 100 ms delay after PERST# has been
deasserted. Before commit 470f10f18b48 ("PCI: Reduce
PCIE_LINK_WAIT_SLEEP_MS"), those drivers might have been "saved" by the
ridiculously long PCIE_LINK_WAIT_SLEEP_MS. Now that we sleep for less
time in each iteration when polling for link up, the drivers that do
not have a 100 ms delay after PERST# has been deasserted might actually
see regressions, because the (now reduced) PCIE_LINK_WAIT_SLEEP_MS time
is no longer "saving" them.

If we don't want to make the PCIE_RESET_CONFIG_WAIT_MS wait
unconditional (i.e. not care about the supported link speed), then
perhaps we should drop 470f10f18b48 ("PCI: Reduce
PCIE_LINK_WAIT_SLEEP_MS") from pci/next?

Kind regards,
Niklas