On Wed, Jun 25, 2025 at 03:46:56PM -0600, Manivannan Sadhasivam wrote:
> On Wed, Jun 11, 2025 at 08:48:51AM +0200, Mike Looijmans wrote:
> >
> > Met vriendelijke groet / kind regards,
> >
> > Mike Looijmans
> > System Expert
> >
> > TOPIC Embedded Products B.V.
> > Materiaalweg 4, 5681 RJ Best
> > The Netherlands
> >
> > T: +31 (0) 499 33 69 69
> > E: mike.looijmans@xxxxxxxx
> > W: www.topic.nl
> >
> > Please consider the environment before printing this e-mail
> >
> > On 10-06-2025 20:57, Bjorn Helgaas wrote:
> > > On Tue, Jun 10, 2025 at 04:39:03PM +0200, Mike Looijmans wrote:
> > > > When the driver loads, the transceiver and endpoint may still be
> > > > setting up a link. Wait for that to complete before continuing.
> > > > This fixes the PCIe core not working when the PL bitstream is
> > > > loaded from userspace. Existing reference designs worked because
> > > > the endpoint and PL were initialized by a bootloader. If the
> > > > endpoint power and/or reset is supplied by the kernel, or if the
> > > > PL is programmed from within the kernel, the link won't be up yet
> > > > and the driver just has to wait for link training to finish.
> > > >
> > > > As the PCIe spec allows up to 100 ms to establish a link, we'll
> > > > allow up to 200 ms before giving up.
> > > >
> > > > +static int xilinx_pci_wait_link_up(struct xilinx_pcie *pcie)
> > > > +{
> > > > +	u32 val;
> > > > +
> > > > +	/*
> > > > +	 * PCIe r6.0, sec 6.6.1 provides 100ms timeout. Since this is
> > > > +	 * FPGA fabric, we're more lenient and allow 200 ms for link
> > > > +	 * training.
> > > > +	 */
> > > > +	return readl_poll_timeout(pcie->reg_base + XILINX_PCIE_REG_PSCR, val,
> > > > +			(val & XILINX_PCIE_REG_PSCR_LNKUP), 2 * USEC_PER_MSEC,
> > > > +			2 * PCIE_T_RRS_READY_MS * USEC_PER_MSEC);
> > > > +}
> > >
> > > I don't think this is what PCIE_T_RRS_READY_MS is for.
> > > Sec 6.6.1 requires 100 ms *after* the link is up before sending
> > > config requests:
> > >
> > >   For cases where system software cannot determine that DRS is
> > >   supported by the attached device, or by the Downstream Port above
> > >   the attached device:
> > >
> > >   ◦ With a Downstream Port that does not support Link speeds
> > >     greater than 5.0 GT/s, software must wait a minimum of 100 ms
> > >     following exit from a Conventional Reset before sending a
> > >     Configuration Request to the device immediately below that
> > >     Port.
> > >
> > >   ◦ With a Downstream Port that supports Link speeds greater than
> > >     5.0 GT/s, software must wait a minimum of 100 ms after Link
> > >     training completes before sending a Configuration Request to
> > >     the device immediately below that Port. Software can determine
> > >     when Link training completes by polling the Data Link Layer
> > >     Link Active bit or by setting up an associated interrupt (see
> > >     Section 6.7.3.3). It is strongly recommended for software to
> > >     use this mechanism whenever the Downstream Port supports it.
> >
> > Yeah, true, I misread the comment in pci.h. I cannot find any
> > #define to match the "how long to wait for link training". Each
> > driver appears to use its own timeout. So I should just create my
> > own?
>
> We recently merged a series that moved the timing definitions to
> drivers/pci/pci.h:
> https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git/commit/?h=controller/linkup-fix&id=470f10f18b482b3d46429c9e6723ff0f7854d049
>
> So if you base your series on top of this branch, you could reuse the
> definitions.
You could also introduce a new macro if all you need is 1s:

diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index 9d20f0222fb1..f03bb882bf2e 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -64,6 +64,7 @@ struct pcie_tlp_log;
  */
 #define PCIE_LINK_WAIT_MAX_RETRIES	100
 #define PCIE_LINK_WAIT_SLEEP_MS		10
+#define PCIE_LINK_WAIT_MS	(PCIE_LINK_WAIT_MAX_RETRIES * PCIE_LINK_WAIT_SLEEP_MS)
 
 /* Message Routing (r[2:0]); PCIe r6.0, sec 2.2.8 */
 #define PCIE_MSG_TYPE_R_RC		0

- Mani

-- 
மணிவண்ணன் சதாசிவம்