On Wed, Jun 04, 2025 at 06:19:53PM +0800, Jinjian Song wrote:
> From: Larysa Zaremba <larysa.zaremba@xxxxxxxxx>
> 
> >> Fixes: 5545b7b9f294 ("net: wwan: t7xx: Add NAPI support")
> >> Signed-off-by: Jinjian Song <jinjian.song@xxxxxxxxxxx>
> >> ---
> >> v3:
> >>  * Only use READ_ONCE/WRITE_ONCE when the lock protecting ctlb->ccmni_inst
> >>    is not held.
> >
> >What do you mean by "lock protecting ctlb->ccmni_inst"? Please specify.
> 
> Hi Larysa,
> 
> This description might have been a bit simplified. The process was as follows:
> 
> In patch v1, I directly set ctlb->ccmni_inst. This may not be safe, as the NAPI
> processing and the driver's internal interface might not be synchronized. Therefore,
> following Jakub's suggestion, I added READ_ONCE/WRITE_ONCE in all places where this
> pointer is accessed.
> 
> In patch v2, Paolo suggested using READ_ONCE only in places that are not protected by locks.
> Some interfaces are protected by synchronization mechanisms, so it is unnecessary to add it there.
> Therefore, I removed READ_ONCE from those interfaces.
> 

I have seen the discussion for the previous version; I am asking you for the symbol
name/names of the locks that make READ_ONCE unnecessary in the places where it was
removed.

> >> @@ -441,7 +442,7 @@ static void t7xx_ccmni_recv_skb(struct t7xx_ccmni_ctrl *ccmni_ctlb, struct sk_bu
> >>  
> >>  static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
> >>  {
> >> -	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
> >> +	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
> >>  	struct netdev_queue *net_queue;
> >>  
> >
> >You do not seem to check if ccmni is NULL here, so given ctlb->ccmni_inst[0] is
> >not being hot-swapped, I guess that there are some guarantees of it not being
> >NULL at this moment, so I would drop READ_ONCE here.
> 
> This ctlb->ccmni_inst[0] is checked in the upper-level interface:
> 
> static void t7xx_ccmni_queue_state_notify([...]) {
> 	[...]
> 	if (!READ_ONCE(ctlb->ccmni_inst[0])) {
> 		return;
> 	}
> 
> 	if (state == DMPAIF_TXQ_STATE_IRQ)
> 		t7xx_ccmni_queue_tx_irq_notify(ctlb, qno);
> 	else if (state == DMPAIF_TXQ_STATE_FULL)
> 		t7xx_ccmni_queue_tx_full_notify(ctlb, qno);
> }
> 
> Since this is part of the driver's internal logic for handling queue events, would it be
> safer to add READ_ONCE here as well?
> 

Well, I am not 100% sure. What would make the code easier to reason about in terms of
READ_ONCE/WRITE_ONCE is if you replaced the struct t7xx_ccmni_ctrl *ctlb argument of
t7xx_ccmni_queue_tx_irq_notify() and t7xx_ccmni_queue_tx_full_notify() with
ctlb->ccmni_inst[0]; the code would then look like this:

	struct t7xx_ccmni *ccmni = READ_ONCE(t7xx_dev->ccmni_ctlb->ccmni_inst[0]);

	if (!ccmni) {
		dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n");
		return;
	}

	if (state == DMPAIF_TXQ_STATE_IRQ)
		t7xx_ccmni_queue_tx_irq_notify(ccmni, qno);
	else if (state == DMPAIF_TXQ_STATE_FULL)
		t7xx_ccmni_queue_tx_full_notify(ccmni, qno);

This way the atomic reads in the notifiers would all depend on a single READ_ONCE, which
should prevent nasty reordering, as far as I am concerned.

The above holds if you think you do not need to check for NULL in the notifiers, but in
such case I would rather consider proper locking or RCU.
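For completeness, a minimal sketch of what the callee side could look like under that
suggestion. This is illustrative only: the body is pieced together from the context
lines quoted in this thread, and the ccmni->dev field name is an assumption rather than
a verbatim copy of the driver:

	static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni *ccmni, int qno)
	{
		struct netdev_queue *net_queue;

		/* ccmni was loaded once with READ_ONCE() and NULL-checked in
		 * t7xx_ccmni_queue_state_notify(), so this helper no longer
		 * touches ctlb->ccmni_inst[0] at all.
		 */
		if (atomic_read(&ccmni->usage) > 0) {
			net_queue = netdev_get_tx_queue(ccmni->dev, qno);
			if (netif_tx_queue_stopped(net_queue))
				netif_tx_wake_queue(net_queue);
		}
	}

t7xx_ccmni_queue_tx_full_notify() would change in the same way, presumably stopping the
queue instead of waking it.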
> >> @@ -453,7 +454,7 @@ static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno
> >>  
> >>  static void t7xx_ccmni_queue_tx_full_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
> >>  {
> >> -	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
> >> +	struct t7xx_ccmni *ccmni = READ_ONCE(ctlb->ccmni_inst[0]);
> >>  	struct netdev_queue *net_queue;
> >>  
> >
> >Same as above, either READ_ONCE is not needed or NULL check is required.
> 
> Yes, this function is called from the same upper-level interface.
> 
> > 	if (atomic_read(&ccmni->usage) > 0) {
> > @@ -471,7 +472,7 @@ static void t7xx_ccmni_queue_state_notify(struct t7xx_pci_dev *t7xx_dev,
> >  	if (ctlb->md_sta != MD_STATE_READY)
> >  		return;
> >  
> > -	if (!ctlb->ccmni_inst[0]) {
> > +	if (!READ_ONCE(ctlb->ccmni_inst[0])) {
> >  		dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n");
> >  		return;
> >  	}
> > --
> > 2.34.1
> >
> >
>
> Thanks.
>
> Jinjian,
> Best Regards.
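P.S. In case the RCU option mentioned above gets considered, the usual pattern would
roughly look like the sketch below. Purely illustrative, not a patch: it assumes
ccmni_inst[] is annotated __rcu and that the instances are only freed after a grace
period has passed.

	/* Writer side, in the paths that register/unregister the netdev,
	 * still under whatever already serializes them:
	 */
	rcu_assign_pointer(ctlb->ccmni_inst[0], ccmni);
	[...]
	RCU_INIT_POINTER(ctlb->ccmni_inst[0], NULL);
	synchronize_net();	/* make sure no notifier still sees the old pointer */

	/* Reader side, e.g. in t7xx_ccmni_queue_state_notify(): */
	rcu_read_lock();
	ccmni = rcu_dereference(ctlb->ccmni_inst[0]);
	if (ccmni)
		t7xx_ccmni_queue_tx_irq_notify(ccmni, qno);
	rcu_read_unlock();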