Re: [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays

I'm sorry, but Red Hat will not approve any upstream change like this that modifies the policy for OTHER VENDORS' stuff.

You can't simply change the IO policy for all of these arrays.  Many vendors ship no autoconnect/udev-rules because they don't want any.  They want to use the default ctrl_loss_tmo and the default iopolicy (numa)... you can't just change this for them.

If you want people to migrate their udev rules out of separate files and into a single autoconnect file like this then you'll have to get them to agree.

When I look upstream I see exactly three vendors who have a udev rule for their iopolicy.

nvme-cli(master) > ls -1 nvmf-autoconnect/udev-rules/71*
nvmf-autoconnect/udev-rules/71-nvmf-hpe.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-netapp.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-vastdata.rules.in

I suggest that you get these three vendors to agree to move their policy into a single 71-nvmf-multipath-policy.rules.in file, and then leave everyone else's stuff alone.

In the future, vendors who want to add a multipath-policy rule can then use the new file instead of adding their own.
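A consolidated file could look roughly like this (the NetApp model string is the one its current rule matches, if memory serves; the HPE and VAST strings are placeholders to be copied from their existing files):

# 71-nvmf-multipath-policy.rules.in (sketch)
# NetApp ONTAP
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="NetApp ONTAP Controller", ATTR{iopolicy}="round-robin"
# HPE (model string from 71-nvmf-hpe.rules.in)
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="<HPE model>", ATTR{iopolicy}="round-robin"
# VAST Data (model string from 71-nvmf-vastdata.rules.in)
ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="<VAST model>", ATTR{iopolicy}="round-robin"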

/John

On 8/20/25 5:32 PM, Xose Vazquez Perez wrote:
One file per vendor, or per device, is a bit excessive for two to four rules.


If possible, select round-robin (>=5.1) or queue-depth (>=6.11).
round-robin is a basic selector and only works well under ideal conditions.
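For reference, the active policy is exposed per subsystem in sysfs, so it can be checked and changed at runtime (nvme-subsys0 is just an example instance):

cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
echo queue-depth > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy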

An NVMe benchmark, round-robin vs queue-depth, shows how bad it is:
https://marc.info/?l=linux-kernel&m=171931850925572
https://marc.info/?l=linux-kernel&m=171931852025575
https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf


[ctrl_loss_tmo default value is 600 (ten minutes)]

You can't remove this because vendors have ctrl_loss_tmo set to -1 on purpose.
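For reference, on fabrics controllers the current value is exposed per controller in sysfs (and is writable on recent kernels); nvme0 is just an example instance:

cat /sys/class/nvme/nvme0/ctrl_loss_tmo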

v3:
  - add Fujitsu/ETERNUS AB/HB
  - add Hitachi/VSP

v2:
  - fix ctrl_loss_tmo comment
  - add Infinidat/InfiniBox


Cc: Wayne Berthiaume <Wayne.Berthiaume@xxxxxxxx>
Cc: Vasuki Manikarnike <vasuki.manikarnike@xxxxxxx>
Cc: Matthias Rudolph <Matthias.Rudolph@xxxxxxxxxxxxxxxxxx>
Cc: Martin George <marting@xxxxxxxxxx>
Cc: NetApp RDAC team <ng-eseries-upstream-maintainers@xxxxxxxxxx>
Cc: Zou Ming <zouming.zouming@xxxxxxxxxx>
Cc: Li Xiaokeng <lixiaokeng@xxxxxxxxxx>
Cc: Randy Jennings <randyj@xxxxxxxxxxxxxxx>
Cc: Jyoti Rani <jrani@xxxxxxxxxxxxxxx>
Cc: Brian Bunker <brian@xxxxxxxxxxxxxxx>
Cc: Uday Shankar <ushankar@xxxxxxxxxxxxxxx>
Cc: Chaitanya Kulkarni <kch@xxxxxxxxxx>
Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
Cc: Keith Busch <kbusch@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Marco Patalano <mpatalan@xxxxxxxxxx>
Cc: Ewan D. Milne <emilne@xxxxxxxxxx>
Cc: John Meneghini <jmeneghi@xxxxxxxxxx>
Cc: Daniel Wagner <dwagner@xxxxxxx>
Cc: Daniel Wagner <wagi@xxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxx>
Cc: Martin Wilck <mwilck@xxxxxxxx>
Cc: Benjamin Marzinski <bmarzins@xxxxxxxxxx>
Cc: Christophe Varoqui <christophe.varoqui@xxxxxxxxxxx>
Cc: BLOCK-ML <linux-block@xxxxxxxxxxxxxxx>
Cc: NVME-ML <linux-nvme@xxxxxxxxxxxxxxxxxxx>
Cc: SCSI-ML <linux-scsi@xxxxxxxxxxxxxxx>
Cc: DM_DEVEL-ML <dm-devel@xxxxxxxxxxxxxxx>
Signed-off-by: Xose Vazquez Perez <xose.vazquez@xxxxxxxxx>
---

This will be the last iteration of this patch; there are no more NVMe storage
array manufacturers to add.


Maybe these rules should be merged into this new file?
71-nvmf-hpe.rules.in
71-nvmf-netapp.rules.in
71-nvmf-vastdata.rules.in
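For testing, the rules can be dry-run against a live subsystem with udevadm, which shows which rules match and what they would set, without running any RUN+= programs (the device path is an example):

udevadm test /sys/class/nvme-subsystem/nvme-subsys0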

---
  .../80-nvmf-storage_arrays.rules.in           | 48 +++++++++++++++++++
  1 file changed, 48 insertions(+)
  create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in

diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
new file mode 100644
index 00000000..ac5df797
--- /dev/null
+++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
@@ -0,0 +1,48 @@
+##### Storage arrays
+
+#### Set iopolicy for NVMe-oF
+### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11); rules run in order, so queue-depth wins where supported, while older kernels reject that write and keep round-robin
+
+## Dell EMC
+# PowerMax
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="EMC PowerMax", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="EMC PowerMax", ATTR{iopolicy}="queue-depth"
+# PowerStore
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="queue-depth"
+
+## Fujitsu
+# ETERNUS AB/HB
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Fujitsu ETERNUS AB/HB Series", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Fujitsu ETERNUS AB/HB Series", ATTR{iopolicy}="queue-depth"
+
+## Hitachi Vantara
+# VSP
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="HITACHI SVOS-RF-System", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="HITACHI SVOS-RF-System", ATTR{iopolicy}="queue-depth"
+
+## Huawei
+# OceanStor
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Huawei-XSG1", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Huawei-XSG1", ATTR{iopolicy}="queue-depth"
+
+## IBM
+# FlashSystem (RamSan)
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="FlashSystem", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="FlashSystem", ATTR{iopolicy}="queue-depth"
+# FlashSystem (Storwize/SVC)
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="IBM*214", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="IBM*214", ATTR{iopolicy}="queue-depth"
+
+## Infinidat
+# InfiniBox
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="InfiniBox", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="InfiniBox", ATTR{iopolicy}="queue-depth"
+
+## Pure
+# FlashArray
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Pure Storage FlashArray", ATTR{iopolicy}="round-robin"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="Pure Storage FlashArray", ATTR{iopolicy}="queue-depth"
+
+
+##### EOF




