TODO Please put storage traffic on its own IB subnet and limit things sanely.
  
The goal is to run everything over RDMA (InfiniBand specifically) and maybe have TCP as a fallback. (If TCP, then I probably want to try VMA... which I don't want to do.)
  
====Target====
NVMe calls targets "subsystems". Good for them.
  
Create the subsystem:
    /> subsystems/ create nqn.2014-08.rocks.narf.southpark

Create a port and set its properties:
    /> ports/ create 1
    /> ports/1/ set addr trtype=rdma
    /> ports/1/ set addr adrfam=ipv4
    /> ports/1/ set addr traddr=172.20.64.13
    /> ports/1/ set addr trsvcid=4420
    /> ports/1/subsystems create nqn.2014-08.rocks.narf.southpark
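
An untested sketch of the TCP fallback mentioned up top: as far as I can tell it's just another port with trtype=tcp (needs the nvmet-tcp module loaded), so something like this, with the port number and address being examples:
    /> ports/ create 2
    /> ports/2/ set addr trtype=tcp
    /> ports/2/ set addr adrfam=ipv4
    /> ports/2/ set addr traddr=172.20.64.13
    /> ports/2/ set addr trsvcid=4420
    /> ports/2/subsystems create nqn.2014-08.rocks.narf.southpark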
  
 NVMe uses "namespaces" (which are numbers) instead of LUNs. Good for them. NVMe uses "namespaces" (which are numbers) instead of LUNs. Good for them.
  
Create, set, and enable a namespace:
    /> subsystems/nqn.2014-08.rocks.narf.southpark/namespaces create 1
    /> subsystems/nqn.2014-08.rocks.narf.southpark/namespaces/1 set device path=/dev/nvme0n1
    /> subsystems/nqn.2014-08.rocks.narf.southpark/namespaces/1 enable
You can use a file as a backstore. The syntax is not at all obvious. Why "group=device"? Whatever:
    /> subsystems/nqn.2014-08.rocks.narf.hostname/namespaces/1 set group=device path=/path/to/some/host.img
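
I believe the backing file has to exist before the namespace will enable; a plain (sparse) file should be fine, e.g. (size here is just an example):
    # truncate -s 16G /path/to/some/host.img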

I haven't checked if this works for partitions or SATA drives, but I think it should?
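
If it does, it would presumably be the same set device command with a different block device path (untested, device name is an example):
    /> subsystems/nqn.2014-08.rocks.narf.hostname/namespaces/1 set device path=/dev/sda3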

I haven't figured out ramdisks yet. There isn't a built-in system like in LIO.
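
A possible workaround (untested): the brd module gives you /dev/ram* block devices, which could then be exported like any other device path. rd_size is in KiB, so this is a 1 GiB ramdisk:
    # modprobe brd rd_nr=1 rd_size=1048576
    /> subsystems/nqn.2014-08.rocks.narf.hostname/namespaces/1 set device path=/dev/ram0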
  
Create ACLs:
    /> hosts/ create nqn.2014-08.rocks.narf.sadness
    /> subsystems/nqn.2014-08.rocks.narf.southpark/ set attr allow_any_host=0
    Parameter allow_any_host is now '0'.
    /> subsystems/nqn.2014-08.rocks.narf.southpark/allowed_hosts create nqn.2014-08.rocks.narf.sadness
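
None of this survives a reboot on its own. nvmetcli has saveconfig (the default file is /etc/nvmet/config.json, I think), and the saved config can be loaded back from the shell:
    /> saveconfig
    # nvmetcli restore /etc/nvmet/config.json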
  
====Initiator====
NVMe calls initiators "hosts". Good for them.
  
Load the module (should be able to list this in /etc/modules):
    # modprobe nvme-rdma
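
For loading at boot, appending it to /etc/modules (or dropping a modules-load.d snippet) should do it:
    # echo nvme-rdma >> /etc/modules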
  
Discovery:
    # nvme discover -t rdma -a 172.20.64.13 -s 4420
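
There's also connect-all, which (as far as I can tell) logs into everything the discovery controller returns in one shot:
    # nvme connect-all -t rdma -a 172.20.64.13 -s 4420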
  
Log in:
    # nvme connect -t rdma -n nqn.2014-08.rocks.narf.southpark -a 172.20.64.13 -s 4420
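
With allow_any_host=0, the NQN the initiator presents has to match the allowed_hosts entry. nvme-cli normally reads it from /etc/nvme/hostnqn; I believe you can also pass it explicitly:
    # nvme connect -t rdma -n nqn.2014-08.rocks.narf.southpark -a 172.20.64.13 -s 4420 -q nqn.2014-08.rocks.narf.sadness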
  
Rescan:
    # nvme ns-rescan /dev/nvme1
  
  * If a namespace you expect to appear isn't appearing, check the target to see if you forgot to enable it.
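
nvme list is a quick way to see what's actually visible on the initiator right now:
    # nvme list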
  
Disconnect a subsystem:
    # nvme disconnect -d /dev/nvme1
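
There's also a disconnect-all (I think) for tearing down every fabrics connection at once:
    # nvme disconnect-all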
  
Gotta admit, that feels a lot nicer than logging out of iscsiadm.