Thanks for the hint, Sirmalinton. Today I was able to solve the problem. Here is a short summary that might help others running into similar issues.
Running lspci -k shows the "Non-Volatile memory controller", but not the line "Kernel driver in use: nvme". Hence, I ran
echo nvme > /sys/bus/pci/devices/0002\:01\:00.0/driver_override
echo 0002:01:00.0 > /sys/bus/pci/drivers_probe
where 0002:01:00.0 is the address from the lspci -k output. That worked: the disk appeared as the device files /dev/nvme0*.
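The two writes above can also be wrapped in a small script. This is just a sketch with my PCI address hard-coded; it defaults to a dry run, since the real writes need root (run it with DRY_RUN=0 as root to actually perform them):

```shell
#!/bin/sh
# Sketch of the manual rebind. The address 0002:01:00.0 is the one from my
# lspci -k output; adjust it for your system. DRY_RUN defaults to 1, so
# nothing is written unless you explicitly set DRY_RUN=0.
addr="0002:01:00.0"
DRY_RUN=${DRY_RUN:-1}

write() {  # usage: write VALUE PATH
  if [ "$DRY_RUN" = 1 ]; then
    echo "would write '$1' to $2"
  else
    printf '%s\n' "$1" > "$2"
  fi
}

# Tell the PCI core which driver may claim the device, then ask it to probe.
write nvme "/sys/bus/pci/devices/$addr/driver_override"
write "$addr" /sys/bus/pci/drivers_probe
```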
To make this persist across reboots, I created a udev rule /etc/udev/rules.d/99-nvme-override.rules with the content:
ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0x144d", ATTR{device}=="0xa80d", ATTR{driver_override}="nvme"
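For reference, the IDs in that rule can be read off mechanically from lspci -nn. A small sketch; the sample line below is my assumption of what that output looks like for this controller, only the [144d:a80d] pair is taken from the real output:

```shell
# Extract the [vendor:device] pair from an lspci -nn line and print it in
# the form the udev rule expects. The sample line is an assumed format;
# the IDs 144d:a80d are the real ones for this controller.
line='0002:01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd Device [144d:a80d]'
ids=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
vendor=${ids%:*}
device=${ids#*:}
printf 'ATTR{vendor}=="0x%s", ATTR{device}=="0x%s"\n' "$vendor" "$device"
```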
The vendor and device IDs are from the output of lspci -nn. To trigger the probe at boot, I created a systemd service /etc/systemd/system/nvme-rebind.service with the content:
[Unit]
Description=Rebind NVMe controller to driver after override
After=udev.service
[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 0002:01:00.0 > /sys/bus/pci/drivers_probe"
[Install]
WantedBy=multi-user.target
The service can be enabled via systemctl enable nvme-rebind.service.
Maybe one reason it does not work in the first place is that /lib/systemd/system/nvmefc-boot-connections.service is not started, because its condition
ConditionPathExists=/sys/class/fc/fc_udev_device/nvme_discovery
is not met.
Another reason I can imagine is that the vendor/device ID combination is not registered with the nvme driver in the current kernel. As I mentioned before, it did work out of the box when I first installed Armbian.
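If that is the case, an alternative worth trying is the driver's generic new_id interface, which asks the already-loaded nvme driver to also claim this vendor/device pair, without needing driver_override at all. The IDs below are the ones from lspci -nn; the write itself needs root, so this sketch only prints the command:

```shell
# Sketch: register the vendor/device pair with the loaded nvme driver via
# the sysfs new_id interface. Writing new_id needs root, so this snippet
# only prints the command to run.
vendor=144d
device=a80d
cmd="echo '$vendor $device' > /sys/bus/pci/drivers/nvme/new_id"
echo "As root: $cmd"
```

I have not verified this path on this board, but new_id is the standard mechanism for teaching a PCI driver about an ID it does not list.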