jeka

Members · Posts: 2

  1. Thanks a lot @Pali! In the end I bought a different drive (SK hynix BC711 256 GB NVMe PCIe M.2 2242, HFM256GD3HX015N) and it works flawlessly.
  2. Dear community, I'm trying to add some storage to an espressobin board, and the idea of using the existing mini-PCIe socket for that was very appealing. I bought a Mini PCI-E to NVMe adapter on AliExpress (PCI-E to NVME Adapter) and a Samsung PM991a 256 GB drive. To my surprise it almost worked immediately: the drive is detected and appears in the system as an nvme device. The next step was to check whether it actually works, and that is where it fails: reading from the drive with dd works for roughly the first 19 MB, but any attempt to read past the 20 MB mark makes the drive freeze and then disconnect. The drive stays unusable until a power cycle.

     The command I used:

     dd if=/dev/nvme0n1 of=/dev/null bs=1M status=progress count=20

     dmesg shows the following output:

     [ 134.109735] nvme nvme0: I/O 288 QID 2 timeout, aborting
     [ 134.109797] nvme nvme0: I/O 289 QID 2 timeout, aborting
     [ 164.828756] nvme nvme0: I/O 288 QID 2 timeout, reset controller
     [ 195.547760] nvme nvme0: I/O 16 QID 0 timeout, reset controller
     [ 256.769932] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
     [ 256.786083] blk_update_request: I/O error, dev nvme0n1, sector 40704 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [ 256.786183] blk_update_request: I/O error, dev nvme0n1, sector 41472 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     [ 256.786367] nvme nvme0: Abort status: 0x371
     [ 256.786381] nvme nvme0: Abort status: 0x371
     [ 287.322517] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
     [ 287.322547] nvme nvme0: Removing after probe failure status: -19
     [ 317.841253] nvme nvme0: Device not ready; aborting reset, CSTS=0x1
     [ 317.841505] Buffer I/O error on dev nvme0n1, logical block 5088, async page read
     [ 317.841524] nvme0n1: detected capacity change from 500118192 to 0

     This output looks very similar to broken APST support, for which the usual advice is to add "nvme_core.default_ps_max_latency_us=0" to the kernel parameters (see the sketch after this post for how that is typically applied on Armbian). Unfortunately it didn't help in my case. Right now I'm out of ideas on how to troubleshoot the problem further and am looking for advice from the community.

     The system itself is up to date:

     root@espressobin:~# uname -a
     Linux espressobin 5.15.93-mvebu64 #23.02.2 SMP PREEMPT Fri Feb 17 23:51:39 UTC 2023 aarch64 GNU/Linux
     root@espressobin:~# cat /etc/debian_version
     11.6
     root@espressobin:~# cat /etc/issue
     Armbian 22.11.1 Bullseye \l
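
For context, here is a minimal sketch of how the APST workaround mentioned in the second post is usually applied and verified on Armbian. It assumes the nvme-cli package is available and that the board reads extra kernel arguments from /boot/armbianEnv.txt (the standard mechanism on Armbian u-boot images; adjust if your boot setup differs); the values reported will vary by drive, and in this case the parameter did not resolve the PM991a timeouts.

# inspect whether the drive advertises APST and dump the current APST table
apt install nvme-cli
nvme id-ctrl /dev/nvme0 | grep -i apsta        # apsta : 0x1 means APST is supported
nvme get-feature /dev/nvme0 -f 0x0c -H         # feature 0x0c = Autonomous Power State Transition

# disable APST at boot (append to an existing extraargs= line if one is already present)
echo 'extraargs=nvme_core.default_ps_max_latency_us=0' >> /boot/armbianEnv.txt
reboot

# after the reboot, confirm the parameter made it onto the kernel command line
grep -o 'nvme_core[^ ]*' /proc/cmdline

If the timeouts persist with APST disabled, as they did here, this particular workaround can be ruled out.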