Everything posted by Matthias

  1. Hello, I've been using Armbian on the NanoPi M4 V2 for a while and now I'm thinking about switching from the legacy kernel to the current kernel. Unfortunately, that's not so easy to do using armbian-config. You might think you can just select the current kernel in the list, but you won't find it there. The reason is simple: the legacy kernel belongs to the rk3399 family while the current kernel belongs to rockchip64. Which is basically the same, but from an organisational point of view it is not (please forgive me if I do not show adequate technical precision here, the details and the
  2. I played around with setting the governor to "conservative" or "ondemand". Result: "conservative" results in a stable system that allows copying several GBs of data; "ondemand" results in an unstable system that fails after copying some GBs of data and corrupts its filesystem. I also ran a scripted load test that copied 100GB of data (fast: the data source was tmpfs) to the hard drives with the governor set to "conservative". Result: no reported file system errors, and an MD5-sum check of every file copied to the disks passed successfully. If anybody is interested and likes to repeat that test on
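The scripted load test itself is not included in the post; a minimal sketch of the copy-and-verify approach might look like this (paths and file sizes are placeholders, not the original script, which used a tmpfs source and real hard drives):

```shell
# Hypothetical copy-and-verify load test: generate data, record checksums,
# copy to the target, then verify every file on the target.
set -e
SRC=$(mktemp -d)   # stand-in for the tmpfs data source
DST=$(mktemp -d)   # stand-in for the hard-drive mount point
for i in 1 2 3; do
  head -c 1M /dev/urandom > "$SRC/file$i"   # create some test data
done
( cd "$SRC" && md5sum file* > "$SRC/checksums.md5" )
cp "$SRC"/file* "$DST"/
( cd "$DST" && md5sum -c "$SRC/checksums.md5" )   # fails loudly on corruption
```

With `set -e`, a single failed checksum aborts the script with a non-zero exit status, which makes the test easy to run in a loop overnight.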
  3. And repeated the copying with kernel 5.4.11. The CPU frequencies were limited via sysfs (governor "conservative", scaling_max_freq) to 1.42/1.8GHz. I copied 30GB of data to the hard drive (this time BTRFS) and got the first errors (the file system reports CRC errors) while calculating the MD5 sums of the copied files. My conclusion: no indicators that the CPU frequency spoils the data, but something seems to be odd with kernel 5.4.x and the NanoPi M4V2. Or do you think the frequency needs to be reduced the hard way, in the kernel? @piter75 Just noticed I only used the governor "conservative"
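For reference, capping the frequency through sysfs might be sketched like this. The policy paths and the kHz values corresponding to 1.42/1.8GHz (the two big.LITTLE clusters of the RK3399) are assumptions, and the writes only take effect when run as root on the board itself:

```shell
# Hypothetical sketch: set the "conservative" governor and cap scaling_max_freq.
GOV=conservative
MAX_LITTLE=1416000   # kHz, ~1.42GHz (assumed OPP for the LITTLE cluster)
MAX_BIG=1800000      # kHz, 1.8GHz (assumed OPP for the big cluster)

set_policy() {       # usage: set_policy <cpufreq policy dir> <max freq in kHz>
  [ -d "$1" ] || return 0                               # skip on other machines
  echo "$GOV" > "$1/scaling_governor" 2>/dev/null || true
  echo "$2"  > "$1/scaling_max_freq"  2>/dev/null || true
}

set_policy /sys/devices/system/cpu/cpufreq/policy0 "$MAX_LITTLE"
set_policy /sys/devices/system/cpu/cpufreq/policy4 "$MAX_BIG"
```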
  4. Just did some copying of files using kernel 4.4.210: I copied around 15GB of random data and about the same amount of other data (videos) to a ZFS volume. I also copied about 5GB of data from SD to the ZFS volume. MD5 checks indicate that no data has been corrupted. Cpufreq shows the maximum allowed frequency of the CPUs is 1.42/1.8GHz. So the combination of kernel 4.4.x and "low" CPU frequency seems to do the trick. I'll try again with a 5.4.x kernel and throttled CPUs.
  5. Don't know exactly, I think it was 4.4.192.
  6. I didn't observe any crashes because of high load, but what I encountered when compiling code were messages on the console indicating (indirectly) that there is something odd with the CPU (sorry, I don't remember the text). I tried to fix that using a more powerful power supply, but that did not help. Not even the 12V supply via the SATA hat. What did help was setting the governor for the CPU frequency to "conservative". Since I did that, I never got any strange messages anymore. So my conclusion was that the board might have problems handling spikes or large changes in power consumption. B
  7. Don't worry. I vaguely remember that when I tested it one month ago the legacy kernel was good at receiving and bad at sending, while the current kernel was good at sending and bad at receiving. See here:
  8. Interesting, I also had problems copying files using the 5.4 kernel on my NanoPi M4V2. I compiled it myself and copied files from the SD card to an external hard drive connected via the SATA hat. I used file systems on the hard drive that are able to detect corrupted data on the drive (ZFS and BTRFS), and every time I did a scrub after copying the files, failed CRC checks within the data were reported, some of them even at the same position on two redundant drives (mirror). I'll try again with the legacy kernel (4.x). Would be great if it works in this configuration. @piter75: Di
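For anyone wanting to reproduce the check, the scrub invocations are roughly the following; the pool name and mount point are placeholders, and the commands only do anything on a machine where the respective tools and file systems exist:

```shell
# Hypothetical scrub commands; "tank" and /mnt/data are placeholder names.
POOL=tank
MNT=/mnt/data
command -v zpool >/dev/null && {
  zpool scrub "$POOL"           # start the ZFS scrub in the background
  zpool status "$POOL"          # shows scrub progress and checksum errors
}
command -v btrfs >/dev/null && {
  btrfs scrub start -B "$MNT"   # -B: run in the foreground, print a summary
  btrfs scrub status "$MNT"
}
true
```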
  9. That would explain some things. For example: during scrubbing, a mirrored BTRFS reported broken CRCs of the same block of data on both disks. I observed that more than once within 40GB of data. The chances of this happening by accident are of course very low. But if the data gets corrupted at the source (RAM), this could easily happen. Configuring the timing of the RAM is way beyond my knowledge, so I'll stop here and wait for any improvements. As always: if there is something you would like to get tested, don't hesitate to contact me.
  10. I need to retract my thesis about ZFS interfering with Samba and causing trouble. I cannot reproduce it anymore. For testing with NFS I installed the NFS client for Windows 10, which seemed to interfere a lot with SMB. I don't know why, but after uninstalling it, everything was fine and I was able to download and upload files in large amounts. With offloading disabled, of course. This is true for transferring data from and to the SD card. When using a hard drive connected to the SATA hat I get stalling network connections, broken file systems (ZFS and BTRFS two-disk mirrors that show uncorre
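As a hedged sketch, disabling the offloading mentioned above would look something like this; the interface name and the exact set of offload features are assumptions, since the post does not say which ones were toggled:

```shell
# Hypothetical example: disable checksum/segmentation offloads on eth0.
IFACE=eth0
command -v ethtool >/dev/null && command -v ip >/dev/null && \
  ip link show "$IFACE" >/dev/null 2>&1 && {
    ethtool -K "$IFACE" rx off tx off tso off gso off gro off
    ethtool -k "$IFACE" | grep -E 'checksumming|segmentation'  # verify result
}
true
```

Note that `ethtool -K` changes are not persistent across reboots; they would need to go into a networking hook or service to survive.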
  11. Ok @piter75, looks like things are a little more complicated: First of all I must apologise: my "fresh install" had at least one customisation I did not mention: the kernel had support for ZFS (via zfs-dkms). When using this installation, Piter's use of ethtool does not bring any improvement. Then I remembered something: ZFS has built-in support for SMB and NFS. Don't know why they mixed that in, but it seems to be a relic from Solaris. So just to make sure, I reinstalled the system without ZFS support. The following observations were made using the "ZFS-less" system connected to a Windows 10 PC via gig
  12. That's good news, thank you for that hint, Piter. And good to see that my issue is reproducible on more than one board. I also read about that a couple of weeks ago. But as that was the time when we were all struggling with the stability of the networking, it didn't have any effect and I dropped that idea. I can give it another try this evening on the current state of the networking. Hopefully it has the effect I'm looking for.
  13. Checked NFS: works fine, I was not able to observe any drops in transfer speed like those mentioned above.
  14. I can check NFS, that's a good idea. Then I get a better impression of how "strongly" the issue is related to Samba. If Samba works on one board family and not on another, the only difference relevant to Samba I can think of is the kernel. Or did I miss something?
  15. Edit: Issue is solved, see here. It's an application problem, but according to extensive Google research this does not seem to be a common problem of Samba, so it might be an integration problem with the environment it is running in. The setup: Armbian 19.11.6 (Debian Buster) running the current kernel on a NanoPi M4V2, offering network shares via Samba. Clients are Windows 10 and a BananaPi running Armbian (Debian Buster, 4.19.y kernel) as well. The observation: when downloading (large) files from the Samba share to Windows 10, the download speed very ofte
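The post does not quote the Samba configuration; a minimal share definition of the kind described might look like this in /etc/samba/smb.conf (share name and path are hypothetical):

```ini
[data]
    path = /mnt/data    ; hypothetical export path on the SATA drive
    read only = no
    guest ok = no
```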
  16. @piter75 I just tested the values for rx_delay you provided with kernel 5.4.6-rockchip64. 0x16 seems to be the best value for me; 0x14 offers a similar quality. You can find the details in the table below.
      rx_delay  Observation
      0x0E      No connection to the device via SSH possible. Problems receiving data?
      0x14      Receives data at ~930Mbit/s, but on very rare occasions drops down to ~200Mbit/s, then normalizes again
      0x0C      No connection to the device via SSH possible. Problems receiving data?
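For the sake of documentation: rx_delay is the Rockchip GMAC RGMII receive delay-line setting, which lives in the board's device tree. A fragment applying the best value from the table above might look roughly like this (node layout abbreviated; this is an illustration, not the actual Armbian patch):

```dts
&gmac {
    /* RGMII receive delay line; 0x16 gave the best results in the tests above */
    rx_delay = <0x16>;
};
```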
  17. Just one thing I would like to mention here for the sake of documentation: the modification of the dependencies of the kernel headers should get into existing installations sooner or later via regular package updates (as soon as PR 1697 is merged). But the modification of /etc/environment won't, as far as I understand the build process of Armbian. That modification needs to be made manually on existing installations in order to get DKMS working. Sorry about all those tiny increments of information day by day, but I was a complete newbie to the build system of Armbian when I join
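A sketch of that manual fix on an existing installation: drop the ARCH line from /etc/environment. It is demonstrated here on a temporary copy so it is safe to run anywhere; on the board you would point this at the real file and run it as root:

```shell
# Remove the "ARCH=..." entry from /etc/environment so it no longer
# leaks into DKMS builds. Demonstrated on a stand-in copy of the file.
ENVFILE=$(mktemp)                                   # stand-in for /etc/environment
printf 'ARCH=arm64\nPATH=/usr/bin\n' > "$ENVFILE"   # example content (assumed)
sed -i '/^ARCH=/d' "$ENVFILE"
cat "$ENVFILE"                                      # ARCH line is gone, PATH stays
```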
  18. Sure, I can do that. Thanks for the manual, it looks quite straightforward. I was afraid I'd need much "heavier" steps of the build process to check the different options. Just give me a day or two.
  19. I can confirm your patched legacy kernel can receive and transmit at about 930Mbit/s. Good job, @piter75! Unfortunately I cannot confirm high rates for receiving data using kernel 5.4.6 on my hardware. I used this image to test the current kernel: Armbian_19.11.4_Nanopim4v2_buster_current_5.4.6.img (downloaded from the server, no local compilation)
  20. I finally completed and tested my fixes. Concerning the dependencies of the linux headers, I added the following for kernels <=4.14: make, gcc, libc6-dev, libssl-dev. And the following for newer kernels: make, gcc, libc6-dev, bison, flex, libssl-dev. The results of my tests on a NanoPi M4V2: Legacy kernel (4.14-rockchip): works out perfectly. After installing the kernel headers, zfs-dkms can be installed flawlessly. The modules are compiled. Current kernel (mainline 5.4.6): ZFS (0.7.12) cannot be installed. I assume this is caused by the problem mentioned ab
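The dependency additions described above would end up in the control metadata of the linux-headers package, roughly like this (the package name is illustrative; the actual field is generated by the Armbian build scripts, not written by hand):

```
Package: linux-headers-current-rockchip64
Depends: make, gcc, libc6-dev, bison, flex, libssl-dev
```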
  21. Update on the current state of my activities. All changes mentioned below are not yet on Armbian's master: I modified /etc/environment and removed the export of the variable ARCH. This fixes the problem with that variable mentioned at the beginning of the thread. I added dependencies to the linux headers compiled during the build of Armbian images. That way, packages like "make" and "flex" are automatically installed when installing the header package. I was not able to test all supported kernels (and adjust the dependencies for all of them) yet, so this work is incomplete. So this part is s
  22. Some further observations of mine, in case it helps identifying the problems using ethernet: After noticing that the throughput using the native ethernet connection (eth0) of the M4V2 is low, I checked it using iperf3. I tested the legacy kernel, the current kernel and the dev kernel. The results are quite interesting: Legacy kernel: can receive data via TCP from another host at high speed (~940Mbit/s), but the other way around is quite slow (<1Mbit/s). Current/dev kernel: can send data via TCP to another host at high speed (~950Mbit/s), but the other way around is quite s
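The two-direction iperf3 measurement can be reproduced roughly like this; the peer address is a placeholder, and an `iperf3 -s` server must already be running on the other host:

```shell
# Hypothetical throughput test in both directions; 192.168.1.10 is a placeholder
# for the peer running "iperf3 -s".
PEER=192.168.1.10
command -v iperf3 >/dev/null && {
  iperf3 -c "$PEER"        # board -> peer (transmit path)
  iperf3 -c "$PEER" -R     # peer -> board (receive path, reverse mode)
}
true
```

Reverse mode (`-R`) is what exposes the asymmetry described above without having to log into the other host and run the client there.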
  23. Ok, now I think I understand the background of why dkms packages cannot be installed. The example from my previous post is still valid, but now I can give further explanations:
      apt update
      # Make sure all packages required to build zfs-dkms are installed.
      # The easiest way to do that is by installing the package itself.
      # As a side effect, this also provides all tools required to successfully compile the headers of the kernel.
      # Afterwards the dkms packages can (and need to) be uninstalled again.
      apt install zfs-dkms
      apt remove spl-dkms zfs-dkms
      The installed package dkms has a problem: In the
  24. I have some news. After playing around a bit randomly to understand (a tiny part of) the internals of dpkg and dkms, I did the following, knowing that we have some problems with the environment variable ARCH (see comments above): I checked the file /usr/lib/dkms/common.postinst. I replaced all occurrences of ARCH with ARCH_. The idea behind that: in general, ARCH is a global variable, but in the script it is used like a local variable: it is set to the architecture the script receives as parameter $4. Then I ran "apt install zfs-dkms". It succeeded. (Just to make it clear: I ran it before the
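A sketch of that workaround, demonstrated on a temporary copy so it is safe to run here; on the board the target would be /usr/lib/dkms/common.postinst itself (edited as root, and overwritten again on the next dkms package update):

```shell
# Rename every occurrence of ARCH to ARCH_ so the global ARCH from
# /etc/environment no longer shadows the script-local variable.
TARGET=/usr/lib/dkms/common.postinst       # the real file on the board
WORK=$(mktemp)                             # stand-in copy for this sketch
printf 'ARCH=$4\necho "building for $ARCH"\n' > "$WORK"   # assumed content
sed -i 's/\bARCH\b/ARCH_/g' "$WORK"
grep 'ARCH_' "$WORK"                       # both occurrences are renamed
```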
  25. Thanks for the quick answer. I've absolutely no clue about the builder of Armbian and have never built Armbian myself. But it sounds like an interesting challenge, and the patches on your branch look quite understandable (as understandable as patches are...). Together with the information in this thread, at least I have an idea of what needs to be done. So let's hope I get some enlightenment to figure out the "how". Don't expect quick results.