Everything posted by ejolson

  1. I found the file rk3328_ddr_400MHz_v1.16.bin at https://github.com/rockchip-linux/rkbin/...r/bin/rk33 which recovers a little of the performance lost with the 333 MHz setting. Stability still seems fine.
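For anyone wanting to try the same swap, here is a sketch assuming the same mkimage + miniloader + dd recipe used for the 333 MHz blob elsewhere in this thread (mkimage comes from u-boot-tools; /dev/mmcblk0 is assumed to be the SD card; run as root):

```shell
# Wrap the 400 MHz DDR init blob for SD boot, append the miniloader,
# and write the result to sector 64 of the SD card (run as root).
mkimage -n rk3328 -T rksd -d rk3328_ddr_400MHz_v1.16.bin idbloader400.img
cat rk3328_miniloader_v2.46.bin >> idbloader400.img
dd if=idbloader400.img of=/dev/mmcblk0 seek=64 conv=notrunc
sync
```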
  2. This is a followup post to confirm that the Rock64 board that was experiencing segmentation faults with the default 786 MHz memory clock on Armbian is now stable with the u-boot-initialized 333 MHz setting. On a selection of computational benchmarks, performance is about 77 percent of the original setting; for compilations with gcc, more than 80 percent of the original performance is retained, with the added advantage that the system doesn't crash and the compiler doesn't produce segmentation faults. As far as I'm concerned this problem is solved and I'll try to mark it as such. Although one of the two Rock64 single-board computers here required this configuration change, based on posts appearing on the Pine64 forum my suspicion is that much less than 50 percent--perhaps closer to 10 or 20 percent--of systems in use actually exhibit this problem. Even so, I would favor a note on the Armbian support page for the Rock64 explaining how to work around system instability by reducing the DDR RAM memory clock.
  3. Here is a performance comparison between the original 786 MHz memory clock and the reduced 333 MHz clock when running John McCalpin's STREAM memory bandwidth benchmark. The performance reduction for the scale operation was measured to be about 2.26 times, slightly less than the 786/333 ≈ 2.36 factor expected from simply dividing out the clocks. In real-world applications much of this performance reduction might be mitigated by cache memory; however, it would seem a 600 MHz setting might be a better compromise between super slow and super unstable. At any rate, I'll be performing some stress tests over the next few days to determine whether the system is really stable now. For reference, I also have a thread running on the Pine64 forum https://forum.pine64.org/showthread.php?tid=11209&pid=76363#pid76363 and will update both as resolved as soon as I verify the system is now stable.
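For a quick before-and-after comparison without building STREAM, one can eyeball copy throughput with dd; this is a crude stand-in, not a substitute for the real benchmark, and the only number taken from the post is the 786/333 clock ratio:

```shell
# Crude memory throughput check: stream 1 GiB of zeros through RAM
# and let dd report the rate (run before and after the clock change).
dd if=/dev/zero of=/dev/null bs=1M count=1024

# Worst-case slowdown expected from the clock ratio alone:
awk 'BEGIN { printf "expected factor %.2f\n", 786/333 }'   # prints: expected factor 2.36
```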
  4. It seems patching the SD card is not very difficult. I downloaded

       rk3328_ddr_333MHz_v1.16.bin
       rk3328_miniloader_v2.46.bin

     Then I followed the instructions at http://opensource.rock-chips.com/wiki_Boot_option and typed as root

       # mkimage -n rk3328 -T rksd -d rk3328_ddr_333MHz_v1.16.bin idbloader16.img
       # cat rk3328_miniloader_v2.46.bin >> idbloader16.img
       # dd if=idbloader16.img of=/dev/mmcblk0 seek=64 conv=notrunc
       # sync

     and then rebooted. I'll report back later on how much slower everything is and whether the segmentation faults and kernel oopses are gone. Thanks for all your work on Armbian!
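One way to double-check the write before rebooting is to read the sectors back and compare; the helper below is a hypothetical sketch not taken from the post (it assumes idbloader16.img from the steps above is still in the current directory):

```shell
#!/bin/sh
# Hypothetical helper: read back the sectors starting at sector 64 of
# device $1 and compare them byte-for-byte with image file $2.
verify_loader() {
    dev=$1; img=$2
    size=$(stat -c %s "$img")
    sectors=$(( (size + 511) / 512 ))
    # Read whole 512-byte sectors, then trim to the exact image size.
    dd if="$dev" skip=64 bs=512 count="$sectors" 2>/dev/null \
        | head -c "$size" > readback.img
    if cmp -s "$img" readback.img; then
        echo "loader matches"
    else
        echo "MISMATCH: loader not written correctly"
    fi
}

# On the real card (as root): verify_loader /dev/mmcblk0 idbloader16.img
```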
  5. My understanding is that this is a common problem with the Rock64 v2 boards and simply the result of some optimistic overclocking that should never have been done in the first place. While such overclocking appears to be necessary to meet the minimum performance needed for playing back certain high-definition video, my use for the Rock64 is not watching television but having it function as a computer. This is essentially a new board that sat in storage for six months due to certain shelter-at-home rules. As returning it is not an option, I would instead like to reduce the memory frequency to a rate that runs reliably. I was having difficulty converting the instructions I read for Arch Linux to Armbian and thought I'd ask here. Before adding some more waste to the landfill, I'd like to give the 333 MHz option a thorough test. I understand your reluctance to assist; it must be irritating to continually deal with manufacturers that cut corners in unexpected ways in later revisions of their hardware. Any guidance on how to get started rebuilding the u-boot initialization code for Armbian with the desired memory frequency would be appreciated.
  6. I have two Rock64 single-board computers running Focal Fossa. One is stable and the other gets segmentation faults when compiling and sometimes a kernel oops. After some searching, my understanding is this can be fixed by slowing down the memory clock by installing rk3328_ddr_333MHz_v1.13.bin from the ayufan-rock64 GitHub repository into u-boot, along with possibly rk3328_miniloader_v2.46.bin as well. Unfortunately, I'm clueless how to do this. I'm running the latest Armbian Focal Fossa Server from August 19 with the 5.7.17 kernel. Do I need to build a new image or can I patch an existing SD card? I'm sorry if there is already a thread on this; I searched and could not find enough information that would allow me to proceed on my own.
  7. Woohoo! It worked. I now have wireguard installed and an iSCSI block device mounted over wireguard with a swap partition and a big /usr/local and my home directory. I'll report some performance results soon. Thanks!
  8. Thanks for your reply. Sorry for starting this thread after the one on the FriendlyARM website. It took me a long time to figure out that iSCSI support was missing from the kernel. It seems the errors produced by iscsid and systemd are not very clear about what's wrong--probably the wrong logging level somewhere. At any rate, after a day and a half of installing Ubuntu in a QEMU session and running the suggested build script on my local desktop (a dual-core AMD A6-5400K APU), I have the files

       armbian-config_20.05.0-trunk_all.deb
       armbian-firmware_20.05.0-trunk_all.deb
       armbian-firmware-full_20.05.0-trunk_all.deb
       linux-dtb-legacy-s5p6818_20.05.0-trunk_arm64.deb
       linux-headers-legacy-s5p6818_20.05.0-trunk_arm64.deb
       linux-image-legacy-s5p6818_20.05.0-trunk_arm64.deb
       linux-source-legacy-s5p6818_20.05.0-trunk_all.deb
       linux-u-boot-legacy-nanopct3_20.05.0-trunk_arm64.deb

     and am wondering what to do with them. Which do I need to install to get the new kernel running?
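My current guess, assuming the usual Armbian packaging split (the kernel lives in linux-image-*, the device tree in linux-dtb-* and the boot loader in linux-u-boot-*, while headers, source, firmware and armbian-config are optional extras), sketched as a small filter over the file names:

```shell
#!/bin/sh
# Sketch: split the build artifacts into the debs needed to boot the
# new kernel (image, dtb, u-boot) and the optional rest.
pick_debs() {
    for deb in "$@"; do
        case "$deb" in
            linux-image-*|linux-dtb-*|linux-u-boot-*) echo "install: $deb" ;;
            *) echo "skip: $deb" ;;
        esac
    done
}

# Typical use in the directory containing the debs:
#   pick_debs *.deb
#   sudo dpkg -i linux-image-*.deb linux-dtb-*.deb linux-u-boot-*.deb
#   sudo reboot
```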
  9. I am trying to set up Armbian as an iSCSI initiator using open-iscsi. The result is

       Mar 15 03:07:13 matrix iscsid[2207]: iSCSI logger with pid=2208 started!
       Mar 15 03:07:13 matrix systemd[1]: iscsid.service: Failed to parse PID from file /run/iscsid.pid: Invalid argument
       Mar 15 03:07:13 matrix iscsid[2208]: iSCSI daemon with pid=2209 started!
       Mar 15 03:07:13 matrix iscsid[2208]: can not create NETLINK_ISCSI socket
       Mar 15 03:08:43 matrix systemd[1]: iscsid.service: Start operation timed out. Terminating.
       Mar 15 03:08:43 matrix systemd[1]: iscsid.service: Failed with result 'timeout'.

     with a timeout of around 90 seconds. I'm running the kernel

       Linux matrix 4.14.171-s5p6818 #24 SMP Mon Feb 17 06:09:30 CET 2020 aarch64 GNU/Linux

     in the Buster image. It would appear from /boot/config-4.14.171-s5p6818 that

       # CONFIG_SCSI_ISCSI_ATTRS is not set
       # CONFIG_SCSI_SAS_ATTRS is not set
       # CONFIG_SCSI_SAS_LIBSAS is not set
       # CONFIG_SCSI_SRP_ATTRS is not set
       CONFIG_SCSI_LOWLEVEL=y
       # CONFIG_ISCSI_TCP is not set
       # CONFIG_ISCSI_BOOT_SYSFS is not set

     so neither ISCSI_TCP nor SCSI_ISCSI_ATTRS is set. On a Raspberry Pi (also based on Buster) with the default Raspbian kernel the modules

       iscsi_tcp              20480  2
       libiscsi_tcp           28672  1 iscsi_tcp
       libiscsi               57344  2 libiscsi_tcp,iscsi_tcp

     automatically load and all is well. However, I can't get Armbian to work. I think I might need to recompile the kernel, but am not very excited to do so. A web search also indicated that incorrectly configured selinux could cause the same problem, but I don't think selinux is running. Does anyone know what's wrong and how to fix it?
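The check that led me to suspect the kernel rather than iscsid or selinux can be sketched as follows (the helper name is mine; CONFIG_ISCSI_TCP is the option shown above):

```shell
#!/bin/sh
# Sketch: check a kernel config file for the iSCSI initiator TCP
# transport before blaming iscsid, systemd or selinux.
check_iscsi() {
    cfg=$1
    # Matches CONFIG_ISCSI_TCP=y (built in) or =m (module).
    if grep -q '^CONFIG_ISCSI_TCP=[ym]' "$cfg"; then
        echo "iSCSI TCP transport available"
    else
        echo "kernel lacks CONFIG_ISCSI_TCP; rebuild or change kernels"
    fi
}

# Typical use: check_iscsi /boot/config-$(uname -r)
```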
  10. I can partially confirm your claim that 12 Gflops is possible with a more efficient heatsink and fan. In particular, I'm currently testing the M3's bigger brother, the FriendlyARM NanoPi T3, which has the same 8-core SoC but a different heatsink. Following the same build_instructions, I obtained 12.49 Gflops with version 2.2 of the linpack benchmark linked against version 0.2.19 of the OpenBLAS library. My cooling arrangement looks like this. With the cover on, the heat is trapped and the system throttles; with the cover removed (and thanks to the giraffe) the system runs at full speed. The Raspberry Pi 3 does about 6.2 Gflops with a similar cooling system.
  11. These concerns are why I asked here whether anyone has one, how it works and whether they can run any benchmarks. I agree that heatsinks with fans are likely required for any parallel algorithm scaling studies. The CPUs in the Banana Pi M3 and the pcDuino8 contain v7 cores; the improved floating point/NEON hardware of the v8 cores in the NanoPi M3 is preferable for numerical work. The work stealing algorithms used by the Cilk programming language should balance loads fairly well on the big.LITTLE architecture, though having all cores identical is still better. At any rate, maybe someone who has the NanoPi M3 can clarify to what extent the concerns you mention are real problems.
  12. It is definitely possible to introduce parallel processing with a quad-core. My thread on the Raspberry Pi forum discusses compiling new versions of gcc with support for the MIT/Intel Cilk parallel programming extensions on ARM devices. The compiler is tested with parallel algorithms for sorting, prime number sieves, fast Fourier transforms, computing fractal basins of attraction for complex Newton methods, numerical quadrature and approximating solutions to high-dimensional systems of ODEs. It is of practical interest how well the implementation of each algorithm scales on physical hardware. Due to the constraints of shared memory and I/O, very few algorithms scale linearly with the number of cores over a very large range. This leads to an engineering problem that may trade off algorithmic efficiency for parallel scalability to achieve the fastest execution times on multi-core CPUs. With a quad-core CPU the maximum theoretical scaling is four-fold, while an eight-fold increase is possible with an octo-core. Modern compute nodes have 16 to 48 CPU cores and thousands of GPU cores. Thus, while it is possible to consider parallel optimization for a quad-core, the problem is more interesting with eight cores.
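The scaling limit can be made concrete with Amdahl's law, speedup(n) = 1 / ((1-p) + p/n); in this sketch the parallel fraction p = 0.95 is an assumed illustrative value, not a measurement:

```shell
# Amdahl's law with an assumed parallel fraction p = 0.95: even on an
# octo-core the speedup tops out well below the 8x ideal.
awk 'BEGIN {
    p = 0.95
    for (n = 1; n <= 8; n *= 2)
        printf "cores=%d speedup=%.2f\n", n, 1 / ((1 - p) + p / n)
}'
# last line printed: cores=8 speedup=5.93
```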