About ejolson

  1. Woohoo! It worked. I now have wireguard installed and an iSCSI block device mounted over wireguard with a swap partition and a big /usr/local and my home directory. I'll report some performance results soon. Thanks!
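For reference, the working setup follows the usual open-iscsi initiator flow. This is a hedged sketch only: the portal address 192.168.1.10 is a placeholder for the target's address on the wireguard interface, not a value from the post.

```shell
# Sketch of the open-iscsi initiator flow; 192.168.1.10 is a
# placeholder for the iSCSI target reachable over wireguard.
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10  # list available targets
sudo iscsiadm -m node --login                              # attach the block device
lsblk                                                      # the LUN shows up as a new disk
```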
  2. Thanks for your reply. Sorry for starting this thread after the one on the FriendlyARM website. It took me a long time to figure out that iSCSI support was missing from the kernel. The errors produced by iscsid and systemd are not very clear about what's wrong--probably the wrong logging level somewhere. At any rate, after a day and a half of installing Ubuntu in a QEMU session and running the suggested build script on my local desktop (a dual-core AMD A6-5400K APU), I have the files

     armbian-config_20.05.0-trunk_all.deb
     armbian-firmware_20.05.0-trunk_all.deb
     armbian-firmware-full_20.05.0-trunk_all.deb
     linux-dtb-legacy-s5p6818_20.05.0-trunk_arm64.deb
     linux-headers-legacy-s5p6818_20.05.0-trunk_arm64.deb
     linux-image-legacy-s5p6818_20.05.0-trunk_arm64.deb
     linux-source-legacy-s5p6818_20.05.0-trunk_all.deb
     linux-u-boot-legacy-nanopct3_20.05.0-trunk_arm64.deb

     and am wondering what to do with them. Which do I need to install to get the new kernel running?
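A minimal sketch of one way to install the freshly built packages, assuming the image, dtb and u-boot packages are the ones needed for booting (the headers and source packages are only needed for building out-of-tree modules):

```shell
# Install the kernel image, device tree and boot loader packages,
# then reboot into the new kernel.
sudo dpkg -i linux-image-legacy-s5p6818_20.05.0-trunk_arm64.deb \
            linux-dtb-legacy-s5p6818_20.05.0-trunk_arm64.deb \
            linux-u-boot-legacy-nanopct3_20.05.0-trunk_arm64.deb
sudo reboot
# After reboot, confirm the new kernel version:
uname -r
```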
  3. I am trying to set up Armbian as an iSCSI initiator using open-iscsi. The result is

     Mar 15 03:07:13 matrix iscsid[2207]: iSCSI logger with pid=2208 started!
     Mar 15 03:07:13 matrix systemd[1]: iscsid.service: Failed to parse PID from file /run/iscsid.pid: Invalid argument
     Mar 15 03:07:13 matrix iscsid[2208]: iSCSI daemon with pid=2209 started!
     Mar 15 03:07:13 matrix iscsid[2208]: can not create NETLINK_ISCSI socket
     Mar 15 03:08:43 matrix systemd[1]: iscsid.service: Start operation timed out. Terminating.
     Mar 15 03:08:43 matrix systemd[1]: iscsid.service: Failed with result 'timeout'.

     with a timeout of around 90 seconds. I'm running the kernel

     Linux matrix 4.14.171-s5p6818 #24 SMP Mon Feb 17 06:09:30 CET 2020 aarch64 GNU/Linux

     in the Buster image. It would appear from /boot/config-4.14.171-s5p6818 that

     # CONFIG_SCSI_ISCSI_ATTRS is not set
     # CONFIG_SCSI_SAS_ATTRS is not set
     # CONFIG_SCSI_SAS_LIBSAS is not set
     # CONFIG_SCSI_SRP_ATTRS is not set
     CONFIG_SCSI_LOWLEVEL=y
     # CONFIG_ISCSI_TCP is not set
     # CONFIG_ISCSI_BOOT_SYSFS is not set

     neither ISCSI_TCP nor SCSI_ISCSI_ATTRS is set. On a Raspberry Pi (also based on Buster) with the default Raspbian kernel the modules

     iscsi_tcp              20480  2
     libiscsi_tcp           28672  1 iscsi_tcp
     libiscsi               57344  2 libiscsi_tcp,iscsi_tcp

     automatically load and all is well. However, I can't get Armbian to work. I think I might need to recompile the kernel, but am not very excited to do so. A web search also indicated that an incorrectly configured selinux could cause the same problem, but I don't think selinux is running. Does anyone know what's wrong and how to fix it?
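A quick way to confirm this kind of diagnosis on any kernel is to check the build config and try loading the module directly; the path follows the usual /boot/config-&lt;version&gt; convention used by Armbian:

```shell
# Check whether the running kernel was built with iSCSI initiator support.
grep -E 'CONFIG_ISCSI_TCP|CONFIG_SCSI_ISCSI_ATTRS' "/boot/config-$(uname -r)"
# If the options were built as modules this loads them; on a kernel
# built without them it fails instead.
sudo modprobe iscsi_tcp
lsmod | grep iscsi
```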
  4. I can partially confirm your claim that 12 Gflops is possible with a more efficient heatsink+fan. In particular, I'm currently testing the M3's bigger brother, the FriendlyArm NanoPi T3, which has the same 8-core SOC but a different heatsink. Following the same build_instructions, I obtained 12.49 Gflops with version 2.2 of the linpack benchmark linked against version 0.2.19 of the OpenBLAS library. My cooling arrangement looks like this. With the cover on, the heat is trapped and the system throttles; however, with the cover removed (and thanks to the giraffe) the system runs at full speed. The Raspberry Pi 3 does about 6.2 Gflops with a similar cooling system.
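To check whether a run is throttling, the CPU clock and temperature can be watched through sysfs while the benchmark runs. These are the standard cpufreq and thermal sysfs locations; the cpu and thermal zone numbers may differ per board:

```shell
# Watch CPU clock and temperature once per second during a benchmark.
# scaling_cur_freq is in kHz; thermal temp is in millidegrees Celsius.
while sleep 1; do
  echo "$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq) kHz" \
       "$(cat /sys/class/thermal/thermal_zone0/temp) mC"
done
```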
  5. These concerns are why I have asked here whether anyone has one, how it works, and whether they can run any benchmarks. I agree that heatsinks with fans are likely required for any parallel algorithm scaling studies. The CPUs in the Banana Pi M3 and the pcDuino8 contain v7 cores. The improved floating point/NEON hardware of the v8 cores in the NanoPi M3 is preferable for numerical work. The work-stealing algorithms used by the Cilk programming language should balance loads fairly well on the big.LITTLE architecture. At the same time, having all cores the same is better. At any rate, maybe someone who has the NanoPi M3 can clarify to what extent the concerns you mention are real problems.
  6. It is definitely possible to introduce parallel processing with a quad core. My thread on the Raspberry Pi forum discusses compiling new versions of gcc with support for the MIT/Intel Cilk parallel programming extensions on ARM devices. The compiler is tested with parallel algorithms for sorting, prime number sieves, fast Fourier transforms, computing fractal basins of attraction for complex Newton methods, numerical quadrature and approximating solutions to high-dimensional systems of ODEs. It is of practical interest how well the implementation of each algorithm scales on physical hardware. Due to the constraints of shared memory and I/O, very few algorithms scale linearly with the number of cores over a very large range. This leads to an engineering problem that may trade off algorithmic efficiency for parallel scalability to achieve the fastest execution times on multi-core CPUs. With a quad-core CPU the maximum theoretical scaling is four fold, while an eight-fold increase is possible with an octa-core. Modern compute nodes have 16 to 48 CPU cores and thousands of GPU cores. Thus, while it is possible to consider parallel optimization for a quad core, the problem is more interesting with eight cores.
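The scaling limit can be made concrete with Amdahl's law, S(n) = 1/((1-p) + p/n) for parallel fraction p; the value p = 0.95 below is an illustrative assumption, not a measurement from any of the benchmarks above:

```shell
# Amdahl's law speedups for 1, 2, 4 and 8 cores with 95% of the work
# parallelizable: even 8 cores give well under an 8x speedup.
for n in 1 2 4 8; do
  awk -v p=0.95 -v n="$n" \
    'BEGIN { printf "cores=%d speedup=%.2f\n", n, 1/((1-p)+p/n) }'
done
# prints:
# cores=1 speedup=1.00
# cores=2 speedup=1.90
# cores=4 speedup=3.48
# cores=8 speedup=5.93
```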