
Firefly Station M3 (rk3588s)


balbes150


16 hours ago, blondu said:

2. I have a Wi-Fi 6E 2x2 MT7921K M.2 module (rebranded as AMD RZ608) from a GMKtec NucBox 5; does it work in the M.2 slot?

I haven't checked, but for it to work you may need to first change the DTB to switch the M.2 slot to NVMe/PCIe mode. But it is better to check with the manufacturer.
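For illustration, one way such a DTB switch could look on an image booting via extlinux.conf; the DTB file names here are purely hypothetical, the real variants would live under /boot/dtb:

# Hypothetical file names -- check /boot/dtb for the real SATA/NVMe variants.
sed -i 's|rk3588s-station-m3.dtb|rk3588s-station-m3-nvme.dtb|' /boot/extlinux/extlinux.conf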



OK, I'm also waiting for a 128 GB Samsung module.

Booting from the SSD is stranger to me: it reads the boot files from the eMMC, and if I don't have a boot directory on the eMMC with the correct partition ID, it doesn't boot. Is it possible to boot directly from the SSD?


1 minute ago, blondu said:

Booting from the SSD is stranger to me: it reads the boot files from the eMMC

This is as it should be; this is the correct boot path. Direct booting from SATA/NVMe would require support for these devices in u-boot, and at the moment there is none.


I tried to compile drivers for 0572:c688 Conexant Systems (Rockwell), Inc. Geniatech T230 DVB-T2 TV Stick. I installed a generic kernel headers package and got this error:
Preparing to compile for kernel version 5.10.66
File not found: /lib/modules/5.10.66-station-m3/build/.config at ./scripts/make_kconfig.pl line 33, <IN> line 4.
make[1]: *** [Makefile:376: alliesconfig] Error 2
I should mention that I have the rk35**s Linux SDK; can this be solved?


5 minutes ago, blondu said:

I tried to compile drivers for 0572:c688 Conexant Systems (Rockwell), Inc. Geniatech T230 DVB-T2 TV Stick,

Which headers package did you install, the one from the shared network repository? That package will not work with the kernel for the M3. You can build your own kernel (and change its configuration) directly on the M3 in the Armbian system: clone my Git repository and run the generally accepted kernel build procedure in it (including the kernel configuration step). But be prepared that the driver you need may be missing from the source tree, or may require a fix to build properly.
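For reference, a minimal sketch of such an on-device build following the usual kernel build procedure; the repository URL and branch below are placeholders, not the actual tree:

# Placeholders -- substitute balbes150's actual Git repository and branch.
git clone --depth 1 -b <branch> https://github.com/150balbes/<kernel-repo>.git
cd <kernel-repo>
make menuconfig                      # enable the missing DVB driver here
make -j$(nproc) Image modules dtbs   # build kernel, modules and device trees
sudo make modules_install            # installs under /lib/modules/<version>/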


With an SSD on the M.2 PCIe 2.0 interface, the write and read speeds are much lower.
I tested here: http://ix.io/49Y3 with the SSD SAMSUNG MZALQ128HBHQ-000L2 and with gnome-disk-utility.

 

ady@station-m3:~$ sudo hdparm -Tt /dev/nvme0n1p1

/dev/nvme0n1p1:
 Timing cached reads:   3388 MB in  2.00 seconds = 1694.54 MB/sec
 Timing buffered disk reads: 1104 MB in  3.00 seconds = 367.64 MB/sec


I did a small test comparing SATA and NVMe on the M3. Yes, NVMe speed is lower than SATA. I think this is because the rk3588s chip has "stripped down" PCIe/NVMe functionality, so the speed is lower, and that is why SATA mode is preferable by default. SATA performance is quite enough for fast work; it is significantly better than all the Amlogic boards. For example, a complete build and XZ packaging of an Armbian image on the M3 takes only 20-30 minutes, which is faster than Intel i5/i7 machines costing 2-3 times as much.
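(For scale: the XZ packaging step alone runs multi-threaded across all eight cores; an illustrative invocation, with the image file name being an assumption:)

# Illustrative only -- the image name is an assumption; -T0 uses all cores.
xz -v -T0 Armbian_*_station-m3.img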


On 9/8/2022 at 6:51 PM, blondu said:

367.64 MB/sec

 

Really low numbers.

 

1) hdparm tests with a 128K block size, which was a lot last century when someone hardcoded it into hdparm, but today it's just a joke. Use iozone or fio with larger block sizes (see the sketch after this list).

2) Armbian hasn't cared about low-level optimizations for years. Better search for 'io_is_busy' and 'dmc/governor' to get full speed (both storage and CPU performance, or at least a 7-zip score about 2000 points higher): https://github.com/ThomasKaiser/Knowledge/blob/master/articles/Quick_Preview_of_ROCK_5B.md

3) Check 'cat /sys/devices/system/cpu/cpufreq/policy?/ondemand/io_is_busy' – if I/O is processed by cpu6 or cpu7, it might be a lot slower compared even to the little cores.

4) A PCIe Gen2 lane allows for 5 GT/s, while SATA at 6 Gbps has 120% of that data rate, and both use 8b10b coding. As such, SATA should be faster in sequential transfer speeds.

5) Sequential transfer speeds are BS when it's about an OS drive, since there random I/O matters. And since SATA relies on AHCI, made for spinning rust last century, NVMe should outperform SATA even in a single Gen2 lane config, even if silly 'benchmarks' like hdparm draw a different picture.

6) What does 'cat /sys/module/pcie_aspm/parameters/policy' report?
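For illustration, the kind of iozone run meant in 1) above, covering both larger sequential records and 4K random I/O (the exact parameters here are my choice, not from this post):

# Direct I/O (-I), include flush in timing (-e); write/read (-i 0 -i 1) plus
# random I/O (-i 2) with 4K, 1M and 16M record sizes on a 100M test file.
iozone -e -I -a -s 100M -r 4k -r 1024k -r 16384k -i 0 -i 1 -i 2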


12 hours ago, blondu said:

What OS can be added and how is it selected?

While there is no HDMI output support, choosing an OS or a different kernel is possible in extlinux.conf and in EFI/GRUB, but only through the UART console. To add another kernel or OS, you can manually edit extlinux.conf. For EFI, the addition works as on a regular PC: add the necessary kernel files and run the GRUB update procedure; os-prober will find them and add the necessary menu entries. Important: I haven't managed to add support for installing to eMMC/NVMe in EFI mode yet, so when starting the installation, extlinux.conf mode will be used.
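For illustration, a hypothetical extra entry in /boot/extlinux/extlinux.conf (every file and DTB name here is made up; take the real ones from /boot). On the EFI side, running 'sudo update-grub' regenerates the menu via os-prober:

# Hypothetical second boot entry -- kernel, initrd and DTB names are illustrative.
label Armbian 5.10.66 test kernel
    kernel /boot/Image-5.10.66-test
    initrd /boot/uInitrd-5.10.66-test
    fdt /boot/dtb/rockchip/rk3588s-station-m3.dtb
    append root=/dev/mmcblk1p1 rootwait rw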

 

12 hours ago, blondu said:

I couldn't boot from the SSD

I checked the installation on SATA and everything works for me without any problems (I now have Armbian installed on SATA and have started building images for the M3 on it; it works very fast). When starting the installation, select the mode that places the bootloader on eMMC and the system on SATA/NVMe. Please note, the installation works only in extlinux.conf mode.

Due to the peculiarities of working with new media, the disk must be prepared in advance before starting an installation to SATA/NVMe: run gparted, create an MBR (dos) partition table, and preferably create an ext4 partition. After that, you can run the armbian-config utility and perform the installation.
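For those who prefer the command line over gparted, an equivalent sketch; /dev/nvme0n1 is an assumed device name, so verify yours with lsblk first, as these commands wipe the disk:

# Assumed device name -- check with lsblk; this destroys all data on the disk.
sudo parted /dev/nvme0n1 --script mklabel msdos
sudo parted /dev/nvme0n1 --script mkpart primary ext4 1MiB 100%
sudo mkfs.ext4 /dev/nvme0n1p1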

This manual preparation could be fixed, but I'm reluctant to spend time on it right now, since in the near future I plan to rework the installation process for EFI mode.

 

 


On 9/11/2022 at 12:00 PM, balbes150 said:

I did a small test comparing SATA and NVMe on the M3. Yes, NVMe speed is lower than SATA

 

How did you perform this comparison? Looking only at sequential transfer speeds? Or checking random I/O (which matters way more on an OS drive or when building software)? What does /sys/module/pcie_aspm/parameters/policy look like? powersave, right?

 

And for a quick comparison of schedutil, performance, ondemand and ondemand with io_is_busy see the comments below https://github.com/radxa/kernel/commit/55f540ce97a3d19330abea8a0afc0052ab2644ef


2 hours ago, tkaiser said:

How did you perform this comparison?

 

I made a simple, crude comparison by running these commands on two modules, one SATA and one NVMe (both carry an exact clone of the same system; the only difference is the DTB):


 

fio --name=write --ioengine=libaio --iodepth=4 --rw=write --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test
fio --name=read --ioengine=libaio --iodepth=4 --rw=read --bs=1M --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test
fio --name=randwrite --ioengine=libaio --iodepth=4 --rw=randwrite --bs=4K --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test
fio --name=randread --ioengine=libaio --iodepth=4 --rw=randread --bs=4K --direct=1 --size=2G --numjobs=30 --runtime=60 --group_reporting --filename=/mnt/tmp/test

 


56 minutes ago, balbes150 said:

I made a simple, crude comparison by running these commands

But your image, or let's rather say the kernel you're using, is made for Android and lacks optimisations.

 

As for NVMe, what about

echo default >/sys/module/pcie_aspm/parameters/policy

or removing CONFIG_PCIEASPM_POWERSAVE=y from kernel config?
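For the kernel-config route, a sketch using the kernel tree's own scripts/config helper (run from the kernel source directory before rebuilding):

# Flip the built-in ASPM default from powersave to the 'default' policy.
scripts/config --disable PCIEASPM_POWERSAVE --enable PCIEASPM_DEFAULT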

 

And for I/O performance in general this would be needed, since with my code fragments from half a decade ago that are still part of Armbian, the 3rd CPU cluster isn't adjusted properly:

prefix="/sys/devices/system/cpu"
CPUFreqPolicies=($(ls -d ${prefix}/cpufreq/policy? | sed 's/freq\/policy//'))
if [ ${#CPUFreqPolicies[@]} -eq 1 -a -d "${prefix}/cpufreq" ]; then
	# if there's just a single cpufreq policy ondemand sysfs entries differ
	CPUFreqPolicies=${prefix}
fi
for i in ${CPUFreqPolicies[@]}; do
	affected_cpu=$(tr -d -c '[:digit:]' <<< ${i})
	echo ondemand >${prefix}/cpu${affected_cpu:-0}/cpufreq/scaling_governor
	echo 1 >${i}/cpufreq/ondemand/io_is_busy
	echo 25 >${i}/cpufreq/ondemand/up_threshold
	echo 10 >${i}/cpufreq/ondemand/sampling_down_factor
	echo 200000 >${i}/cpufreq/ondemand/sampling_rate
done

 

And based on the sbc-bench results shared here, the dmc/dfi nodes are enabled in the device-tree (defaulting to dmc_ondemand), and as such this would restore 'full performance':

echo performance >/sys/class/devfreq/dmc/governor
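To verify, the standard devfreq sysfs nodes can be read back:

# Show the active and the available governors for the DRAM controller.
cat /sys/class/devfreq/dmc/governor
cat /sys/class/devfreq/dmc/available_governors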

 


33 minutes ago, tkaiser said:

kernel you're using is made for Android and lacks optimisations.

So far, this is the only option that can be used for full-fledged work. The mainline kernel still has minimal support; it is not enough even for basic use.

 

47 minutes ago, tkaiser said:

As for NVMe what about 

I do not have access to the device yet (it is tied up with a large amount of work). Once it frees up, I will check these options.

 

1 hour ago, tkaiser said:

And for I/O performance in general this would be needed since with my code fragments from half a decade ago that are still part of Armbian the 3rd CPU cluster isn't adjusted properly:

Sorry, I didn't understand what needs to be done. Should this code be included in the scripts?


20 hours ago, balbes150 said:

The mainline kernel still has minimal support; it is not enough even for basic use.

It's not about mainline kernel vs. BSP kernel, but about the latter having settings that might fit Android (use cases like watching video or playing games) but not Linux.

 

You seem to be using these defaults without questioning them. Rockchip's BSP kernel defaults to powersave for ASPM (Active State Power Management), which of course negatively affects NVMe performance. As such you need to either eliminate CONFIG_PCIEASPM_POWERSAVE=y from the kernel config or execute somewhere after booting:

echo default >/sys/module/pcie_aspm/parameters/policy

 

Also, when the dmc/dfi device-tree nodes are enabled (which seems to be the case with your RK3588 kernel fork, since the 7-zip scores you and @blondu are sharing are below 14800 while they could be around 16500), the BSP kernel defaults to the dmc_ondemand governor, which can be changed like this:

echo performance >/sys/class/devfreq/dmc/governor

(similar to what I did for RK3399 years ago: https://github.com/armbian/build/blob/fdf73a025ba56124523baefaf705792b74170fb8/packages/bsp/common/usr/lib/armbian/armbian-hardware-optimization#L241-L244 )

 

And this here:

prefix="/sys/devices/system/cpu"
CPUFreqPolicies=($(ls -d ${prefix}/cpufreq/policy? | sed 's/freq\/policy//'))
if [ ${#CPUFreqPolicies[@]} -eq 1 -a -d "${prefix}/cpufreq" ]; then
	# if there's just a single cpufreq policy ondemand sysfs entries differ
	CPUFreqPolicies=${prefix}
fi
for i in ${CPUFreqPolicies[@]}; do
	affected_cpu=$(tr -d -c '[:digit:]' <<< ${i})
	echo ondemand >${prefix}/cpu${affected_cpu:-0}/cpufreq/scaling_governor
	echo 1 >${i}/cpufreq/ondemand/io_is_busy
	echo 25 >${i}/cpufreq/ondemand/up_threshold
	echo 10 >${i}/cpufreq/ondemand/sampling_down_factor
	echo 200000 >${i}/cpufreq/ondemand/sampling_rate
done

 

is the exact replacement for lines 81-89 in armbian-hardware-optimization: https://github.com/armbian/build/blob/fdf73a025ba56124523baefaf705792b74170fb8/packages/bsp/common/usr/lib/armbian/armbian-hardware-optimization#L81-L89

 

Without this, I/O performance with Armbian sucks on a variety of boards, for example ODROID N2/N2+, VIM3 or now the RK3588/RK3588S based boards. Unfortunately @lanefu seems to be way too biased or limited in his thinking to understand this when he creates bizarre tickets that rot away somewhere: https://armbian.atlassian.net/browse/AR-1262

 

I really don't care whether these fixes get incorporated into Armbian. But if you benchmark stuff, the settings should be adjusted accordingly. And we (as a broader community – not this place here) already know how ASPM settings negatively affect the performance of PCIe devices (like NVMe SSDs): https://forum.radxa.com/t/rock-5b-debug-party-invitation/10483/86?u=tkaiser; we know what role io_is_busy plays and what the benefits and drawbacks of the chosen dmc governor are.

 

A quality NVMe SSD, even when connected with just a single Gen2 lane, should always outperform any SATA SSD in what really matters: random I/O. If the SSD is cheap garbage or the settings are garbage, it might look different.

