Showing results for 'rock64'.

  1. From their website: https://libre.computer/products/boards/roc-rk3328-cc/ and all their specs, I don't see any hints that this changed... Looks more like 'jack' thought about the Rock64 when posting in the Libre blog post. Given their 'RPi compatibility' regarding dimensions, pin header etc., the only place a barrel plug could go would be to remove the 3.5mm jack and put a barrel jack that fits there. Prices for the RAM must be quite high: 1GB version $40, 2GB version $50, 4GB version $80. Shipping a board but not providing at least one image on your website is a bit... Seems the collaboration with Firefly wasn't ready when shipping started. @blood did you buy it with the heatsink they provide? I like boards which have proper mounting holes for heatsinks. For initial testing I suggest you look here: http://forum.loverpi.com/categories/libre-computer-board-roc-rk3328-cc or there: http://en.t-firefly.com/doc/product/info/id/471.html (to make images on your own) https://pan.baidu.com/s/1c231RGG#list/path=%2F (Baidu, where an image is provided). The whole stuff looks like it's close to RK's build script, but it didn't arrive in RK's official build script (yet).
  2. Honestly: what I would prefer doesn't really matter. Just a few notes:
     - With the latest 4.14 kernel HW-accelerated crypto is possible with the Exynos 5422: https://wiki.odroid.com/odroid-xu4/software/disk_encryption -- no idea how easy it is to use though, and you should keep in mind that Armbian's next branch still uses kernel 4.9 (the last try to switch to 4.14 ended up with bricked boards, but once this is resolved this functionality should be available with Armbian too).
     - RK3328 implements the ARMv8 Crypto Extensions, so at least software using AES will benefit from this automagically. @chwe already pointed to threads with performance numbers; you could also have a look in the ODROID N1 thread to compare cryptsetup benchmark numbers (though those were made with RK3399, so you have to do the math on your own using RK3328/RK3399 openssl numbers to compare HC1 and Rock64 performance here).
     - Since you mentioned 'German customs fees': buying at Pollin should ease warranty handling and stuff: https://www.pollin.de/search?query=odroid hc1&channel=pollin-de
     - Since you compare two boards running different architectures and since you seem to be keen on 'more DRAM', you should keep in mind that memory usage with a 64-bit userland can be much higher compared to running a 32-bit userland. See especially the links I reference here: https://www.raspberrypi.org/forums/viewtopic.php?f=91&t=192321#p1281326 In case the tasks you want to run are memory hungry, running an arm64 userland on the Rock64 can result in lower performance compared to the HC1 with the same amount of memory. So choosing an armhf userland instead might be an option. Not possible with Armbian right now, but since the excellent ayufan images exist a good basis could be his armhf OMV release: https://github.com/ayufan-rock64/linux-build/releases (grab the latest image marked as 'release' and avoid 'pre-release' if you're not an expert). AFAIR Nextcloud is then best installed in a Docker container...
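     If you want to get a rough feel for the AES difference yourself, a minimal check (assuming openssl and cryptsetup are installed on both boards) is to run the following on each and compare the numbers:
         openssl speed -elapsed -evp aes-128-cbc
         cryptsetup benchmark
     On the RK3328 the openssl figures should be massively higher thanks to the ARMv8 Crypto Extensions; on the Exynos 5422, dm-crypt can only take advantage of the SoC's crypto engine once the 4.14 work mentioned above is in place.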
  3. You might read through @tkaiser's detailed reviews: HC1: and for the Rock64 SATA cable: or, since you're interested in crypto, part of the Rock64 thread: or in general about 'NAS' (there's a lot of performance benchmarking inside): And in case you want more information: that's how I find all this useful information.
  4. Thank you, @Igor_K! One problem could be the E2E encryption feature of NC13, since the HC1 doesn't have hardware encryption and the RK3328 of the ROCK64 is still pretty new. @tkaiser, do you have any information about this? Which board would you prefer? I would love to hear your opinion on this. Sincerely, cyagon
  5. Hi there, I'm going to run a full Bitcoin node on an ARM board, so at first sight the requirements are: 1) it is in stock now 2) 2 GB RAM 3) CPU performance and stability under high load 4) USB 2.0 (or better) for external storage 5) Ethernet on board. As for now my short list is 1) Odroid XU4 or HC1 (the C1 was used for bitnodes as far as I know) 2) ASUS Tinker 3) Orange Pi Prime (?) 4) NanoPi K2 5) Rock64 (4GB). I'm not sure about the OPi Prime b/c I've found some reports on the forum about stability issues. Also it is sold out at the moment. I'm open to your suggestions and thoughts. Thank you in advance.
  6. You can't with current Rock64 releases since ayufan defined the GPIO pin in the device tree, so it's no longer accessible from userspace. With his older images (0.5.1 and before) the following was possible:
         GPIO=2
         USB_Power_Node=/sys/class/gpio/gpio${GPIO}
         [ -d ${USB_Power_Node} ] || echo ${GPIO} >/sys/class/gpio/export
         echo out >${USB_Power_Node}/direction
         hdparm -y /dev/sda
         sleep 2
         echo 1 > ${USB_Power_Node}/value
     BTW: what I didn't know back then is that this GPIO controls power to all USB ports (USB3 included), but the way the pin is currently defined it's no longer possible to toggle power to the USB ports.
  7. I was more talking about bundling all this compiler stuff with default installations. Just a quick check which packages consume the most space, as already done a year ago (a sketch of such a check follows below): Edit: This is how it looks with DietPi on the Rock64:
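     A minimal sketch of such a check on any Debian/Ubuntu based image (the reported sizes are installed sizes in KiB):
         dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -rn | head -20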
  8. USB3 anomalies / problems
     When I tested this almost 2 weeks ago I did not pay close enough attention to the crappy write performance: 470 MB/s with 4 SSDs in parallel attached to all SATA and USB3 ports is just horribly low given that we have a 'per port' and a 'per port group' limitation of around 390 MB/s. What we should've seen is +650 MB/s taking the overhead into account. But 470 MB/s was already an indication that there's something wrong. Fortunately in the meantime an ODROID community member tested various mirror attempts with 2 Seagate USB3 disks and reported 'RAID 0 doubles disk IO' while in reality showing exactly the opposite: none of his three mirror attempts (mdraid, lvm and btrfs) reported write performance exceeding 50 MB/s, which is insanely low for a RAID0 made out of two 3.5" disks (such lousy numbers are usually not even possible with 2 USB2 disks on separate USB2 ports).
     So let's take a look again: EVO840 and EVO750, both in JMS567 enclosures, connected to each USB3 port. I simply created an mdraid RAID0 and measured sequential performance with 'taskset -c 5 iozone -e -I -a -s 500M -r 16384k -i 0 -i 1':
                     kB  reclen   write  rewrite    read   reread
                 512000   16384   85367    85179  312532   315012
     Yep, there's something seriously wrong when accessing two USB3 disks in parallel. Only 85 MB/s write and 310 MB/s read is way too low, especially for rather fast SSDs. 'iostat 1' output shows that each disk when writing remains at ~83 tps (transactions per second): https://pastebin.com/CvgA3ggQ
     Ok, let's try to get a clue what's bottlenecking. I removed the RAID0 and formatted both SSDs as ext4. First tests with only one SSD active at a time:
                     kB  reclen   write  rewrite    read   reread
     EVO840      512000   16384  378665   382100  388932   392917
     EVO750      512000   16384  386473   385902  377608   383549
     Now trying to start the iozone runs at the same time (of course iozone tasks sent to different CPU cores to avoid CPU bottlenecks, same applies to IRQs: that's /proc/interrupts after test execution):
                     kB  reclen   write  rewrite    read   reread
     EVO840      512000   16384  243482   215862  192638   160677
     EVO750      512000   16384  214356   252474  168322   195164
     So there is still some sort of a limitation, but at least it's not as severe as in the mirror modes when all accesses to the two USB connected disks happen exactly in parallel.
     When looking closer we see another USB3 problem long known from the N1's little sibling ROCK64 (any RK3328 device is a much closer relative to the N1 than any of the other ODROIDs):
     [ 3.433165] xhci-hcd xhci-hcd.7.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 3.433183] xhci-hcd xhci-hcd.7.auto: @00000000efc59440 00000000 00000000 1b000000 01078001
     [ 3.441152] xhci-hcd xhci-hcd.8.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 3.441171] xhci-hcd xhci-hcd.8.auto: @00000000efc7e440 00000000 00000000 1b000000 01078001
     [ 11.363314] xhci-hcd xhci-hcd.7.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 11.376118] xhci-hcd xhci-hcd.7.auto: @00000000efc59e30 00000000 00000000 1b000000 01078001
     [ 11.385567] xhci-hcd xhci-hcd.8.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 11.395145] xhci-hcd xhci-hcd.8.auto: @00000000efc7ec30 00000000 00000000 1b000000 01078000
     [ 465.710783] usb 8-1: new SuperSpeed USB device number 3 using xhci-hcd
     [ 465.807944] xhci-hcd xhci-hcd.8.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 465.817503] xhci-hcd xhci-hcd.8.auto: @00000000efc7ea90 00000000 00000000 1b000000 01078001
     [ 468.601895] usb 6-1: new SuperSpeed USB device number 3 using xhci-hcd
     [ 468.671876] xhci-hcd xhci-hcd.7.auto: ERROR Transfer event for disabled endpoint or incorrect stream ring
     [ 468.671881] xhci-hcd xhci-hcd.7.auto: @00000000efc591f0 00000000 00000000 1b000000 01078001
     I updated bootloader and kernel this morning and have no idea whether this was introduced (again?) just recently or existed already before:
     root@odroid:~# dpkg -l | egrep "odroid|bootini"
     ii  bootini         20180226-8   arm64  boot.ini and its relatives for ODROID-N1
     ii  linux-odroidn1  4.4.112-16   arm64  Linux kernel for ODROID-N1
     But I guess we're still talking about a lot of room for improvements when it's about XHCI/USB3, the BSP kernel and RK3399.
     Edit: Strangely, when I tested USB3 right after receiving the N1 two weeks ago the RAID0 results weren't that low. Now I remembered what happened back then: I immediately discovered the coherent pool size being too low and increased it to 2MB (a change that gets removed every time the 'bootini' package is updated). And guess what: that does the trick. I added 'coherent_pool=2M' to the kernel cmdline and we're back at normal performance, though there's still a ~390 MB/s overall limitation.
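     For reference, a hedged sketch of checking/applying that workaround yourself (the exact boot.ini location and variable layout depend on the Hardkernel image, so treat the second step as an assumption to verify against your own boot.ini):
         # check whether the running kernel already got the larger coherent pool
         tr ' ' '\n' < /proc/cmdline | grep coherent_pool
         # if not, append coherent_pool=2M to the kernel command line in boot.ini on the
         # boot partition and reboot; note the bootini package overwrites this on updates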
  9. You might run into problems since the RK miniloader and/or U-Boot SPL on Rock64 images are configured for the Rock64's LPDDR3 DRAM and not the LPDDR4 the Renegade uses, so you will most likely have to build new boot firmware with the right RK miniloader, or submit patches to the U-Boot SPL adding detection for the LPDDR4 stuff...
  10. I bought one of these too after the project reported that they'd be supporting Armbian. It arrived last week. I did try to boot the Rock64 Armbian image on it but didn't have much success:
     Starting kernel ...
     <hit enter to activate fiq debugger>
     Loading, please wait...
     starting version 229
     Begin: Loading essential drivers ... done.
     Begin: Running /scripts/init-premount ... done.
     Begin: Mounting root file system ...
     Begin: Running /scripts/local-top ... done.
     Begin: Running /scripts/local-premount ... Scanning for Btrfs filesystems
     done.
     Begin: Waiting for root file system ...
     Begin: Running /scripts/local-block ....
     Begin: Running /scripts/local-block ... done.
     That last line then repeats a bunch of times and it reboots in a loop like that. All things considered I'm used to these sorts of boards shipping with terrible support (not throwing shade at Armbian; you guys rock - it's the vendors' fault), so I won't say I'm surprised or even disappointed. I'm not sure whether their promise of Armbian support was an assumption that this project would pick up their slack, or if they were actually going to contribute such support. Any news on how/if this is progressing?
  11. Preliminary 'performance' summary
     Based on the tests done above and elsewhere let's try to collect some performance data. Below GPU data is missing for the simple reason that I'm not interested in anything GPU related (or attaching a display at all). Besides being used for display stuff and 'retro gaming', RK3399's Mali T860 MP4 GPU is also OpenCL capable. If you search for results (ODROID N1's SoC has been available for some years now so you find a lot by searching for 'RK3399' -- for example here are some OpenCL/OpenCV numbers) please keep in mind that Hardkernel might use different clockspeeds for the GPU as well (with the CPU cores it's just like that: almost everywhere else the big/little cores are clocked at 1.8/1.4 GHz while the N1 settings use 2.0/1.5 GHz instead).
     CPU horsepower
     The situation with RK3399 is somewhat special since it's a HMP design combining two fast Cortex-A72 cores with four 'slow' A53. So depending on which CPU core a job lands on, execution time can vary by a factor of 2. With Android or 'Desktop Linux' workloads this shouldn't be an issue since there things are mostly single-threaded and the scheduler will move these tasks to the big cores automagically if performance is needed. With other workloads it differs:
     - People wanting to use RK3399 as part of a compile farm might be disappointed and still prefer ARM designs that feature four instead of two fast cores (eg. RK3288 or Exynos 5422 -- for reasons why see again the comments section on CNX)
     - For 'general purpose' server use cases the 7-zip scores are interesting since they give a rough estimate of how fast an RK3399 device will perform as a server (or how many tasks you can run in parallel). The overall score is 6,500 (see this comparison list) but due to the big.LITTLE design we're talking about the big cluster scoring 3350 and the little cluster 3900. So tasks that execute on the big cores finish almost twice as fast. Keep this in mind when setting up your environment. Experimenting with cgroups and friends to assign certain tasks to specific CPU clusters will be worth the effort (see the sketch after this post)!
     - 'Number crunchers' who can make use of NEON instructions should look at 'cpuminer --benchmark' results: We get a total 8.80 kH/s rate when running on all 6 cores (big cores only: 4.10 kH/s, little cores only: 4.90 kH/s -- so again 'per core' performance almost twice as good on the big cores) which is at the same performance level as an RK3288 (4 x A17) but gets outperformed by an ODROID XU4 for example at +10kH/s since there the little cores add a little bit to the result. But this needs improved cooling, otherwise an XU4 will immediately throttle down. The RK3399 provides this performance with way lower consumption and heat generation!
     - Crypto performance: just awesome due to ARMv8 Crypto Extensions available and usable on all cores in parallel. Simply check the cryptsetup results above and our 'openssl speed' numbers and keep in mind that if your crypto stuff can run in parallel (eg. terminating a few different VPN sessions) you can almost add up the individual throughput numbers (and even with 6 threads in parallel at full clockspeed the RK3399 just draws 10W more compared to idle)
     - Talking about 'two fast and four slow CPU cores': the A53 cores are clocked at 1.5GHz, so when comparing with RK3399's little sibling RK3328 with only 4xA53 (ROCK64, Libre Computer Renegade or Swiftboard/Transformer), the RK3399 when running on the 'slow' cores will compete with or already outperform the RK3328 boards but still has 2 big cores available for heavy stuff.
     But since a lot of workloads are bottlenecked by memory bandwidth you should have a look at the tinymembench results collected above (and use some google-fu to compare with other devices).
     Storage performance
     The N1 has 2 SATA ports provided by a PCIe attached ASM1061 controller and 2 USB3 ports routed directly to the SoC. The per-port bandwidth limitation, which also seems to apply to both port groups, is around 390 MB/s (applies to all ports regardless of whether SATA or USB3 -- also random IO performance with default settings is pretty much the same). But this is not an overall internal SoC bottleneck since when testing with fast SSDs on both USB3 and SATA ports at the same time we got numbers of around ~750MB/s. I just retested again with an EVO840 on the N1 at SATA and USB3 ports with a good UAS capable enclosure and as a comparison repeated the same test with a 'true NAS SoC': the Marvell Armada 385 on the Clearfog Pro which provides 'native SATA' from the SoC itself:
     If we look carefully at the numbers we see that USB3 slightly outperforms the ASM1061 when it's about top sequential performance. The two ASM1061 numbers are due to different settings of /sys/module/pcie_aspm/parameters/policy (defaults to powersave but can be changed to performance, which not only results in ~250mW higher idle consumption but also a lot better performance with small block sizes). While USB3 seems to perform slightly better when looking only at irrelevant sequential transfer speeds, better attach disks to the SATA ports for a number of reasons:
     - With USB you need disk enclosures with good USB-to-SATA bridges that are capable of UAS --> 'USB Attached SCSI' (we can only recommend the following ones: ASMedia ASM1153/ASM1351, JMicron JMS567/JMS578 or VIA VL711/VL715/VL716 -- unfortunately even if those chipsets are used, sometimes crappy firmwares need USB quirks or require UAS blacklisting and then performance sucks. A good example are Seagate USB3 disks)
     - When you use SSDs you want to be able to use TRIM (helps with retaining drive performance and increases longevity). With SATA attached SSDs this is not a problem, but on USB ports it depends on a lot of stuff and usually does NOT work. If you understand just half of what's written here then think about SSDs on USB ports, otherwise better choose the SATA ports here
     - And PCIe is also less 'expensive' since it needs fewer resources (lower CPU utilization with a disk on the SATA ports and fewer interrupts to process, see the 800k IRQs for SATA/PCIe vs. 2 million for USB3 with exactly the same workload below):
     226:   180   809128        0    0    0    0   ITS-MSI 524288 Edge   0000:01:00.0
     226:     0        0        0    0    0    0   ITS-MSI 524288 Edge   0000:01:00.0
     227:   277        0  2066085    0    0    0   GICv3 137 Level       xhci-hcd:usb5
     228:     0        0        0    0    0    0   GICv3 142 Level       xhci-hcd:usb7
     There's also eMMC and SD cards usable as storage. Wrt SD cards it's too early to talk about performance since the N1 developer samples only implement the slowest SD card speed mode (and I really hope this will change with the final N1 version later) and a necessary kernel patch to remove the current SD card performance bottleneck is still missing. The eMMC performance is awesome! If we look only at random IO performance with smaller block sizes (that's the 'eMMC as OS drive' use case) then the Hardkernel eMMC modules starting at 32GB size perform as fast as an SSD connected to the USB3 or SATA ports. With the SATA ports we get a nice speed boost by changing ASPM (Active State Power Management) settings by switching from the 'powersave' default to performance (+250mW idle consumption).
     Only then can an SSD behind a SATA port on the N1 outperform a Hardkernel eMMC module wrt random IO or 'OS drive' performance. But of course this has a price: when SATA or USB drives are used, consumption is a lot higher.
     Network performance
     Too early to report 'success' but I'm pretty confident we get Gigabit Ethernet fully saturated after applying some tweaks. With RK3328 it was the same situation in the beginning and maybe the same fixes that helped there will fix it with RK3399 on the N1 too. I would assume progress can be monitored here: https://forum.odroid.com/viewtopic.php?f=150&t=30126
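     A minimal sketch of the cluster pinning mentioned above (assuming the usual RK3399 numbering where cpu0-3 are the A53 and cpu4-5 the A72 cores, matching the taskset calls used elsewhere in this thread, and p7zip installed for the benchmark):
         # one-shot: run the 7-zip benchmark on the big cores only, then on the little ones
         taskset -c 4,5 7z b
         taskset -c 0-3 7z b
         # persistent: put the current shell (and its children) into a cpuset cgroup (v1)
         mkdir /sys/fs/cgroup/cpuset/bigcluster
         echo 4-5 > /sys/fs/cgroup/cpuset/bigcluster/cpuset.cpus
         echo 0   > /sys/fs/cgroup/cpuset/bigcluster/cpuset.mems
         echo $$  > /sys/fs/cgroup/cpuset/bigcluster/tasks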
  12. Yes, if the task requires fast computation the Rock64 would be more interesting. -I'm not planning on doing any CPU-intensive stuff on my EspressoBIN (except for some queued compiling and probably distributed build offloading). I don't have the need for that, but I wouldn't mind if the software could take advantage of the hardware. -That could increase the use cases for me. If that's the case, it's a bit sad - I understood from what I've read elsewhere on the net that the Topaz switch was connected via a 2.5Gbit SerDes. Did you get the information from Marvell or GlobalScale? (As far as I understood, the Armada 3720 has two 1Gbit/2.5Gbit connections that can be connected to a switch or PHY - and one of those must be used to connect to the Topaz, which is capable of 2.5Gbit as well. So I would have expected that this was just a question of initializing hardware registers to select 2.5Gbit instead of 1Gbit - I'm only guessing here though). For this one I would respectfully disagree, though. There's of course the option to use a Mini-PCIe card that gives you 2 GbE ports, or even using a Mini-PCIe splitter to utilize four GbE cards; however that's not the approach I'll try first (I may want to try it at a later point, just to see if it'll work). My attempt will be to use the Ugreen GbE adapter, which has a built-in USB 3.0 hub, and then use a few extra Ugreen GbE adapters (which work with Armbian's Debian Legacy) to get more ports and hopefully saturate the USB 3.0 as much as possible. I've heard from others that the GbE adapter with 3-port USB 3.0 hub works with the EspressoBIN precompiled Ubuntu, so it might work with most Linux distros.
  13. AES crypto performance, checking for bogus clockspeeds, thermal thresholds
     As an Armbian user you might already know that almost all currently available 64-bit ARM SoCs licensed ARM's ARMv8 crypto extensions and that AES performance, especially with small data chunks (think about VPN encryption), is something where A72 cores shine: https://forum.armbian.com/topic/4583-rock64/?do=findComment&comment=37829 (the only two exceptions are Raspberry Pi 3 and ODROID-C2 where the SoC makers 'forgot' to license the ARMv8 crypto extensions). Let's have a look at ODROID N1 and A53@1.5GHz vs. A72@2GHz. I use the usual openssl benchmark that runs in a single thread, once pinned to cpu1 (little core) and another time pinned to cpu5 (big core):
     for i in 128 192 256 ; do taskset -c 1 openssl speed -elapsed -evp aes-${i}-cbc 2>/dev/null; done | grep cbc
     for i in 128 192 256 ; do taskset -c 5 openssl speed -elapsed -evp aes-${i}-cbc 2>/dev/null; done | grep cbc
     As usual monitoring happened in another shell, and when testing on the A72 I not only got a huge result variation but armbianmonitor also reported 'cooling state' already reaching 1 -- see the last column 'C.St.' (nope, that's the PWM fan, see a few posts below)
     Time       big.LITTLE    load %cpu %sys %usr %nice %io %irq   CPU  C.St.
     06:00:44: 1992/1512MHz   0.46  16%   0%  16%   0%   0%   0%  51.1°C  1/3
     So I added a huge and silent USB powered 5V fan to the setup blowing air over the board at a 45° angle to improve heat dissipation a bit (I hate those small and inefficient fansinks like the one on the XU4 and the N1 sample now) and tried again. This time cooling state remained at 0, the internal fan did not start, and we had no result variation any more (standard deviation low enough between multiple runs):
     Time       big.LITTLE    load %cpu %sys %usr %nice %io %irq   CPU  C.St.
     06:07:03: 1992/1512MHz   0.46   0%   0%   0%   0%   0%   0%  30.0°C  0/3
     06:07:08: 1992/1512MHz   0.42   0%   0%   0%   0%   0%   0%  30.0°C  0/3
     06:07:13: 1992/1512MHz   0.39   0%   0%   0%   0%   0%   0%  30.0°C  0/3
     06:07:18: 1992/1512MHz   0.36   0%   0%   0%   0%   0%   0%  30.0°C  0/3
     06:07:23: 1992/1512MHz   0.33   0%   0%   0%   0%   0%   0%  30.0°C  0/3
     06:07:28: 1992/1512MHz   0.38  12%   0%  12%   0%   0%   0%  32.2°C  0/3
     06:07:33: 1992/1512MHz   0.43  16%   0%  16%   0%   0%   0%  32.2°C  0/3
     06:07:38: 1992/1512MHz   0.48  16%   0%  16%   0%   0%   0%  32.8°C  0/3
     06:07:43: 1992/1512MHz   0.52  16%   0%  16%   0%   0%   0%  33.9°C  0/3
     06:07:48: 1992/1512MHz   0.56  16%   0%  16%   0%   0%   0%  33.9°C  0/3
     06:07:53: 1992/1512MHz   0.60  16%   0%  16%   0%   0%   0%  33.9°C  0/3
     06:07:58: 1992/1512MHz   0.63  16%   0%  16%   0%   0%   0%  34.4°C  0/3
     06:08:04: 1992/1512MHz   0.66  16%   0%  16%   0%   0%   0%  34.4°C  0/3
     06:08:09: 1992/1512MHz   0.69  16%   0%  16%   0%   0%   0%  34.4°C  0/3
     06:08:14: 1992/1512MHz   0.71  16%   0%  16%   0%   0%   0%  35.0°C  0/3
     So these are the single-threaded PRELIMINARY openssl results for ODROID N1, differentiating between A53 and A72 cores:
     A53           16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
     aes-128-cbc  103354.37k   326225.96k   683938.47k   979512.32k  1119100.93k
     aes-192-cbc   98776.57k   293354.45k   565838.51k   760103.94k   843434.67k
     aes-256-cbc   96389.62k   273205.14k   495712.34k   638675.29k   696685.91k
     A72           16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
     aes-128-cbc  377879.56k   864100.25k  1267985.24k  1412154.03k  1489756.16k
     aes-192-cbc  317481.96k   779417.49k  1045567.57k  1240775.00k  1306637.65k
     aes-256-cbc  270982.47k   663337.94k   963150.93k  1062750.21k  1122691.75k
     The numbers look somewhat nice but need further investigation:
     - When we compared with other A53 and especially A72 SoCs a while ago (especially the A72 numbers made on an RK3399 TV box only clocking at 1.8 GHz), the A72 scores above seem too low with all test sizes (see the numbers here with AES-128 on a H96-Pro)
     - Cooling state 1 is entered pretty early (when zone0 already exceeds 50°C) -- this needs further investigation. And further benchmarking, especially with multiple threads in parallel, is useless until this is resolved/understood
     So let's check with Willy Tarreau's 'mhz' tool whether the reported CPU clockspeeds are bogus (I'm still using the performance cpufreq governor so should run with 2 and 1.5 GHz on A72 and A53 cores):
     root@odroid:/home/odroid/mhz# taskset -c 1 ./mhz
     count=645643 us50=21495 us250=107479 diff=85984 cpu_MHz=1501.775
     root@odroid:/home/odroid/mhz# taskset -c 5 ./mhz
     count=807053 us50=20330 us250=101641 diff=81311 cpu_MHz=1985.102
     All fine, so we need to have a look at memory bandwidth. Here are tinymembench numbers pinned to an A53 and here with an A72. As a reference some numbers made with other RK3399 devices a few days ago on request: https://irclog.whitequark.org/linux-rockchip/2018-02-12#21298744;
     One interesting observation is throttling behaviour in a special SoC engine affecting crypto. When cooling state 1 was reached the cpufreq still remained at 2 and 1.5 GHz respectively, but AES performance dropped a lot. So the ARMv8 crypto engine is part of the BSP 4.4 kernel's throttling strategies and performance in such a case does not scale linearly with the reported cpufreq. In other words: for the next round of tests the thermal thresholds defined in DT should be lifted a lot.
     Edit: Wrong assumption wrt openssl numbers on A72 cores -- see next post
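     For the record, a quick hedged way to inspect the trip points the running kernel actually uses (values are in millidegrees Celsius; zone names and counts differ per device tree):
         for z in /sys/class/thermal/thermal_zone*; do
             echo "== $(cat $z/type)"
             grep . $z/trip_point_*_temp 2>/dev/null
         done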
  14. Gigabit Ethernet performance
     RK3399 has an internal GbE MAC implementation combined with an external RTL8211 GbE PHY. I did only some quick tests which were well above 900 Mbits/sec, but since moving IRQs to one of the A72 cores didn't improve scores it's either my current networking setup (ODROID N1 connected directly to an older GbE switch I don't trust that much any more) or necessary TX/RX delay adjustments. Anyway: the whole process should be well known and is documented, so it's time for someone else to look into it. With RK SoCs it's pretty easy to test for this with DT overlays: https://github.com/ayufan-rock64/linux-build/blob/master/recipes/gmac-delays-test/range-test And the final result might be some slight DT modifications that allow for 940 Mbits/sec in both directions with as little CPU utilization as possible. Example for RK3328/ROCK64: https://github.com/ayufan-rock64/linux-kernel/commit/2047dd881db53c15a952b1755285e817985fd556 Since RK3399 uses the same Synopsys DesignWare Ethernet implementation as currently almost every other GbE capable ARM SoC around, and since we get maximum throughput on RK3328 with adjusted settings... I'm pretty confident that this will be the same on RK3399.
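     If someone wants to reproduce the quick throughput check, a minimal sketch (assuming iperf3 on both ends and another GbE machine at 192.168.1.100 running 'iperf3 -s'):
         iperf3 -c 192.168.1.100        # board transmitting
         iperf3 -c 192.168.1.100 -R     # board receiving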
  15. ViltusVilks

    ROCK64

     Can you test a Docker container on the Rock64 on different storage - USB3, microSD, USB2 and ramdisk? The primary test setup would be: install Docker, set up SSL certs, use a ready-to-go nginx + php-fpm Docker container, run some PHP scripts with possible memcache access. Why Docker? Because it is easy to pass a storage path for the test without installing the needed software on the host OS, and later to scale everything using GbE. Test area: a real-life test bench for SSL encryption/decryption, storage and RAM performance and access timings. Some sort of PHP math calculations, or use the OpenSSL library to load the CPU cores.
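     A minimal sketch of how such a per-storage comparison could be set up (paths, ports and the php:7-apache image are only placeholders standing in for the nginx + php-fpm pair, assuming Docker is already installed):
         # same container image, web root bind-mounted from the storage under test
         docker run -d --name web-usb3    -v /mnt/usb3/www:/var/www/html -p 8081:80 php:7-apache
         docker run -d --name web-microsd -v /mnt/sd/www:/var/www/html   -p 8082:80 php:7-apache
         # then point a HTTP benchmark (ab, wrk, ...) at ports 8081 and 8082 and compare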
  16. In terms of performance for my use case, my Rock64 is faster than my EspressoBIN. I use it mostly as a reverse proxy doing SSL/TLS termination. It's got more cores at a higher clock. The EspressoBIN has a hardware crypto accelerator that is useless at the moment since there is no Linux support for it. Support from GlobalScale is pretty lackluster. You won't get more than 1Gbit from it either, it doesn't have the hardware to support it. There is a single 1Gbit link from the SoC to the switch. The only way to get faster networking would be a PCIe card.
  17. I've been thinking about making a watchdog (using an STM32 as monitor/power manager) for my server (CubieBoard2) - I think you could make a similar 'work-around'. Since you're monitoring the board from another board, you could actually solder two wires onto the reset button's terminals, e.g. one to GND and one to the actual RESET signal. The other end of those two wires could simply be soldered to a relay which is "normally open". The relay can be controlled by a transistor, which is controlled by an opto-coupler, which in turn is controlled by the monitoring SBC. This solution can definitely be optimized down to just a few components (something like a transistor, resistors and a diode), but I suggested the relay so that it'd be easy to follow what's going on; plus if you're using an opto-coupler you should be able to electrically separate the boards from any accidents. You could have the Rock64 charge a capacitor which is slowly discharged over a resistor. The monitoring SBC could then poll this capacitor (measure the voltage using one of the ADC pins) to find out whether it's running or not.
  18. The Tinker has 3.3/1.8, and so does Le Potato. That said, it is the source of the reboot bugs on the Tinker and requires messy workarounds. (I had to add the device tree checks because of the MiQi crashing and burning with those hacks in place). Now, for all of that, the Rock64 has SPI flash, so U-Boot can live there and boot your system from USB3 if you wish, which could be faster than any option discussed here. Oh, in case anyone thinks of running some 1.8V lines for a custom board or anything else: https://www.sdcard.org/developers/overview/low_voltage_signaling/index.html Any card predating these "LV" cards requires 3.3 volts to start; then a voltage switch is negotiated with the SD card.
  19. My Rock64s boot fine so far without pushing any buttons; booting from USB or network via SPI flash works too... If you look for community-built images check out ayufan's GitHub... GbE issues have been fixed by tuning RGMII delays, the SD card is limited to 25 MHz since it is wired up at a fixed 3.3V (might change in a future board revision, needs more evaluation), eMMC runs at 1.8V and supports higher speeds. https://github.com/ayufan-rock64 https://github.com/ayufan-rock64/linux-build/releases
  20. I don't think I've ever pushed either of the two buttons on my rock64
  21. I cannot confirm this behaviour. My Rock64 board does not need that manual interaction in order to boot.
  22. I will need to experiment with something that ayufan rolled out for the Rock64 to tune the GbE timings, since that could cause the network to fail (the Rock64 has/had some GbE instability as well). In the meantime I updated the available packages with the current state of my fork: the Rockchip repo updated to 4.4.112, I've patched up to 4.4.115, however a couple of chunks did not apply properly and a couple claimed to already be in place, so it's not 100%.
  23. No, there are a lot more, but all of this doesn't really matter, especially here in this forum. And the whole issue (people using this crappy tool called sysbench trying to benchmark hardware) is not even related to 32-bit vs. 64-bit but to different compiler switches. Raspbian packages are built for the ARMv6 ISA so they can be executed on the horribly outdated single-core RPis as well. Normal Debian/Ubuntu armhf packages are built for ARMv7, and you would need to switch to arm64 packages since only these packages are built with support for ARMv8 CPU cores (that's what's inside RPi 3 and, in the meantime, RPi 2). So comparing an OPi Win running an arm64 Ubuntu or Debian distro with an RPi 3 running any recent Arch, Gentoo, Fedora, OpenSuSE or even an arm64 Armbian (see my link -- I did this) you will see sysbench numbers that are pretty close. Numbers between the different distros will vary since the distro packages are built with different compiler versions and switches. And this is all the lousy sysbench tool in 'cpu test' mode is able to report, since this whole test is just calculating prime numbers inside the CPU caches (and as soon as an ARMv8 CPU is allowed to run ARMv8 code this gets magnitudes faster). I don't know of a single real-world use case that would correlate with this pseudo benchmark (except of course if your job is calculating prime numbers; then you can rely on sysbench, and if you're running on an RPi 3 you should better stay away from Raspbian, DietPi and the other ARMv6 dinosaurs). But while sysbench wrongly reports that an RPi 3 (or an OPi Win if you choose Xunlong's Raspbian images!) would be magnitudes slower than any of the recent ARMv8 boards, with one specific workload the RPi 3 really is magnitudes slower: AES encryption -- think about VPN and disk encryption. Broadcom forgot to license the ARMv8 crypto extensions so any other 64-bit (ARMv8) SBC is a better choice than the RPi 3 if it's about AES (except ODROID-C2 and NanoPi K2 since their Amlogic S905 suffers from the same problem). See the numbers here: https://forum.armbian.com/topic/4583-rock64/?do=findComment&comment=37829 (OPi Win scores the same as the Pinebook)
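     A quick hedged way to check what you are actually running, since the kernel can be 64-bit while the installed userland is not:
         uname -m                    # kernel architecture, e.g. aarch64 or armv7l
         dpkg --print-architecture   # userland package architecture, e.g. armhf or arm64
         file /bin/ls                # shows which ELF target the binaries were built for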
  24. Hi, I have received my SCISHION V88 Piano and I can confirm that it boots to an Ubuntu MATE desktop from a micro SD-card with the image in the first post of this topic. Like rob0809 I did nothing except insert the micro SD-card and power on. Unfortunately after running for a few minutes there were some I/O errors which I must investigate, but this is perhaps due to my cheap (Ansonchina) SD-card. I shall try other images and report back later.
     Updates: I tried this image: https://github.com/ayufan-rock64/linux-build/releases/download/0.5.15/xenial-mate-rock64-0.5.15-136-arm64.img.xz and it also booted directly but still with I/O errors. I replaced the Ansonchina 8GB SD-card with a Kingston 8GB one and there were no more I/O errors. I tried the following images which also booted directly from the Kingston SD-card with no intervention:
     http://dietpi.com/downloads/images/DietPi_Rock64-ARMv8-Stretch.7z
     https://github.com/Raybuntu/LibreELEC.tv/releases/download/rb-leia23/LibreELEC-rock64.arm-rb-leia23.img.gz
     https://github.com/ayufan-rock64/linux-build/releases/download/0.5.15/xenial-i3-rock64-0.5.15-136-arm64.img.xz
     Most of these images require an external USB Ethernet adapter (internal Ethernet and WiFi don't seem to be supported). They also run much more slowly than I had expected. Unfortunately I don't think that this is just due to a lack of hardware graphics acceleration. But this is just a start and at least they do run.
     Later update: I found an easy hack to greatly speed up simple operations not involving heavy video or graphics. (Due to the lack of hardware acceleration, YouTube videos, for example, still play frame by frame.) I copied exactly the same disk image to both a micro SD-card and to a USB 3.0 storage device. I modified the label on the root partition of the SD-card so that it would use the USB disk as the rootfs instead. On most of these images the rootfs is labelled linux-root on partition 7. I used gparted to change the SD-card's partition 7 label to linux-rootX so that it would pick up the partition on the USB drive instead (see the e2label sketch after this post). I also used parted to correct for the different sizes of the two devices and to increase the size of the root partition on the USB drive. However this is not strictly necessary. As a stability test and a realistic benchmark, I compiled the recent mainline v4.15 Linux kernel natively: "make defconfig && make -j 4 Image" finished successfully in slightly under 100 minutes. This is not too bad for a box costing about $40.
     Even later update: The best Ubuntu image that I have found is this one: https://dl.armbian.com/rock64/Ubuntu_xenial_default_nightly.7z This currently redirects to: https://dl.armbian.com/rock64/nightly/Armbian_5.34.171121_Rock64_Ubuntu_xenial_default_4.4.77.7z There is only one single partition so I had to:
     - copy the image to a USB storage device
     - resize the partition
     - add the label linux-root
     - copy the appropriate files from the boot directory of this partition to replace the dtb, Image and initrd.img files on the boot partition of the SD-card I used previously
     - ensure that there is no partition labeled linux-root on the SD-card
     - after booting I installed ubuntu-mate-desktop but this is obviously not mandatory
     The advantages of this image are:
     - it will find the linux-root partition even if the USB storage device is on a hub (otherwise the USB drive monopolizes the USB 3.0 port)
     - YouTube video (nearly) works so there must be some hardware acceleration. Unfortunately the video freezes from time to time.
     This doesn't seem to be due to a slow Internet connection. With this image the compilation time for the same kernel dropped to 72 minutes. I also tried this image but it did not seem to boot (at least there was only a blank screen on HDMI): https://github.com/ayufan-rock64/linux-build/releases/download/0.6.19/bionic-minimal-rock64-0.6.19-181-arm64.img.xz Maybe it is headless so I shall try again to ssh into it. (Or to log in via a serial console, but I have not yet needed to open the case.) Cheers, Chris
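     A hedged sketch of that relabeling step from the command line with e2label (device names are examples only; on these images partition 7 carries the linux-root filesystem):
         # on the SD-card: rename the root label so it is no longer picked up first
         e2label /dev/mmcblk0p7 linux-rootX
         # on the USB drive: make sure its root partition carries the expected label
         e2label /dev/sda7 linux-root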
  25. Hmm, that reminds me of a test of the Rock64 - where the Rock64 is also much, much faster in such a test: take a look at minute 11:30. BTW: I like this YouTube channel "Explaining Computers" - many SBC tests.