aprayoga

Members
  • Posts: 138
  • Joined
  • Last visited

Reputation Activity

  1. Like
    aprayoga got a reaction from hartraft in RK3399 Legacy Multimedia Framework   
@JMCC Thanks for the hint. I'll take a look.
  2. Like
    aprayoga got a reaction from gprovost in Failing to boot from EMMC   
@qrthi it seems something went wrong while writing to the eMMC.
Your log shows these lines:
Wrong image format for "source" command
SCRIPT FAILED: continuing...
How did you write the image? We recommend using Etcher because it has write verification.
     
     
The boot mode jumper is only needed if, for some reason, the bootloader is corrupted and, to fix it, you need to completely bypass the boot device.
If the bootloader (U-Boot) is fine, it will load Armbian from the SD card first.
The "Card did not respond to voltage select" message appears because you removed the SD card.
If you insert an Armbian SD card, the lines would be:
Hit any key to stop autoboot:  0
switch to partitions #0, OK
mmc1 is current device
Scanning mmc 1:1...
Found U-Boot script /boot/boot.scr
3185 bytes read in 6 ms (517.6 KiB/s)
## Executing script at 00500000
Boot script loaded from mmc 1
    My suggestion,
1. Download the latest Armbian image.
2. Since you already have U-Boot on your eMMC, there is no need to download and write helios64_sdcard_u-boot-only.img.xz to an SD card.
3. Power on and enter UMS recovery mode.
4. Write the image using Etcher (see the sketch after this list if Etcher isn't an option). No need to extract the image; Etcher can handle it just fine.
5. Reboot the Helios64.
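(For reference, a minimal sketch of writing and verifying the image by hand, in case Etcher isn't an option. The image file name and /dev/sdX are placeholders; double-check which device the eMMC shows up as in UMS mode before writing.)
IMG=Armbian_Helios64.img.xz                                           # placeholder file name
xzcat "$IMG" | sudo dd of=/dev/sdX bs=1M conv=fsync status=progress   # write the decompressed image
# verify: checksum of the image vs. the same number of bytes read back from the device
xzcat "$IMG" | sha256sum
sudo head -c "$(xzcat "$IMG" | wc -c)" /dev/sdX | sha256sum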
     
     
  3. Like
    aprayoga reacted to qtmax in Helios4 performance concerns   
    I found the offending commit with bisect: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5ff9f19231a0e670b3d79c563f1b0b185abeca91
     
    Reverting it on 5.10.10 restores the write performance, while not hurting read performance (i.e. keeps it high).
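(A sketch of applying the revert locally on a 5.10.10 source tree, assuming network access to git.kernel.org; the cgit /patch/ URL form is used to fetch the commit as a patch:)
wget -O 5ff9f192.patch \
  "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/patch/?id=5ff9f19231a0e670b3d79c563f1b0b185abeca91"
patch -p1 -R < 5ff9f192.patch    # run from the root of the 5.10.10 kernel source; -R applies the patch in reverse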
     
I'm going to submit the revert upstream very soon; it would be great if it could also be applied as a patch to Armbian.
     
    (I would post a link here after I submit the patch, but this forum won't allow me to post more than one message a day — is there any way to lift this limitation?)
     
    @gprovost, could you reproduce the bug on your NAS?
  4. Like
    aprayoga reacted to Heisath in Only LED8 light up...!?   
    @Mangix probably not, if he hasn't used it for a few months.
     
     
@pierre LED8 is the power LED. If nothing else shows (not even LED1/LED2, which are system/error), I'd assume your board is not even starting and your SD card might be corrupt.
For further debugging you'll need to attach a computer to the micro USB port and listen to the output with a serial terminal (and post it here; see the sketch below). Or you could burn another SD card and try with that one.
     
    For LED meaning: https://wiki.kobol.io/helios4/hardware/
    For Setup (also includes connecting serial terminal): https://wiki.kobol.io/helios4/install/
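(A minimal sketch of listening on the serial console, assuming the USB-serial adapter shows up as /dev/ttyUSB0; 115200 baud is the documented console speed:)
picocom -b 115200 /dev/ttyUSB0     # or: screen /dev/ttyUSB0 115200
# check dmesg right after plugging in the micro USB cable to confirm the device name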
  5. Like
    aprayoga reacted to apocrypha in helios4 Use USB Gadget Ethernet option available?   
Is USB gadget Ethernet enabled on the Helios4?
If it is available, how can it be activated?
If USB gadget Ethernet is not in the default build, which Helios4 kernel build options should I modify?
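(For reference, a minimal sketch of what to check; the option names are from mainline Kconfig, and the exact set in the Helios4 kernel config may differ:)
# see whether gadget Ethernet support is already built
grep -E 'CONFIG_USB_(GADGET|ETH|CONFIGFS_ECM)' /boot/config-$(uname -r)
# if CONFIG_USB_ETH is built as a module, loading it exposes a usb0 network interface
sudo modprobe g_ether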
  6. Like
    aprayoga reacted to lalaw in DAS Mode and Capture One   
    Quick update here, I was running some benchmarks today.  Off the top of my head:
     
    1. local SSD was getting something like 1200 MB/s write and 1500 MB/s read
    2. USB connected HDD was getting ~175 MB/s write and 175 MB/s read
    3. NAS (not helios) connected at 1 Gbps was getting ~100 MB/s read/write
4. Helios in DAS mode on RAID 10 was getting approximately 170 MB/s write and 250 MB/s read.
5. Helios in DAS mode on a single drive was getting 190 MB/s write and 220 MB/s read.
     
For the Capture One use case, I think it's a solid contender. Unfortunately, as I described in the other thread, I can't get my whole array presented in DAS mode (it wants to give me 200 GB for an 18 TB array).
     
     
Update: I also ran the test on a single drive instead of the array. Slightly better write numbers with slightly worse read numbers. Also, frustratingly, the drive capacity was off -- this time showing 1.2 TB instead of 10 TB.
  7. Like
    aprayoga reacted to brunof in Missing RAID array after power loss   
    Good morning. Sorry for my English.
    I have the exact same problem, but with a difference when I use the commands: 
    mdadm /dev/md0 --assemble /dev/sd[abc]
     
    root@Server-Archivos:~# mdadm /dev/md127 --assemble /dev/sd[abc]
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
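(For reference, a sketch of the usual way around "busy - skipping": the member disks are typically still held by an inactive array, which has to be stopped before reassembling. The device names below are taken from the output above; verify with `cat /proc/mdstat` first.)
cat /proc/mdstat                                 # check whether md127 is listed as inactive
sudo mdadm --stop /dev/md127                     # release the member disks
sudo mdadm --assemble /dev/md127 /dev/sd[abc]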
     
I'm posting the same information in case you can help me.
     
     
    This is the syslog.
     
     
  8. Like
    aprayoga reacted to freed00m in smartctl tests are always cancelled by host.   
    Hello,
     
    I've bought 3 new identical drives and put 2 of them in the Helios64 and 1 onto my desktop.
I ran smartctl long and short tests on both drives simultaneously, but all of them were aborted/interrupted by the host.
     
    `# /usr/sbin/smartctl -a /dev/sda`
    Output: http://ix.io/2OLt
     
On my desktop the long test from smartctl succeeds without error, and all 3 drives received the same care: I just bought them and installed them, so it's unlikely the drives are physically damaged.
     
    The complete diagnostic log: http://ix.io/2OBr
     
So, has anyone got an idea why my SMART extended tests are being canceled?
     
Note: I've even tried the trick of running a background task to keep the drives from going into vendor-specific sleep: `while true; do dd if=/dev/sda of=/dev/null count=1; sleep 60; done`
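(For reference, a minimal sketch of the commands involved, assuming /dev/sda; `-t long` starts the extended self-test in the background, and the log options show its progress and result:)
sudo smartctl -t long /dev/sda        # kick off the extended self-test
sudo smartctl -l selftest /dev/sda    # check progress / see whether it was aborted by host
sudo smartctl -a /dev/sda             # full attribute and self-test report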
  9. Like
    aprayoga reacted to Benjamin Arntzen in Crazy instability :(   
    Hi there,
     
    I love my Helios64 to death. I'm running MooseFS CE and MooseFS PRO.
     
    The problem is I'm absolutely hammering it with I/O and it's consistently hard crashing, and no fix I've tried helps.
     
    Should I revert to the older kernel version or something? Or is this a useful test case...
  10. Like
    aprayoga got a reaction from gprovost in Mystery red light on helios64   
    Hi @snakekick, @clostro, @silver0480, @ShadowDance
Could you add the following lines to the beginning of /boot/boot.cmd:
regulator dev vdd_log
regulator value 930000
regulator dev vdd_center
regulator value 950000
then run `mkimage -C none -A arm -T script -d /boot/boot.cmd /boot/boot.scr` and reboot. Verify whether it improves stability.
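(A sketch for listing the rail voltages as Linux sees them after reboot, assuming the rails are registered with the kernel regulator framework; whether these values reflect the U-Boot settings depends on the device tree:)
# list regulator names and their programmed voltages (in microvolts)
grep -H . /sys/class/regulator/regulator.*/name
grep -H . /sys/class/regulator/regulator.*/microvolts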
     
     
     
    You could also use the attached boot.scr
     
     
    boot.scr
  11. Like
    aprayoga reacted to ShadowDance in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    Hey, sorry I haven't updated this thread until now.
     
The Kobol team sent me, as promised, a new harness and a power-only harness so that I could do some testing:
  • Cutting off the capacitors from my original harness did not make a difference.
  • The new (normal) harness had the exact same issue as the original one.
  • With the power-only harness and my own SATA cables, I was unable to reproduce the issue (even at 6 Gbps).
  • The final test was to go to town on my original harness and cut the connector in two; this allowed me to use my own SATA cable with the original harness and there was, again, no issue (at 6 Gbps).
Judging from my initial results, it would seem that there is an issue with the SATA cables in the stock harness. But I should try this for a longer period of time -- the problem was I didn't have SATA cables for all disks; once I do, I'll run a week-long stress test. I reported my results to the Kobol team but haven't heard back yet.
     
Even with the 3.0 Gbps limit, I still occasionally run into this issue with the original harness; it has happened twice since I did the experiment.
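(For context, a sketch of how such a SATA speed cap is typically applied on Armbian; the exact parameter used here is an assumption:)
# in /boot/armbianEnv.txt, append a libata speed cap to the kernel command line, e.g.
# extraargs=libata.force=3.0Gbps
# then reboot and confirm with:
dmesg | grep -i 'SATA link up'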
     
    If someone else is willing to repeat this experiment with a good set of SATA cables, please do contact Kobol to see if they'd be willing to ship out another set of test harnesses, or perhaps they have other plans.
     
Here are some pics of my test setup, including the mutilated connector:
     

  12. Like
    aprayoga reacted to qtmax in Helios4 performance concerns   
    Thanks for your replies! For some reason this forum rate limits me down to one message a day. Is there a way to lift this limitation?
     
I see now that many people recommend against SSH. I think it's a combination of implementation drawbacks (because strong hardware also has poor SSH speed) and weak hardware (because I can still reach higher speeds with two x86 machines). Opportunities to try are CESA (although cryptodev looks highly experimental and requires manual selection of ciphers, and the numbers in the linked thread are still far from perfect) and HPN-SSH (a patchset that I found).
     
What protocol would you suggest instead? I need encrypted (no FTP?) data transfer at a rate close to 1 Gbit/s (no SSH?), it should be easy to connect from Linux over the internet (no NFS?), and it should have no stupid limitations on the allowed characters in file names (no SMB?). Does WebDAV over HTTPS sound better, or are there better options?
     
     
    @Igor, I did some RAID read/write tests yesterday, and I found a serious performance regression between kernel 5.8.18 and 5.9.14 (currently bisecting). The read speed improved, but the write speed is garbage with kernel 5.9.
     
    Here is the script that I used for tests, and here are the performance numbers that I got:
     
root@helios4 ~ # uname -r
5.8.18-mvebu
root@helios4 ~ # ./test-raid.sh
Direct:      read: 255 MB/s  write: 221 MB/s
RAID0:       read: 284 MB/s  write: 229 MB/s
RAID10 far:  read: 264 MB/s  write: 181 MB/s
RAID10 near: read: 272 MB/s  write: 184 MB/s
RAID6:       read: 289 MB/s  write: 116 MB/s
root@helios4 ~ # uname -r
5.9.14-mvebu
root@helios4 ~ # ./test-raid.sh
Direct:      read: 256 MB/s  write: 145 MB/s
RAID0:       read: 396 MB/s  write: 107 MB/s
RAID10 far:  read: 321 MB/s  write: 62.5 MB/s
RAID10 near: read: 355 MB/s  write: 62.2 MB/s
RAID6:       read: 387 MB/s  write: 62.1 MB/s
The write speed in all tests (even writing directly to a single disk) has dropped severely. Even the RAID0 write became slower than the direct write. At the same time, the read speed in the RAID tests increased, about which I couldn't care less as long as it stays above 125 MB/s.
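(The linked test script isn't reproduced here; as a rough sketch, a sequential read/write benchmark of this kind usually boils down to dd with the page cache bypassed. The mount point and sizes below are placeholders:)
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct status=progress   # write test
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct status=progress              # read test
rm /mnt/raid/testfile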
     
The issue is not fixed with kernel 5.10. Do you have any idea how to get the old performance back with new kernels? 60 MB/s would be a bottleneck even with a fast protocol on the network.
     
    Also, I'm quite surprised to see that RAID10 (near) is faster (both read and write) than RAID10 (far). Far should be 2x faster than near on read (reading from 4 disks instead of 2), and write speeds should be about the same.
     
I'm even more surprised about RAID6, which has the fastest read and, as expected, the slowest write (although the XOR offload is in use). Why would the RAID6 read be faster than, for example, the RAID10 (near) read if they both utilize two disks?
     
Does anyone have an insight that could explain this weirdness?
     
     
I'll have mixed data of three main types: large files written once and very rarely read, smaller files (a few MB) written once and occasionally read, and a lot of tiny files accessed often (both read and write). I'd like to optimize for the first two use cases (the tiny files are small enough that speed doesn't matter much), and both write and read speeds matter to me, even though writes happen only once. It'll be single-stream most of the time; two concurrent writers are possible but should be rare.
     
    Also thanks for the SnapRAID suggestion, this is something entirely new to me.
  13. Like
    aprayoga reacted to Mydogboris in Network eth0 will not stay online   
    Hello,
I installed OMV on my Helios64 and created a share. The device stays online and works well for hours, and then at some point eth0 goes offline and no longer communicates. The only way to restore connectivity is with a reboot. I looked at the logs and don't see anything jumping out at me as to why this happens, but I could very easily be missing something, or the log didn't capture the event. I have not yet tried to console in through the USB-C port when this occurs but will try next time. Has anyone else seen this issue or fixed it?
  14. Like
    aprayoga got a reaction from Zageron in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
@yay thank you for the additional testing. We will take note of the MTU value.
  15. Like
    aprayoga got a reaction from mortomanos in Reboot not working properly with OMV   
@mortomanos Could you contact us at support@kobol.io, and we will see how we can help you from there in case there is a need to return the board.
     
  16. Like
    aprayoga reacted to Igor in Labels on pull requests   
FYI: merge requests with the labels "beta" or "need testings" are automatically included in the next nightly builds.
  17. Like
    aprayoga reacted to yay in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    Nvm, I found PR 2567. With those changes applied, I can cheerfully report 2.35Gbps net TCP throughput via iperf3 in either direction with an MTU of 1500 and tx offloading enabled. Again, thanks so much!
     
    One way remains to reproducibly kill the H64's eth1: set the MTU to 9124 on both ends and run iperf in both directions - each with their own server and the other acting as the client. But as the throughput difference is negligible (2.35Gbps vs 2.39Gbps, both really close to the 2.5Gbps maximum), I can safely exclude that use case for myself. I can't recall whether that even worked on kernel 4.4 or whether I even tested it back then.
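(A sketch of that reproduction setup, with the interface name and peer address as placeholders:)
ip link set eth1 mtu 9124                 # on both ends
iperf3 -s &                               # start a server on each machine
iperf3 -c <other-host> -t 300             # run a client on each machine toward the other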
     
    For whatever it's worth, here's the 5.9 log output from when said crash happens:
Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
Jan 21 22:51:55 helios64 kernel: r8152 4-1.4:1.0 eth1: Tx status -2
  18. Like
    aprayoga got a reaction from Zageron in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    We are working on this issue.
The same Realtek driver works fine on an Ubuntu laptop (amd64), a Helios4 (armhf), and a Helios64 on LK 4.4.
We are looking into the USB host controller driver; it seems there are some quirks not implemented in the mainline kernel.
  19. Like
    aprayoga got a reaction from allen--smithee in Controlling SATA Power Rail   
    @allen--smithee
If you need to modify the bootloader, you can start from these lines:
    https://github.com/armbian/build/blob/master/patch/u-boot/u-boot-rockchip64-mainline/add-board-helios64.patch#L1671-L1687
You can add parameter checking there: maybe check a certain U-Boot environment variable, or check for a certain file in the filesystem.
    You could also remove it completely and modify /boot/boot.scr instead, to enable/disable the HDD power.
     
Another thing to explore: you could remove the aforementioned U-Boot function and also remove the power rail nodes from the Linux device tree:
    https://github.com/armbian/build/blob/master/patch/kernel/rockchip64-current/add-board-helios64.patch#L239-L259
and then, when you need to enable it, just access the GPIO from sysfs.
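(A minimal sketch of driving a power-rail GPIO from sysfs; the GPIO number is a placeholder and has to be looked up from the device tree or with gpioinfo first:)
echo 42 > /sys/class/gpio/export            # hypothetical GPIO number
echo out > /sys/class/gpio/gpio42/direction
echo 1 > /sys/class/gpio/gpio42/value       # drive the rail enable pin high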
     
  20. Like
    aprayoga got a reaction from gprovost in Controlling SATA Power Rail   
    @allen--smithee
If you need to modify the bootloader, you can start from these lines:
    https://github.com/armbian/build/blob/master/patch/u-boot/u-boot-rockchip64-mainline/add-board-helios64.patch#L1671-L1687
You can add parameter checking there: maybe check a certain U-Boot environment variable, or check for a certain file in the filesystem.
    You could also remove it completely and modify /boot/boot.scr instead, to enable/disable the HDD power.
     
Another thing to explore: you could remove the aforementioned U-Boot function and also remove the power rail nodes from the Linux device tree:
    https://github.com/armbian/build/blob/master/patch/kernel/rockchip64-current/add-board-helios64.patch#L239-L259
and then, when you need to enable it, just access the GPIO from sysfs.
     
  21. Like
    aprayoga got a reaction from clostro in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    We are working on this issue.
The same Realtek driver works fine on an Ubuntu laptop (amd64), a Helios4 (armhf), and a Helios64 on LK 4.4.
We are looking into the USB host controller driver; it seems there are some quirks not implemented in the mainline kernel.
  22. Like
    aprayoga got a reaction from yay in eth1 (2.5) vanished: Upgrade to 20.11.3 / 5.9.14-rockchip64   
    We are working on this issue.
The same Realtek driver works fine on an Ubuntu laptop (amd64), a Helios4 (armhf), and a Helios64 on LK 4.4.
We are looking into the USB host controller driver; it seems there are some quirks not implemented in the mainline kernel.
  23. Like
    aprayoga got a reaction from tikey in M.2 SSD "Crucial MX500 1TB CT1000MX500SSD4" not detected   
    @tikey Could you check whether LED8 is on or blinking?
  24. Like
    aprayoga got a reaction from gprovost in DAS Mode and Capture One   
You can use DAS mode on kernel 5.9 using a device tree overlay.
It is not as seamless as in LK 4.4, where you can just change the cable to switch the USB role, but it works.
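(For reference, a sketch of how a device tree overlay is usually enabled on Armbian; the overlay name below is only a placeholder for the actual DAS/USB-gadget overlay:)
# in /boot/armbianEnv.txt, add the overlay to the overlays= line, e.g.
# overlays=<das-overlay-name>
# then reboot so the bootloader applies the overlay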
    I gave you a like, it should lift the limitation.
     
1. Correct.
2. Yes, you'd need to use a filesystem that can be recognized by both Linux and Mac.
With FAT32 there are maximum partition size and maximum file size (4 GB) limitations.
With exFAT, it might put some load on the CPU during NAS use because of FUSE, but that might change in the future once the in-kernel module is used instead of FUSE.
4. I don't have experience with SnapRAID.
     
     
There is another method you can explore to use DAS mode: expose the system as an MTP device. It should not have issues with exclusive access to the block device or with the filesystem type.
  25. Like
    aprayoga reacted to tikey in M.2 SSD "Crucial MX500 1TB CT1000MX500SSD4" not detected   
Thanks for your answers and sorry for the late reply - unfortunately, I can only write 1 post/day in this forum, so I can't even reply to your comments and questions.
So the SSD I'm using is an M.2, which means that I couldn't just put it in another slot. But after @djurnys comment, I checked everything again, and it turns out that I had the SSD the wrong way around. I didn't even think it would fit two ways, but physically it does; it just doesn't work. I was first trying with a PCIe SSD, which is out of spec for the Helios, so I thought I had overlooked some other compatibility issue. So thanks for your help and sorry about the noise. (But I can now confirm that the Crucial MX500 1TB is indeed working nicely, also in the M.2 version.)