Jason Fisher

Posts posted by Jason Fisher

  1. @JMCC - testing on rockpro64 with ayufan's bionic here, almost got it .. working on getting the drm_getunique patch applied to ayufan's 4.4.167 rock64/rockpro64 kernel now, and then all should work.  This should get kodi-gbm working with the rock64 as well.


    sidenote - was also able to install/boot ayufan's kernel on Armbian rockpro64 bionic.  It has /dev/mali, etc, and should work there also once this patch is applied.

    EDIT: new kernel is up with patch applied -- this kernel should work with armbian bionic rockpro64 rk3399 media-script.


  2. Just wanted to confirm that everything is working well with ZFS here on ARM64.  Successfully running two MediaRAID 8x eSATA/USB3 enclosures off of a USB3 hub: 16 drives/20TB in 10+ year old raidz2 pools that have moved between a dozen machines (VMs under Windows, Mac, and Linux) and a dozen controllers (PMP SATA, eSATA, USB3), and survived 7 total drive deaths with no data loss.  4GB RAM is plenty for media streaming -- 2.6GB RAM was fine with some tuning (a VM in Windows that ran Plex and shared ZFS back to the host).


    Benchmarks eventually .. but downloading with sabnzbd/sonarr, using Chromium, and streaming 1080p to Plex at just under 3GB RAM in use.

  3. On 4/29/2018 at 7:58 AM, fossxplorer said:

    Amazing results from the RK3399. Thanks for sharing such details with us. I'd like to order a RockPro64 when it becomes available again.


    My use case is NAS, and I think this could be a good board for it. Pine only sells a PCIe-SATA adapter with 2 SATA ports, but an adapter with more SATA ports would be nice, since the board can handle 1.6GB/s. So for my spinning disks, I could use up to 8 disks. I hope we will be able to compile ZFS on this, as I wasn't able to on another RK3328 device.


    EDIT1: I could also imagine using this board as my desktop, but I wonder if we have the necessary drivers etc. for the GPU? It would be awesome to have such a low-power desktop.



    I posted my ZFS solution here -- I was able to compile/use the standard zfs-dkms packages with some modifications:



  4. On 3/23/2017 at 1:00 PM, zador.blood.stained said:

    First, generic Ubuntu/Debian DKMS packages may not work on Armbian, since we provide kernel versions tailored to different devices rather than the ones Ubuntu/Debian ships.

    Second, DKMS recompilation on kernel upgrade is broken, so please keep this in mind.


    If you want ZFS, you need to find the latest zfs sources (this?) and either compile them on the device (if zfs supports out-of-tree builds) or use the Armbian build system on x64 Ubuntu Xenial to create patched kernel packages.

    Please note that the kernel headers package should already be installed, so you don't need to install "linux-headers-$(uname -r)" if build instructions/tutorials suggest doing that.


    I have a ZFS solution here that uses dkms/zfs with the standard packages and a few modifications after the initial dkms build fails:


  5. On 5/3/2018 at 1:48 PM, Mustafa85 said:

    Sorry Jason, but if I cannot find my kernel headers (/usr/src/ is an empty directory), what is the right way to download the specific kernel headers rather than the generic ones?

    If you use armbian-config, one of the options is 'install kernel headers' -- however, this did not match the kernel I had installed in my case (rock64 stretch image), so I ended up adding the beta repo to sources.list and then updating to the latest 4.4.x kernel that also had 4.4.x headers available.  (The newer dev kernel did not work for me.)
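A quick sanity check for this kind of header/kernel mismatch (the paths below are the typical Armbian layout; adjust for your image):

```shell
# Verify that installed headers match the running kernel; a mismatch
# here is what forces the beta-repo dance described above.
KREL=$(uname -r)
echo "running kernel: $KREL"
if [ -d "/usr/src/linux-headers-$KREL" ]; then
    echo "matching headers found"
else
    echo "no matching headers -- try armbian-config or the beta repo" >&2
fi
```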

  6. After fighting with this with a few distros..  I am close here:


    1. First, go to your linux-headers directory (e.g. /usr/src/linux-headers-4.4.129-rk3328/) and:


    make scripts


    If that fails (classmap.h missing, for example), there is a problem with the headers package.  You may need to manually edit scripts/Makefile and comment out the line with selinux -- in my case, the kernel config did not match the header deb.


    # subdir-$(CONFIG_SECURITY_SELINUX) += selinux


    2. From the same headers directory, verify that include/generated/utsrelease.h matches the full kernel name from: uname -a


    - My headers (from the beta repo) had 4.4.129 in utsrelease.h while 4.4.129-rk3328 was the kernel version/release.  This causes modules to be built incorrectly and their vermagic not to match -- you can use modprobe --force to get around it, but that is ill-advised for more than a quick test with something like ZFS.  My utsrelease.h now reads: 4.4.129-rk3328


    3. Edit /var/lib/dkms/spl/<yourversion>/source/configure and search for KUID, for lines that look like this:


    kuid_t userid = KUIDT_INIT(0);

    kgid_t groupid = KGIDT_INIT(0);


    Erase those two lines; the test itself is broken in our environment.  Leave the surrounding lines intact.  These appear twice in the configure script -- remove both sets.


    Now look for these lines:


    kuid_t userid = 0;

    kgid_t groupid = 0;


    Change them to:


    kuid_t userid;

    kgid_t groupid;


    (I filed a bug here https://github.com/zfsonlinux/spl/issues/653#issuecomment-333973785 )


    Now you can run:


    dkms build spl/<yourversion>

    dkms install --force spl/<yourversion>

    modprobe spl

    # should be no errors here, check dmesg otherwise


    dkms build zfs/<yourversion>

    dkms install --force zfs/<yourversion>


    When configure runs, you should now see:

    checking whether kuid_t/kgid_t is available... yes; mandatory


    Some notes:


    - I ended up switching to the beta repo (sources.list.d) and doing an update/upgrade/dist-upgrade first, because there were no kernel headers for the WIP rock64 image I installed.


    - If you use kernel 4.11.x, you will need SPL/ZFS 0.7.x.  I am using the standard zfs-dkms/spl-dkms packages for now: install SPL and let it fail, dkms install it after the steps above, and then install the ZFS package.


    Just started with Armbian today .. and it looks like it has already drawn me in.. ;)


  7. I think my needs/goals with the ROCK64 or an RK3399 would ultimately fit into this list:


    - Stability


    - Fast enough storage to saturate gigabit ethernet/WiFi


    - Two ports with port multiplier support (prefer SATA/SATA via PCI Express, but two USB 3.0 is ~OK)


    - Low enough latency to satisfy user experience (large git repo, traversing filesystem normally, general responsiveness) -- i.e. prefer to avoid overhead of USB3-to-SATA and feel this affects the experience more than the raw throughput possible


    - ZFS -- I have tested a dozen PCI/PCI Express SATA/USB3/motherboard controllers and many years of ZFS implementations, good and bad; I have completely abused these two arrays and would never willingly go back to MDRAID or trust a single drive


    - ~5 watt or less SBC idle with USB device attached (not counting power of drives/external controller)


    I currently average around 160MB/sec on two 8x spinning-disk arrays connected via an old Silicon Image 3132 PCI Express x1 controller, rated for 2.5 Gbps .. it has been the most stable combo for the JMicron USB3/SATA port-multiplier chipset built into the array chassis.  USB3 (yes, 8 drives over 1x USB 3.0) has actually been more stable over the years, depending on the USB3 controller used, in terms of dropouts and performance consistency, but the latency on file operations for that performance is consistently abysmal..


    I don't see much of a use case here for the high end of SSD throughput .. unless you are editing 4K video.  RED RAW 4K is 162MB/sec at 30fps.  But I think that will have to wait a generation here.  Of course there are other advantages .. I suppose I would prefer an SSD that was 2x the size with 4x slower max throughput, for the power advantages alone.


    It would be interesting to put an additional (n-tier?) caching file system layer over attached storage that caches to eMMC.

  8. Appreciate the responses.  Going with the Rock64 for now due to the support available.  That N54L looks really interesting.


    Tuning ARC/cache will help -- I have been successfully dual-booting Windows/Linux with ZFS and Arch, using a VM in Windows to access/share ZFS back to the host, with good enough performance for my needs (~100MB/sec, Plex server) on a 2.5GB RAM instance after tuning ARC and a few other things, so I think it's going to come down to the stability of the USB3 chipset (or SATA FIS).  https://wiki.freebsd.org/ZFSTuningGuide
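For what it's worth, the main ARC knob on ZFS-on-Linux is a single module parameter; the 1 GiB cap below is just an illustration for a small-RAM box -- pick a value that fits your workload:

```shell
# Cap the ZFS ARC at 1 GiB (illustrative value for a 2.5-4GB RAM box).
ARC_MAX=$((1024 * 1024 * 1024))

# Persist across reboots via modprobe options:
echo "options zfs zfs_arc_max=$ARC_MAX" | sudo tee /etc/modprobe.d/zfs.conf

# Or change it live, if the zfs module is already loaded:
echo "$ARC_MAX" | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```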


    The Firefly RK3399 has a PCIe M.2-to-SATA board available with two ports on an ASMedia ASM1061 card, which I believe supports FIS -- http://shop.t-firefly.com/goods.php?id=52 -- but once you figure in the additional time currently needed to get things going on the RK3399, it seems like much more than simply $199 vs ~$60.. ;)


    I will be experimenting with a few setups on the storage end, but ultimately this will be a portable NAS/Plex/mobile office server briefcase boombox build using a $20 Lepai amplifier, powered by LiFePO4 batteries (https://www.amazon.com/dp/B01M4P35Z9 ), charged by solar (https://www.amazon.com/dp/B074G1CN6N ), that can also be used as speakers/media server for a portable DLP/Android projector.  Moving off-grid/tiny cabin next year and downsizing/combining functionality.  A second briefcase will be an LTE receiver/repeater/WiFi hotspot that can be positioned independently wherever there is a better line to a tower -- LTE with the Firefly seems interesting, but I need to confirm that it will work with Verizon and Sprint.


    Something like this for the form factor:




  9. I'm looking for an RK3399/RK3328 for a battery-powered mobile NAS/music player briefcase build.  I want 4GB RAM for possibly using ZFS and replacing general home NAS functions.




    $135 with free shipping for the 4GB RAM/32GB eMMC RK3399 model -- not a bad deal considering the accessories.  It is powered by 5V 2A, but I would replace that with 5V 4A for stability, I think.


    Is there a reason to pay $199 for the Firefly RK3399 4GB model over this?  


    I will be initially testing USB3 with a JMicron-based Mediasonic ProBox-type enclosure, which supports eSATA PMP and USB3 JBOD -- I have two 8-bay models that have worked well with ZFS for 8? years: started with MacZFS/Hackintosh, now Arch Linux, 2x raidz2, with 5 Seagate + 2 WD + 1 HGST drive failures in that time.  Using Linux since 1.2.6, so no issues tinkering/helping where needed ..