iav

  • Posts: 28
Reputation Activity

  1. Like
    iav reacted to gprovost in Very noisy fans   
    I shared the design of the thermal pad, showing that the whole area below the heatsink is covered with a thermal pad by default.
     

     
    @allen--smithee That's very valuable feedback; I think we will need to review our approach to the thermal pad.
    One factor we also had to take into consideration is ease of installation on the factory line.
  2. Like
    iav reacted to allen--smithee in Very noisy fans   
    @gprovost
    Is it possible to know the characteristics of the thermal bridge?
    (W/m·K and thickness)
     
    @bunducafe
    You can use your helios64 without a fan with the following CPU frequency limits (a configuration sketch follows at the end of this post):
    # 816 MHz and 1 GHz (eco)
    # 1.2 GHz, CPU 100%, Temp < 82 °C (safe)
    # 1.41 GHz, CPU 100%, Temp < 90 °C (limit)
     
    Not viable:
    # 1.6 GHz, CPU 100%, Temp > 95 °C: reboot and a reset of the fancontrol config in /etc, and there I regret that it is not a K.
    Nevertheless, it is reassuring to see that the protection method works.
     
    From my own personal experience, it is not necessary to change the fans immediately,
    and you will quickly agree afterwards that the noise of the fans is drowned out by the noise of a single spinning hard disk.
    But if, like me, you also use it as a desktop with SSDs, you will definitely want to use it without limitation.
    In that event you must familiarize yourself with fancontrol, otherwise you will also be disappointed with your Noctua or any other fan.
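    A minimal sketch of how such a frequency cap could be applied on Armbian, assuming the stock cpufrequtils defaults file the images ship with (the exact values are illustrative, not from the post above):
    # /etc/default/cpufrequtils -- cap the CPU at 1.2 GHz for near-silent operation
    ENABLE=true
    MIN_SPEED=408000        # kHz
    MAX_SPEED=1200000       # kHz, matches the "safe" limit quoted above
    GOVERNOR=ondemand
    
    # apply without rebooting (if the cpufrequtils service is present)
    sudo systemctl restart cpufrequtils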
  3. Like
    iav reacted to tkaiser in Marvell based 4 ports mPCI SATA 3.0   
    The 4 port things are based on Marvell 88SE9215 (this is a 4 port SATA to single PCIe 2.x lane controller, so suitable for mPCIe where only one lane is available). I got mine from Aliexpress, Igor got his for twice the price from Amazon, you also find them sometimes at local resellers, just search for '88SE9215'.
     
    (there's also an x2 variant called 88SE9235 -- just like there's ASM1062 vs. ASM1061 -- but at least behind mPCIe there's no advantage in choosing this over the 88SE9215; it would need M.2 or a real PCIe port to make use of the second available lane)
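    Once such a card is installed, a quick way to confirm it is detected (my addition, not from the post; standard pciutils) is to look for Marvell's PCI vendor ID:
    lspci -nn | grep -i '1b4b'    # 1b4b is Marvell's vendor ID; the 88SE9215 should show up as a SATA controller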
  4. Like
    iav reacted to michabbs in [TUTORIAL] First steps with Helios64 (ZFS install & config)   
    Focal / root on eMMC / ZFS on hdd / LXD / Docker
     
    I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)
     
    Preparation:
    Assemble your box as described here. Download the Armbian Focal image from here and flash it to an SD card. You may use Etcher. Insert the SD card into your Helios64 and boot. After 15-20 s the box should be accessible via ssh. (Of course you have to find out its IP address somehow. For example, check your router logs or use this.)  
    First login:
    ssh root@IP
    Password: 1234
    After the prompt, change the password and create your daily user. You should never log in as root again. Just use sudo in the future. :-)
    Note: The auto-generated user is a member of the "disk" group. I do not like that. You may remove it like this: "gpasswd -d user disk".
     
    Now move your system to eMMC:
    apt update
    apt upgrade
    armbian-config   # Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC
    You can choose the root filesystem. I chose ext4. Possibly f2fs might be a better idea, but I have not tested it.
    When finished - power off, eject the sd card, power on. 
    Your system should now boot from eMMC.
     
    If you want to change network configuration (for example set static IP) use this: "sudo nmtui".
    You should also change the hostname:
    sudo armbian-config # Go to: Personal -> Hostname  
     
    ZFS on hard disk:
    sudo armbian-config   # Go to Software and install headers.
    sudo apt install zfs-dkms zfsutils-linux
    # Optional: sudo apt install zfs-auto-snapshot
    # reboot
    Prepare necessary partitions - for example using fdisk or gdisk.
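    For illustration only (the device name and single-partition layout are my assumptions, not from the tutorial), one whole-disk partition per drive could be created like this:
    # repeat for each disk that will join the pool (example device: /dev/sda)
    sudo sgdisk --zap-all /dev/sda                       # wipe old partition tables
    sudo sgdisk -n 1:0:0 -t 1:bf01 -c 1:zfs /dev/sda     # one partition spanning the disk, Solaris/ZFS type
    lsblk -o NAME,PARTUUID /dev/sda                      # note the PARTUUID for zpool create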
    Create your zfs pool. More or less this way:
    sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789  
    Reboot and make sure the pool is imported automatically. (For example by typing "zpool status".)
    You should now have a working system with root on eMMC and a ZFS pool on the HDDs.
     
    Docker with ZFS:
    Prepare the filesystem:
    sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
    sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
    sudo chmod 700 /var/lib/docker/volumes
    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
    sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
    Create /etc/docker/daemon.json with the following content:
    { "storage-driver": "zfs" }  
    Add /etc/apt/sources.list.d/docker.list with the following content:
    deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    # deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    Install Docker:
    sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io
    # You might want this:
    sudo usermod -aG docker your-user
     
    Voila! Your Docker should be ready! Test it: "docker run hello-world".
     
    Option: Install Portainer:
    sudo zfs create mypool/docker-volumes/portainer_data
    # You might omit the above line if you do not want a separate dataset for the docker volume (bad idea).
    docker volume create portainer_data
    docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
    Go to http://yourip:9000 and configure.
     
     
    LXD with ZFS:
     
    sudo zfs create -o mountpoint=none mypool/lxd-pool
    sudo apt install lxd
    sudo lxd init
    # Configure ZFS this way:
    #   Do you want to configure a new storage pool (yes/no) [default=yes]? yes
    #   Name of the new storage pool [default=default]:
    #   Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
    #   Create a new ZFS pool (yes/no) [default=yes]? no
    #   Name of the existing ZFS pool or dataset: mypool/lxd-pool
    #   [...]
    # You might want this:
    sudo usermod -aG lxd your-user
    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines
    That's it. LXD should now work on ZFS. :-)
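    A quick smoke test of the new ZFS-backed LXD setup could look like this (my illustration, not part of the tutorial; assumes the default ubuntu: image remote):
    lxc launch ubuntu:20.04 test-ct    # container root lands on mypool/lxd-pool/containers
    lxc list                           # should show test-ct as RUNNING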
     
  5. Like
    iav reacted to zador.blood.stained in Prepare v5.1 & v2016.04   
    - build script: support for Xenial as a build host is 95% ready.
    - build script: implemented automatic toolchain selection
    Step 1: if the kernel or u-boot needs a specific GCC version, add this requirement to the LINUXFAMILY case block in configuration.sh via the variables KERNEL_NEEDS_GCC and UBOOT_NEEDS_GCC. Supported relations are "<", ">" and "=="; the GCC version needs to be specified as major.minor, i.e.
    UBOOT_NEEDS_GCC="< 4.9"
    Step 2: unpack toolchains (i.e. Linaro) in $SRC/toolchains/. For running Linaro 4.8 on an x64 build host you need to enable x86 support:
    dpkg --add-architecture i386
    apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386 libtinfo5:i386 zlib1g:i386
    Step 3: run the compilation process and check the results. Report discovered bugs here on the forum or on GitHub.
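    A sketch of what such a requirement could look like inside the configuration.sh case block (the family name and version limits below are made up for illustration):
    case $LINUXFAMILY in
        sun8i)                           # hypothetical family entry
            KERNEL_NEEDS_GCC="< 5.0"     # kernel won't build with newer GCC
            UBOOT_NEEDS_GCC="< 4.9"      # u-boot needs the older Linaro toolchain
            ;;
    esac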
  6. Like
    iav reacted to ebin-dev in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The next thing I would buy is a drop in replacement board for the helios64 with:
    rk3588
    ECC RAM
    NVMe PCIe port
    10GBase-T port
  7. Like
    iav reacted to SIGSEGV in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  8. Like
    iav reacted to SIGSEGV in Why I prefer ZFS over btrfs   
    This discussion is interesting because I can relate to both sides.
    As an end user - having multiple filesystems to choose from is great; choosing the right tool for the task is where I need to spend my time. Having a ZFS DKMS is great, but if after an update my compiler is missing a kernel header, I won't be able to reach my data. Shipping a KMOD (with the OS release) might give me access to my data after each reboot/upgrade. The ECC/8GB RAM myth is getting old, but it has garnered enough attention that most newbies won't read or search beyond the posts with the most views.
     
    Armbian as a whole is always improving - a few weeks ago iSCSI was introduced for most boards (and helped me replace two x86_64 servers with one Helios64) - and an implementation of ZFS that works out of the box will be added soon enough. That being said, this is a community project - if a user wants 24x7 incident support, then maybe the ARM based SBCs + Armbian are not the right choice for them. Having a polite discussion with good arguments (like this thread) is what gets things moving forward - We don't have to agree with all the points, but we all want to have stable and reliable systems as our end goal.
  9. Like
    iav reacted to grek in ZFS on Helios64   
    Hey,
    I think a dedicated topic for ZFS on Helios64 is needed. Many people want to have it.
     
    As we know, and as many of us playing with dev builds for the Helios64 have found, it is not easy to work with ZFS.
    I wrote a few scripts; maybe they can help some of you in some way.
    Thanks @jbergler and @ShadowDance, I used your ideas to complete it.
    The problem comes when playing with OMV and its ZFS plugin: the dependencies then remove our newly built version of ZFS. I tested it; everything survived, but ZFS gets downgraded to the latest version from the repo,
    for example:
     
    root@helios64:~# zfs --version
    zfs-0.8.4-2~bpo10+1
    zfs-kmod-2.0.0-rc6
    root@helios64:~# uname -a
    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
    root@helios64:~#
     
    I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.
     
    First we need to have docker installed ( armbian-config -> software -> softy -> docker )
     
    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and gcc 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):
    mkdir zfs-builder
    cd zfs-builder
    vi Dockerfile
     
    FROM ubuntu:bionic
    RUN apt update; \
        apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
        apt install software-properties-common -y; \
        add-apt-repository ppa:ubuntu-toolchain-r/test; \
        apt install gcc-10 g++-10 -y; \
        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
        update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build docker image for building purposes.
     
    docker build --tag zfs-build-ubuntu-bionic:0.1 .  
    Create script for ZFS build
     
    vi build-zfs.sh
     
    #!/bin/bash
    # define zfs version
    zfsver="zfs-2.0.0-rc6"
    # creating building directory
    mkdir /tmp/zfs-builds && cd "$_"
    rm -rf /tmp/zfs-builds/inside_zfs.sh
    apt-get download linux-headers-current-rockchip64
    git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
    # create file to execute inside container
    echo "creating file to execute it inside container"
    cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
    #!/bin/bash
    cd scratch/
    dpkg -i linux-headers-current-*.deb
    zfsver="zfs-2.0.0-rc6"
    cd "/scratch/$zfsver-$(uname -r)"
    sh autogen.sh
    ./configure
    make -s -j$(nproc)
    make deb
    mkdir "/scratch/deb-$zfsver-$(uname -r)"
    cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
    rm -rf "/scratch/$zfsver-$(uname -r)"
    exit
    EOF
    chmod +x /tmp/zfs-builds/inside_zfs.sh
    echo ""
    echo "####################"
    echo "starting container.."
    echo "####################"
    echo ""
    docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
    # Cleanup packages (if installed).
    modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
    apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
    apt autoremove --yes
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
    dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
    echo ""
    echo "###################"
    echo "building complete"
    echo "###################"
    echo ""
    chmod +x build-zfs.sh
    screen -L -Logfile buildlog.txt ./build-zfs.sh
     
  10. Like
    iav reacted to TRS-80 in Self-hosting micro- (or regular) services, containers, homelab, etc.   
    I have been having a long-running, on-and-off discussion with @lanefu in IRC (and elsewhere), because I know this is his area of expertise.  However he is busy, and I can appreciate that he might not want to spend his free time talking about stuff he gets paid to do M-F. 
     
    Then I also realized that I am certainly not the only one interested in Armbian in order to do such things.  So I thought I would make a thread about this instead, to open the discussion to the larger group of people here in the broader community.
     
    I have my own thoughts, concerns, things I want to do, and have already done, but I thought I would make a more general thread where like minded people can discuss their experiences, what has worked (or not), what services you might be running (or would like to), etc.
     
    I guess I will begin by saying I have been running some services for family and friends, first on a Cubietruck since maybe 2017 (or earlier?) and then later on some ODROID-XU4s in addition.  I run an XMPP server (Prosody), which has been very hassle-free and "just works", as well as some experiments in Home Automation and other things which have had some, well, let's just say mixed results. 
     
    Currently my interest is in containers and similar technologies, but I am very new to it.  I have been reading a lot, but there is so much to know and I can't help but feel I am a bit late to the game.  That's by design though, as I don't like being (too) early an adopter of a technology, generally speaking.  Well, certainly not with such "hyped" technologies as containers, anyway.
     
    In fact I thought it was a bunch of baloney and hype for a long time, but even a cranky old cynic like me now sees some advantages of abstracting things, changing servers/services from "pets" to "cattle" if you will, for reproducibility, redundancy, and other reasons.
     
    Anyway, I am prone to walls of text of idiotic ramblings, so I will stop here and give some others a chance.    Hope everyone is enjoying their holiday weekend so far. 
  11. Like
    iav reacted to TRS-80 in Why I prefer ZFS over btrfs   
    I am not a big fan of btrfs in general.  But rather than shit up other people's threads where they mention it, I decided to create my own here where I can lay out my arguments, and then I can link to it instead and people can read it if they want.
     
    Some people are not going to like to hear this, so I will back it up by saying I have done a fair amount of research into these advanced filesystems over the last several years.  It is a topic of great interest to me.  At the same time, I view all knowledge as provisional, and maybe some things have changed in the meantime, so discussion and factual corrections are welcomed.
     
    I will also try and spare the reader my typical walls of text by breaking this up into some sections. 
     
    Why write this now?
     
    Finally we are getting some hardware (thanks to Kobol, and others) which makes this feasible (on ARM, which I feel is preferable for a NAS when compared to x86), so I am seeing an influx of people into the Kobol Club, some of them mentioning that they are using btrfs.  So I thought maybe now is a good time to write this stuff down.
     
    Free Software vs Open Source
     
    FWIW, anyone who knows me can tell you I am much more of "Free Software" than an "Open Source" proponent.  The reasons for that are in the links, but I mention this because btrfs could be considered the "Free Software" preferred choice over ZFS which would be the "open source" option.  And yet I still prefer ZFS.  I am just trying to highlight that I don't "hate" btrfs, in fact I root for them to succeed.  But there are just too many technical and other reasons why I prefer ZFS.  So I will try and lay out the rest of those now.
     
    Data Loss
     
    Probably my biggest argument against btrfs.  I just read way too many reports of data loss on btrfs.  Which is totally unacceptable IMO in one of these "advanced filesystems" which are supposed to prevent exactly that.
     
    In fairness, I have heard btrfs proponents say that this only applies to certain RAID modes (5?), and if you mirror it's not a problem.  However, any mention at all of data loss has just totally put me off btrfs.
     
    Development Resources
     
    Another very strong argument.  Lawrence Livermore National Laboratory has received massive federal funding to develop ZFS on Linux for many years now (of course the project nowadays is officially re-branded as "OpenZFS", but in truth almost all the development went into ZoL in the past, and still continues that way, regardless of the name change).
     
    When Oracle bought Sun, many of the original people jumped ship, including original ZFS authors like Matt Ahrens and others.  Guess where they went to work?  At LLNL, to work on ZoL, which LLNL use for their supercomputers.  And they have been improving it for many years now!
     
    It's just going to be very hard for btrfs to compete with those sort of development resources, IMO.  I realized this some years ago already, even when ZoL was considered much less stable than it is today (back then the conventional wisdom was to run ZFS only on Solaris-like OS, but this is no longer the case).  But I saw the trajectory and the resources behind it and a few years later here we are and I was exactly right.  And I see no change in any relevant circumstances (feds still funding LLNL, etc.) to change this trajectory in any way.
     
    In Closing
     
    I guess I did not have as many arguments as I thought.  But I think those are enough.  I may come back and edit this later if more things come to me.  In the meantime, I welcome discussion.
     
    Cheers! 
  12. Like
    iav reacted to Igor in Armbian v20.11.y Bugfix release(s)   
    Merged.
  13. Like
    iav reacted to Salamandar in unable to build zfs module on buster rockchip64   
    Hi,
    I'm still unable to build the module correctly. The ZFS kernel module is built for kernel '4.4.213-rockchip64' (the headers version name), but the running kernel is '4.4.213-rk3399'.
     
    spl: version magic '4.4.213-rockchip64 SMP mod_unload aarch64' should be '4.4.213-rk3399 SMP mod_unload aarch64'  
    I'm not sure this can be fixed without editing the kernel headers, i.e.:
    /usr/src/linux-headers-4.4.213-rockchip64/include/config/kernel.release
    /usr/src/linux-headers-4.4.213-rockchip64/include/generated/utsrelease.h  
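    A quick way to see the mismatch for yourself (my addition, not from the post; standard kernel tooling, assuming the module got installed):
    modinfo zfs | grep vermagic    # version magic the module was built against
    uname -r                       # kernel actually running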
     
    @ShadowDance Are you using the "current" 5.9 kernel? Maybe this has been fixed for that kernel?
     
    BTW, as the procedure is a bit tiring to do by hand, I wrote some scripts to automate this (build the module on Ubuntu, utils on Buster): https://github.com/Salamandar/armbian_zfs_compiler/
     
    MOD EDIT:  Above is third party script, as always, make sure you understand what is going on before running it on your system.
     
    Having gotten that disclaimer out of the way, thanks for your contribution, @Salamandar!
     
    I took the liberty of marking this as solution.  I also made note of the link to refer others to, later.
     
    Cheers,
    TRS-80
  14. Like
    iav reacted to Q4kn in Le Potato Serial Getty on ttys0 starts, stops restarts   
    systemctl stop serial-getty@ttyS0.service
    systemctl disable serial-getty@ttyS0.service
    This helped me, it works fine!
  15. Like
    iav reacted to opfer15 in ODroid-N2 RTC not work in linux-image-current-meson64=20.02.8 5.4.28-meson64   
    Reporting back
    Working out of the box now with the latest Focal image. Thanks for the effort.
  16. Like
    iav got a reaction from opfer15 in ODroid-N2 RTC not work in linux-image-current-meson64=20.02.8 5.4.28-meson64   
    The RTC patch for the N2 is now in the Armbian build system.
    I hope the problem is solved.
  17. Like
    iav reacted to xwiggen in Security FIx   
    On my headless machines I usually skip user creation and enable SSH pubkey login only after transferring my key (https://wiki.archlinux.org/index.php/SSH_keys)
     
    IMHO the reason for not allowing root login (whether SSH or getty) is both to prevent wreaking havoc on the system by accident and not running processes elevated by default (which is a good security measure -- privileged account management). I'm just too lazy to type sudo as I do only configuration/package/service management on the machines.
  18. Like
    iav reacted to chewitt in No watchdog on N2 with current (5.8) and dev (5.9) kernels   
    https://patchwork.kernel.org/project/linux-amlogic/patch/20201030180057.23886-1-christianshewitt@gmail.com/
     
    ^ tested by the maintainers .. enjoy :)
  19. Like
    iav reacted to Curmudgeon in Bring up for Odroid N2 (Meson G12B)   
    I got the kernel from users.armbian.com/balbes150/arm-64 via forum.armbian.com/topic/12162-single-armbian-image-for-rk-aml-aw-aarch64-armv8, but I am currently reconsidering whether to continue using this kernel or not, because balbes150 seems to have taken on an unfavourable, possibly even malevolent, attitude towards Amlogic SoCs.
  20. Like
    iav got a reaction from Werner in Security FIx   
    Security theater.
    That it can be good enough for physical console/terminal-oriented distributions does not mean it will be good for everyone forever.
     
    The most that would be acceptable: create a user on first login (which can be skipped!) and propose locking ssh root login after the first successful user login.
    All more "secured" scenarios will produce unusable, inaccessible devices.
  21. Like
    iav got a reaction from lanefu in Armbian self-builded image with btrfs root not bootable on oDroid-N2   
    TL;DR
    Where in the Armbian build system can the following N2-only changes correctly be made:
    1. The /boot ext4 volume has to contain a symlink named boot pointing to itself; it can be created with the command ln -s ./ boot.
    2. The kernel bootargs set in boot.ini contain rootflags=data=writeback, but they shouldn't. This substring needs to be removed if btrfs is chosen.
    3. The /boot ext4 partition has only 4 MB free. I am not sure, but I think that's too small; for example, there is no space to run update-initramfs -u. I think it should be larger, like 500 MB.
     
    Long:
    I want to run my ODROID-N2 with root on a btrfs filesystem.
    I followed the manual on Armbian image building and ran
    time ./compile.sh BOARD=odroidn2 BRANCH=current RELEASE=focal BUILD_MINIMAL=no BUILD_DESKTOP=no KERNEL_ONLY=no KERNEL_CONFIGURE=yes ROOTFS_TYPE=btrfs INSTALL_HEADERS=no BUILD_KSRC=no
    Then the image Armbian_20.08.0-trunk_Odroidn2_focal_current_5.7.16.img was written to a microSD card with the BalenaEtcher program and inserted into the N2 with a serial console connected.
    cap1.log
     
    Then I created the symlink ./ boot on the boot volume, and got cap2.log
     
    Then I removed rootflags=data=writeback from the bootargs and successfully booted to a shell prompt: cap3.log
    changes:
    diff -u boot.ini.org boot.ini.run
    --- boot.ini.org 2020-08-20 23:08:36.000000000 +0300
    +++ boot.ini.run 2020-08-20 23:26:29.000000000 +0300
    @@ -20,7 +20,7 @@
     fi
     
     # Default Console Device Setting
    -setenv condev "console=${uartconsole} console=tty1 loglevel=1" # on both
    +setenv condev "console=${uartconsole} console=tty1 loglevel=5" # on both
     
     # Auto Detection of Monitor settings based on your Screen information
     setenv display_autodetect "true"
    @@ -115,7 +115,7 @@
     if ext4load mmc ${devno}:1 0x44000000 /boot/armbianEnv.txt || fatload mmc ${devno}:1 0x44000000 armbianEnv.txt || ext4load mmc ${devno}:1 0x44000000 armbianEnv.txt; then env import -t 0x44000000 ${filesize}; fi
     
     # Boot Args
    -setenv bootargs "root=${rootdev} rootwait rootflags=data=writeback rw rootfstype=${rootfstype} ${condev} ${amlogic} no_console_suspend fsck.repair=yes net.ifnames=0 elevator=noop hdmimode=${hdmimode} cvbsmode=576cvbs max_freq_a53=${max_freq_a53} max_freq_a73=${max_freq_a73} maxcpus=${maxcpus} voutmode=${voutmode} ${cmode} disablehpd=${disablehpd} ${bootsplash} cvbscable=${cvbscable} overscan=${overscan} consoleblank=0"
    +setenv bootargs "root=${rootdev} rootwait rw rootfstype=${rootfstype} ${condev} ${amlogic} no_console_suspend fsck.repair=yes net.ifnames=0 elevator=noop hdmimode=${hdmimode} cvbsmode=576cvbs max_freq_a53=${max_freq_a53} max_freq_a73=${max_freq_a73} maxcpus=${maxcpus} voutmode=${voutmode} ${cmode} disablehpd=${disablehpd} ${bootsplash} cvbscable=${cvbscable} overscan=${overscan} consoleblank=0"
     
     # Set load addresses
     setenv dtb_loadaddr "0x1000000"
    cap1.log cap2.log cap3.log
  22. Like
    iav got a reaction from Werner in Armbian IRC chat   
    Or via matrix bridge: `#armbian:libera.chat`
  23. Like
    iav reacted to Igor in THE testing thread   
    Small update.
     
    We are able to complete one cycle of a full sbc-bench, IO and network test on all devices at once in around 1 h. The tests are still far from perfect, but slowly we are getting an overview that will tell us where to look more closely: https://dl.armbian.com/_test-reports/2020-04-14_09.06.24.html
     
  24. Like
    iav reacted to tkaiser in SD card performance   
    2018 SD card update
     
    It's 2018 now, the SD Association's A1 'performance class' spec has been out for over a year, and in the meantime we can buy products trying to be compliant with this performance class. SD cards carrying the A1 logo must be able to perform at least 1500 random read input/output operations per second (IOPS) with 4KB block size, 500 random write IOPS, and 10 MB/s sustained sequential performance (see here for more details and background info).
     
    Why is this important? Because what we do on an SBC, at least for the rootfs, is mostly random IO and not the sequential IO that is common in cameras or video recorders (the use case SD cards were originally invented for). As an SBC (or Android) user we're mostly interested in high random IO performance with smaller blocksizes, since this is how 'real world' IO patterns mostly look. Prior to the A1 and A2 performance classes there was no way to know how SD cards perform in this area prior to buying. Fortunately this has changed now.
     
    Last week an ODROID N1 dev sample arrived, so I bought two SanDisk A1 cards with 32GB capacity each: an el cheapo 'Ultra A1' for 13€ (~$15) and an 'Extreme A1' for 23€. I wanted to buy a slightly more expensive 'Extreme Plus A1' (even more performance and especially reliability/longevity) but ordered the wrong one. Please keep in mind that the 'Extreme Plus' numbers shown below were made with an older card missing the A1 logo.
     
    Let's look at how these things perform, this time on a new platform: RK3399 with an SD card interface that supports higher speed modes (this requires kernel support and switching from 3.3V to 1.8V at the hardware layer). So the results aren't comparable with the numbers we generated over the last two years in this and other threads, but that's not important any more... see the bottom.
     
    A1 conformance requires at least 10 MB/s sequential performance and 500/1500 (write/read) IOPS with 4K blocksize. I also tested with 1K and 16K blocksizes, simply to get an idea of whether the 4K results are useful for estimating performance with smaller or larger blocksizes (since we already know that the vast majority of cheap SD cards out there shows a severe 16K random write performance drop, which is the real reason so many people consider all SD cards crap from a performance point of view).
     
    I tested with 7 cards, 4 of them SanDisk, two Samsung and the 'Crappy card' being a results mixture of a 4GB Kingston I started to test with and old results from a 4GB Intenso from two years ago (see first post of this thread). The Kingston died when testing with 4K blocksize and the performance of all these crappy 'noname class' cards doesn't vary that much:
                            1K w/r       4K w/r       16K w/r
    Crappy card 4GB         32 / 1854    35 / 1595    2 / 603
    Samsung EVO+ 128GB      141 / 1549   160 / 1471   579 / 1161
    Ultra A1 32GB           456 / 3171   843 / 2791   548 / 1777
    Extreme A1 32GB         833 / 3289   1507 / 3281  1126 / 2113
    Samsung Pro 64GB        1091 / 4786  1124 / 3898  478 / 2296
    Extreme Plus 16GB       566 / 2998   731 / 2738   557 / 2037
    Extreme Pro 8GB         304 / 2779   323 / 2754   221 / 1821
    (All results in IOPS --> IO operations per second)
    For A1 compliance we only need to look at the middle column and have to expect at least 500/1500 IOPS minimum here. The 'Crappy card' fails as expected, and so does the Samsung EVO+ (but we already knew that for whatever reason newer EVO+ or those with larger capacity perform worse than the 32GB and 64GB variants we tested two years ago). The Samsung Pro shows the best performance here, while one of the 4 SanDisk cards also fails. But my Extreme Pro 8GB is now 3 years old, the other one I had showed signs of data corruption a few months ago, and when testing 2 years ago (see the 1st post in this thread) random write performance was at 800. So most probably this card is about to die soon and the numbers above are partially irrelevant.
     
    What about sequential performance? Well, the 'Crappy card' is also not able to meet the specs, and all the better cards are 'bottlenecked' by the ODROID N1 (some of these cards show 80 MB/s in my MacBook's card reader, but Hardkernel chose to use some safety headroom for good reasons and limits the maximum speed for improved reliability):
                            MB/s write   MB/s read
    Crappy card 4GB         9            15
    Samsung EVO+ 128GB      21           65
    Ultra A1 32GB           20           66
    Extreme A1 32GB         59           68
    Samsung Pro 64GB        61           66
    Extreme Plus 16GB       63           67
    Extreme Pro 8GB         50           67
    Well, sequential transfer speeds are close to irrelevant with single board computers or Android, but it's good to know that boards that allow for higher SD card speed modes (e.g. almost all ODROIDs and the Tinkerboard) also show an improvement in random IO performance if the card is a good one. The ODROID N1 was limited to DDR50 (the slowest SD card mode) until today, when Hardkernel unlocked UHS capabilities so that my cards (except 'Crappy card') could all use SDR104 mode. With DDR50 mode sequential performance is limited to 22.5/23.5 MB/s (write/read), but more interestingly random IO performance also differs. See the IOPS results with the two SanDisk A1 cards, one time limited to DDR50 and then with SDR104:
                            1K w/r       4K w/r       16K w/r
    Ultra A1 DDR50          449 / 2966   678 / 2191   445 / 985
    Ultra A1 SDR104         456 / 3171   843 / 2791   548 / 1777
    
                            1K w/r       4K w/r       16K w/r
    Extreme A1 DDR50        740 / 3049   1039 / 2408  747 / 1068
    Extreme A1 SDR104       833 / 3289   1507 / 3281  1126 / 2113
    We can clearly see that the larger the blocksize, the more the interface speed also influences random IO performance (look especially at the 16K random reads that double with SDR104).
     
    Some conclusions:
    - When comparing the results above, the somewhat older Samsung Pro performs pretty similarly to the Extreme A1. But great random IO performance is only guaranteed with cards carrying the A1 logo (or A2 soon), so it might happen to you that buying another Samsung Pro today results in way lower random IO performance (see Hardkernel's results with a Samsung Pro Plus showing 224/3023 4K IOPS, which is way below the 1124/3898 my old Pro achieves, with write performance in particular 5 times worse and below the A1 criteria).
    - We still need to focus on the correct performance metrics. Sequential performance is more or less irrelevant ('Class 10', 'UHS' and so on); all that matters is random IO (A1, and A2 soon). Please keep in mind that you can buy a nice looking UHS card from 'reputable' brands like Kingston, Verbatim, PNY and the like that might achieve a theoretical 80 MB/s or even 100 MB/s sequential performance (which you can't benefit from anyway, since your board's SD card interface will be the bottleneck) but simply sucks at random IO performance. We're talking about up to 500 times worse performance when trusting in 'renowned' brands and ignoring performance reality (see the 16K random writes comparing 'Crappy card' and 'Extreme A1').
    - Only a few vendors on this planet run NAND flash memory fabs, only a few companies produce flash memory controllers and have the necessary know-how in house, and only a few combine their own NAND flash with their own controllers into their own retail products. That's the simple reason why at least I only buy SD cards from these 4 brands: Samsung, SanDisk, Toshiba, Transcend.
    - The A1 performance speed class is a great and necessary improvement, since now we can rely on getting decent random IO performance. This also helps in fighting counterfeit flash memory products, since even if fraudsters in the meantime produce fake SD cards that look real and show the same capacity, these fakes usually suck at random IO performance. So after testing new cards with either F3 or H2testw, it's now another iozone or CrystalDiskMark run to check for overall performance including random IO (an example iozone invocation is sketched after this post), and if performance sucks you simply return the cards and ask for a refund.
    
    TL;DR: If you buy new SD cards, choose those carrying an A1 or A2 logo. Buy only good brands (their names start with either S or T). Don't trust in getting genuine products but always expect counterfeit stuff. That's why you should only buy from sellers with a 'no questions asked' return/refund policy, and why you have to check your cards immediately after purchase. If you also care about reliability/resilience, buy more expensive cards (e.g. the twice as expensive Extreme Plus A1 instead of the Ultra A1) and choose larger capacities than needed.
     
    Finally: all detailed SD card test results can be found here: https://pastebin.com/2wxPWcWr. For comparison, performance numbers made with the same ODROID N1 and same settings, but with the vendor's orange eMMC modules (based on Samsung eMMC and varying only in size): https://pastebin.com/ePUCXyg6
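    For reference, a random-IO check along the lines described above could look like this (this exact invocation is my illustration, not quoted from the post; it assumes the iozone3 package is installed and is run from a directory on the card under test):
    cd /mnt/sdcard-under-test    # hypothetical mount point of the card being tested
    iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    # -e/-I: include flush and use O_DIRECT, -a: auto mode, -s: 100 MB test file,
    # -r: record sizes, -i 0/1/2: write/rewrite, read/reread, random read/write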
  25. Like
    iav reacted to martinayotte in ODroid-N2 RTC not work in linux-image-current-meson64=20.02.8 5.4.28-meson64   
    I'm one of the main Armbian devs, so, yes, I will add the DT overlay in builds in the near future ...