iav

Reputation Activity

  1. Like
    iav reacted to rpardini in Odroid M1   
    https://github.com/armbian/build/pull/5606
     
    Testing image:
    https://github.com/rpardini/armbian-release/releases/download/23.08.18-rpardini-434/Armbian_23.08.18-rpardini-434_Odroidm1_bookworm_edge_6.5.0-rc6.img.xz
     
    #### `odroidm1`: de-infest Petitboot 🔥 use Kwiboo's 2023.10 u-boot; UMS works; bump kernel to 6.5-rcX
    - `odroidm1`/`edge` (`rk3568-odroid`): bump to 6.4.y, update config, rebase patches to 6.4.10
    - `odroidm1`/`edge` (`rk3568-odroid`): bump to 6.5-rc6; manually fix RK808 breakage in .config; externalize overlays; use Makefile autopatcher; rebase patches
    - `odroidm1`/`edge` (`rk3568-odroid`): drop 6.3 and 6.4 patches
    - `odroidm1`: de-infest Petitboot 🔥 use Kwiboo's 2023.10 u-boot; UMS works
      - using Kwiboo's `rk3568-2023.10` branch  with BINMAN-handled blobs
      - patches (defconfig unless indicated):
        - boot usb first (rockchip-common)
        - blink LEDs & keep the red one on during preboot; reset SPI env once after de-infesting from Petitboot
        - change usb_host0_xhci to otg (u-boot dtsi)
        - enable DM_GADGET, UMS 🔥 and RockUSB
      - **usage instructions**:
        - build & burn image to SD card
        - insert SD card into board
        - **hold the recovery (RCY) button** and power on the board
        - watch board boot
        - **de-infest Petitboot**: use `armbian-install` to install bootloader to MTD
          - if you don't, you'll need to hold the recovery button every boot
          - optionally: use `armbian-install` to install OS to eMMC/NVMe/USB
        - power-off board
        - remove SD card (new u-boot always boots SD first!)
        - boot into your newly de-infested machine
          - boot order: USB, SD, MMC, NVME, SCSI
      - de-infested machine can now boot (directly) from USB/SATA/NVMe, possibly via EFI:
        - Armbian UEFI-arm64
        - Fedora 38 aarch64
        - HASSOS for ODROID-M1
        - Talos arm64
        - others...
      - extra: new u-boot by Kwiboo (with GMAC patches) gives us stable MAC address
        - although it is based on cpuid#, doesn't match the HK sticker on the board
          - run `fw_setenv ethaddr XX:XX:XX:XX:XX:XX` to change the eth addr in the SPI flash environment if needed (see the sketch after this item)
      - `odroidm1`: update DDR/BL31 blobs (this depends on https://github.com/armbian/rkbin/pull/20)
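      Not part of the PR text above: a minimal sketch of checking and overriding the stored MAC from the booted system, assuming the u-boot-tools package is installed and /etc/fw_env.config already points at the SPI flash environment (both assumptions, not stated in the PR):

# show the MAC currently stored in the SPI environment
sudo fw_printenv ethaddr
# override it with a locally administered example address
sudo fw_setenv ethaddr 02:11:22:33:44:55
# read it back to confirm the new value
sudo fw_printenv ethaddr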
  2. Like
    iav reacted to prahal in Helios64 u-boot does not build anymore after we bumped to 2022.07   
    PR https://github.com/armbian/build/pull/4480
  3. Like
    iav reacted to balbes150 in The boot process and various devices   
    Tnx.
    For now I was only interested in the general launch of the system using EFI, and you confirmed that it works. I am not checking the operation of the system (kernel) yet; perhaps additional patches/settings for the kernel are needed, but that requires access to the hardware. By the way, you can disable UEFI mode and switch to using extlinux.conf, with which full output and control via UART should work. Remove the "boot" flags from the first FAT partition and the bootloader will stop using EFI. Or you can remove all files from the FAT partition.
  4. Like
    iav reacted to Matthijs Kooijman in Improving / simplifying first-run services using systemd features   
    Hi all,
     
    While building a custom image that needs to do some things at the end of the first bootup and reboot, I ran into some issues with armbian-firstrun.service. It currently has Type=simple, which means dependencies will be started when armbian-firstrun is *started*, and there is no clean way to wait until after it has *finished*. Looking to fix this, I noticed some other things that could be improved in this area, such as using systemd's first-boot-complete.target and ConditionFirstBoot to more simply manage first-boot-only services.
     
    I'm considering implementing some changes and submitting them in a PR, but before I do that, I'd like to get some feedback on the approach and whether it seems worth investing time into. I considered creating a github issue about this, but given it is sort-of a feature request and the github new issue wizard seems intended to redirect away from github issues, I thought to instead post here. If a github issue seems more appropriate, I'll gladly repost there.
     
    In any case, to solve the particular problem I was having (a service that needs to run after the full first boot has completed, including the resizing and firstrun scripts), here are three incremental changes that I think would make sense (1. alone would be the minimum to solve this problem, 1. + 2. would solve it more generally, and 1. + 2. + 3. would additionally simplify the code a bit).

    1. Make armbian-firstrun.service `Type=oneshot`, so you can declare `After=armbian-firstrun.service` and `After=armbian-resize-filesystem.service`. This also means that system boot is delayed until this service has completed, but it does not prevent other boot services from running in parallel, so I would not consider this an issue (if it is, then removing the `Before=getty...` stuff could be considered; it is currently pointless anyway, since you will not get a login prompt until armbian-firstrun.service is started, but getty will not wait for it to complete). Note that using `Type=simple` in armbian-firstrun.service originates from https://github.com/armbian/build/commit/ee8d396fa639cd89e08fdd6646c32308f3b25f4f, but there is no rationale given for this choice.
    2. Give armbian-firstrun.service and armbian-resize-filesystem `Wants=first-boot-complete.target` and `Before=first-boot-complete.target`, so `first-boot-complete.target` does not fire until `armbian-firstrun` is done, allowing others to just declare `After=first-boot-complete.target` without having to explicitly reference specific services.
    3. Give armbian-firstrun.service and armbian-resize-filesystem `ConditionFirstBoot=yes` to let systemd ensure they are only run on the first boot. This allows removing the `systemctl disable` calls in the respective scripts as well, but it requires the `/etc/machine-id` file to be removed from the image (see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=900366#20); that is probably a good idea anyway, so the id gets regenerated on first boot (currently all machines using the same image share the same machine id). A hedged sketch of these changes follows this list.
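    Not part of my proposal text as such, just a rough sketch of what changes 1.–3. could look like expressed as a systemd drop-in for the existing unit (the drop-in file name is invented; the same directives would also be added to armbian-resize-filesystem.service):

# hypothetical drop-in; purely an illustration, the real change would go into the shipped unit files
sudo mkdir -p /etc/systemd/system/armbian-firstrun.service.d
sudo tee /etc/systemd/system/armbian-firstrun.service.d/firstboot.conf >/dev/null <<'EOF'
[Unit]
ConditionFirstBoot=yes
Wants=first-boot-complete.target
Before=first-boot-complete.target

[Service]
Type=oneshot
RemainAfterExit=yes
EOF
sudo systemctl daemon-reload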
     
    One thing (a feature, or a seeming problem maybe) with using ConditionFirstBoot as suggested above is that systemd ensures that if the first boot is interrupted, the first-boot services are run again on subsequent boots, until they have all run completely (to ensure they all had a chance to run). This is implemented by (roughly) running ConditionFirstBoot services only if `/etc/machine-id` does not exist, and only writing it to disk *after* all `first-boot-complete.target` services have completed (also when they failed, I think). This means that if the boot is interrupted before *all* first-boot services have run, all of them will be run again on the next boot, so these services need to be OK with being run twice (this is also currently the case when the boot is interrupted while either or both services are still running, except that if either service completes, that particular service will not be retried, but others might be). See also https://man7.org/linux/man-pages/man5/machine-id.5.html
     
    As a related optimization, if 1. above is applied I think we could also add a `Before=ssh.service` to armbian-firstrun.service and remove the sshd restart from `armbian-firstrun`, so systemd just delays sshd startup until the keys are generated (but only if armbian-firstrun is active, so on subsequent boots sshd is started as normal).
     
    Also, regardless of all of the above, I see that the armbian-firstrun script deletes the ssh host keys and then forces regenerating them by calling dpkg-reconfigure. However, it would seem safer to me to actually delete the ssh keys at the end of image generation somewhere (maybe in `post_debootstrap_tweaks()`?), to really rule out the possibility of machines all using the same (publicly known) keys from the image file.
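    Not part of my proposal as such: a minimal sketch of that image-time cleanup, assuming the rootfs being built is reachable under a ${SDCARD} path at that point (variable name assumed) and standard OpenSSH host key locations:

# drop any host keys baked into the image; they get regenerated on the device's first boot
rm -f "${SDCARD}"/etc/ssh/ssh_host_*_key "${SDCARD}"/etc/ssh/ssh_host_*_key.pub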
     
    So, how do these sound?
  5. Like
    iav reacted to JMCC in Why I prefer ZFS over btrfs   
    It depends on which level we are talking. I assume AWS Graviton servers have a big team of paid staff to take care of both the hardware and the software, for example.
     
    Now, if we are talking about $40 ARM SBCs, I think "hobbyists" and people in "playground mode" is a good way to describe the main public of those devices. If we also add "students", I think we have a complete description of the kind of use you can reasonably expect for that kind of board, which is what the manufacturers make them for. And, particularly, we describe the kind of people who willingly contribute to a project aimed at making an OS for these boards.
     
    On the other hand, I consider it absolutely unrealistic to expect the same level of support for such a project as you would expect for one that is backed by a big company with a big staff. As unrealistic as expecting a group of volunteers, who spend their free time on the project out of good will and because they like it, to focus on a reduced number of boards they are not interested in, and only for the NAS use case, "just because I say so".
     
    And it is not only unrealistic, but IMO also unnecessary. I find it hard to imagine a system administrator pulling an OPi Zero out of his pocket in front of the corporate board and saying "Hey, this is going to be our new storage for the crucial data. Don't worry, it is going to run Armbian off a SanDisk Extreme Pro, which has been proven to be the most reliable SD card in many forum posts".
     
    ----------
     
    Now, focusing on our topic: I tried btrfs a couple of years ago, attracted mainly by the snapshots feature. But when I was trying to compile some software I needed and discovered it was not even mature enough to allow creating swapfiles on it, I just decided to go back to good ol' ext4. I haven't tried it again since; maybe I should.
  6. Like
    iav reacted to tkaiser in Why I prefer ZFS over btrfs   
    Guess why you hear so often about data loss with btrfs?
     
    Three main reasons:
     
    1) Shooting the messenger: Putting a btrfs on top of hardware raid, mdraid or lvm is as 'great' as doing the same with ZFS but 'Linux experts' and NAS vendors did it and still do it. Data corruption at the mdraid layer resulting in a broken btrfs on top and guess who's blamed then? https://github.com/openmediavault/openmediavault/issues/101#issuecomment-473920806
     
    2) btrfs lives inside the kernel, and as such you need a recent kernel version to escape old and well-known bugs. Now look at the kernel versions in popular distros like Debian and you see the problem. With ZFS that's different: most recent ZoL versions still run well with horribly outdated kernels. Hobbyists look into the btrfs wiki, see that a certain issue has been fixed recently and so go ahead and use it, forgetting that the 'everything outdated as hell' distro (AKA Debian) they're using is still on an ancient 4.9 kernel or even worse.
     
    3) Choice of hardware. ZFS is most of the time used on more reliable hardware than btrfs, and that's mostly due to some BS spread by selected FreeNAS/TrueNAS forum members who created the two urban myths that 'running ZFS without ECC RAM will kill your data' (nope) and 'you'll need at least 8GB of RAM for ZFS' (also BS, I'm running lots of VMs with ZFS with as little as 1GB). While both myths are technically irrelevant, a lot of people believe them and therefore invest in better hardware. Check which mainboards feature ECC memory and you'll realize that exactly those mainboards are the ones with proper onboard storage controllers. If you believe you need at least 8GB of RAM, this also rules out a lot of crappy hardware (like e.g. the vast majority of SBCs).
     
    The main technical challenge for the majority of modern filesystem attempts (even for journaled ext4) is correct write-barrier semantics (more on this in the referenced github issue above). Without those, both ZFS and btrfs will fail in the same way. So using an SBC with a flaky USB host controller combined with a crappy USB-to-SATA controller in an external enclosure is a recipe for disaster with both ZFS and btrfs. ZFS is more mature than btrfs, but guess what happened when ZFS was rather new over a decade ago: people lost their pools due to broken write-barrier semantics caused by erratic HBAs and drive cache behavior (reporting back 'yeah, I've written the sector to spinning rust' while keeping data in some internal cache, and after a power loss the pool was gone).
     
    The last time I wasted time with ZFS on ARM it was a true disaster. Over a year ago I tried to run several ODROID HC2 as Znapzend targets for a bunch of fileservers but always ran into stability issues (freezes) after a couple of days/weeks. I switched to btrfs just to check whether all the HC2 might have a hardware problem; a year later I realized by accident that the HC2s in question all had an uptime of 300 days or above (I stopped doing kernel updates on isolated Armbian machines since Igor has already bricked way too many of my boards over the past years).
     
    Do I prefer btrfs over ZFS? Nope. Being a real ZFS fanboi, I base almost all the storage implementations here and at customers' sites on ZFS. On x86 hardware, that is due to relying not only on the OpenZFS team but also on the guys at Canonical who care about 'ZFS correctly built and fully working with the kernel in question' (on Debian installations by pulling in the Proxmox kernel, which is the latest Ubuntu kernel with additional tweaks/fixes).
     
    And that's the main difference compared to the situation with Armbian. There is no such team that makes sure you will still be able to access your ZFS pools flawlessly after the next kernel update. Which is the reason why the choice of filesystem (btrfs or ZFS) at my place depends on x86 (ZFS) or ARM (btrfs). And that's not because of the CPU architecture, but solely because on x86 I can rely on professionals who take care about the updates they roll out, versus the situation on ARM with a team of hobbyists who try harder and harder to provide OS images for as many different SBCs as possible, so it's absolutely impossible to properly test/support all of them. Playground mode.
  7. Like
    iav reacted to gprovost in Very noisy fans   
    I shared the design of the thermal pad, showing that the whole area below the heatsink is covered with thermal pad by default.
     

     
    @allen--smithee That's very valuable feedback, I think we will need to review our approach to the thermal pad.
    One factor we also had to take into consideration is ease of installation on the factory line.
  8. Like
    iav reacted to allen--smithee in Very noisy fans   
    @gprovost
    Is it possible to know the characteristics of the thermal bridge?
    (W/m·K and thickness)
     
    @bunducafe
    You can use your Helios64 without a fan with one of the following CPU limitations:
    # 816 MHz and 1 GHz (eco)
    # 1.2 GHz, CPU 100%, temp < 82° (safe)
    # 1.41 GHz, CPU 100%, temp < 90° (limit)
     
    Not viable:
    # 1.6 GHz, CPU 100%, temp > 95°: reboot and reset of the fancontrol config in /etc, and there I regret that it is not a K.
    Nevertheless, it is reassuring to see that the protection method works.
     
    From my own personal experience, it is not necessary to change the fans immediately,
    and you will quickly agree that the noise of the fans is drowned out by the noise of a single spinning hard disk.
    But if you also use it as a desktop with SSDs, like me, you will definitely want to use it without limitation.
    In that case you must familiarize yourself with fancontrol, otherwise you will also be disappointed with your Noctua or any other fan.
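    Not from the original post: a minimal sketch of capping the CPU clocks from userspace via the cpufreq sysfs interface, mirroring the 1.2 GHz "safe" setting above (policy layout assumed; the cap does not persist across reboots):

# cap every cpufreq policy at 1.2 GHz (value in kHz)
for p in /sys/devices/system/cpu/cpufreq/policy*; do
    echo 1200000 | sudo tee "$p/scaling_max_freq"
done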
  9. Like
    iav reacted to tkaiser in Marvell based 4 ports mPCI SATA 3.0   
    The 4 port things are based on Marvell 88SE9215 (this is a 4 port SATA to single PCIe 2.x lane controller, so suitable for mPCIe where only one lane is available). I got mine from Aliexpress, Igor got his for twice the price from Amazon, you also find them sometimes at local resellers, just search for '88SE9215'.
     
    (there's also a x2 variant called 88SE9235 -- just like there's ASM1062 vs. ASM1061 -- but at least behind mPCIe there's no advantage in choosing this over 88SE9215, it would need M.2 or a real PCIe port to make use of the second available lane)
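    Not part of the original post: once such a card is installed, a hedged sanity check that the controller and its ports were detected (exact output wording varies by kernel and lspci version):

# the 88SE9215/88SE9235 should show up as a Marvell SATA/AHCI controller
lspci -nn | grep -i marvell
# the ahci driver should have claimed it and enumerated the ports
dmesg | grep -i ahci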
  10. Like
    iav reacted to michabbs in [TUTORIAL] First steps with Helios64 (ZFS install & config)   
    Focal / root on eMMC / ZFS on hdd / LXD / Docker
     
    I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)
     
    Preparation:
    Assemble your box as described here. Download the Armbian Focal image from here and flash it to an SD card. You may use Etcher. Insert the SD card into your Helios64 and boot. After 15-20 s the box should be accessible via ssh. (Of course you have to find out its IP address somehow. For example check your router logs or use this.)
    First login:
ssh root@IP
Password: 1234
    After prompt - change password and create your daily user. You should never login as root again. Just use sudo in the future. :-)
    Note: The auto-generated user is member of "disk" group. I do not like it. You may remove it so: "gpasswd -d user disk".
     
    Now move your system to eMMC:
apt update
apt upgrade
armbian-config   # Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC
    You can choose root filesystem. I have chosen ext4. Possibly f2fs might be a better idea, but I have not tested it.
    When finished - power off, eject the sd card, power on. 
    Your system should now boot from eMMC.
     
    If you want to change network configuration (for example set static IP) use this: "sudo nmtui".
    You should also change the hostname:
    sudo armbian-config # Go to: Personal -> Hostname  
     
    ZFS on hard disk:
sudo armbian-config   # Go to Software and install headers.
sudo apt install zfs-dkms zfsutils-linux
# Optional:
sudo apt install zfs-auto-snapshot
# reboot
    Prepare necessary partitions - for example using fdisk or gdisk.
    Create your zfs pool. More or less this way:
sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789
    Reboot and make sure the pool is imported automatically. (For example by typing "zpool status".)
    You should now have working system with root on eMMC and ZFS pool on HDD.
     
    Docker with ZFS:
    Prepare the filesystem:
sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
sudo chmod 700 /var/lib/docker/volumes
# Option: If you use zfs-auto-snapshot, you might want to consider this:
sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
    Create /etc/docker/daemon.json with the following content:
    { "storage-driver": "zfs" }  
    Add /etc/apt/sources.list.d/docker.list with the following content:
deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
# deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    Install Docker:
sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
# You might want this:
sudo usermod -aG docker your-user
     
    Voila! Your Docker should be ready! Test it: "docker run hello-world".
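    Not from the original tutorial: a hedged check that Docker actually picked up the ZFS storage driver from daemon.json:

docker info | grep -i 'storage driver'   # should report: Storage Driver: zfs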
     
    Option: Install Portainer:
sudo zfs create mypool/docker-volumes/portainer_data
# You might omit the above line if you do not want to have a separate dataset for the docker volume (bad idea).
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
    Go to http://yourip:9000 and configure.
     
     
    LXD with ZFS:
     
sudo zfs create -o mountpoint=none mypool/lxd-pool
sudo apt install lxd
sudo lxd init
# Configure ZFS this way:
#   Do you want to configure a new storage pool (yes/no) [default=yes]? yes
#   Name of the new storage pool [default=default]:
#   Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
#   Create a new ZFS pool (yes/no) [default=yes]? no
#   Name of the existing ZFS pool or dataset: mypool/lxd-pool
#   [...]
# You might want this:
sudo usermod -aG lxd your-user
# Option: If you use zfs-auto-snapshot, you might want to consider this:
sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines
    That's it. Lxd should work now on ZFS. :-)
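    Not from the original tutorial: a hedged sanity check that LXD really ended up on the ZFS dataset (the pool name "default" comes from the wizard defaults above):

lxc storage list                  # the 'default' pool should show the zfs driver and source mypool/lxd-pool
lxc launch ubuntu:20.04 test      # optional: start a throwaway container to confirm everything works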
     
  11. Like
    iav reacted to zador.blood.stained in Prepare v5.1 & v2016.04   
    - build script: support for Xenial as a build host is 95% ready.
    - build script: implemented automatic toolchain selection
    step 1: if the kernel or u-boot needs a specific GCC version, add this requirement in the LINUXFAMILY case block in configuration.sh via the variables KERNEL_NEEDS_GCC and UBOOT_NEEDS_GCC; supported relationships are "<", ">" and "==", and the GCC version needs to be specified as major.minor (a hypothetical case-block example follows these steps), e.g.:
UBOOT_NEEDS_GCC="< 4.9"
    step 2: unpack toolchains (e.g. Linaro) in $SRC/toolchains/. For running Linaro 4.8 on an x64 build host you need to enable x86 support:
dpkg --add-architecture i386
apt-get install libc6:i386 libncurses5:i386 libstdc++6:i386 libtinfo5:i386 zlib1g:i386
    step 3: run the compilation process and check the results. Report discovered bugs here on the forum or on GitHub.
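    Not part of the original announcement, just a hypothetical illustration of such an entry in the configuration.sh LINUXFAMILY case statement (family name and version bounds invented for the example):

# hypothetical entry inside the LINUXFAMILY case block in configuration.sh
sun7i)
    KERNEL_NEEDS_GCC="> 4.4"
    UBOOT_NEEDS_GCC="< 4.9"
    ;;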
  12. Like
    iav reacted to ebin-dev in Feature / Changes requests for future Helios64 board or enclosure revisions   
    The next thing I would buy is a drop in replacement board for the helios64 with:
    - RK3588
    - ECC RAM
    - NVMe PCIe port
    - 10GBase-T port
  13. Like
    iav reacted to SIGSEGV in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and a few other specific use cases.
  14. Like
    iav reacted to SIGSEGV in Why I prefer ZFS over btrfs   
    This discussion is interesting because I can relate to both sides.
    As an end user, having multiple filesystems to choose from is great; choosing the right tool for the task is where I need to spend my time. Having a ZFS DKMS is great, but if after an update my compiler is missing a kernel header, I won't be able to reach my data. Shipping a KMOD (with the OS release) might give me access to my data after each reboot/upgrade. The ECC/8GB RAM myth is getting old, but it has garnered enough attention that most newbies won't read or search beyond the posts with the most views.
     
    Armbian as a whole is always improving - a few weeks ago iSCSI was introduced for most boards (and helped me replace two x86_64 servers with one Helios64) - and an implementation of ZFS that works out of the box will be added soon enough. That being said, this is a community project - if a user wants 24x7 incident support, then maybe the ARM based SBCs + Armbian are not the right choice for them. Having a polite discussion with good arguments (like this thread) is what gets things moving forward - We don't have to agree with all the points, but we all want to have stable and reliable systems as our end goal.
  15. Like
    iav reacted to grek in ZFS on Helios64   
    Hey,
    I think a dedicated topic for ZFS on Helios 64 is needed. Many people want to have it
     
    As we know (and many of us are playing with dev builds for the Helios), it is not easy to work with ZFS.
    I wrote a few scripts; maybe they can help someone in some way.
    Thanks @jbergler and @ShadowDance, I used your idea to complete it.
    The problem is when playing with OMV and its ZFS plugin: the dependencies then remove our latest version of ZFS. I tested it; everything survived, but ZFS is downgraded to the latest version from the repo.
    for example:
     
    root@helios64:~# zfs --version
    zfs-0.8.4-2~bpo10+1
    zfs-kmod-2.0.0-rc6
    root@helios64:~# uname -a
    Linux helios64 5.9.11-rockchip64 #trunk.1 SMP PREEMPT Thu Nov 26 01:32:45 CET 2020 aarch64 GNU/Linux
    root@helios64:~#
     
    I tested it with kernel 5.9.10 and with today's clean install with 5.9.11.
     
    First we need to have docker installed ( armbian-config -> software -> softy -> docker )
     
    Create a dedicated directory with a Dockerfile (we will customize the ubuntu:bionic image with the required libraries and GCC 10; in later builds we can skip this step and just customize and run the build-zfs.sh script):
    mkdir zfs-builder
    cd zfs-builder
    vi Dockerfile
     
FROM ubuntu:bionic
RUN apt update; \
    apt install build-essential autoconf automake bison flex libtool gawk alien fakeroot dkms libblkid-dev uuid-dev libudev-dev libssl-dev zlib1g-dev libaio-dev libattr1-dev libelf-dev python3 python3-dev python3-setuptools python3-cffi libffi-dev -y; \
    apt install software-properties-common -y; \
    add-apt-repository ppa:ubuntu-toolchain-r/test; \
    apt install gcc-10 g++-10 -y; \
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 10; \
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 10
    Build docker image for building purposes.
     
    docker build --tag zfs-build-ubuntu-bionic:0.1 .  
    Create script for ZFS build
     
    vi build-zfs.sh
     
#!/bin/bash
#define zfs version
zfsver="zfs-2.0.0-rc6"
#creating building directory
mkdir /tmp/zfs-builds && cd "$_"
rm -rf /tmp/zfs-builds/inside_zfs.sh
apt-get download linux-headers-current-rockchip64
git clone -b $zfsver https://github.com/openzfs/zfs.git $zfsver-$(uname -r)
#create file to execute inside container
echo "creating file to execute it inside container"
cat > /tmp/zfs-builds/inside_zfs.sh <<EOF
#!/bin/bash
cd scratch/
dpkg -i linux-headers-current-*.deb
zfsver="zfs-2.0.0-rc6"
cd "/scratch/$zfsver-$(uname -r)"
sh autogen.sh
./configure
make -s -j$(nproc)
make deb
mkdir "/scratch/deb-$zfsver-$(uname -r)"
cp *.deb "/scratch/deb-$zfsver-$(uname -r)"
rm -rf "/scratch/$zfsver-$(uname -r)"
exit
EOF
chmod +x /tmp/zfs-builds/inside_zfs.sh
echo ""
echo "####################"
echo "starting container.."
echo "####################"
echo ""
docker run --rm -it -v /tmp/zfs-builds:/scratch zfs-build-ubuntu-bionic:0.1 /bin/bash /scratch/inside_zfs.sh
# Cleanup packages (if installed).
modprobe -r zfs zunicode zzstd zlua zcommon znvpair zavl icp spl
apt remove --yes zfsutils-linux zfs-zed zfs-initramfs
apt autoremove --yes
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/kmod-zfs-$(uname -r)*.deb
dpkg -i "/tmp/zfs-builds/deb-$zfsver-$(uname -r)"/{libnvpair1,libuutil1,libzfs2,libzpool2,python3-pyzfs,zfs}_*.deb
echo ""
echo "###################"
echo "building complete"
echo "###################"
echo ""
    chmod +x build-zfs.sh
     screen -L -Logfile buildlog.txt ./build-zfs.sh
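     Not part of the original post: after the build and package installation finish, a hedged way to confirm the freshly built module is the one in use (same command the post used above to show the version mismatch):

sudo modprobe zfs
zfs --version    # both the zfs-... and zfs-kmod-... lines should now report 2.0.0-rc6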
     
     
     
     
     
  16. Like
    iav reacted to TRS-80 in Self-hosting micro- (or regular) services, containers, homelab, etc.   
    I have been having some long running, on and off discussion with @lanefu in IRC (and elsewhere), because I know this is his area of expertise.  However he is busy, and I can appreciate he might not want to spend his free time talking about stuff he gets paid to do M-F. 
     
    Then I also realized that I am certainly not the only one interested in Armbian for doing such things. So I thought I would make a thread about this instead, to open the discussion to a larger group of people here in the broader community.
     
    I have my own thoughts, concerns, things I want to do, and have already done, but I thought I would make a more general thread where like minded people can discuss their experiences, what has worked (or not), what services you might be running (or would like to), etc.
     
    I guess I will begin by saying I have been running some services for family and friends, first on a Cubietruck since maybe 2017 (or earlier?) and then later on some ODROID-XU4 in addition.  I run XMPP server (Prosody) which has been very hassle free "just works" as well as some experiments into Home Automation and other things which have had some, well let's just say mixed results. 
     
    Currently my interest is in containers and similar technologies, but I am very new to it. I've been reading a lot, but there is so much to know and I can't help but feel I am a bit late to the game. That's by design though, as I don't like being (too) early an adopter of a technology, generally speaking. Well, certainly not with such "hyped" technologies as containers, anyway.
     
    In fact I thought it was a bunch of baloney and hype for a long time, but even a cranky old cynic like me now sees some advantages in abstracting things, changing servers/services from "pets" to "cattle" if you will, for reproducibility, redundancy, and other reasons.
     
    Anyway, I am prone to walls of text of idiotic ramblings, so I will stop here and give some others a chance.    Hope everyone is enjoying their holiday weekend so far. 
  17. Like
    iav reacted to TRS-80 in Why I prefer ZFS over btrfs   
    I am not a big fan of btrfs in general.  But rather than shit up other people's threads where they mention it, I decided to create my own here where I can lay out my arguments, and then I can link to it instead and people can read it if they want.
     
    Some people are not going to like to hear this, so I will back it up by saying I have done a fair amount of research into these advanced file systems over the last several years. It is a topic of great interest to me. At the same time, I view all knowledge as provisional, and maybe some things have changed in the meantime, so discussion and factual corrections are welcomed.
     
    I will also try and spare the reader my typical walls of text by breaking this up into some sections. 
     
    Why write this now?
     
    Finally we are getting some hardware (thanks to Kobol, and others) which makes this feasible (on ARM, which I feel is preferable to x86 for a NAS), so I am seeing an influx of people into the Kobol Club, some of them mentioning that they are using btrfs. So I thought maybe now is a good time to write this stuff down.
     
    License
     
    Anyone who knows me can tell you I am quite a proponent of GPL (I am much more of a "Free Software" than an "Open Source" person for example).  The reasons for that are in the links, but I mention this because btrfs could be considered the "Free Software" preferred choice, as it is licensed GPL, while ZFS has a lot of questions around the CDDL license.  And yet I still prefer ZFS.  I am just trying to highlight that I don't "hate" btrfs, in fact I root for them to succeed.  But there are just too many technical and other reasons why I prefer ZFS.  So I will try and lay out the rest of those now.
     
    Data Loss
     
    Probably my biggest argument against btrfs.  I just read way too many reports of data loss on btrfs.  Which is totally unacceptable IMO in one of these "advanced filesystems" which are supposed to prevent exactly that.
     
    In fairness, I have heard btrfs proponents say that only applies to certain RAID modes (5?), and that if you mirror it's not a problem. However, any mention at all of data loss has just totally put me off btrfs.
     
    Development Resources
     
    Another very strong argument. Lawrence Livermore National Laboratory has received massive federal funding to develop ZFS on Linux for many years now (of course the project nowadays is officially re-branded as "Open ZFS", but in truth almost all the development went into ZoL in the past, and still continues that way, regardless of the name change).
     
    When Oracle bought Sun, many of the original people jumped ship, including the original ZFS authors Matt Ahrens and others. Guess where they went to work? At LLNL, to work on ZoL, which LLNL uses for their supercomputers. And they have been improving it for many years now!
     
    It's just going to be very hard for btrfs to compete with those sort of development resources, IMO.  I realized this some years ago already, even when ZoL was considered much less stable than it is today (back then the conventional wisdom was to run ZFS only on Solaris-like OS, but this is no longer the case).  But I saw the trajectory and the resources behind it and a few years later here we are and I was exactly right.  And I see no change in any relevant circumstances (feds still funding LLNL, etc.) to change this trajectory in any way.
     
    In Closing
     
    I guess I did not have as many arguments as I thought.  But I think those are enough.  I may come back and edit this later if more things come to me.  In the meantime, I welcome discussion.
     
    Cheers! 
     
     
  18. Like
    iav reacted to Igor in Armbian v20.11.y Bugfix release(s)   
    Merged.
  19. Like
    iav reacted to Salamandar in unable to build zfs module on buster rockchip64   
    Hi,
    I'm still unable to build the module correctly. The ZFS kernel module is built for kernel '4.4.213-rockchip64' (the headers version name), but the kernel is '4.4.213-rk3399'.
     
    spl: version magic '4.4.213-rockchip64 SMP mod_unload aarch64' should be '4.4.213-rk3399 SMP mod_unload aarch64'  
    I'm not sure this can be fixed without editing either of these kernel header files:
/usr/src/linux-headers-4.4.213-rockchip64/include/config/kernel.release
/usr/src/linux-headers-4.4.213-rockchip64/include/generated/utsrelease.h
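    Not from the original post: a hedged way to see the mismatch directly, comparing the module's version magic against the running kernel:

modinfo spl | grep vermagic    # kernel the module was built for, e.g. 4.4.213-rockchip64
uname -r                       # running kernel, e.g. 4.4.213-rk3399 -- if they differ, modprobe refuses to load the module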
     
    @ShadowDance Are you using the "current" 5.9 kernel? Maybe this has been fixed for that kernel?
     
    BTW, as the procedure is a bit tiring to do by hand, I wrote some scripts to automate it (build the module on Ubuntu, the utils on Buster): https://github.com/Salamandar/armbian_zfs_compiler/
     
    MOD EDIT:  Above is third party script, as always, make sure you understand what is going on before running it on your system.
     
    Having gotten that disclaimer out of the way, thanks for your contribution, @Salamandar!
     
    I took the liberty of marking this as solution.  I also made note of the link to refer others to, later.
     
    Cheers,
    TRS-80
  20. Like
    iav reacted to Q4kn in Le Potato Serial Getty on ttys0 starts, stops restarts   
systemctl stop serial-getty@ttyS0.service
systemctl disable serial-getty@ttyS0.service
    This helped me, it works fine!
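    Not from the original post: a hedged follow-up to verify the unit stays off after a reboot, plus a stronger option if something keeps re-enabling it:

systemctl is-enabled serial-getty@ttyS0.service   # should report 'disabled'
# stronger option: prevent any other unit from pulling it in again
sudo systemctl mask serial-getty@ttyS0.service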
  21. Like
    iav reacted to opfer15 in ODroid-N2 RTC not work in linux-image-current-meson64=20.02.8 5.4.28-meson64   
    Reporting back
    Working out of the box now with latest focal image. Thanks for the effort
  22. Like
    iav got a reaction from opfer15 in ODroid-N2 RTC not work in linux-image-current-meson64=20.02.8 5.4.28-meson64   
    The RTC patch for the N2 is now in the Armbian build system.
    Hope the problem is solved.
  23. Like
    iav reacted to chewitt in No watchdog on N2 with current (5.8) and dev (5.9) kernels   
    https://patchwork.kernel.org/project/linux-amlogic/patch/20201030180057.23886-1-christianshewitt@gmail.com/
     
    ^ tested by the maintainers .. enjoy :)
  24. Like
    iav reacted to Curmudgeon in Bring up for Odroid N2 (Meson G12B)   
    I got the kernel from users.armbian.com/balbes150/arm-64 via forum.armbian.com/topic/12162-single-armbian-image-for-rk-aml-aw-aarch64-armv8, but I am currently reconsidering whether to continue using this kernel or not, because balbes150 seems to have taken on an unfavourable, possibly even malevolent, attitude towards Amlogic SoCs.
  25. Like
    iav got a reaction from lanefu in Armbian self-builded image with btrfs root not bootable on oDroid-N2   
    TL;DR
    Where in the Armbian build system can these N2-only changes correctly be made?
    1. The /boot ext4 volume has to contain a symlink to . named boot; it can be created with the command ln -s ./ boot (see the sketch after this list).
    2. The kernel bootargs set in boot.ini contain rootflags=data=writeback, but they shouldn't; this substring needs to be removed when btrfs is chosen.
    3. The /boot ext4 partition has only 4 MB free. I am not sure, but I think that is too small; for example, there is no space to run update-initramfs -u. I think it should be larger, like 500 MB.
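    Not part of my original report: a hedged sketch of applying fixes 1 and 2 by hand on the flashed card before first boot, assuming the ext4 boot partition is mounted at /mnt/boot (path invented for the example):

# point 1: create the boot -> ./ symlink so /boot-relative paths resolve
cd /mnt/boot && sudo ln -s ./ boot
# point 2: drop the ext4-only rootflags from the bootargs in boot.ini when btrfs is used
sudo sed -i 's/ rootflags=data=writeback//' /mnt/boot/boot.ini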
     
    Long:
    I want to run my oDroid-N2 with root on BTRFS filesystem.
    I use manual on armbian image build and did
    time ./compile.sh BOARD=odroidn2 BRANCH=current RELEASE=focal BUILD_MINIMAL=no BUILD_DESKTOP=no KERNEL_ONLY=no KERNEL_CONFIGURE=yes ROOTFS_TYPE=btrfs INSTALL_HEADERS=no BUILD_KSRC=no then image Armbian_20.08.0-trunk_Odroidn2_focal_current_5.7.16.img was written to microSD card with BalenaEtcher program, inserted into N2 with serial console connected.
    cap1.log
     
    Then I created the symlink ./ boot on the boot volume, and got cap2.log
     
    Then I removed rootflags=data=writeback from the bootargs and successfully booted to a shell prompt, cap3.log
    changes:
diff -u boot.ini.org boot.ini.run
--- boot.ini.org 2020-08-20 23:08:36.000000000 +0300
+++ boot.ini.run 2020-08-20 23:26:29.000000000 +0300
@@ -20,7 +20,7 @@
 fi
 
 # Default Console Device Setting
-setenv condev "console=${uartconsole} console=tty1 loglevel=1" # on both
+setenv condev "console=${uartconsole} console=tty1 loglevel=5" # on both
 
 # Auto Detection of Monitor settings based on your Screen information
 setenv display_autodetect "true"
@@ -115,7 +115,7 @@
 if ext4load mmc ${devno}:1 0x44000000 /boot/armbianEnv.txt || fatload mmc ${devno}:1 0x44000000 armbianEnv.txt || ext4load mmc ${devno}:1 0x44000000 armbianEnv.txt; then env import -t 0x44000000 ${filesize}; fi
 
 # Boot Args
-setenv bootargs "root=${rootdev} rootwait rootflags=data=writeback rw rootfstype=${rootfstype} ${condev} ${amlogic} no_console_suspend fsck.repair=yes net.ifnames=0 elevator=noop hdmimode=${hdmimode} cvbsmode=576cvbs max_freq_a53=${max_freq_a53} max_freq_a73=${max_freq_a73} maxcpus=${maxcpus} voutmode=${voutmode} ${cmode} disablehpd=${disablehpd} ${bootsplash} cvbscable=${cvbscable} overscan=${overscan} consoleblank=0"
+setenv bootargs "root=${rootdev} rootwait rw rootfstype=${rootfstype} ${condev} ${amlogic} no_console_suspend fsck.repair=yes net.ifnames=0 elevator=noop hdmimode=${hdmimode} cvbsmode=576cvbs max_freq_a53=${max_freq_a53} max_freq_a73=${max_freq_a73} maxcpus=${maxcpus} voutmode=${voutmode} ${cmode} disablehpd=${disablehpd} ${bootsplash} cvbscable=${cvbscable} overscan=${overscan} consoleblank=0"
 
 # Set load addresses
 setenv dtb_loadaddr "0x1000000"
    cap1.log cap2.log cap3.log