Showing results for tags 'helios64'.

  1. Hello, I have a little problem with my Armbian/Helios64. I use a USB HDD for "backup" (SnapRAID parity) that only needs to run every two days, so I unmount the drive and put it into standby (hdparm -y /dev). All goes well, but after some time the drive wakes up again (without being mounted) and I don't know how I can prevent this. Does anyone have an idea where I can look? Thanks for the help.
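To find out when the drive wakes, it can help to log its power state over time and correlate the timestamps with cron jobs and monitoring daemons. A minimal sketch, assuming the disk sits at a hypothetical /dev/sdX; the parser just extracts the state word from `hdparm -C` output:

```shell
#!/bin/sh
# Pull the drive state ("standby", "active/idle", ...) out of `hdparm -C` output.
parse_state() {
    awk -F': *' '/drive state is/ {print $2}'
}

# Log the state once a minute (run as root; /dev/sdX is a placeholder).
log_loop() {
    while true; do
        printf '%s %s\n' "$(date)" "$(hdparm -C /dev/sdX | parse_state)"
        sleep 60
    done
}

# Demo on canned hdparm output:
printf '/dev/sda:\n drive state is:  standby\n' | parse_state
```

smartd polling is a common cause of unexpected spin-ups, so its schedule is a good first thing to check against the log.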
  2. Focal / root on eMMC / ZFS on HDD / LXD / Docker

     I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)

     Preparation:
     Assemble your box as described here. Download the Armbian Focal image from here and flash it to an SD card. You may use Etcher. Insert the SD card into your Helios64 and boot. After 15-20 s the box should be accessible via ssh. (Of course, you have to find out its IP address somehow - for example, check your router logs or use this.)

     First login:
       ssh root@IP
       Password: 1234
     After the prompt, change the password and create your daily user. You should never log in as root again; just use sudo in the future. :-)
     Note: the auto-generated user is a member of the "disk" group. I do not like that. You may remove it like so: "gpasswd -d user disk".

     Now move your system to eMMC:
       apt update
       apt upgrade
       armbian-config   # Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC
     You can choose the root filesystem; I chose ext4. Possibly f2fs might be a better idea, but I have not tested it. When finished, power off, eject the SD card, and power on. Your system should now boot from eMMC.
     If you want to change the network configuration (for example, to set a static IP), use "sudo nmtui". You should also change the hostname:
       sudo armbian-config   # Go to: Personal -> Hostname

     ZFS on hard disk:
       sudo armbian-config   # Go to Software and install the kernel headers.
       sudo apt install zfs-dkms zfsutils-linux
       sudo apt install zfs-auto-snapshot   # Optional
       # reboot
     Prepare the necessary partitions, for example using fdisk or gdisk, and create your ZFS pool. More or less this way:
       sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789
     Reboot and make sure the pool is imported automatically (for example by typing "zpool status"). You should now have a working system with root on eMMC and a ZFS pool on the HDDs.

     Docker with ZFS:
     Prepare the filesystem:
       sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
       sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
       sudo chmod 700 /var/lib/docker/volumes
     Optional: if you use zfs-auto-snapshot, you might want to consider this:
       sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
       sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
     Create /etc/docker/daemon.json with the following content:
       { "storage-driver": "zfs" }
     Add /etc/apt/sources.list.d/docker.list with the following content:
       deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
       # deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
     Install Docker:
       sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
       curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
       sudo apt update
       sudo apt install docker-ce docker-ce-cli containerd.io
       sudo usermod -aG docker your-user   # You might want this.
     Voila! Your Docker should be ready. Test it: "docker run hello-world".
     Optional - install Portainer:
       sudo zfs create mypool/docker-volumes/portainer_data
       # You might omit the above line if you do not want a separate dataset for the Docker volume (a bad idea).
       docker volume create portainer_data
       docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
     Go to http://yourip:9000 and configure.

     LXD with ZFS:
       sudo zfs create -o mountpoint=none mypool/lxd-pool
       sudo apt install lxd
       sudo lxd init
     Configure ZFS this way:
       Do you want to configure a new storage pool (yes/no) [default=yes]? yes
       Name of the new storage pool [default=default]:
       Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
       Create a new ZFS pool (yes/no) [default=yes]? no
       Name of the existing ZFS pool or dataset: mypool/lxd-pool
       [...]
     You might want this:
       sudo usermod -aG lxd your-user
     Optional: if you use zfs-auto-snapshot, you might want to consider this:
       sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
       sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
       sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
       sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines
     That's it. LXD should now work on ZFS. :-)
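The `zpool create` step above has a strict argument order (options first, then the pool name, then the vdev spec) that is easy to get wrong. As a small sketch, assuming hypothetical partition UUIDs, a helper can assemble the command so the order stays right:

```shell
#!/bin/sh
# Assemble a 'zpool create' command for a mirrored pool.
# Order matters: options, then pool name, then vdev type, then devices.
build_zpool_cmd() {
    pool="$1"; shift
    printf 'zpool create -o ashift=12 -m /%s %s mirror' "$pool" "$pool"
    for dev in "$@"; do
        printf ' %s' "$dev"
    done
    printf '\n'
}

# Hypothetical partition UUIDs; substitute your own from /dev/disk/by-partuuid/:
build_zpool_cmd mypool /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789
```

Printing the command before running it under sudo is a cheap way to double-check a destructive operation.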
  3. I'm trying to get the desktop showing through the USB-C connector with an appropriate USB-C to HDMI cable. I don't find errors in Xorg.0.log, but I still get no signal to my TV. I've installed the desktop via armbian-config. I've never driven any display over USB-C before, so I think I need some pointers to debug this and get it working... I did not find any howtos or the like.
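Before digging deeper into Xorg, it can be worth checking whether the kernel's DRM layer sees the DisplayPort connector at all; each connector exposes a status file under /sys/class/drm. A small sketch (connector names vary by board, so treat the output as informational):

```shell
#!/bin/sh
# List every DRM connector and whether the kernel sees an attached display.
list_connectors() {
    for c in /sys/class/drm/card*-*; do
        [ -e "$c/status" ] || continue
        printf '%s: %s\n' "${c##*/}" "$(cat "$c/status")"
    done
}

# True when a connector status string reports an attached display.
is_connected() {
    [ "$1" = "connected" ]
}

list_connectors
```

If the relevant connector reports "disconnected" even with the cable plugged in, the problem is below Xorg (DP alt-mode negotiation or the kernel driver), not in the desktop setup.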
  4. Hi! I am playing around with customizing my boot script (to later install my custom distro), and I am fighting with the recovery mode. It took me a long time to understand that I have to press the recovery button for a "long time", then release it very quickly to get UMS mode instead of falling into Maskrom mode. Indeed, when U-Boot detects the recovery button keypress, it proposes UMS, then very shortly afterwards Maskrom mode. Is it possible to configure U-Boot to wait longer in UMS mode, let's say 5 seconds, and after that delay switch to Maskrom mode as the last failover? For info: I have Armbian running on eMMC, and I experiment within the /boot folder, then roll back my trials in Recovery/UMS mode. Kind regards, Xavier Miller.
  5. My Helios64 was running fine with 5 HDDs for a couple of weeks, then I suddenly couldn't access the shares anymore and I noticed only two HDDs were running (the rest having no indicator lights). HDD slots 3, 4 and 5 appear "dead": no power-up on boot, no indicator lights. I have checked both the +12V and +5V rails and they are present, and everything is plugged in. I can move the HDDs from slots 3, 4 and 5 to slots 1 and 2 and they will power up just fine. The hard drives also work in other machines. I'm a bit lost as to what might be wrong, any ideas?
  6. Is it intended to integrate OpenZFS 2.0.0, and if so, is the timing already known?
  7. Does anybody have an idea what these messages being written to dmesg mean?!
     Linux Hell64 5.9.11-rockchip64 #20.11.1 SMP PREEMPT Fri Nov 27 21:59:08 CET 2020 aarch64 GNU/Linux
     -snip-
     [Wed Dec 9 04:37:51 2020] NOHZ: local_softirq_pending 08
     [Wed Dec 16 22:54:43 2020] NOHZ: local_softirq_pending 08
     [Thu Dec 17 02:35:38 2020] NOHZ: local_softirq_pending 08
     Anything to be alarmed about?
  8. My NAS does very little overnight, so my NAS4 shuts down in the evening and shortly afterwards a physical time switch cuts the power, reinstating it in the morning, at which time the NAS4 automatically reboots. I want to do the same with the NAS64, but even though I've read this (https://wiki.kobol.io/helios64/auto_poweron/) I'm not sure how to go about it. I can't tell whether it only works if the UPS battery has died, or whether I have to run a script before shutdown, or both. Could you please clarify? Thx ... Nick
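The evening half of this, at least, can be automated with a plain cron entry; the morning half then depends on the auto-power-on behavior asked about above. A sketch, assuming a 23:00 daily shutdown (the helper only formats the crontab line):

```shell
#!/bin/sh
# Format a crontab line that runs a command daily at HH:MM.
cron_line() {
    hour="$1"; min="$2"; shift 2
    printf '%s %s * * * %s\n' "$min" "$hour" "$*"
}

# Daily shutdown at 23:00 -- append the printed line via `crontab -e` as root:
cron_line 23 00 /sbin/shutdown -h now
```

My reading of the linked wiki page is that power-on-after-power-loss is the mode relevant to a time-switch setup, but that is exactly what's worth confirming in this thread.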
  9. Setup: Helios64 with Armbian 20.08.8 Buster with Linux 4.4.213-rk3399 (will change to Linux 5.8 soon, though), 5 x 8 TB drives, not using the battery.
     I've had this issue ever since I began using my Helios. It first happened when I tried building a RAID with OpenMediaVault: it would crash and reboot while building, sometimes after only 10 minutes, sometimes after 8+ hours of working. The same is true for video transcoding nowadays: it will sometimes crash after a little while, sometimes after a few consecutive hours of working, but it never manages to work much longer than 12 hours on such a task. Copying large files from the internet carries a similar risk of crashing it.
     Also, it does not save the crash in the logs. For some reason, the timeframe of the crash is always missing from the logs. Nowadays it's even worse: my logs only go up to December 11th, and even if I clear them in the OMV interface, they are back again after the next reboot and it refuses to log anything new.
     While the server is performing a demanding task, most figures on the ssh screen are red: system load at around 170%, CPU temperature at 70°C, etc. CPU usage in the OMV interface is around 97%. At first I thought this was normal, but a friend of mine told me servers should absolutely never reboot on their own, and that this is an indication something is not right.
     My impression from the behavior described above is that somehow my machine isn't able to limit the amount of CPU used. I expected the server to become slower under more CPU stress, but instead it seems not to regulate itself well at all and overworks itself. Or maybe some part of the software/hardware is faulty and randomly causes a crash.
     As you may have noticed, I'm rather new to a lot of things here. I have no idea how to even begin troubleshooting this, so I'll need some pointers from you. What could be the cause, and what tests can I do to narrow it down?
  10. I'm using Armbian 20.11.1 Focal (5.9.11-rockchip64) on my Helios64. When there is heavier load via Ethernet (SFTP to a SATA Ultrastar 12TB disk) for something like more than an hour, the entire system will freeze. I'm not sure whether it actually freezes, but I lose all SSH connections and cannot connect via SSH anymore unless I restart the Helios64. It happened four times today and is totally reproducible. I haven't tested another drive yet. What makes this difficult to debug is that nothing relevant is written to /var/log/syslog, kern.log or faillog about what state the device is in. Is this a known issue? How can I find out what the problem is?
  11. I got my box yesterday and assembled it (it looks gorgeous, and was reasonably easy to assemble). Without external power and with the battery connected, green LED 1 is on. With external power, orange LED 9 is on in addition to that. The FTDI chip is recognized regardless of power status, and I can connect to the console. After pressing the power button, green LEDs 2 and 3 and blue LEDs 4 and 5 light up immediately, and after that nothing else happens: nothing on the serial console, no LEDs change status. I tried disconnecting the battery. Then, without external power, all LEDs are dark (but the FTDI chip is still visible); with external power, orange LED 9 blinks and the rest is exactly the same: four more LEDs light up on pressing the power button, and there are no other signs of life. It looks to me as if either U-Boot is absent/corrupted/inaccessible, or the SoC does not enter boot mode. No jumpers are connected. There is no CR1225 (the board arrived without it, but with the 18650 pack). Any advice? (I would be comfortable playing with low-level boot tools.)
  12. Hi there, so I'm having a problem connecting the Helios to the router. I've tried different LAN cables and different ports. Everything was working fine yesterday; I did a reboot and there is no more connection. I'm completely new to this. I was installing Home Assistant and Plex before this happened, but I believe I didn't mess with anything to do with the network. I don't know what I might have done. I can connect to the Helios via PuTTY, and after logging in, the part where the IP should be shown is empty. Any advice would be most appreciated. Again, I'm completely new to this.
  13. After installing the hardware and software, my HDD 1 is inactive. I swapped disks 1 and 5, and 1 is still inactive. I checked the connections in the box but I don't see any problems. Could this be due to a software error?
  14. Hello, sorry for my bad English, I'm French. I'm using "Armbian_20.11.4_Helios64_buster_current_5.9.14.img.xz". During the startup sequence I get the following errors:
     GPT 0x3380ec0 signature is wrong recovery gpt...
     GPT 0x3380ec0 signature is wrong recovery gpt fail!
     LoadTrust Addr:0x4000
     No find bl30.bin
     No find bl32.bin
     Load uboot, ReadLba = 2000
     Load OK, addr=0x200000, size=0xdd6b0
     RunBL31 0x40000
     NOTICE: BL31: v1.3(debug):42583b6
     NOTICE: BL31: Built : 07:55:13, Oct 15 2019
     NOTICE: BL31: Rockchip release version: v1.1
     INFO: GICv3 with legacy support detected. ARM GICV3 driver initialized in EL3
     INFO: Using opteed sec cpu_context!
     INFO: boot cpu mask: 0
     INFO: plat_rockchip_pmu_init(1190): pd status 3e
     INFO: BL31: Initializing runtime services
     WARNING: No OPTEE provided by BL2 boot loader, Booting device without OPTEE initialization. SMC`s destined for OPTEE will return SMC_UNK
     ERROR: Error initializing runtime service opteed_fast
     INFO: BL31: Preparing for EL3 exit to normal world
     INFO: Entry point address = 0x200000
     INFO: SPSR = 0x3c9
     U-Boot 2020.07-armbian (Dec 15 2020 - 08:45:45 +0100)
     SoC: Rockchip rk3399
     Reset cause: RST
     DRAM: 3.9 GiB
     PMIC: RK808
     SF: Detected w25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB
     MMC: mmc@fe320000: 1, sdhci@fe330000: 0
     Loading Environment from MMC... *** Warning - bad CRC, using default environment
     How can I solve them?
  15. Hello, I recently updated my Helios64 board from 5.9.11 to 5.9.14 using `apt upgrade`, and since then I'm unable to boot the board, which is stuck at `Starting kernel ...`. Here is the log of what got upgraded. I tried to reinstall the latest release to eMMC through U-Boot (`Armbian_20.11.3_Helios64_buster_current_5.9.14.img`, and also tried `Armbian_20.11_Helios64_buster_current_5.8.17.img`), but I still get the same `Starting kernel ...` message. I also tried to boot from SD card using jumper P10, but I'm still stuck at `Starting kernel ...`. I also tried to install it using Maskrom mode. Everything runs correctly, but it's still stuck at `Starting kernel ...`.
     ~/a/rkbin-master ❯❯❯ lsusb -d 2207:330c
     Bus 002 Device 014: ID 2207:330c Fuzhou Rockchip Electronics Company RK3399 in Mask ROM mode
     ~/a/rkbin-master ❯❯❯ sudo tools/rkdeveloptool db rk3399_loader_v1.24_RevNocRL.126.bin
     Downloading bootloader succeeded.
     ~/a/rkbin-master ❯❯❯ sudo tools/rkdeveloptool wl 0 ../Armbian_20.11_Helios64_buster_current_5.8.17.img
     Write LBA from file (100%)
     ~/a/rkbin-master ❯❯❯ sudo tools/rkdeveloptool rd
     Reset Device OK.
     Here is the log from when I boot the board. I tried everything I found in the wiki but without any success, and I'm running out of ideas. Do you have any idea why the kernel is not booting? Regards,
  16. I followed the installation tutorial in detail, but I'm stuck at the installation of OMV. I can't access it after the installation using the IP address or the generic link helios.local. I tried reinstalling the configuration and going through a boot from the micro SD... Is there a command line to verify that OMV is installed? Or another way to install it?
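To answer the "is OMV actually installed" question from the command line: dpkg can confirm the package is present. A sketch; the helper extracts the version column from `dpkg -l` output (the "ii" flag means installed), and the demo feeds it a made-up sample line:

```shell
#!/bin/sh
# Extract the version column from a `dpkg -l <pkg>` listing ("ii" = installed).
# Exits non-zero when the package is not in the installed state.
pkg_version() {
    awk '$1 == "ii" {print $3; found=1} END {exit !found}'
}

# On the Helios64 you would run:
#   dpkg -l openmediavault | pkg_version
# Demo on a made-up dpkg line (the version string is a placeholder):
printf 'ii  openmediavault  5.5.23-1  all  description\n' | pkg_version
```

If dpkg shows the package but the web UI is unreachable, checking `systemctl status openmediavault-engined` and whether anything is listening on port 80 (`ss -tlnp`) narrows it down further.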
  17. Hi, I bought a 2-pin piezo buzzer to use for alarm messages. It is connected to the P4 header on the board. When I execute the "beep" command in the console I only get short clicking noises. Are special kernel drivers necessary for it to work correctly?
  18. Hello, when is support for Wake-on-LAN planned? I want to keep my NAS powered off and power it on through a Raspberry Pi when I need it. Do you think it is planned for the next days or the next months? Thanks
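For what it's worth, the sending side is already simple: assuming the `wakeonlan` package on the Raspberry Pi and a placeholder MAC address, a magic packet is a one-liner. Whether the Helios64's Ethernet PHY can act on it while powered off is exactly the open question here. The helper below only sanity-checks the MAC format:

```shell
#!/bin/sh
# Validate an aa:bb:cc:dd:ee:ff style MAC address.
is_mac() {
    printf '%s\n' "$1" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'
}

# On the Raspberry Pi (placeholder MAC -- use the NAS's real one):
#   sudo apt install wakeonlan
#   is_mac "aa:bb:cc:dd:ee:ff" && wakeonlan aa:bb:cc:dd:ee:ff
is_mac "aa:bb:cc:dd:ee:ff" && echo "MAC looks valid"
```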
  19. Hi all, I just received my Helios64 and built it this weekend. I followed the instructions and everything was going well. After installing OMV I tried to access it via the network and realised that my unit wasn't connecting over LAN. I have tried both ports on the back and I am unable to connect; I just get the flashing orange light (no green light). I have tried using a different cable and different ports on the router, a network restart and a system restart, but nothing. If anyone can offer a suggestion of what else to try, it would be appreciated.
  20. I got my Helios64 recently, and I wonder what type of filesystem to choose for the main storage. I have five 4TB disk drives. I've installed OMV, made a RAID5 and formatted it with XFS. I use XFS because I work with this FS on CentOS on my x86 servers; I don't use ext4 because I don't like it when it sometimes starts to check the filesystem during the boot process. My setup has been working perfectly for a couple of weeks: I don't have any reboots or freezes or any of the issues that I've read about on this forum. Now I'm thinking about migrating from XFS to btrfs or ZFS. The main reason is to have file checksumming and scrub. Does anybody have btrfs with raid5 or ZFS with raidz1 on a Helios64? Do you have any problems with it? Also, as far as I know, Synology uses btrfs on top of mdraid. Is it OK to use it like this on the Helios64? I've read debates about these filesystems on this forum, but I would like to know if anybody is really using them on a Helios64. This is my version: Linux helios64 5.9.11-rockchip64 #20.11.1 SMP PREEMPT Fri Nov 27 21:59:08 CET 2020 aarch64 GNU/Linux
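For reference when weighing the options: with five disks, raidz1 and raid5 both give one-disk redundancy and four disks' worth of capacity. A sketch with the candidate commands kept as comments (disk names are placeholders; note that btrfs's native raid5/6 code has long carried stability warnings in the btrfs documentation, which is one reason for the Synology-style btrfs-on-mdraid stack mentioned above):

```shell
#!/bin/sh
# Candidate layouts for five disks (one-disk redundancy each):
#   ZFS raidz1:
#     sudo zpool create -o ashift=12 tank raidz1 /dev/sd[a-e]
#   btrfs on mdraid (the Synology-style stack):
#     sudo mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[a-e]
#     sudo mkfs.btrfs /dev/md0

# Usable-capacity rule of thumb: (total - parity) disks' worth.
usable_disks() {  # usable_disks <total-disks> <parity-disks>
    echo $(( $1 - $2 ))
}
usable_disks 5 1   # five disks with raidz1/raid5 -> four disks of capacity
```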
  21. I ran the 5.8/5.9 kernel for weeks without any issues (no RAM instability, no crashes, no Ethernet issues). Now I reinstalled the thing and went back to a 4.4 kernel to have USB-C support. The thing is... about once an hour, the 1 Gbps Ethernet disconnects. Once connected through the serial port, I can check the port status, and it's shown as `Unknown`. I can run `ip link set down eth0; ip link set up eth0` to bring it back up. Am I the only one with this issue? Is it well known on 4.4 and solved on 5.9?
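Until the root cause is found, the manual down/up workaround can be automated. A sketch (interface name and poll interval are assumptions) that reads /sys/class/net/<if>/operstate, which is also a quick way to confirm the "Unknown" status from a script, and bounces the link when it leaves the "up" state:

```shell
#!/bin/sh
# Return success when the operstate string calls for a link bounce.
needs_bounce() {
    [ "$1" != "up" ]
}

# Watchdog loop (run as root, e.g. from a systemd service or tmux).
watch_link() {
    ifc="${1:-eth0}"
    while true; do
        state=$(cat "/sys/class/net/$ifc/operstate" 2>/dev/null || echo missing)
        if needs_bounce "$state"; then
            echo "$(date): $ifc is '$state', bouncing"
            ip link set "$ifc" down
            ip link set "$ifc" up
        fi
        sleep 30
    done
}

# Demo of the decision helper:
needs_bounce unknown && echo "unknown -> bounce"
```

Logging each bounce with a timestamp also gives you a record of how often the link actually drops.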
  22. My Helios64 has been running without issue for quite a while -- at least a month. Now: no power. The last logs on the SD card look to be from early yesterday morning. D9 is solid amber with the battery, flashing without it. No other LEDs on the board are illuminated. I've removed it from the case and disconnected the previously attached drives just in case. On initial power the fans spin briefly and then ... nothing further. I've tapped and held various combinations of power and reset, etc. -- no change. I can confirm with a voltmeter that the original supply is making +12V. I can confirm a separate known-working ATX supply on J10 provides a constant +12V, and the board shows the same behavior. I have tried jumpering P15 to force ATX priority -- still nothing. Thoughts? Is it dead? --T
  23. If anyone in the US is looking for a Helios64 but doesn't want to wait for the next batch, I'm probably selling mine. I ended up using an older Windows desktop to build a server; it took all of an hour to set up. I wanted to use the Helios64 because it uses a lot less power and I wanted the computer for another project, but I don't have the time to keep fiddling with it trying to get it to actually function. If no one wants it, I'm just going to strip the case down into an HDD enclosure and cut my losses.
  24. In both Buster and Focal 20.11.1 on the Helios64, commands directed to a disk increment fields in /proc/diskstats. This is incorrect and does not occur in the x86 version of Xubuntu Focal. Specifically (see https://www.kernel.org/doc/Documentation/iostats.txt for field descriptions): Check Power Mode increments "number of reads completed"; Smart Return Status, Smart Read Data and Smart Read Log increment "number of reads completed" and "number of sectors read". The smartd disk monitor uses these commands, which makes the disk appear to be active to programs like hd-idle. I hope this is the correct place to report this bug. Thanks, Jeff
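The report is easy to reproduce with a before/after snapshot of the counters. A sketch, with sdX as a placeholder device; the helper pulls field 4 ("number of reads completed") and field 6 ("number of sectors read") for one device out of /proc/diskstats:

```shell
#!/bin/sh
# Print "reads-completed sectors-read" for one device from diskstats input.
read_counters() {
    awk -v dev="$1" '$3 == dev {print $4, $6}'
}

# To reproduce on the Helios64 (sdX is a placeholder):
#   read_counters sdX < /proc/diskstats
#   sudo smartctl -H /dev/sdX            # or: hdparm -C /dev/sdX
#   read_counters sdX < /proc/diskstats  # the counters should NOT have moved
# Demo on a canned diskstats line:
printf '   8       0 sda 1200 5 9600 300 0 0 0 0 0 0 0\n' | read_counters sda
```

If the counters change across a pure SMART query, that matches the behavior described in the report.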
  25. I have the Helios with two 10 TB disks mounted as a btrfs RAID1 array. This setup is stable, although I have not stressed it a lot, except when I do a btrfs scrub. It is using the 20.11.1 Buster image on SD card. The disks hold about 900 GB of files. A btrfs scrub causes a crash with the following kernel output. Is this a known bug?