tionebrr

Members · 23 posts

Reputation Activity

  1. Like
    tionebrr reacted to meymarce in Kobol Team is pulling the plug ;(   
    I am still sad about the news. However, I wish the team the best and some more free time, and hopefully you will regain some of that passion. You have come a long way, from an originally failed Kickstarter campaign for the Helios4 (which still came to life) to the Helios64. The work you have done for the open source community is impressive.
    A personal thank you to @gprovost, @aprayoga and also Dmitriy Sidorovich (whom I have not seen on the forums).
     
    I would be very much interested in how everything worked behind the scenes. Like, how do you find a factory, how does sourcing the parts work, how many prototypes were there? This is something that I guess few people think about and may have as little clue about as I do. It'd be super cool if you could share some of that, obviously only if you don't mind and as far as any NDAs allow.
     
  2. Like
    tionebrr got a reaction from gprovost in Kobol Team is taking a short Break !   
    Good move guys. Have a good one.
  3. Like
    tionebrr reacted to Igor in Kobol Team is taking a short Break !   
    ... because the hardware side wants to provide a working solution instead of piloting a brand new SoC that might have unusable software support for years to come. We would all be frustrated and tense, sales would go down ... WHY?
     
    It is hard to predict more without deep investigation, but users / buyers expect top notch SW support and are willing to pay nothing for it. But someone has to pay that bill. The cost of software support is enormous, much bigger than HW design, and that fact doesn't allow such products to happen. That is certainly not the fault of the few people who try to create wonderful new hardware, or of the folks who support users on a voluntary basis.

    Even with the reasonably well supported, many-years-old RK3399, problems are still present to a greater degree than we all would like.
     
     
    Wish you a fast recovery. I also need that 
  4. Like
    tionebrr reacted to cu6apum in RK3399 - dts - i2s clocking - need help   
    Thanks.
    Actually everything's been done by a kind genius, Mr. Zhang from Rockchip itself. He is a very responsive and caring person! I only had to slightly touch simple-card-utils.c so that the driver sets the clock according to the sample rate (only a fixed multiplier was available OOB).
     
    The docs, as I cited above, say nothing on whether clkin_i2s can be an input. Yes, it can, and even on the same pin. We don't even need to mod the code; everything's done in the device tree!
    We just have to declare this clock in the DTS and (in my case) set it as a gpio-mux-clock, not a fixed one. This was my insane idea; I didn't believe it could work, but it did. The whole clock tree of this CPU looks beautifully logical, and you can do miracles with it once you catch the idea.
    Finally we have a precious hi-end data source for a serious grade DAC. The difference from the Sitara that I love is 6 cores, PCIe for an m.2 SSD, MIPI for a nice display, and lots of nice peripherals.
     
    So, the DTS:
    + osc45m: osc45m {
    +     compatible = "fixed-clock";
    +     #clock-cells = <0>;
    +     clock-frequency = <45158400>;
    + };
    +
    + osc49m: osc49m {
    +     compatible = "fixed-clock";
    +     #clock-cells = <0>;
    +     clock-frequency = <49152000>;
    + };
    +
      clkin_i2s: clkin_i2s {
    -     compatible = "fixed-clock";
    +     compatible = "gpio-mux-clock";
          #clock-cells = <0>;
    -     clock-frequency = <12288000>;
          clock-output-names = "clkin_i2s";
    +     clocks = <&osc45m>, <&osc49m>;
    +     select-gpios = <&gpio0 RK_PB6 GPIO_ACTIVE_HIGH>;
      };
  5. Like
    tionebrr reacted to allen--smithee in Helios64 - freeze whatever the kernel is.   
    @tionebrr
    Source : https://www.rockchip.fr/RK3399 datasheet V1.8.pdf
     
    1.2.1 Microprocessor
     Dual-core ARM Cortex-A72 MPCore processor and Quad-core ARM Cortex-A53 MPCore processor, both are high-performance, low-power and cached application processor
     Two CPU clusters. Big cluster with dual-core Cortex-A72 is optimized for high-performance and little cluster with quad-core Cortex-A53 is optimized for low power.
    <...>
     PD_A72_B0: 1st Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
     PD_A72_B1: 2nd Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
    <...>
     PD_A53_L0: 1st Cortex-A53 + Neon + FPU + L1 I/D Cache of little cluster
     PD_A53_L1: 2nd Cortex-A53 + Neon + FPU + L1 I/D Cache of little cluster
     PD_A53_L2: 3rd Cortex-A53 + Neon + FPU + L1 I/D Cache of little cluster
     PD_A53_L3: 4th Cortex-A53 + Neon + FPU + L1 I/D Cache of little cluster
    <...>
     
    3.2 Recommended Operating Conditions
    The below table describes the recommended operating condition for every clock domain.
    Table 3-2 Recommended operating conditions

    Parameter                          | Symbol     | Min  | Typ  | Max  | Units
    Supply voltage for Cortex-A72 CPU  | BIGCPU_VDD | 0.80 | 0.90 | 1.25 | V
    Supply voltage for Cortex-A53 CPU  | LITCPU_VDD | 0.80 | 0.90 | 1.20 | V
    Max frequency of Cortex-A72 CPU    |            |      |      | 1.8  | GHz
    Max frequency of Cortex-A53 CPU    |            |      |      | 1.4  | GHz
     
  6. Like
    tionebrr reacted to gprovost in Information on Autoshutdown / sleep / WOL?   
    The WoL feature is not supported yet because suspend mode was not supported until now.
     
    Suspend mode has recently been added for the Pinebook Pro (RK3399) in the mainline kernel, so it is just a question of time before we get this on Helios64.
     
    https://github.com/armbian/build/blob/master/patch/kernel/rockchip64-current/board-pbp-add-suspend.patch
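    In the meantime you can at least check what the NIC driver reports; a small sketch (eth0 is a placeholder for the actual interface name):

        # Show supported and currently enabled Wake-on-LAN modes.
        sudo ethtool eth0 | grep -i wake-on

        # Enable wake on magic packet (only useful once suspend itself works).
        sudo ethtool -s eth0 wol g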
  7. Like
    tionebrr reacted to clostro in Mystery red light on helios64   
    Just wanted to report that the CPU frequency mod has been running stable under normal use for 15 days now (on a 1 GbE connection). Haven't tried the voltage mod.
     
    I'll switch to the February 4th Buster 5.10 build soon.
     
     
    Edit: 23 days, and I shut it down for an SD card backup and system update. The CPU freq mod is rock solid.
  8. Like
    tionebrr got a reaction from gprovost in Mystery red light on helios64   
    @gprovost It still crashes, but only after a lot more time (days). It may be a separate issue, because those crashes don't end in a blinking red LED. I even suspect that the system is still partially up when it happens, but I haven't verified that. I found an old RPi 1 in a drawer yesterday, so I will be able to tell more the next time it happens.
  9. Like
    tionebrr reacted to Igor in ZFS on Helios64   
    ZFS was just upgraded to v2.0.1 https://armbian.atlassian.net/browse/AR-614 which is preparation for pushing kernels at and above 5.10.y. The upgrade was tested on Armbian Buster/Bullseye/Focal/Hirsute. My test zpool always loaded without any trouble.

    We only support upgrade troubles - for ZFS functional problems, please check or file a bug report here: https://github.com/openzfs/zfs/issues
     
  10. Like
    tionebrr reacted to gprovost in Wake on lan: what is missing to make it work   
    @aprayoga will help answer in detail @retrack's question on what the blocker is right now to getting WoL working on Helios64. But in a nutshell, it is because suspend mode is not properly supported yet.
     
    I think we all understood that the question was addressed to us, the Kobol team; no one was expecting the core Armbian team to follow up on this.
    So let's move on and not hold any grudges against each other... we are the only ones to blame for not having answered this thread quickly enough.
     
    @Igor Maybe we could delete the exchange above and put that behind us.
  11. Like
    tionebrr reacted to Igor in Wake on lan: what is missing to make it work   
    Armbian is all about the "Kobol issue": kernel, low-level support. I am not saying or judging how big a role we are playing in this particular case, but we are supporting them and they are supporting us in this "Kobol" issue. And Kobol do support us in - I believe - the best possible way they can.

    Armbian is not yet another Linux distribution like Debian, Ubuntu, Arch, Manjaro and many smaller ones ... which just use the low-level support and distribute their distribution. We provide a Debian / Ubuntu userland, slightly modified, improved and fixed.
     
    But the core of the project is the build engine: a lightweight "Yocto" / "buildroot" which gives you the opportunity to build your own Linux distro. We are focused on ARM single-board computers, so you can only build it for those - for hardware that is supported - but it's relatively easy to add more hardware if it is supported in u-boot / the kernel. It's just a matter of a few config files. But the problem is maintenance - if you don't have it, things surely start to break down. We tag boards as "supported" where we pay attention and maintain them, vs. others that are mainly community, random-person or vendor-only supported builds. Those are part of Armbian as unofficial builds.
     

    Things are very much mixed up, but I would rather back down. I have enough other troubles. I am only answering the "how to help" part. We are asking for help - money is not the biggest issue, since we actually have our own jobs and are the core sponsor, working on common goals. I would like to help with this interesting issue, if you help us with the boring ones and help us hire people to relieve the overloaded and overstressed volunteers - or else to tell you to just give us a break and stop asking for more ... That's my primary motive for speaking up.
     

    It is a drop-in replacement for one server only. Same internet access, same power consumption. A network access upgrade is also planned, but that is not an urgent issue. We have several servers around, mainly donated, bare metal and virtualized, but so far nobody has provided a better alternative for dedicated machinery to build, let's say, 500 images really, really fast - again and again, whenever things fail ... A Threadripper is still the cheapest way. You are welcome to get engaged in https://armbian.atlassian.net/browse/AR-444 We still need to design and maintain our infrastructure and talk with partners in this area. Developing and maintaining this is already a full-time position, but it is currently covered by a few people on the side.
     

    It's the sum of unavoidable costs and the work which needs to be done to keep this system running. This is estimated at 50-80 working hours every day. Responding to threads like this is included, even though I would usually skip sticking my nose in. I can easily blow a whole day, every day, if I react to emails, PMs and a few forum posts. Sometimes I actually do ... 
     
     
    No hard feelings, but I have a life to catch up on and many other problems on the list, so I can't deal with this problem, even though it's appealing from a technical perspective. I am actually a bit frustrated that I am unable to help.
  12. Like
    tionebrr got a reaction from Werner in Copy speed; What can I expect?   
    You can monitor your ZFS speed by issuing "zpool iostat x", with x being the time between each report in seconds. You can put some crazy value there, like 0.001 s.
    There is one catch... I think the read/write values are the cumulative speed across all disks (including parity), and not the true usable data R/W of the pool.
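    For a per-disk view of where those numbers come from, something like this helps (mypool is a placeholder pool name, 1 is the reporting interval in seconds):

        # Per-vdev and per-disk read/write statistics, refreshed every second.
        zpool iostat -v mypool 1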


    On my Helios, I'm getting about 140 MB/s true reads and 80 MB/s true writes for a raidz2 on 4 very old and already dying HDDs:
    2x   https://www.newegg.com/hitachi-gst-ultrastar-a7k1000-hua721010kla330-1tb/p/N82E16822145164  
    2x   https://www.newegg.com/seagate-barracuda-es-2-st31000340ns-1tb/p/N82E16822148278


    Writing, from dd's point of view: [screenshot]

    The same write from zpool iostat's point of view: [screenshot]

    And wow, ZFS is learning fast... Files get cached in RAM. I get crazy read speeds after successive reads of the same file: [screenshot]
    I also tried to copy a large folder from an NTFS USB SSD to ZFS, and the write speed reported by zpool iostat was about 140 MB/s. So yeah, this is not a definitive answer about write speed, but our measurements are in the same ballpark at least.
    I believe the write speed could be optimized, but only at the cost of general-purpose usability. For my use cases it is enough speed and redundancy. I won't go into a speed race only to find later that I've trapped myself in a corner case.
     
    Edit: it looks like the caching is actually done by the kernel in this case (I have not configured any ZFS cache).

    If someone wants to make an automated read/write test script, you can flush the kernel cache as described here (a rough sketch follows):
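    A minimal sketch, assuming the pool is mounted at /mypool (the path, file name and size are placeholders; run as root because of drop_caches):

        # Sequential write test; fdatasync makes dd flush to disk before reporting a speed.
        dd if=/dev/zero of=/mypool/speedtest.bin bs=1M count=4096 conv=fdatasync

        # Drop the kernel page cache so the read test hits the disks, not RAM.
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # Sequential read test.
        dd if=/mypool/speedtest.bin of=/dev/null bs=1M

        # Clean up.
        rm /mypool/speedtest.bin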
     
  13. Like
    tionebrr reacted to Werner in [META] Would it be possible to have an atom feed per thread ?   
    You can try this. Create a new activity stream with the settings you like: [screenshot]

    Then subscribe to it via RSS: [screenshot]
  14. Like
    tionebrr reacted to yay in zfs read vs. write performance   
    sync; echo 3 > /proc/sys/vm/drop_caches

    Check Documentation/sysctl/vm.txt for details on drop_caches.
  15. Like
    tionebrr got a reaction from gprovost in Copy speed; What can I expect?   
  16. Like
    tionebrr reacted to michabbs in [TUTORIAL] First steps with Helios64 (ZFS install & config)   
    Well, generally it is a good idea to partition your disk before you use it. :-)
    For example like this:
    # gdisk -l /dev/sda
    GPT fdisk (gdisk) version 1.0.5

    Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

    Found valid GPT with protective MBR; using GPT.
    Disk /dev/sda: 3907029168 sectors, 1.8 TiB
    Model: WDC XXX
    Sector size (logical/physical): 512/4096 bytes
    Disk identifier (GUID): XXX
    Partition table holds up to 128 entries
    Main partition table begins at sector 2 and ends at sector 33
    First usable sector is 34, last usable sector is 3907029134
    Partitions will be aligned on 2048-sector boundaries
    Total free space is 2014 sectors (1007.0 KiB)

    Number  Start (sector)    End (sector)  Size        Code  Name
       1            2048         1050623    512.0 MiB   EF00  EFI System
       2         1050624      3907029134    1.8 TiB     BF01  ZFS draalpool
    I use only one partition (here: sda2) for ZFS, and I may use other ones (here: sda1) for other things. It is usually a bad idea to use the whole disk for ZFS, because when you take the disk out and put it into another computer, it is not guaranteed that it will not try to "repair" the partition table (which you do not have...).
    In the above example I created the EFI partition just in case I ever need to move this disk to another computer (at some unknown point in the future...) and do something with it (no matter what...). Most likely it will never be used for anything. But it is there - just as an option. :-)
    Anyway - if you have already created ZFS on the whole disk - it should work just fine.
     
    Yes, it was just an example. There are plenty of ways you could create a ZFS pool. Generally a 3-disk RAIDZ and a 2-disk MIRROR are the most common.
     
    In ZFS you should never specify disks (or partitions...) by their /dev/sdXY path. The path is not guaranteed to be consistent between reboots. Actually, it is guaranteed to be inconsistent when you remove a disk or swap disks. Your pool might fail to import after a reboot if the disks have different paths assigned. Always use UUIDs!
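    If you already created a pool using /dev/sdX names, it can usually be switched over to stable IDs by re-importing it; a rough sketch, assuming the pool is called mypool:

        # Export the pool, then re-import it while telling ZFS to scan the
        # stable by-partuuid links instead of the /dev/sdX device nodes.
        sudo zpool export mypool
        sudo zpool import -d /dev/disk/by-partuuid mypool

        # Verify which device paths the pool references now.
        zpool status -P mypool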
     
    Possibly nothing is wrong with them. I always use the official Docker repository because it supports Ubuntu and provides the newest stable version.
     
  17. Like
    tionebrr reacted to michabbs in [TUTORIAL] First steps with Helios64 (ZFS install & config)   
    Focal / root on eMMC / ZFS on hdd / LXD / Docker
     
    I received my Helios64 yesterday, installed the system, and decided to write down my steps before I forget them. Maybe someone will be interested. :-)
     
    Preparation:
    Assemble your box as described here. Download the Armbian Focal image from here and flash it to an SD card. You may use Etcher. Insert the SD card into your Helios64 and boot. After 15-20 s the box should be accessible via ssh. (Of course you have to find out its IP address somehow. For example, check your router logs or use this.)
    First login:
    ssh root@IP
    Password: 1234
    After the prompt, change the password and create your daily user. You should never log in as root again. Just use sudo in the future. :-)
    Note: The auto-generated user is a member of the "disk" group. I do not like that. You may remove it like so: "gpasswd -d user disk".
     
    Now move your system to eMMC:
    apt update
    apt upgrade
    armbian-config
    # Go to: System -> Install -> Install to/update boot loader -> Install/Update the bootloader on SD/eMMC -> Boot from eMMC / system on eMMC
    You can choose the root filesystem. I have chosen ext4. Possibly f2fs might be a better idea, but I have not tested it.
    When finished - power off, eject the SD card, power on.
    Your system should now boot from eMMC.
     
    If you want to change the network configuration (for example, to set a static IP), use "sudo nmtui".
    You should also change the hostname:
    sudo armbian-config
    # Go to: Personal -> Hostname
     
    ZFS on hard disk:
    sudo armbian-config
    # Go to Software and install headers.
    sudo apt install zfs-dkms zfsutils-linux
    # Optional:
    sudo apt install zfs-auto-snapshot
    # Reboot when done.
    Prepare the necessary partitions - for example using fdisk or gdisk (a small sketch follows).
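    For instance, with sgdisk (from the gdisk package), mirroring the layout shown earlier; /dev/sdX is a placeholder for the actual disk, and this is destructive, so double-check the device first:

        # 512 MiB EFI partition (just in case), then the rest of the disk as Solaris/ZFS (BF01).
        sudo sgdisk -n 1:0:+512M -t 1:EF00 -c 1:"EFI System" /dev/sdX
        sudo sgdisk -n 2:0:0     -t 2:BF01 -c 2:"ZFS"        /dev/sdX

        # Check the layout and note the PARTUUIDs for zpool create.
        sudo gdisk -l /dev/sdX
        ls -l /dev/disk/by-partuuid/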
    Create your ZFS pool, more or less this way:
    sudo zpool create -o ashift=12 -m /mypool mypool mirror /dev/disk/by-partuuid/abc123 /dev/disk/by-partuuid/xyz789
    Reboot and make sure the pool is imported automatically (for example by typing "zpool status").
    You should now have a working system with root on eMMC and a ZFS pool on the HDDs.
     
    Docker with ZFS:
    Prepare the filesystem:
    sudo zfs create -o mountpoint=/var/lib/docker mypool/docker-root
    sudo zfs create -o mountpoint=/var/lib/docker/volumes mypool/docker-volumes
    sudo chmod 700 /var/lib/docker/volumes

    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/docker-root
    sudo zfs set com.sun:auto-snapshot=true mypool/docker-volumes
    Create /etc/docker/daemon.json with the following content:
    { "storage-driver": "zfs" }  
    Add /etc/apt/sources.list.d/docker.list with the following content:
    deb [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    # deb-src [arch=arm64] https://download.docker.com/linux/ubuntu focal stable
    Install Docker:
    sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
    curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
    sudo apt update
    sudo apt install docker-ce docker-ce-cli containerd.io

    # You might want this:
    sudo usermod -aG docker your-user
     
    Voila! Your Docker should be ready! Test it: "docker run hello-world".
     
    Option: Install Portainer:
    sudo zfs create mypool/docker-volumes/portainer_data
    # You might omit the above line if you do not want a separate dataset for the docker volume (a bad idea).
    docker volume create portainer_data
    docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce
    Go to http://yourip:9000 and configure.
     
     
    LXD with ZFS:
     
    sudo zfs create -o mountpoint=none mypool/lxd-pool
    sudo apt install lxd
    sudo lxd init
    # Configure ZFS this way:
    Do you want to configure a new storage pool (yes/no) [default=yes]? yes
    Name of the new storage pool [default=default]:
    Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]: zfs
    Create a new ZFS pool (yes/no) [default=yes]? no
    Name of the existing ZFS pool or dataset: mypool/lxd-pool
    [...]

    # You might want this:
    sudo usermod -aG lxd your-user

    # Option: If you use zfs-auto-snapshot, you might want to consider this:
    sudo zfs set com.sun:auto-snapshot=false mypool/lxd-pool
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/containers
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/custom
    sudo zfs set com.sun:auto-snapshot=true mypool/lxd-pool/virtual-machines
    That's it. LXD should now work on ZFS. :-)
     