tionebrr

Everything posted by tionebrr

  1. @cu6apum Oh, I definitely bookmarked that thread. It would be great to see your notes in a recap, yes.
  2. Guys, try to run your Helios64 in whatever mode you like, but always under 1.4GHz. I have mine set to the performance governor, throttling between 400MHz and 1.4GHz, and uptime is 9 days now. Only two of the CPU cores can run above 1.4GHz (see these posts for reference), and that might be one of the causes of the crashes.
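    For anyone who wants to try the same settings, here is a minimal sketch of how it can be done through the standard cpufreq sysfs interface (the frequency values are the nearest steps I know of on the RK3399 to 400MHz and 1.4GHz; on Armbian the same thing can usually be made persistent via /etc/default/cpufrequtils):

    ```bash
    # Sketch: pin every cpufreq policy to the performance governor,
    # capped between 408 MHz and 1.416 GHz. Values are in kHz; run as root.
    for policy in /sys/devices/system/cpu/cpufreq/policy*; do
        echo performance > "$policy/scaling_governor"
        echo 408000      > "$policy/scaling_min_freq"
        echo 1416000     > "$policy/scaling_max_freq"
    done
    ```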
  3. I can confirm: I'm running with a 400MHz to 1.4GHz range, and it's been up for 9 days.
  4. Not sure. I changed both policies to 1.6GHz and I get the same min/max as you. Changed back to 1GHz and both policies are the same again. Okay, and I just noticed that not all cores have the same frequencies available, so maybe the policies are made to fit the available frequencies? And yep, setting the min/max to 1.4GHz makes both policies equal. Not sure why some cores don't accept 1.6 and 1.8GHz though. Isn't this supposed to be the same clock for every core?
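    For reference, this is roughly how the two policies can be compared (a sketch assuming the usual sysfs layout; each policy lists which cores it governs and which frequency steps they support):

    ```bash
    # Show, for each cpufreq policy, which cores it governs and which steps they support.
    for policy in /sys/devices/system/cpu/cpufreq/policy*; do
        echo "== $policy =="
        echo "cores:       $(cat "$policy/affected_cpus")"
        echo "frequencies: $(cat "$policy/scaling_available_frequencies")"
        echo "min / max:   $(cat "$policy/scaling_min_freq") / $(cat "$policy/scaling_max_freq")"
    done
    ```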
  5. I had instability too; I just dialed back on performance and it hasn't crashed in a while. However, throttling is disabled: I fixed the cpufreq at about 1GHz. I haven't had time to do more testing.
  6. @gprovost It still crashes, but only after a lot more time (days). It may be a separate issue, because those crashes don't end in a blinking red LED. I even suspect that the system is still partially up when it happens, but I haven't verified that. I found an old RPi 1 in a drawer yesterday, so I will be able to tell more the next time it happens.
  7. Thanks @aprayoga, I'm testing this right now with the 'ondemand' governor and throttling across the full frequency range. Edit 7h later: it just crashed. Going back to 1800MHz locked.
  8. Hello all, I suspected that the freezes could be related to throttling, so I restricted the governor to a single frequency (1.8GHz) and I haven't had a freeze since. Kind of a sledgehammer workaround, but well...
  9. Thanks for the answer @aprayoga. Alright, this I understand. Too bad for the 2.5G chips... This is interesting... I'm a hardware guy, so I have no clue how the interrupt control registers are unmasked by the kernel or what the suspend procedure looks like. But there are several possible scenarios if the PHY is acting weird with its interrupt pin: the 1G PHY is reset before suspend, which pulls INTB low, which triggers the event, and the driver clears the interrupt before suspending? Is this possible, guys?
  10. Thanks a lot @aprayoga. I'm not sure I understand which firmware you are talking about. Is this the firmware on the 2.5Gbit chip? If I understand correctly, the 1G PHY interrupt is not open-drain and is actively driving the shared interrupt pin high, preventing the 2.5G PHY from pulling it down? Can you share the manufacturer references of both PHYs? I would love to take a look at the datasheets out of curiosity. A quick fix might be to scratch the 1G PHY INT trace on the board XD
  11. So if I understand correctly: the memory caching I've seen is not actually managed by ZFS but by the kernel?
  12. Hello all, I rely quite heavily on RSS for my news feeds. However, I can only see two feeds on the Armbian forum: new threads and new posts. As you can imagine, I get quite a lot of pings from the posts feed, and I can't follow older threads from the new-threads feed. Are there more feeds, or would it be possible to add a per-thread feed to the forum?
  13. Read speeds are quite hard to measure. If you test the read speed of the same file multiple times, ZFS will cache it; I'm getting up to 1GBps read speed after reading the same files 4 times in a row. Your file might also be cached if you just wrote it. By the way, if someone knows how to flush that cache, that would be helpful for running tests.
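    For completeness, the usual knobs look something like this (a sketch; 'tank/bench' is a made-up scratch dataset, and the ZFS ARC is a separate cache from the page cache, so exporting/importing the pool or disabling primarycache keeps it out of the picture):

    ```bash
    # Flush the Linux page cache (needs root):
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Keep ZFS from caching file data on a scratch dataset used for benchmarks:
    zfs set primarycache=metadata tank/bench   # 'tank/bench' is a hypothetical dataset
    ```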
  14. You can monitor your ZFS speed by issuing "zpool iostat x", with x being the interval between reports in seconds. You can put some crazy value there, like 0.001s. There is one catch... I think the read/write values are the cumulative speed of each disk (parity included), not the true usable data R/W of the pool. On my Helios64, I'm getting about 140MBps true reads and 80MBps true writes for a raidz2 on 4 very old and already dying HDDs:
    2x https://www.newegg.com/hitachi-gst-ultrastar-a7k1000-hua721010kla330-1tb/p/N82E16822145164
    2x https://www.newegg.com/seagate-barracuda-es-2-st31000340ns-1tb/p/N82E16822148278
    Writing to the pool from dd's point of view, and the same write from iostat's point of view (output screenshots omitted). And wow, ZFS is learning fast... files get cached in RAM, and I get crazy read speeds after successive reads of the same file (screenshot omitted).
    I also tried to copy a large folder from an NTFS USB SSD to ZFS, and the write speed reported by zpool iostat was about 140MBps. So yeah, this is not a definitive answer about write speed, but our measurements are in the same ballpark at least. I believe the write speed could be optimized, but only at the cost of general-purpose usability. For my use cases it is enough speed and redundancy; I won't go into a speed race only to find later that I trapped myself in a corner case.
    Edit: it looks like the caching is actually done by the kernel in this case (I have not configured any ZFS cache). If someone wants to make an automated read/write test script, you can flush the kernel cache as described here:
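    A rough sketch of what such an automated test could look like; the dataset name, paths and sizes are made up, and compression is turned off so that /dev/zero gives honest numbers:

    ```bash
    #!/bin/bash
    # Hypothetical write/read benchmark for a pool named 'tank' mounted at /tank.
    # Watch the per-disk side of it from another shell with:  zpool iostat -v tank 1
    set -e

    zfs create -o compression=off tank/bench 2>/dev/null || true
    FILE=/tank/bench/testfile

    # Write pass: 4 GiB, with an fsync at the end so buffered data really reaches the disks.
    dd if=/dev/zero of="$FILE" bs=1M count=4096 conv=fsync

    # Drop the kernel caches so the read pass is not served from RAM.
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Read pass.
    dd if="$FILE" of=/dev/null bs=1M

    rm "$FILE"
    ```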
  15. Yeah, I agree... it looks scary. But those files are only symlinks actually; the true block devices are located at /dev/sdX#. I wonder what happens if you rm them... I guess they are just references, but would Linux actually let you remove one?
    Edit: I tried, and yes, you can also rm the /dev/sdX nodes. It doesn't look like it affects the ZFS pool at all. I'm doing a scrub before rebooting and will edit again later.
    Edit edit: Wait, I didn't actually rm the ZFS partitions from /dev, I just removed sda.
    Edit edit edit: yup, I can still write to my pool even after doing rm /dev/sda*. So I don't really know. I guess those files are just handles created at boot time or when a block device gets enumerated, and everything is back after a reboot. I don't know why there isn't a way to do `zpool import -d /dev/disk/by-id/ata-*`. That would be perfect.
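    To make the symlink point concrete (commands only, no fake output; the ata-<MODEL>_<SERIAL> naming is just how those links usually look):

    ```bash
    # The by-id entries are just symlinks back to the real block nodes,
    # e.g. ata-<MODEL>_<SERIAL> -> ../../sda
    ls -l /dev/disk/by-id/

    # The /dev nodes themselves are (re)created by udev, so a removed one
    # comes back after a reboot, or after asking udev to re-enumerate:
    udevadm trigger --subsystem-match=block
    ```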
  16. Thanks a lot @michabbs. Great information there, I'm learning a lot. I think I've found a simpler way to create the pool without having to copy/paste UUIDs: you can create it using /dev/sdX, then export it and import it back while forcing zpool import to look in the by-partuuid folder. One more thing: it looks like it can even be done so that the disks are identified as ata-MANUFACTURER_REF_SERIALNUMBER. To do this I had to sudo rm /dev/disk/by-id/wwn-* so that zpool only finds ata-* devices when reimporting the pool. I have made the transition several times without problems now, and the only issue I stumbled upon is at export: if some services are already using directories in the pool, you need to shut them down in order to export the pool cleanly. There is a PR in the Kobol wiki. Cheers
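    For anyone else doing the same migration, the whole dance looks roughly like this (a sketch; the pool name is an example, and removing the wwn-* links is the workaround described above, those links reappear after a reboot):

    ```bash
    # Stop whatever is using the pool first (Docker, Samba, ...), then, as root:
    zpool export tank

    # Hide the wwn-* aliases so the import resolves the ata-* names instead.
    rm /dev/disk/by-id/wwn-*

    # Re-import, telling zpool to look up devices in /dev/disk/by-id.
    zpool import -d /dev/disk/by-id tank

    # The vdevs should now show up as ata-MANUFACTURER_REF_SERIALNUMBER.
    zpool status tank
    ```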
  17. Hello @michabbs, thanks for that wonderful tutorial. It makes me realize how much of a noob I am, haha. I have a hard time figuring out what this command does and what partition I would want to create beforehand: I'm creating my ZFS using only `zpool create tank raidz2 sda sdb ...` and then the datasets with `zfs create tank/<stuff>`. I guess you are mirroring instead of creating a raidz, and you are also specifying the drives by ID instead of by path. What are the ups and downs of both methods? Second question: I see you are not using the Docker packages available by default. What is wrong with the originals? Outdated?
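    To make the question concrete, a sketch of the two styles being compared (pool and device names are only examples):

    ```bash
    # One raidz2 vdev over the short /dev names:
    zpool create tank raidz2 sda sdb sdc sdd

    # A mirror over stable by-id names (placeholder identifiers):
    zpool create tank mirror \
        /dev/disk/by-id/ata-DISK_MODEL_SERIAL1 \
        /dev/disk/by-id/ata-DISK_MODEL_SERIAL2
    ```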
  18. tionebrr

    ZFS on Helios64

    @grek Hey, thanks a lot. I finally found what the issue was. The -current- headers were still 5.9.11 in the repos until recently, but the latest download from Kobol was already packaged with the 5.9.14 kernel, so modprobe couldn't find the 5.9.14 kmod for ZFS after the build. I'm not experienced enough to know what the downsides of using 2.0.0 to get out of trouble might be (if any). I finally built ZFS via DKMS and will stick to that for now. Thanks for the help.
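    For the record, the DKMS route boils down to something like this on Armbian (package names from memory, so treat this as a sketch; the headers have to match the running kernel for the module build to succeed):

    ```bash
    # Headers for the Armbian 'current' rockchip64 kernel used on the Helios64,
    # then zfs-dkms, which rebuilds the module against whatever headers are installed.
    apt install linux-headers-current-rockchip64
    apt install zfs-dkms zfsutils-linux
    modprobe zfs
    ```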
  19. tionebrr

    ZFS on Helios64

    Hello folks. I received my Helios64 a few days ago. I was able to build ZFS 2.0.0 with the script from @grek against the 5.9.14 release. However, the Docker environment built the kmod for 5.9.11, and I cannot find the 5.9.14 headers in the repos... I can rebuild at will, but where is the headers package?