Stuart Naylor


  1. Stuart Naylor

    F2FS revisited

    @tkaiser From what I have seen, performance is similar to ext4, and I am not sure it still suffers the level of performance degradation over time that it once had. Just wondering if you Armbian guys ever did any tests on block wear vs ext4, as I have never really seen many real-world examples. In theory f2fs should take a dumb old cheap SD card and use the board's CPU to provide high-quality wear-levelling algorithms and over-provisioning. Not really sure how you would capture the block wear as a comparison, aside from the perf tests, however.
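    One rough way to capture it, assuming the card shows up as mmcblk0, might be to diff the sectors-written counter in /proc/diskstats around an identical workload on each filesystem:

        # field 10 of /proc/diskstats is sectors written for the whole device
        before=$(awk '$3 == "mmcblk0" {print $10}' /proc/diskstats)
        # ... run the identical workload on ext4, then again on f2fs ...
        after=$(awk '$3 == "mmcblk0" {print $10}' /proc/diskstats)
        echo "sectors written: $((after - before))"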
  2. I have changed to 1080p and it is almost stable, but pull up any window with a moderate to heavy workload, such as Chromium, and the screen starts bouncing around. I think you know what I mean by bouncy: stuttery, flashy. I thought that would just be the fbturbo driver and the fex script, but nope, and I am a bit stuck. There is a Jan 2019 Xenial image on the OrangePi site, using the legacy kernel, that is rock solid at 1080p, so it must be fixable, but I am not sure what to change. I ran into some permission errors with NodeJS & PM2, so I thought I would give Armbian another go; the first images I tried were the latest, less stable Bionic/Stretch, which at 1080p are also like watching a box of frogs. fbturbo, the X11 settings and the fex script all seem OK. Anyone got any pointers?
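    For reference, the device section I am comparing against is roughly the stock one from the fbturbo README (identifier and fbdev path may differ per image):

        Section "Device"
                Identifier  "Allwinner A10/A13 FBDEV"
                Driver      "fbturbo"
                Option      "fbdev" "/dev/fb0"
                Option      "SwapbuffersWait" "true"
        EndSection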
  3. Stuart Naylor

    research International handling charges

    Must have just gone over, as it was £24.36, or just unlucky, as my BananaPoo M2 Zero escaped the net. It came from this "Border Force" HWDC thing I had never heard of before, as I just thought it was HMRC. PS I am not English, honest, just a European with a penchant for English football.
  4. Is everybody being hit with customs & handling charges via Royal Mail? It added a total of £11.72 to my OrangePi H3 Plus2: £3.72 customs and an £8.00 handling charge. It's no bother, just curiosity, but with smaller-format boards I am wondering if this is a Brexit shape of things to come?
  5. Stuart Naylor

    RAM Disk with Zram

    I have done an all-in-one based on an /etc/ztab that lets you create any number of swaps / zram-backed dirs, and also /var/log with log shipping to persistent storage: https://github.com/StuartIanNaylor/zram-config It should be great for block wear and near-RAM-speed file performance, or for extending memory via compression. If any of the Armbian guys want to grab, hack or rename it then please do. It is a bit fresh, so it needs some input and use, as I am sure there will be something. To install:

        sudo apt-get install git
        git clone https://github.com/StuartIanNaylor/zram-config
        cd zram-config
        sudo sh install.sh

    Any issues please post on GitHub and, as said, grab, hack or comment on improvements.
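    For a flavour, a ztab entry looks roughly like this; hypothetical values and column layout from memory, the repo README has the real spec:

        # type  alg  mem_limit  disk_size  target_dir  bind_dir
        swap    lz4  250M       750M
        log     lz4  50M        150M       /var/log    /log.bind
        dir     lz4  50M        150M       /srv/data   /data.bind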
  6. Stuart Naylor

    RAM Disk with Zram

    https://www.phoronix.com/scan.php?page=news_item&px=ZRAM-Linux-5.1-Better-Perform I guess at one stage
  7. Stuart Naylor

    RAM Disk with Zram

    PS I thought that was a good idea, and a zram drive for any purpose might be a good one. I need to tidy the conf info and readme.md, but in zramdrive.conf there are 2 options. HDD_DIR is the dir you are going to create, 'any name', as the persistent dir the original gets moved to via a bind mount; ZRAM_DIR is the dir you want to hold in zram.

        HDD_DIR=/opt/moved
        ZRAM_DIR=/var/backups

    https://github.com/StuartIanNaylor/zramdrive
  8. Stuart Naylor

    RAM Disk with Zram

    Tido is right: it is an example of zram creating a RAM drive, one that uses the zram drive for log files. It would take minimal change for it to be dbstorage instead, and you could use the same mechanisms on start and stop to transfer to media to make it persistent. With lz4 you would get minimal overhead and near-RAM speed with large compression gains, or you could use pure tmpfs RAM. I don't like how log2ram writes out full files every hour, so I did my own version, https://github.com/StuartIanNaylor/log2zram but it is all based on azlux's work, which in turn is based on https://debian-administration.org/article/661/A_transient_/var/log

    With log2ram or log2zram you could just change the bind directory, which basically moves the directory somewhere else so you can mount a RAM drive in its place. That is it: create a bind directory to move your directory, mount your RAM drive in the location that was moved, copy the data to RAM and go. On stop you just do the reverse and copy the data back to the moved directory, then unmount the RAM drive and the bind, and it is persistent (there is a minimal sketch of this at the end of this post).

    With azlux and log2ram, each hour it copies out to the moved directory, but that can still leave up to an hour's gap, which IMHO is probably pointless: if you have lost the last hour through using volatile memory storage, I am not really sure of the benefit of the hours preceding. That only matters if you get a complete crash with no orderly shutdown, which you can probably trigger with a watchdog, but if you lose up to the last hour it is lost. It is probably worse with logs, as the info about the crash is very likely to be lost and what you do have is of little interest. For most usages the complete crash is the exception and not likely to occur, apart from pulling the plug.

    zram is a hotplug system and a simple check is:

        if [ ! -d "/sys/class/zram-control" ]; then
            # module not loaded yet: load it and take the zram0 it creates
            modprobe --verbose zram
            RAM_DEV='0'
        else
            # already loaded: hot-add a new device and use the number returned
            RAM_DEV=$(cat /sys/class/zram-control/hot_add)
        fi

    I have the same routine in log2zram and zram-swap-config, so either should be able to run first with the modprobe of zram0 or just hot_add another. https://github.com/StuartIanNaylor/zram-swap-config does the same as https://github.com/StuartIanNaylor/log2zram and you just have to be aware of other zram services that don't check for previous existence, or be sure you run after them.
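    As a minimal sketch of that start/stop sequence (hypothetical paths and sizes, no error handling):

        # start: bind the on-disk dir aside, mount zram over it, seed from disk
        mkdir -p /log.bind
        mount --bind /var/log /log.bind
        echo lz4 > /sys/block/zram0/comp_algorithm
        echo 64M > /sys/block/zram0/disksize
        mkfs.ext4 /dev/zram0
        mount /dev/zram0 /var/log
        cp -a /log.bind/. /var/log/

        # stop: copy the RAM copy back to disk, then unwind the mounts
        cp -a /var/log/. /log.bind/
        umount /var/log
        umount /log.bind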
  9. Stuart Naylor

    research Log2Zram aka log2ram

    It has nothing to do with another PC, it is just the default way log2ram works: every hour it overwrites any log that has changed since the last run, even if the change might be a couple of log lines. In terms of block writes and block wear it is probably more, not less, though it depends on your log usage. On a desktop, yeah, it is likely much worse, especially when those logs are large and it is writing out full logs to hdd.log each time for what might be a couple of entries or a single one. Strangely there is absolutely no reason for log2ram to do this, and I showed azlux a solution. As it stands it swaps the frequency of individual log-line writes for a huge increase in bulk overwrites, and in many cases that causes SD writes to go up. It depends what you are running and the logging it generates: with high-frequency logging, yeah, L2R is beneficial, but otherwise you are actually getting nearer to creating more writes rather than fewer.
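    A sketch of the sort of fix I mean: a delta sync that only rewrites changed blocks rather than whole files (hypothetical paths):

        # --inplace + --no-whole-file forces rsync's delta algorithm on local copies
        rsync -a --inplace --no-whole-file --delete /var/log/ /var/hdd.log/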
  10. Stuart Naylor

    research Log2Zram aka log2ram

    I have been looking at log2ram after I finished zram-swap-config, and every hour it copies out any logs that have changed. That is probably more writes than a normal /var/log in many situations.
  11. Stuart Naylor

    zram vs swap

    What I ended up doing has the grand title of 'swappiness load balancer', in my usual pretty poor scripting skills, but it works: https://github.com/StuartIanNaylor/zram-swap-config/blob/master/zram-swap-config-slb

    If loadavg < 1 it will creep vm.swappiness up to 100 in increments of 5 every 20 sec; if loadavg > 1 it drops back to 80 asap; if loadavg > 2 it lowers to 70 asap; roughly as in the sketch below.

    I read about the general disinterest in tweaks and tuning, and yeah, most of the settings are already set to the best point for all occasions and manners of operation. You can tweak some that will suit certain manners of operation, but you can push many to much higher levels if they can react dynamically and back off on the forms of process load they relate to. Again, profiles for certain tweaks and tunes will probably have a bias to certain modes of intended operation. I am sure you guys could provide far more finesse than I can and release them as individual .debs.

    I stopped flooding azlux/log2ram with zram swap guff and split it into 2, so that L2R just does what it says and now has the option of a single zram backing store.
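    A bare-bones sketch of the idea (not the actual script, just the rules above):

        #!/bin/sh
        while :; do
            load=$(cut -d '.' -f1 /proc/loadavg)    # integer part of 1-min loadavg
            cur=$(cat /proc/sys/vm/swappiness)
            if [ "$load" -ge 2 ]; then
                echo 70 > /proc/sys/vm/swappiness             # heavy load: drop asap
            elif [ "$load" -ge 1 ]; then
                echo 80 > /proc/sys/vm/swappiness             # moderate: back to 80
            elif [ "$cur" -lt 100 ]; then
                echo $((cur + 5)) > /proc/sys/vm/swappiness   # quiet: creep up by 5
            fi
            sleep 20
        done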
  12. Stuart Naylor

    zram vs swap

    I sort of finally plumped for this; dunno if you guys want to use or grab it, so please do: https://github.com/StuartIanNaylor/log2ram/blob/master/Dev Notes

        pi@raspberrypi:~ $ stress --vm 6 --vm-bytes 128M --timeout 60s
        stress: info: [2693] dispatching hogs: 0 cpu, 0 io, 6 vm, 0 hdd
        stress: info: [2693] successful run completed in 61s

    On the mighty Pi0.

    PS, just as an opinion, I do find how you have locked all your routines into an Armbian-only offering sort of crazy and insular. Crazy, as with the wealth of your stuff there should be a plethora of references to installing and using your gear, splashing Armbian strongly across the internet. zram-config and various other utils would have Pi owners thinking "what is this Armbian thing then"... but sadly not. Dunno who or why, but that sort of framework doesn't compute with me. Just opinion, and probably to your detriment as much as mine, but hey, and cheers for the info...

    With four fixed-size swaps:

        pi@raspberrypi:~ $ zramctl
        NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
        /dev/zram0               160M  2.3M 573.6K 1000K       1 /var/log
        /dev/zram1 lz4          54.2M    4K    63B    4K       1 [SWAP]
        /dev/zram2 lz4          54.2M    4K    63B    4K       1 [SWAP]
        /dev/zram3 lz4          54.2M    4K    63B    4K       1 [SWAP]
        /dev/zram4 lz4          54.2M    4K    63B    4K       1 [SWAP]
        pi@raspberrypi:~ $ stress --vm 2 --vm-bytes 512M --timeout 60s
        stress: info: [906] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
        stress: FAIL: [906] (415) <-- worker 908 got signal 9
        stress: WARN: [906] (417) now reaping child worker processes
        stress: FAIL: [906] (451) failed run completed in 28s

    With a single swap:

        pi@raspberrypi:~ $ zramctl
        NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
        /dev/zram0               160M  2.5M 614.6K    1M       1 /var/log
        /dev/zram1 lz4         650.2M    4K    64B    4K       1 [SWAP]
        pi@raspberrypi:~ $ stress --vm 2 --vm-bytes 512M --timeout 60s
        stress: info: [837] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
        stress: info: [837] successful run completed in 61s

    Same uncompressed mem_limit of 50% of total in both cases: the single device's disk size is 4 * 54.2M scaled for the expected 3:1 compression, as disk size is a virtual size. The single larger swap passes where the four smaller ones fail.
  13. Stuart Naylor

    zram vs swap

    I am on a Pi Zero at the moment, which has no NEON, and lz4 with zram provides less overhead. I have noticed that at extremely short durations lzo can sometimes be better, but 95% of the time lz4 is approx 20% lighter than lzo. I have been reading through, and my presumption is that the lzo binary must have better NEON optimisation than the lz4 one, as it doesn't otherwise make sense that lzo is faster; and when you are on a lowly Arm with no NEON it pans out that lz4 is better, as you are running both without NEON extensions. I just wondered if anyone had ever compiled lz4 from https://github.com/lz4/lz4 with

        armcc --cpu Cortex-XX --vectorize -O3 -Otime --diag_warning=optimizations source.c

    and maybe given it a go.

    I am still trying to find what vm.swappiness you plumped for; 100 seems to work best for me, as dropping down to 80 raises my overall load by almost 20%. For the Pi Zero, zram with vm.swappiness = 90 and /proc/sys/vm/page-cluster = 0 seems to provide lower overall load (set as in the snippet below).

    What I am thinking is that ram2log with zram control will run a cron job that increments / decrements vm.swappiness based on processor load, as a high vm.swappiness of 90 or above starts to add to the already intense boot. But after the first 2 minutes, when things are much quieter, it reduces load by almost 20%; there is no dynamic adjustment and it certainly could benefit from one. Anyone any ideas, maybe something more elegant than cron?
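    Pinning those two as a sysctl drop-in (hypothetical file name, values from the tests above):

        # /etc/sysctl.d/99-zram-tune.conf
        vm.swappiness = 90
        vm.page-cluster = 0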
  14. Stuart Naylor

    research Log2Zram aka log2ram

    Also forgot to mention, but it is great to see you have a /sys/block/zram0/mem_limit. It took me a while to work out what this actually is, and really you can then almost forget about disksize. mem_limit caps the actual memory used, whereas if you are unlucky with your compression you could otherwise end up with a disk close to disksize. That makes disksize a weird choice as the control, since mem_limit is the important factor and disksize could be anything really, as mem_limit does the controlling. If you implement mem_limit, then disksize is maybe mem_limit * expected_compression_ratio * 2: if you do hit the compression jackpot, why limit it. Is that the way you look at it TK?

    What is weird is that mem_limit seems to add overhead: even when set the same as disksize it seems to perform worse than disksize alone. echo "0" > /proc/sys/vm/page-cluster gives quite a good improvement.

    I was a typical drongo with my tests, as I had the load averages on a 10-second loop for 20 mins from a fresh reboot for each run. It should probably have been every 1 second, with maybe a longer 30-minute test, so I prob need to go through all this again. Also, large empty devices seem to incur more overhead than I expected.
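    For reference, a hypothetical setup along those lines, with invented sizes and assuming a 3:1 expected ratio:

        echo lz4  > /sys/block/zram0/comp_algorithm
        echo 100M > /sys/block/zram0/mem_limit    # hard cap on RAM actually used
        echo 600M > /sys/block/zram0/disksize     # virtual size ~ mem_limit * 3 * 2
        mkswap /dev/zram0 && swapon /dev/zram0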
  15. Stuart Naylor

    research Log2Zram aka log2ram

    Because it does: you limit the paging size by dividing by four. With the default of 50% of mem that works out at roughly a 70M swap size per device across the 4 devices, a limit on any single paging action that doesn't really need to be there and gives no advantage over not dividing, so why do it. It is not ignoring anything, it is just stating my opinion that dividing into multiple swaps is an unnecessary and pointless exercise that actually has drawbacks; it is a curve ball, like I say. I wondered if you would link your tests, as I did search but didn't find them. I am going to test myself anyway, as I need to check whether echo "0" > /proc/sys/vm/page-cluster has any effect. You use the number of cores and, if above that, limit to 4. I was never aware that there was ever any kernel need for that, to be honest; I thought zram came with multi-stream support when it was introduced, 2012-ish? But no bother, as I am just talking about now, and like I say it is not needed.
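    As a quick sanity check that a single device is already multi-threaded, the stream count is visible in sysfs (reading it is safe; the value shown varies by kernel and board):

        cat /sys/block/zram0/max_comp_streams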