Stuart Naylor

Research: Log2Zram aka log2ram


https://github.com/StuartIanNaylor/log2ram/tree/log2zram

git clone --single-branch --branch log2zram https://github.com/StuartIanNaylor/log2ram

 

/etc/log2ram.conf says it all.

 

# Configuration file for Log2Ram (https://github.com/azlux/log2ram) under MIT license.
# This file is read by the log2ram service.
# SIZE defines how much RAM the log folder will reserve. If it is not enough,
# log2ram will not be able to use RAM, so check your /var/log folder size first.
# The default of 40M is enough for most applications; increase it if you run a
# server with a lot of logging. Here the original log2ram size is increased to
# an 80M zram device; assuming 3:1 LZO compression, RAM usage should be reduced
# to about 26.66M. Size is in MB.
SIZE=80
# Set to true to use "rsync" rather than "cp". The commands used are "cp -u"
# and "rsync -X", so the whole folder is not copied every time. Be sure rsync
# is installed if you use it.
USE_RSYNC=false
# If there are errors with available RAM space, a system mail will be sent.
# Change to false and you will only get a log entry when RAM runs out.
MAIL=true
# Really only big cores should be counted, but this is where you dictate the
# zram device/stream count. The default is 1 and can be edited to taste.
# BIG_CORES=0 disables zram.
BIG_CORES=1
# Free memory factor: the Debian zram-config default is 50; I find 75 better,
# but it is open. It is the fraction of free memory used for the total swap
# size: drive size = Free_Mem * MEM_FACTOR / 100 / BIG_CORES.
# So for 0.75 of free memory set MEM_FACTOR=75; dividing by 100 avoids floats.
MEM_FACTOR=75
# Compression algorithm, either lzo or lz4.
COMP_ALG=lzo
# ZLTG flag: use a zram ramdisk for log2ram. true=enable / false=disable.
ZLTG=true
# SWAP_PRI sets the swap priority of the zram devices created per BIG_CORES.
# Default=75.
SWAP_PRI=75
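The MEM_FACTOR arithmetic works out like this; a minimal sketch with made-up numbers (the free-memory figure is hypothetical, not from a real board):

```shell
# Sketch of the drive-size formula from the config above, using integer
# arithmetic so no floating point is needed (values are made up).
FREE_MEM_MB=996   # e.g. from: free -m | awk '/^Mem:/{print $7}'
MEM_FACTOR=75     # fraction of free memory, in hundredths
BIG_CORES=1       # number of zram swap devices
DRIVE_SIZE_MB=$(( FREE_MEM_MB * MEM_FACTOR / 100 / BIG_CORES ))
echo "$DRIVE_SIZE_MB"   # 747
```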

It just gives you a bit more control than zram-config_0.5_all.deb, but it is basically that plus log2ram in a single package.

Any issues, tips or ideas, give us a shout here or on GitHub.

4 hours ago, Stuart Naylor said:

zram-config

I don't understand your intention of this on Armbian, as it already ships with ZRAM and log2ram. Can you elaborate on your intention?

 

6 hours ago, Tido said:

I don't understand your intention of this on Armbian, as it already ships with ZRAM and log2ram. Can you elaborate on your intention?

 

There was no real intention; I'm not sure what you use, I just know the Armbian guys are often champions of lower-end hardware.
It's posted because I often see both zram and log2ram together, but the init script of zram runs after log2ram, and there is no option to put log2ram on a compressed drive.
Also, there is no agenda as it's already done; I've been playing with Pi Zeros and Raspbian and got annoyed with how much zram-config_0.5_all.deb sucks.
There is a problem with separate installs: any logs from earlier in the boot are lost by the use of log2ram; even log2ram struggles to give logs and sends them by root mail.
All Linux versions come with zram, but strangely the swap scripts often run late in the boot and there is no log2ram option to use it.
Does Armbian give you a compressed log2ram drive? If not, here is an option.
You guys have probably got something better; this is just my reaction to Raspbian and zram-config_0.5_all.deb, which sucks majorly.
It's fixed at 50% of RAM for zram, probably because the dev was trying to dodge floating point and never thought of using a factor and dividing by 100.
It also creates a zram device for each processor, which is relatively pointless for little cores and robs the big cores of precious RAM swap.
I didn't know you guys had something better, but when I was googling for a compressed-drive log2ram I spotted a couple of your articles.
I assumed, like many, you were stuck with just zram-config_0.5_all.deb, which I have hated with a passion for a long time; it has twizzled my nizzle that the simple addition of a compressed RAM drive for log2ram isn't adopted.

I messaged azlux saying 'hey, it would be cool to have log2ram on a zram drive', got impatient for a reply, so gave an implementation example.
I sent azlux another message saying just to grab and use whatever my branch may offer.

If log2ram doesn't adopt it, you can use that branch if you wish, as maybe, like me, it has bugged you that a simple config for zram plus log2ram on a zram drive doesn't exist.
As I say, no agenda; just me having a fit of frustration with Raspbian's zram-config_0.5_all.deb and log2ram, which totally miss some no-brainer simple options.
While I was at it I added simple options for the zram memory factor, the number of drives (even if I have called it BIG_CORES) and the compression algorithm choice, which for some reason often seem missing from the /etc options.

I was going to ask you guys if there was any point in ext3, or if there is another alternative to get 1024k blocks or maybe lower, but it's actually not worth the install hassle of an extra partition.
There is no big conspiracy; it is posted in general chit-chat and at most that is its only intention.

20 minutes ago, tkaiser said:

Great, I didn't know; just me getting hissy with Raspbian and zram-config_0.5_all.deb.
When googling zram or searching GitHub I missed any reference to it or the ability to grab it for Raspbian.
Wish you offered it as a standalone package, as it would have saved me quite a few hours :)

Hey TK, lzo or lz4? With the Zero I am finding lzo better, but if you have a bit more oomph does lz4 make a better option?
Dunno if you have one of your exhaustive tests anywhere?

I have a Banana Pi Zero here, as I'm having a brief love affair with that low-end form factor, but there also seems to be no Armbian for it.
Is that another lack of knowledge and misconception on my part?

Just had a look, and that is great; your bash code is far more elegant than my hacks. Guys, you should really release that to the wild as a standalone Armbian package.
Only comment: zram_max_devs=${ZRAM_MAX_DEVICES:=4} is correct, but there is no limit to stupidity, and conversely there shouldn't have to be one either.


PS guys, with the curve ball of Debian's zram-config I think you have fallen foul of the same trap I did.
I have to ask: if a zram device can accept multiple streams, then why are we creating a device per core?
Does any other type of swap not have concurrent streams, or require one per core? ...
The kernel documentation quite clearly states that a device supports multiple streams, one per core, unless you deliberately set /sys/block/zram0/max_comp_streams, which you're not.
https://www.kernel.org/doc/Documentation/blockdev/zram.txt

All you are doing is partitioning your swap into multiple smaller chunks, each still accessible by all cores.
zramctl will show the current streams on each device.

I have finished what I suggested to azlux; it will not affect Armbian in default log2ram mode, and I have requested a pull.
https://github.com/StuartIanNaylor/log2ram/tree/master
But yeah, I'm pretty sure we should really just have a single zram swap irrespective of cores, as it gets multiple concurrent streams (based on core count) applied automatically unless deliberately overridden via max_comp_streams.
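A single-device setup like that might be sketched as follows. This is an illustration against the sysfs interface from the kernel zram docs, not the actual branch code; it assumes a fresh module load and must run as root:

```shell
#!/bin/sh
# Sketch: one zram swap device; the kernel sizes the compression streams
# to the online CPU count automatically, so no per-core devices needed.
modprobe zram num_devices=1
# Only override the stream count if you deliberately want to:
# echo 4 > /sys/block/zram0/max_comp_streams
echo lzo  > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 256M > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 75 /dev/zram0   # 75 matches the SWAP_PRI default above
```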

Also, as zram can use any algorithm in /proc/crypto, I tried deflate (zlib): compression goes up from approx 2.5:1 (lz4) to 3.5:1 (deflate).
zramctl only registers the two text strings lzo/lz4, but zram, like I said, can use them all; if you enter a non-existent or null comp_alg, zram will default to lzo.
So zramctl will display no comp alg (as there is no text string for it), but you can tell by the results that it is working and the lzo default is not in operation.
log2ram could probably use higher compression, given the nature of logs as opposed to intensive page swapping.
Currently I have mine with a single 4-stream lzo swap and a deflate log device; it seems to work OK, and I guess I will have to do some benches.
(My crappy Banana Pi M2 Zero; TK, yeah, you couldn't be more correct about it, but I love that form factor with an HDMI.)
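Selecting deflate for the log device would look roughly like this; whether "deflate" is accepted depends on the kernel build, so this is a sketch, not a guaranteed recipe (run as root, before the device is sized):

```shell
# Sketch: use deflate for the log zram device (better ratio on text logs).
# Check the kernel actually offers deflate before trying to select it.
if grep -q deflate /proc/crypto; then
    echo deflate > /sys/block/zram0/comp_algorithm
fi
echo 40M > /sys/block/zram0/disksize
# zramctl only knows the strings "lzo" and "lz4", so its ALGORITHM column
# will be blank here; the compression ratio confirms deflate is in use.
```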

I have asked a question that is basically 'zram-config, wtf?'
https://answers.launchpad.net/ubuntu/+source/zram-config/+question/678905

 

NAME       ALGORITHM DISKSIZE  DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0                40M  2.5M 303.2K  604K       4 /var/log
/dev/zram1 lzo         371.6M    4K    76B    4K       4 [SWAP]


 

 

3 hours ago, Stuart Naylor said:

I have to ask: if a zram device can accept multiple streams, then why are we creating a device per core?

You miss that Armbian still supports devices running with a 3.4 kernel. And obviously you also miss all of the research already done (use forum search) and how Armbian's implementation currently works (for example choosing the most efficient algo automagically depending on kernel capabilities for the log2ram partition)

3 hours ago, tkaiser said:

You miss that Armbian still supports devices running with a 3.4 kernel. And obviously you also miss all of the research already done (use forum search) and how Armbian's implementation currently works (for example choosing the most efficient algo automagically depending on kernel capabilities for the log2ram partition)

To be honest I don't miss anything, because I haven't looked, which is why I am posting in general chit-chat.
But I don't have to, as I can see what you are doing from your GitHub code.
I think applying a device per core is a curve ball thrown by zram-config, as each device supports multiple streams, set automatically by core count.
Chrome OS and Android use a single device, and if you think about it, what swap of any kind do we apply on a per-core basis when it can cater for multiple streams?

Also, echo "0" > /proc/sys/vm/page-cluster: we no longer need to pull pages in multiples of 8 to help a disk mechanism, and it should cut latency.
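For context, page-cluster is the log2 of the number of pages read from swap in one go; the default of 3 (2^3 = 8 pages) is read-ahead for rotating disks, which does nothing useful for zram. A sketch (needs root; the sysctl.d path is an assumption about your distro layout):

```shell
# Disable swap read-ahead for zram-backed swap (2^0 = 1 page per read).
echo 0 > /proc/sys/vm/page-cluster
# Persist across reboots (assumes /etc/sysctl.d is honoured):
echo 'vm.page-cluster = 0' > /etc/sysctl.d/99-zram.conf
```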

I was only stating that it can be any crypto algo, not just lzo/lz4, because the text strings are missing from zramctl; I didn't know that before.
I am finding that, unlike intensive paging, zlib seems to add zero to little overhead; the module name is deflate, though, not zlib.
I had to go and look at the code, and yeah, you are using lz4, lz4hc, quicklz, zlib, brotli and zstd, but I am finding that on minimal installs you are stuck with a choice of lzo, lz4 or deflate (zlib's module name).
I had to glance at your code to see what you are using, and what you were doing with the CPU core count, but didn't go much further.
I tried to find lz4hc but couldn't find a lib for it on Raspbian, apart from a Java one.

I still think multiple smaller devices is a curve ball, though, and I was only showing what I am doing. Deflate is the only crypto available on minimal Raspbian / common CPU installs that gives better compression than lzo/lz4 without chancing too much CPU overhead; at least, the others are not listed in /proc/crypto.
To be honest I am not sure how to quantify 'efficient', as it is a combination of logging overheads and how much you value compression versus disk writes; but I was just saying that if you use an algorithm other than lzo/lz4 then zramctl will report an empty comp alg, as it only contains those two strings.

So I hadn't read what you use; I knew I was doing something similar, but available to all without distro specifics (apart from an Xbian base), so I was just sharing my thoughts in general chit-chat, as I ended up doing similar on Raspbian.
I will have a search of the Armbian forum, but posting a link would have been appreciated more.
Got to page 10 without finding anything, so maybe post a link; it just seemed to be full of complaints about log2ram being full.
 
 


 

3 hours ago, Stuart Naylor said:

Chrome OS & Android use a single device and if you think about it

Why do you ignore the answers you get? I explained that Armbian is not supposed to run on only one device but on many, and some of them still run with kernel 3.4. As such we use up to 4 zram devices. More than one zram device is needed on old kernels and doesn't matter on newer kernels (tested various times; simply search for zram).


Because it does: by dividing by four, the default of 50% of memory actually becomes a 70M size limit per swap device across the 4 devices, which caps paging at a limit that doesn't need to exist and gives no advantage over a single device, so why?
I'm not ignoring anything; I'm just stating my opinion that dividing into multiple swaps is an unnecessary and pointless exercise that actually has drawbacks; it's a curve ball, like I say.
I wondered if you would link your tests, as I did search but didn't find them.
I am going to test myself anyway, as I need to check whether echo "0" > /proc/sys/vm/page-cluster has any effect.
You use the number of cores and, if above 4, limit it to 4.

I was never aware that there was any kernel need, to be honest; I thought zram came supporting streams when it was introduced, around 2012?
But no bother, as I am just talking about now, and like I say it's not needed.


 


Also, I forgot to mention, but it's great to see you have /sys/block/zram0/mem_limit.
It took me a while to work out what this actually is, and really you can then forget about disksize.
mem_limit caps the actual RAM used, since if you are unlucky with your compression you could end up with a device close to disksize.
It sort of makes disksize a weird choice, as mem_limit is the important factor; disksize could be anything really, because mem_limit does the control.
It's a weird one: with mem_limit in place, disksize is maybe mem_limit * expected_compression_ratio * 2, because if you do hit the compression jackpot, why limit it?
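That sizing heuristic would look something like this; the 3:1 ratio is just the LZO estimate used earlier in the thread, and the 100M cap is an arbitrary example (run as root):

```shell
# Sketch: generous disksize, hard RAM cap via mem_limit.
MEM_LIMIT_MB=100
RATIO=3                                       # assumed LZO compression ratio
DISKSIZE_MB=$(( MEM_LIMIT_MB * RATIO * 2 ))   # mem_limit * ratio * 2 = 600M
echo lzo > /sys/block/zram0/comp_algorithm
echo "${DISKSIZE_MB}M" > /sys/block/zram0/disksize
echo "${MEM_LIMIT_MB}M" > /sys/block/zram0/mem_limit  # caps compressed pages in RAM
```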

Is that the way you look at it TK?

What's weird is that mem_limit seems to add overhead; even when set equal to disksize it seems to perform worse than disksize alone.
echo "0" > /proc/sys/vm/page-cluster gives quite a good improvement.

I was a typical drongo with my tests, as I sampled the load averages on a 10-second loop for 20 minutes from a fresh reboot for each run.
It should probably have been every 1 second, with maybe a longer 30-minute test.
So I probably need to go through all this again.

Also, large empty devices seem to incur more overhead than I expected.
 


I have been looking at log2ram after finishing zram-swap-config: every hour it copies any logs that have changed.
That is probably more writes than a normal /var/log in many situations.

1 hour ago, Stuart Naylor said:

It's probably more

This says to me:  you don't know.

Just check your Linux PC (with Ubuntu for example) and then you can compare.


It has nothing to do with another PC, just the default way log2ram works.
Every hour it rewrites any log that has changed since the last sync, even if the change is only a couple of log lines.
In terms of block writes and block wear it is probably more; probably, because it depends on your log usage.
On a desktop, yeah, it's likely much more, especially when those logs are large and full logs are written out to hdd.log each time for what might be one or two entries.
Strangely, there is absolutely no reason for log2ram to do this, and I showed azlux a solution.
It swaps frequent individual log-line writes for a huge increase in bulk overwrites, and in many cases it causes SD writes to be more.
It depends what you are running and the logging it generates.
With high-frequency logging, yeah, log2ram is beneficial, but otherwise you are actually getting nearer to creating more writes, not fewer.

