
zram vs swap


tkaiser


22 hours ago, sfx2000 said:

swap_tendency = mapped_ratio/2 + distress + vm_swappiness

 

(for the lay folks - the 0-100 value in vm.swappiness is akin to the amount of free memory in use before swapping is initiated - so a value of 60 says that as long as we have 60 percent free memory we don't swap; if less than that, we start swapping out pages - it's a weighted value)

 

My understanding of vm.swappiness, based on this and that, is quite the opposite. It is not a percentage, as explained here. My understanding is that the whole relevance of vm.swappiness is the trade-off between swapping out application memory and dropping filesystem cache (influenced by vm.vfs_cache_pressure, which defaults to 100). 'Everyone on the Internet' is telling you high vm.swappiness values favor high IO performance over app performance/latency. Every test I made so far tells a different story as long as 'zram only' is considered.

 

The tests I already did were with heavy memory overcommitment (running jobs that require 2.6 GB physical memory with just 1 GB RAM available). All the tests I did with lower swap activity also told me to increase vm.swappiness to the maximum possible (since apps finished faster).

 

Another 4 hours spent on the following test:

 

Again compiling the ARM Compute Library. This time on a NanoPi M4 with 4 GB DRAM and kernel 4.4. First test with almost empty filesystem cache (no swapping happened at all even with vm.swappiness=100 -- just as expected):

real	31m54.478s
user	184m16.189s
sys	5m57.955s

Next test after filling up filesystem caches/buffers to 2261 MB with vm.swappiness=100:

real	31m58.972s
user	184m40.198s
sys	5m58.623s

Now the same with vm.swappiness=60 and again filling up caches/buffers up to 2262 MB:

real	32m1.119s
user	184m28.312s
sys	5m59.243s

In other words: light zram-based swapping with vm.swappiness=100 (filesystem caches already somewhat filled) increased benchmark execution time by 4.5 seconds or ~0.2%. Switching to vm.swappiness=60 increased execution time by another 2.2 seconds (or ~0.1%). Rule of thumb when benchmarking: differences lower than 5% should be considered identical numbers.
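
For anyone trying to reproduce these runs: how the caches were filled isn't stated above. One generic way is to stream a large directory tree through the page cache, e.g.:

find /usr -type f -exec cat {} + >/dev/null 2>&1
free -m    # the buff/cache column shows how much landed in the page cache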

 

So I still need a testing methodology that could convince me that a Linux kernel default chosen 15 years ago, when we had neither intelligent use of DRAM (zram/zswap) nor fast storage (SSDs) but only horribly slow spinning rust that is orders of magnitude slower than DRAM access, makes any sense any more.

 

Background monitoring for vm.swappiness=100 and vm.swappiness=60. @Tido since I don't want to waste additional hours on this I hope you choose an appropriate testing methodology and monitoring when you provide insights into why you feel vm.swappiness=100 is the wrong number.
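
For anyone who wants to replicate the background monitoring (the exact tooling used isn't shown above; these generic procps/util-linux commands are one way to do it):

vmstat 5       # si/so columns show pages swapped in/out per interval
zramctl        # compressed vs. uncompressed data per zram device
grep -E 'SwapTotal|SwapFree' /proc/meminfo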

 

Just for the record: while zram is a nice improvement I still consider the situation with Linux (or Android, where zram originates from) horrible compared to other operating systems that make far better use of physical RAM. When I run any Mac with Linux I need twice the amount of RAM compared to running the same hardware with macOS, for example (most probably this applies to Android vs. iOS as well, at least prior to the latest Android changes that finally make better use of RAM and allow the majority of Android devices to run somewhat smoothly with as little as 1 GB).


44 minutes ago, tkaiser said:

In other words: light zram-based swapping with vm.swappiness=100 (filesystem caches already somewhat filled) increased benchmark execution time by 4.5 seconds or ~0.2%. Switching to vm.swappiness=60 increased execution time by another 2.2 seconds (or ~0.1%). Rule of thumb when benchmarking: differences lower than 5% should be considered identical numbers.

 

Did you test with vm.swappiness set to 10, which was part of the discussion in the first place? You mention 60 and 100...

 

And benchmarking makes for nice numbers, but look at the real-world experience across different applications.

 

 


2 hours ago, tkaiser said:

Just for the record: while zram is a nice improvement I still consider the situation with Linux (or Android, where zram originates from) horrible compared to other operating systems that make far better use of physical RAM. When I run any Mac with Linux I need twice the amount of RAM compared to running the same hardware with macOS, for example (most probably this applies to Android vs. iOS as well, at least prior to the latest Android changes that finally make better use of RAM and allow the majority of Android devices to run somewhat smoothly with as little as 1 GB).

 

That I do agree with - but Apple does have the benefit of integrating their own HW (even their own chips these days with iOS) with their SW...


2 hours ago, tkaiser said:

My understanding of vm.swappiness, based on this and that, is quite the opposite. It is not a percentage, as explained here. My understanding is that the whole relevance of vm.swappiness is the trade-off between swapping out application memory and dropping filesystem cache (influenced by vm.vfs_cache_pressure, which defaults to 100). 'Everyone on the Internet' is telling you high vm.swappiness values favor high IO performance over app performance/latency. Every test I made so far tells a different story as long as 'zram only' is considered.

 

Confused by your statement here... or maybe it reflects your understanding of things...

 

Simply put - benchmarks are one thing, but application responsiveness is another - one can choose to swap aggressively or not - the current Armbian default is, IMHO, excessively high, an opinion that concurs with Android and ChromiumOS devs, along with the rest of the world... including those of us who have scaled across different architectures, use cases, and platforms.

 

For those who want to test and evaluate on their own....

 

Read them first, so one knows the current state before changing anything...

 

cat /proc/sys/vm/swappiness

cat /proc/sys/vm/vfs_cache_pressure

 

Now we can change this - it's a hot change, no need to reboot...

 

sudo sysctl -w vm.swappiness=60    # range is 0 to 100; the kernel default is 60

 

since @tkaiser brings this up, one can also tinker with the filesystem cache pressure:


sudo sysctl -w vm.vfs_cache_pressure=100    # the default is 100, but one can gently turn it down

 

Try 10 for swappiness and 50 for cache pressure - this works well on small machines.
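
And to make the values survive a reboot (plain sysctl mechanism, nothing Armbian-specific; the file name below is just an example):

echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-vm-tuning.conf
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.d/99-vm-tuning.conf
sudo sysctl --system    # reload all sysctl configuration files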

 

Definitive source -- https://www.kernel.org/doc/Documentation/sysctl/vm.txt

 


5 hours ago, sfx2000 said:

Confused by your statement here

 

Ok, this is what you were explaining about vm.swappiness:

Quote

the 0-100 value in vm.swappiness is akin to the amount of free memory in use before swapping is initiated - so a value of 60 says that as long as we have 60 percent free memory we don't swap; if less than that, we start swapping out pages

 

This is what the majority of 'people on the Internet' will also tell us (see here for a nice example which couldn't be more wrong). Someone else, linking to the swap_tendency = mapped_ratio/2 + distress + vm_swappiness formula, put it this way:

Quote

The swappiness value is scaled in a way that confuses people. They see it has a range of 0-100 so they think of percent, when really the effective minimum is around 50.

 

60 roughly means don't swap until we reach about 80% memory usage, leaving 20% for cache and such. 50 means wait until we run out of memory. Lower values mean the same, but with increasing aggressiveness in trying to free memory by other means.
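
To put numbers on that (a back-of-the-envelope reading of the 2.6-era formula quoted above; assuming distress = 0 and that swapping kicks in once swap_tendency reaches 100):

swap_tendency = mapped_ratio/2 + distress + vm_swappiness
vm_swappiness=60: mapped_ratio/2 + 60 >= 100, i.e. swapping starts once ~80% of memory is mapped by processes
vm_swappiness=50: mapped_ratio has to hit 100%, i.e. swap only once memory is effectively exhausted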

 

So given that vm.swappiness is mostly understood wrongly, and especially is not what it seems to be (a percentage of 'free memory', as you were explaining), and that the only reason to recommend vm.swappiness=60 seems to be the usual 'let's do it this way since grandma already did it the same way', I think there is not much reason to continue this discussion on this basis, especially since it contributes nothing productive to the actual issue: making better use of available DRAM.

 

We are still not talking about swapping out pages to ultra slow HDD storage but about sending them to a compressed block device in RAM. There is NO disk access involved, so why should we care about experiences from the past with spinning rust and 'bad real-world experiences' with high application latency? Everyone I know experimenting with zram in low memory situations ended up setting vm.swappiness to the maximum.

 

Unless someone does some tests and is able to report differences between vm.swappiness=60 and 100 wrt 'application performance' there's no reason to change this. We want swapping to happen, since in our case it is not ultra slow storage access but compressing memory (making better use of the amount of physical DRAM).

 

7 hours ago, sfx2000 said:

Did you test with vm.swappiness set to 10

 

Of course not. We still want to make better use of RAM, so swapping (compressing memory) is essentially what we want. If the value is really too high we'll know soon and can then adjust it to something more appropriate. But given that vm.swappiness is simply misunderstood, that the default dates back to the days we swapped to spinning rust, and that today we want swapping to happen to free up RAM by compressing portions of it, I see no further need to discuss this now.

 

If users report issues soon we'll see...

 

I had some hope you would come up with scenarios for how to reasonably test this stuff (you mentioned MariaDB)...


BTW: Once we can collect some experiences with the new settings I have the following two tunables on my TODO list:

But since we support more and more 64-bit boards, the best method to better deal with available DRAM in low memory conditions would be using a 32-bit userland. But that's something literally no one is interested in or aware of.

 


9 hours ago, sfx2000 said:

Simply put - benchmarks are one thing, but application responsiveness is another - one can choose to swap aggressively or not - the current Armbian default is, IMHO, excessively high, an opinion that concurs with Android and ChromiumOS devs, along with the rest of the world...

It's not as if Armbian were built in a way that prevents you from changing those settings in case you're not convinced our defaults are sufficient for your use case.

Application responsiveness is somewhat hard to determine - I tried and failed horribly.. :lol:

Situation (not an ARM device): Ubuntu 18.04 on an outdated Acer notebook with 4 GB RAM, an old but not that badly performing small SSD, and normally >20 tabs open in my browser combined with an editor and 5-6 shells, most of them ssh sessions to other computers or SBCs. No swapfile but zram activated (and heavily used) - somewhat similar to Armbian's defaults. I can force the system to become unresponsive and sometimes it even happens unforced.. :P

I tried to figure out whether it is possible to 'measure' (which here means: feel a difference in my daily usage) if swappiness changes between 60 and 100 have an impact - and I failed. There are simply too many variables to be sure. E.g. responsiveness of the browser depends on the tabs you open (there are pages which just mess up everything, likely due to heavy javascript etc. garbage on them - the only thing which helps is to close and blacklist them :D - e.g. focus.de and cnn are on my sh*tlist due to 'not playing well in my sandbox').

For my daily usage it seems that zram with swappiness 100 works quite well - even if I can't tell you that I really 'measured' a difference to swappiness 60. This notebook is a 2CV and it will never be a Ferrari. :lol:

 

10 hours ago, sfx2000 said:

the current Armbian default is, IMHO, excessively high, an opinion that concurs with Android and ChromiumOS devs, along with the rest of the world...

Well, humans are 'Gewohnheitstiere' (creatures of habit - not sure if there's a proper translation in English for it @TonyMac32? :P). We assumed for more than 100 years that spinach has a bunch of iron in it.. It has, but the chemist who determined the amount of iron did it with dried spinach, and people assumed that fresh (wet) spinach should (for whatever reason) have 10 times more (this might be true for things like vitamins, which often aren't as stable, but minerals normally don't degrade). Long story short, assumptions based on results/tests made years ago quite often aren't questioned, due to 'out of comfort zone'.

I would be interested in a proper tool to determine responsiveness in desktop scenarios with different swappiness settings, but so far I haven't found a way to do it.


22 minutes ago, chwe said:

I would be interested in a proper tool to determine responsiveness in desktop scenarios with different swappiness settings, but so far I haven't found a way to do it.

 

This needs to be measured since 'feeling a difference' is always BS. I already referenced a perfect example: an Android app allowing silly things: https://www.guidingtech.com/60480/speed-up-rooted-android/

 

The author of at least the text clearly doesn't have the slightest idea how VM (virtual memory) works in Linux; the whole thing is based on a bunch of totally wrong assumptions ('swap is bad', 'cleaning' memory, vm.swappiness being some sort of percentage setting, Android developers being total morons since they ship their devices defaulting to 'very slow'). The icon showing an HDD already tells the whole sad story, and the totally useless recommendation to reboot for this setting to take effect demonstrates what a bunch of BS this all is. And you'll find a lot of people telling you their phone got faster after they switched from 'Very slow' to 'Fast'. It's simple psychology.

 

Same with server migrations over the weekend that didn't happen for whatever reason (missing prerequisites): on Monday users will report the server being faster ('Really good job, guys!') and at the same time they will report that a printer in their department doesn't work any more (the one that was already replaced by another one two weeks ago). Been there, experienced this so many times that in the meantime we do not announce server migrations any more.

 

If expectations are involved and visible effort has been made, human beings literally feel that something improved (it must have improved, since otherwise no effort would have been spent, true?). Same with 'change': change is evil, so better to stay with a somewhat undocumented default setting from 15 years ago that almost nobody understands. It exists for the simple reason that it looked like a reasonable default to one or a few kernel developers back then, in combination with all the other VM tunables, when swap was HORRIBLY EXPENSIVE since it happened on ultra slow HDDs.

 

Talking about 'the other VM tunables': Why is no one talking about these guys but only about vm.swappiness?

root@nanopim4:~# ls -la /proc/sys/vm/
total 0
dr-xr-xr-x 1 root root 0 Sep 19 15:47 .
dr-xr-xr-x 1 root root 0 Jan 18  2013 ..
-rw-r--r-- 1 root root 0 Sep 25 11:55 admin_reserve_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 block_dump
--w------- 1 root root 0 Sep 25 11:55 compact_memory
-rw-r--r-- 1 root root 0 Sep 25 11:55 compact_unevictable_allowed
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_background_bytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_background_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_bytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_expire_centisecs
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirtytime_expire_seconds
-rw-r--r-- 1 root root 0 Sep 25 11:55 dirty_writeback_centisecs
-rw-r--r-- 1 root root 0 Sep 25 11:55 drop_caches
-rw-r--r-- 1 root root 0 Sep 25 11:55 extfrag_threshold
-rw-r--r-- 1 root root 0 Sep 25 11:55 extra_free_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 hugepages_treat_as_movable
-rw-r--r-- 1 root root 0 Sep 25 11:55 hugetlb_shm_group
-rw-r--r-- 1 root root 0 Sep 25 11:55 laptop_mode
-rw-r--r-- 1 root root 0 Sep 25 11:55 legacy_va_layout
-rw-r--r-- 1 root root 0 Sep 25 11:55 lowmem_reserve_ratio
-rw-r--r-- 1 root root 0 Sep 25 11:55 max_map_count
-rw-r--r-- 1 root root 0 Sep 25 11:55 min_free_kbytes
-rw-r--r-- 1 root root 0 Sep 25 11:55 mmap_min_addr
-rw------- 1 root root 0 Sep 25 11:55 mmap_rnd_bits
-rw------- 1 root root 0 Sep 25 11:55 mmap_rnd_compat_bits
-rw-r--r-- 1 root root 0 Sep 25 11:55 nr_hugepages
-rw-r--r-- 1 root root 0 Sep 25 11:55 nr_overcommit_hugepages
-r--r--r-- 1 root root 0 Sep 25 11:55 nr_pdflush_threads
-rw-r--r-- 1 root root 0 Sep 25 11:55 oom_dump_tasks
-rw-r--r-- 1 root root 0 Sep 25 11:55 oom_kill_allocating_task
-rw-r--r-- 1 root root 0 Sep 25 11:55 overcommit_kbytes
-rw-r--r-- 1 root root 0 Sep 19 15:47 overcommit_memory
-rw-r--r-- 1 root root 0 Sep 25 11:55 overcommit_ratio
-rw-r--r-- 1 root root 0 Sep 25 08:15 page-cluster
-rw-r--r-- 1 root root 0 Sep 25 11:55 panic_on_oom
-rw-r--r-- 1 root root 0 Sep 25 11:55 percpu_pagelist_fraction
-rw-r--r-- 1 root root 0 Sep 25 11:55 stat_interval
-rw-r--r-- 1 root root 0 Sep 25 06:31 swappiness
-rw-r--r-- 1 root root 0 Sep 25 11:55 user_reserve_kbytes
-rw-r--r-- 1 root root 0 Sep 24 15:58 vfs_cache_pressure
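
To dump the current values of all of them at once (plain procfs/sysctl, nothing special):

sysctl -a 2>/dev/null | grep '^vm\.'
grep -R . /proc/sys/vm/ 2>/dev/null    # same data straight from procfs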

 


49 minutes ago, tkaiser said:

This needs to be measured since 'feeling a difference' is always BS.

You got the wrong point here.. :lol: As said, for me there's no proper tool to benchmark 'responsiveness', or at least none I'm aware of.. That's why I assume finding the best defaults for 'desktop scenarios' will be hard.

 

52 minutes ago, tkaiser said:

Been there, experienced this so many times that in the meantime we do not announce server migrations any more.

also known as the placebo effect or phantom pain.. :D or modern politics.. (depends on your level of sarcasm..).

 

Because I don't have a proper tool to measure responsiveness on my desktop system, I go a step back and kick out everything which might harm responsiveness, e.g. everything unnecessary with a huge memory footprint (I don't need cnn for my daily news feed, and as long as they don't figure out how to build a news site that doesn't misbehave this badly, they're in my penalty box).

 


that's not some sort of CNN bashing... I can back my 'feeling' with data.. :lol: They deserved 'being on my sh*tlist'...

ps aux | awk '{print $5/1024 " " $6/1024 " " $11}' | sort -n | grep opera    (and yes, I still use Opera :lol:)

(screenshot: memory usage of the Opera processes as reported by the command above)

 

1 hour ago, tkaiser said:

Talking about 'the other VM tunables': Why is no one talking about these guys but only about vm.swappiness?

most likely because nobody cares as long as @tkaiser does it... :lol::ph34r: Talking about them means thinking about them, means some sort of phantom pain.. :lol: We must keep in mind that the average user (I would assume that developers count as users here too) doesn't care and/or isn't experienced enough to step into such a topic. They go for the low-hanging fruits which let them 'feel' that the system is faster now (e.g. CPU 'over-clocking' is something that comes up quite early in every 'new board' thread.. :rolleyes:).


don't worry @tkaiser, zram *is fast*, very fast in fact; the experience running an X desktop on an Orange Pi PC with 1 GB RAM and even an Orange Pi One with 512 MB RAM is quite a bit smoother with zram :D

however, zram's speed isn't the only consideration; e.g. on the Orange Pi One with 512 MB RAM, running on RAM and zram alone may land one in a freeze-up / stall when memory really runs out, especially when running an X11 desktop with various apps. Hence some swap on flash is *inevitable* to overcome the simple *lack of RAM*.

 

then there are other issues on the fringes, such as ssh failing to connect after some time

https://forum.armbian.com/topic/6751-next-major-upgrade-v560/?do=findComment&comment=62409

https://forum.armbian.com/topic/6751-next-major-upgrade-v560/?do=findComment&comment=62484

that seemed to be alleviated by setting vm.swappiness to less than 100, leaving some room for idle processes to simply 'stay in memory'

 


21 minutes ago, ag123 said:

then there are other issues on the fringes, such as ssh failing to connect after some time

https://forum.armbian.com/topic/6751-next-major-upgrade-v560/?do=findComment&comment=62409

https://forum.armbian.com/topic/6751-next-major-upgrade-v560/?do=findComment&comment=62484

that seemed to be alleviated by setting vm.swappiness to less than 100, leaving some room for idle processes to simply 'stay in memory'

 

Ok, I give up. Do whatever the f*ck you want with zram from now on in Armbian. I really can't stand it any more.

 

Let's do it like grandma already did it! That's the only way things work!

 

Why monitor stuff? Why analyze problems? Why do some research? Assumptions, assumptions, assumptions! Nothing but assumptions!


5 hours ago, tkaiser said:

Ok, I give up. Do whatever the f*ck you want with zram from now on in Armbian. I really can't stand it any more.

 

I'll be the first to apologize for opening a can of worms - whatever the defaults in Armbian are, that's ok; people have spent time evaluating things and have made decisions that are generally good in their experience.

 

The VM params - most of them are tunable and can be changed on the fly without a reboot. Different use cases - general desktop vs. database server (yes, people can and do run databases on SBCs) - each is going to have params that are more suitable for that particular application.

 

On a personal level - yes, I'm sometimes very forward and frank in stating my opinions, and it can be offensive - it's not intended to be, and I'm sorry.

 

 


14 hours ago, sfx2000 said:

The VM params - most of them are tunable and can be changed on the fly without a reboot. Different use cases - general desktop vs. database server (yes, people can and do run databases on SBCs) - each is going to have params that are more suitable for that particular application.

 

Thanks for the reminder. I totally forgot that running databases is among the valid use cases. Based on my testing so far neither databases nor ZFS play very well with zram, so the only reasonable vm.swappiness value for both use cases is IMO 0 (which, as far as I know, means disabling swap entirely starting with kernel 3.5; prior to that, behavior at 0 was the same as at 1 today), combined with an appropriate amount of physical RAM for the use case, or tuning the application (calling ZFS an application too, since its default behavior of eating all available RAM except 1 GB for its own filesystem cache can easily be configured).
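
For reference, the ZFS knob alluded to above (the standard zfs_arc_max module parameter; the 1 GiB value is only an example):

echo 'options zfs zfs_arc_max=1073741824' | sudo tee /etc/modprobe.d/zfs.conf    # cap the ARC at 1 GiB
# takes effect when the zfs module is (re)loaded, i.e. usually at the next boot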

 

I added this to the release log just now.

 

Yesterday, throughout the whole day, I did some more testing, even with those recommendations from the days when we all did something horribly stupid: swapping to HDDs. I tested with vm.swappiness=10 and vm.vfs_cache_pressure=50 and got exactly the same results as with both tunables set to 100, for the simple reason that the kernel limited the page cache in almost the same way with my workload (a huge compile job). No difference in behavior, just slightly slower with reduced vm.swappiness.

 

So it depends on the use case and on what the situation with page cache vs. application memory looks like. But all the numbers and explanations we find deal with swap on slow storage and/or are from 15 years ago, while today we're talking about something entirely different: not swap on ultra slow storage but compressing/decompressing memory on demand. Back in the old days, with high swappiness you of course ran into trouble once pages had to be swapped back in from disk (switching to Chromium and waiting 20-40 seconds). But we're now talking about something entirely different: 'aggressively swapping' with zram only no longer means high disk IO on slow storage but 'making better use of available memory' by compressing as many least-used pages as possible (again: what we have in Linux is way more primitive compared to e.g. Darwin/XNU).

 

And what really puts me off is this:

 

(flow diagram: the kernel's 'complete scan count algorithm' - an overview of the page reclaim/swapping logic)

 

That's an oversimplified presentation of what's happening around swapping in Linux. And people now focus on one single tunable they don't even understand, for the sole reason that they think it is a percentage value (something not as complex as VM reality, something they're used to and feel they can handle), and they feel it could be too 'aggressive' based on all the wrong babbling on the Internet about what vm.swappiness seems to be.

 

What everyone here is forgetting: with the latest release we switched vm.swappiness from '0' (no swap) to 'some number greater than 0' (swapping allowed). Of course this will affect installations, and it might trigger bugs that went undetected for the simple reason that Armbian did not swap at all in recent years. So can we please now focus on what's important again and not waste our time with feelings, beliefs and (mostly) wrong assumptions (not talking about you here).


14 hours ago, tkaiser said:

What everyone here is forgetting: with the latest release we switched vm.swappiness from '0' (no swap) to 'some number greater than 0' (swapping allowed). Of course this will affect installations, and it might trigger bugs that went undetected for the simple reason that Armbian did not swap at all in recent years. So can we please now focus on what's important again and not waste our time with feelings, beliefs and (mostly) wrong assumptions (not talking about you here).

 

Swappiness=0 doesn't really disable swap, it just makes the kernel very reluctant to swap - even to the point of bringing out the OOM killer with extreme prejudice to kill off tasks, depending on other factors... The easiest way to ensure that swap is disabled is never to create a swap file/partition in the first place, and if one is present, swapoff -a fixes that...
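
For reference (plain util-linux commands):

swapon --show      # list active swap devices/files
sudo swapoff -a    # disable all of them until the next swapon/reboot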

 

The flow diagram that @tkaiser points out is a good representation of the complexities involved...

 

ZRAM, by and large, is a good thing, and I was very happy to see it enabled as part of the distro in the first place (just like mapping /tmp to tmpfs, which is also good for flash-based devices).

 

Anyways, I never took it personally, and initiating the discussion wasn't intended as a personal attack - that it encourages a documentation change is a good thing and everybody wins!

 

That's the best possible outcome of constructive discourse.


7 hours ago, sfx2000 said:

Swappiness=0 doesn't really disable swap, it just makes the kernel very reluctant to swap

 

https://en.wikipedia.org/wiki/Swappiness

 

https://forum.armbian.com/topic/7819-sbc-bench/?do=findComment&comment=62802

 

Less swap activity with vm.swappiness=1 compared to vm.swappiness=0. So again, this is something that needs further investigation. We cannot trust the stuff 'everyone on the Internet' tells us (like lz4 being better than lzo, since at least on ARM it's the other way around).
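
For anyone who wants to check or switch the algorithm on their own device (standard zram sysfs interface; note the algorithm can only be changed while the device is reset, i.e. before its disksize is set):

cat /sys/block/zram0/comp_algorithm                    # brackets mark the active algorithm, e.g. [lzo] lz4
echo lz4 | sudo tee /sys/block/zram0/comp_algorithm    # only accepted on an unused/reset device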

 


15 hours ago, tkaiser said:

Less swap activity with vm.swappiness=1 compared to vm.swappiness=0. So again, this is something that needs further investigation. We cannot trust the stuff 'everyone on the Internet' tells us (like lz4 being better than lzo, since at least on ARM it's the other way around).

 

Agreed - it doesn't make sense on paper, but in the real world lzo can do better than lz4 in the zram use case...

 

There was a ticket open over on ChromiumOS - and performance for them was a wash with the code they have...

 

They already had lzo in place, and lz4 didn't offer enough improvement, so it got moved to wontfix - I only bring this up because there was a fair amount of discussion about it, and it supports the current Armbian position.


15 hours ago, tkaiser said:

Less swap activity with vm.swappiness=1 compared to vm.swappiness=0. So again, this is something that needs further investigation.

 

Looks like you and the team already did the investigation there with swappiness set to zero - and I agree with the results...

 

23 hours ago, sfx2000 said:

Swappiness=0 doesn't really disable swap, it just makes the kernel very reluctant to swap - even to the point of bringing out the OOM killer with extreme prejudice to kill off tasks, depending on other factors

 

Which is pretty much what I said...


5 hours ago, sfx2000 said:

There was a ticket open over on ChromiumOS

 

The important part there is at the bottom: the need to 'NEON-ize lz4 and/or lzo', which has not happened on ARM yet. That's why lz4 outperforms lzo on x86 (SIMD instructions used) while on ARM it's the opposite.

 

Once some kernel developer spends some time adding NEON instructions to the in-kernel code for either lzo or lz4 (or maybe even a slower algo), I'll immediately evaluate a detection routine for this based on kernel version, since most probably the optimized algorithm will then also outperform any other unoptimized algorithm that in theory should be more suited.

 

Missing SIMD optimizations on ARM are still a big problem, see for example https://www.cnx-software.com/2018/04/14/optimizing-jpeg-transformations-on-qualcomm-centriq-arm-servers-with-neon-instructions/


On 9/18/2018 at 12:47 AM, Igor said:

There is a problem at kernel change, more precisely if/when the initrd is regenerated.


update-initramfs: Generating /boot/initrd.img-4.18.8-odroidc2
I: The initramfs will attempt to resume from /dev/zram4
I: (UUID=368b4521-07d1-43df-803d-159c60c5c833)
I: Set the RESUME variable to override this.
update-initramfs: Converting to u-boot format

This leads to boot delay:



Starting kernel ...

Loading, please wait...
starting version 232
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... Scanning for Btrfs filesystems
Begin: Waiting for suspend/resume device ... Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
Begin: Running /scripts/local-block ... done.
done.
Gave up waiting for suspend/resume device
done.
Begin: Will now check root file system ... fsck from util-linux 2.29.2
[/sbin/fsck.ext4 (1) -- /dev/mmcblk1p1] fsck.ext4 -a -C0 /dev/mmcblk1p1 
/dev/mmcblk1p1: clean, 77231/481440 files, 600696/1900644 blocks
done.
done.
Begin: Running /scripts/local-bottom ... done.
Begin: Running /scripts/init-bottom ... done.

Welcome to Debian GNU/Linux 9 (stretch)!

 

Ideas on how to fix it best?

 

Noticed today that this happens with Armbian 5.6...

 

Looking at something else, I downloaded the image for NanoPi NEO, did the update/upgrade thing, and this popped up..

update-initramfs: Generating /boot/initrd.img-4.14.70-sunxi
I: The initramfs will attempt to resume from /dev/zram4
I: (UUID=b387caf1-0414-4831-98f6-e8e10b3c3daa)
I: Set the RESUME variable to override this.
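
A workaround following the hint in the message itself (the standard Debian initramfs-tools mechanism; whether Armbian ended up fixing it exactly this way I can't say, and RESUME=none needs a reasonably recent initramfs-tools):

echo 'RESUME=none' | sudo tee /etc/initramfs-tools/conf.d/resume
sudo update-initramfs -u    # regenerate so the initramfs stops waiting for /dev/zramX as a resume device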

 

 


On 9/27/2018 at 10:43 PM, tkaiser said:

The important part there is at the bottom: the need to 'NEON-ize lz4 and/or lzo', which has not happened on ARM yet. That's why lz4 outperforms lzo on x86 (SIMD instructions used) while on ARM it's the opposite.

 

Once some kernel developer spends some time adding NEON instructions to the in-kernel code for either lzo or lz4 (or maybe even a slower algo), I'll immediately evaluate a detection routine for this based on kernel version, since most probably the optimized algorithm will then also outperform any other unoptimized algorithm that in theory should be more suited.

 

We're getting off topic, but yes, I agree - and we'll likely see any optimization effort on lzo/lz4 land on AArch64 first, mainly due to the gazillions of 64-bit Android phones and the recent push for big CPUs in the data center (Ampere and ThunderX2, for example). If it trickles down to ARMv7-A, that would be nice...

 

I'd expect ARM to take the lead on that, just like Intel did with OpenSSL, where they leveraged SSE4 along with AES-NI to improve certain things (AES-128-GCM, for example, performs very well even without AES-NI).


On 9/16/2018 at 10:37 PM, tkaiser said:

For anyone else reading this: do NOT do this. Just use Armbian -- we care about zram at the system level and also set vm.swappiness accordingly (low values are bad)

 

@tkaiser - Had a chance to do a deeper review of the following...

 

https://github.com/armbian/build/blob/master/packages/bsp/common/usr/lib/armbian/armbian-zram-config

 

Couple of questions... I realize that this is your script and you've put a fair amount of effort into it, perhaps across multiple platforms and kernels.

 

1) why so complicated?

2) why mount the zram partition as ext4-journaled - is this for instrumentation perhaps?

mkfs.ext4 -O ^has_journal -s 1024 -L log2ram /dev/zram0

This adds a bit of overhead, and isn't really needed... swap is swap...

 

It's good in any event, and my script might not be as sexy or complicated as yours - actually, with your script, the swappiness testing variables are buried in the overhead there, if I read it correctly.

 

I realize you might get excited a bit about a challenge, but take another look - less code is better sometimes. I'm not here to say you're wrong, I'm here to suggest there's perhaps another path - I know my script isn't perfect, but like yours, it works...


5 hours ago, sfx2000 said:

why mount the zram partition as ext4-journaled

 

Read the script and don't just skim through it. This script (initially written by Igor, not me -- there's also a commit history which helps you understand what is dealt with here) sets up things that need to work on any of the kernels Armbian supports (we're not just dealing with the few devices you seem to use) and handles a couple of different things. Of course we're not using filesystems for swap, and you might even look into the changelog to understand why mkfs.ext4 is used there..

 

@Igor certificate for docs.armbian.com has expired


22 hours ago, tkaiser said:

Read the script and don't just skim through it. This script (initially written by Igor, not me -- there's also a commit history which helps you understand what is dealt with here) sets up things that need to work on any of the kernels Armbian supports (we're not just dealing with the few devices you seem to use) and handles a couple of different things. Of course we're not using filesystems for swap, and you might even look into the changelog to understand why mkfs.ext4 is used there..

 

 

I can read, review, and understand the script - and I did... and gave a bit of feedback.

 

But for the benefit of the forum and community - no disrespect to others, but many do copy/paste on trust without knowing...

 

Tell it to me again like I'm five years old... why go down this path? What is the process, why is it done, how can it be done better?


Hi guys,

 

Can someone please explain how to resize the zram devices and make the change persistent? I tried using swapoff, zramctl etc. to resize zram, which worked OK, but when I restarted my Armbian box the zram partitions reset back to their original size (64 MB).


6 hours ago, martinayotte said:

Check the file /usr/lib/armbian/armbian-ramlog, it is defaulted to 50M ...

 

Thanks for your response; however, I think ramlog is disabled by default in newer Armbian releases in favour of zram. My /usr/lib/armbian/armbian-ramlog shows:

 

SIZE=50M
USE_RSYNC=true
ENABLED=false

 

I'm trying to find the file which manages zram sizes, can anyone help please?


34 minutes ago, martinayotte said:

Check the script carefully: it is the script that creates zram ...

 

I don't doubt your expertise, but my zram devices are 60 MB each and this config file specifies 50 MB. On further investigation I found /usr/lib/armbian/armbian-zram-config, which in turn refers to /etc/default/armbian-zram-config, where changes can be made to ZRAM_PERCENTAGE and MEM_LIMIT_PERCENTAGE.

 

I changed both of these parameters to 200(%) and this increased my zram sizes and solved my issue.
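
For anyone landing here with the same question, the complete recipe (variable names as found above; the exact defaults may differ between Armbian releases):

# /etc/default/armbian-zram-config
ZRAM_PERCENTAGE=200          # zram swap size as a percentage of physical RAM
MEM_LIMIT_PERCENTAGE=200     # memory limit for the zram devices

sudo systemctl restart armbian-zram-config    # re-create the devices with the new sizes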

 

Thanks for your help. I hope this helps someone else.


21 hours ago, gyrex said:

Can someone please explain how to resize the zram devices and make the change persistent? I tried using swapoff, zramctl etc. to resize zram, which worked OK, but when I restarted my Armbian box the zram partitions reset back to their original size (64 MB).

 

There are two services run out of systemd that override the distro defaults - so zramctl, swapoff, etc. will be overridden by these units...

 

You can find them...

 

$ systemctl list-unit-files | grep armbian
armbian-ramlog.service
armbian-zram-config.service

So you can either disable them, or you can edit the unit files to work with your needs...
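
For example (unit name from the listing above; adjusting /etc/default/armbian-zram-config, as mentioned earlier in the thread, is usually the gentler option):

sudo systemctl disable --now armbian-zram-config.service    # stop Armbian from managing zram at all
sudo systemctl edit --full armbian-zram-config.service      # or adapt the unit to your needs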


Terms of Use - Privacy Policy - Guidelines