
Issue using swap


ddurdle


I've noticed swap is not being used on my Armbian system (in this case, a Banana Pi R1). I dug deeper and spent several hours looking for a cause, but cannot find any way to resolve it.

 

I updated everything yesterday to ensure there was no issue with a known bug in a previous package.

 

$ uname -a
Linux raspberrypi 4.9.7-sunxi #1 SMP Thu Feb 2 01:52:06 CET 2017 armv7l GNU/Linux
 
My swap is located on an attached 2.5" HD, as a swap file on the partition mounted at /u01. I created a second swap file (using mkswap) and activated it, just to rule out a problem with the file itself. Both files activate and show as enabled, but swap usage always remains 0.
 
$ ls -l /u01/swap*
-rw------- 1 root root 1048576000 Apr 13  2016 /u01/swap
-rw-r--r-- 1 root root 1073741824 Feb 19 11:11 /u01/swap2
 
 $ free
             total       used       free     shared    buffers     cached
Mem:       1022748    1000784      21964      20600      12948     802580
-/+ buffers/cache:     185256     837492
Swap:      1023996          0    1023996
 
 
# cat /proc/sys/vm/swappiness
60
 
 
$ sysctl -n vm.swappiness
60
 
# swapon
NAME      TYPE  SIZE USED PRIO
/u01/swap file 1000M   0B   -1
 
 
Yesterday processes were being killed because of out-of-memory conditions. The system refuses to use the swap.
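For reference, the usual steps to create and activate a swap file look roughly like this (a hedged sketch; the path and size are examples, and the commands need root):

```shell
# Sketch of creating and activating a 1 GiB swap file (requires root;
# /u01/swap2 is just an example path)
dd if=/dev/zero of=/u01/swap2 bs=1M count=1024   # 1024 * 1 MiB = 1073741824 bytes
chmod 600 /u01/swap2        # swap files should not be world-readable
mkswap /u01/swap2
swapon /u01/swap2
grep -i swap /proc/meminfo  # SwapTotal/SwapFree should reflect the new file
```

Both of my files pass these steps, which is why the unused swap is so puzzling.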

 


A little update on overcommit, as it may cause a system crash.

I deployed a BPi M2+ a couple of days ago at a remote location, to serve as an HTTP-manageable RADIUS server and to collect logs required by law.

Soon enough syslog-ng stopped collecting logs; the behaviour was, I suppose, caused by swap failure.

I could create files on the SD card, while on the internal eMMC it wasn't possible due to a write error on the swap file.

 

I double-checked the swap:

- with fsck: 0 errors, no corruption

- recreated the swap on the SD card and enabled it with swapon; for some reason it couldn't find the device by UUID ...

 

So I followed the link tkaiser pointed to above, and set vm.overcommit_memory to 2, which crashed my system.

I'm not sure if it was due to vm.overcommit_ratio = 50, which cut my 1 GB of RAM down to 512 MB of commitable memory, while only 128 MB was allocated for swap.

I realized that and tried to change the ratio to 12%, but it was already too late.

So either I don't fully understand overcommit (more than possible) or I made a silly mistake.
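If I understand strict mode correctly (an assumption on my part), with vm.overcommit_memory=2 the kernel refuses allocations beyond CommitLimit = swap + RAM * overcommit_ratio / 100, which with my numbers would be:

```shell
# Rough arithmetic for vm.overcommit_memory=2 (strict accounting):
# CommitLimit = swap + RAM * overcommit_ratio / 100
ram_kb=1048576     # 1 GB RAM
swap_kb=131072     # 128 MB swap
ratio=50           # default vm.overcommit_ratio
echo $(( swap_kb + ram_kb * ratio / 100 ))   # 655360 kB, i.e. only ~640 MB
```

So anything already using more than ~640 MB of commit charge would start failing, which matches the crash.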

Good thing I'm driving there tomorrow :)

 

Treat it as a warning, not as a complaint.

I love your work, guys, especially tkaiser's practical approach to benchmarking.

I would like to contribute more, but I'm working hard as a small ISP and can't spare the time (my background is quite different).


I don't know why anyone is trying to use swap at all. Swapping to HDD or SD card is awfully slow, and the majority of boards lack a fast interface to use an SSD or fast eMMC for swap.

 

So why not stop using swap and use zram instead? Or a board with an appropriate amount of DRAM?

 

With a recent Armbian installation, all you need to do to use zram is uncomment this line in /etc/init.d/armhwinfo, then adjust vm.swappiness in /etc/sysctl.conf (change it from 0 to 60) and reboot.

 

On most recent Ubuntu Xenial images zram is already active by default (check free output) so nothing needs to be done anyway...
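For the curious, the manual equivalent of what those images set up is roughly the following (a sketch only; the sizing rule and swap priority are example values, everything needs root, and this is not the literal Armbian script):

```shell
# Sketch of a manual zram swap setup (requires root; sizing is an example)
ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
modprobe zram
echo "$(( ram_kb / 2 ))K" > /sys/block/zram0/disksize  # ~50% of RAM, compressed
mkswap /dev/zram0
swapon -p 5 /dev/zram0   # higher priority than any disk-backed swap
```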


Thanks for the good reading. I will use the forum search more, too.

I realized my problem comes more from log2ram the minute I posted this. My bad. Damn you, df (I had used "btrfs fi df ." recently).

Luckily I have another BPi for testing, and I managed to redirect logs back to my office via PPTP.

 

I cannot afford to shrink my log files, since they record packets with TCP SYN and TCP ACK/FIN flags for my clients.

This creates about 1.5 GB of logs a day in human-readable form (steady, yet varying by hour).

A week ago I found a method to clean them up a little with sed and compress them well, but I prefer to cron the job via logrotate at nighttime, when the BPi is not bothered by anyone.

So after the reading, my idea is to shut down the log2ram service, or somehow bypass it for this particular log, and rotate the data at night with logrotate while compressing it well.

 

Meanwhile I copied the same fstab entry Armbian includes for eMMC, to protect the SD card from too many writes and to keep space usage limited, reducing the commit time to 300 seconds.

Then again, btrfs itself has a compression function, with zlib as the default and LZO or ZSTD as options.

But I've seen the btrfs wiki FAQ, which goes through some issues with balancing the filesystem and with other compression methods (as of 2014). From what I've understood, it is not automated and may clog itself up or get corrupted in an unmanageable way.

Balancing needs to be done manually, and changing compression may end in a complete loss of data.
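For illustration, the kind of fstab entry I mean would look something like this (hypothetical; the UUID and mount point are placeholders, and LZO is chosen only because it is available on older kernels):

```
# Hypothetical fstab entry: btrfs with transparent LZO compression and a
# longer commit interval to reduce SD card writes (UUID is a placeholder)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/logs  btrfs  defaults,noatime,compress=lzo,commit=300  0  0
```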

 

I'm not really asking for a ready solution, just a direction on how to make such a service reliable.

Logrotate works with gzip by default; there are other implementations, like pigz (parallelized), which improve CPU utilization.

Which way, in your experience, tkaiser, may be best to achieve this?

For redundancy, I plan to synchronize files nightly between the two BPis, and afterwards send the compressed files to my home NAS via a temporarily mounted sshfs.
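To sketch what I mean by the pigz variant: logrotate can be pointed at a different compressor via its compresscmd/uncompresscmd directives (a hedged example; the log path and binary locations are assumptions for my setup):

```
# Hypothetical logrotate stanza: nightly rotation compressed with pigz
# instead of gzip (log path and binary paths are examples)
/var/log/mikrotik/*.log {
    daily
    rotate 14
    compress
    delaycompress
    compresscmd /usr/bin/pigz
    uncompresscmd /usr/bin/unpigz
    compressext .gz
}
```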


10 minutes ago, JacaBaca said:

Logrotate works with gzip by default; there are other implementations, like pigz (parallelized), which improve CPU utilization.

Which way, in your experience, tkaiser, may be best to achieve this?

 

If you're running at least kernel 4.4, I would let btrfs do the job and tell logrotate to stop compressing logfiles. That's BTW exactly what Armbian does when running off a compressed btrfs (which is our default when using btrfs): https://github.com/armbian/build/blob/7927a540da0698593f6786ceb699f0844ecf6981/packages/bsp/common/etc/init.d/armhwinfo#L74-L80
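For those not following the link: the effect is roughly to comment out logrotate's "compress" directives so transparent btrfs compression handles it instead (a sketch of the idea, not the literal Armbian code; requires root):

```shell
# Sketch: comment out bare "compress" directives so btrfs does the
# compressing instead (not the literal Armbian implementation)
sed -i 's/^\(\s*\)compress$/\1#compress/' /etc/logrotate.conf
for f in /etc/logrotate.d/*; do
    sed -i 's/^\(\s*\)compress$/\1#compress/' "$f"
done
```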

 

But let's focus in this thread on swap usage (or zram!) and maybe better open up a new thread discussing log compression/rotation strategies...


I chose to continue in this thread, as vi first prompted me, when log2ram was full, that it could not write to the swap file. Feel free to move me to the right thread :)

 

As some users may experience problems with logs that are heavy for Pi distributions, please explain how to disable log2ram permanently.

To my understanding, the btrfs filesystem commits writes to the SD card or eMMC based on the fstab setting, which you set by default to 600 seconds and which I copied for the SD card.

So anyway, log data is buffered (or cached) in RAM until the commit kicks in. Right?

 

My solutions for big logs are:

1. Cron a job using sed to clean useless log data, then pipe to xz (take care, as this is time-consuming with big files, and xz may be a memory killer with compression set to 8-9 or many threads):

sed 's/firewall.*src-mac//;s/NAT.*//;s/len.*//' mikrotik.2018.02.08.log | xz -8ev -M 800MiB > new.xz

2. Better, but a little more demanding: fiddle with the syslog-ng config (if you have an extensive log header or other data outside the MSG field, you have to work it out yourself).

If you have a remote syslog configured, add this rewrite rule to the log statement:

rewrite MT_log_subst {
    subst("<begin.*end>", "<substitution>", value("MSG"), type(posix), flags("global"));
    subst("<anotherregex>", "<anothersubst>", value("MSG"), flags("global"));
};

This handles useless data in logs on the fly, at the cost of some CPU attention.

More (maybe even too much) info in the link below, pages 349-365:

https://syslog-ng.com/documents/html/syslog-ng-ose-3.8-guides/en/syslog-ng-ose-guide-admin/pdf/syslog-ng-ose-guide-admin.pdf#index

 

Link to comment
Share on other sites

This topic is now closed to further replies.