JacaBaca

Members · 3 posts
  1. I'll continue in this thread, since vi was the first to tell me, when log2ram was full, that it could not write its swap file. Feel free to move me to the right thread. Since some users may run into problems with logs that are too heavy for Pi distributions, please explain how to disable log2ram permanently. As I understand it, the btrfs filesystem commits writes to the SD card or eMMC based on the fstab setting, which you set to 600 seconds by default and which I copied for the SD card. So log data is buffered (or cached) in RAM until the commit kicks in, right? My solutions for big logs are:

     1. A cron job using sed to strip useless log data, then piping to xz (take care, as this is time-consuming with big files, and xz can be a memory killer with compression set to 8-9 or with many threads):

            sed 's/firewall.*src-mac//;s/NAT.*//;s/len.*//' mikrotik.2018.02.08.log | xz -8ev -M 800MiB > new.xz

     2. Better, but a little more demanding: fiddle with the syslog-ng config (if you have an extensive log header or other data outside the MSG field, you will have to work that out yourself). If you have a remote syslog configured, add this rewrite to your log statement:

            rewrite MT_log_subst {
                subst("<begin.*end>", "<substitution>", value("MSG"), type(posix), flags(global));
                subst("<anotherregex>", "<anothersubst>", value("MSG"), flags(global));
            };

        This handles the useless data in the logs on the fly, though it takes some CPU attention.

     More (maybe even too much) info on pages 349-365 of https://syslog-ng.com/documents/html/syslog-ng-ose-3.8-guides/en/syslog-ng-ose-guide-admin/pdf/syslog-ng-ose-guide-admin.pdf#index
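     As a footnote, a minimal sketch of disabling log2ram and setting the btrfs commit interval (the service name and device path are assumptions; on some Armbian builds the service is called armbian-ramlog instead of log2ram):

            # stop log2ram now and keep it from starting at boot
            sudo systemctl stop log2ram
            sudo systemctl disable log2ram

            # /etc/fstab entry with a 600 s commit interval
            # (device path is an assumption; adjust for your board):
            #   /dev/mmcblk0p1  /  btrfs  defaults,noatime,commit=600  0  1

            # verify the options actually in effect after a remount
            mount | grep btrfs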
  2. Thanks for the good reading. I will use the forum search more, too. I realized my problem comes more from log2ram the minute I posted this. My bad. df, damn you (I used "btrfs fi df ." recently). Luckily I got another BPi for testing and managed to redirect the logs back to my office via PPTP. I cannot afford to shrink my log files, since they record packets with TCP SYN and TCP ACK/FIN flags for my clients. That creates about 1.5 GB of logs a day in human-readable form (steady, yet varying by the hour). A week ago I already found a way to clean them up a little with sed and compress them well, but I would rather hand the job to logrotate via cron at night, when the BPi is not bothered by anyone. So after the reading, my idea is to shut down the log2ram service, or somehow exempt this particular log from it, and rotate the data at night with logrotate while compressing it solidly. Meanwhile I copied the fstab entry Armbian ships for eMMC onto the SD card, to protect the card from too many writes and to keep space usage limited, reducing the commit time to 300 seconds. Then again, btrfs itself has a compression feature, with zlib as the default and LZO or zstd as alternatives. But I've seen the btrfs wiki FAQ go through some issues with balancing the filesystem and the other compression methods (as of 2014). From what I understood, it is not automated and may clog up or get corrupted in an unmanageable way: balancing needs to be done manually, and changing compression may end in complete loss of data. I'm not really asking for a ready-made solution, just for a direction on how to make such a service reliable. logrotate works with gzip by default; there are other implementations, such as pigz (parallelized gzip), that improve CPU utilization. Which way, in your experience tkaiser, may be best to achieve this? For redundancy, I plan to synchronize the files nightly between the two BPis and then send the compressed files to my home NAS via a temporarily mounted sshfs.
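     Following up on the nightly rotation idea, a sketch of a logrotate config, assuming the log lands in /var/log/mikrotik.log (the path, rotation count, and compressor are assumptions; pigz can be dropped in for parallel gzip-compatible output, xz kept for the best ratio):

            # /etc/logrotate.d/mikrotik
            /var/log/mikrotik.log {
                daily
                rotate 30
                dateext
                missingok
                notifempty
                compress
                compresscmd /usr/bin/xz
                compressext .xz
                compressoptions -6 -T 2
                postrotate
                    systemctl reload syslog-ng >/dev/null 2>&1 || true
                endscript
            }

     On Debian-based images logrotate runs from /etc/cron.daily, so shifting that to a night-time crontab entry (or calling logrotate on this file directly from cron) keeps the compression out of the busy hours.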
  3. A little update on overcommit, as it may cause a system crash. I deployed a BPi M2+ a couple of days ago at a remote location, to serve as an HTTP-manageable RADIUS server and to collect logs required by law. Soon enough syslog-ng stopped collecting logs, and the behaviour was, I suppose, caused by a swap failure: I could create files on the SD card, while on the internal eMMC it wasn't possible due to a write error on the swap file. I double-checked the swap:

     - fsck reported 0 errors and no corruption
     - I recreated the swap on the SD card and enabled it with swapon
     - for some reason I couldn't find the device by UUID ...

     So I followed the link tkaiser pointed to above and set vm.overcommit_memory to 2, which crashed my system. I'm not sure if it was due to vm.overcommit_ratio staying at 50, which against my 1 GB of RAM allows only 512 MB, plus the mere 128 MB allocated for swap. I realized that and tried to change the ratio (to 12 %), but it was already too late. So either I don't fully understand overcommit (more than possible) or I made a silly mistake. Good thing I'm driving there tomorrow. Treat this as a warning, not a complaint. I love your work, guys, especially tkaiser's practical approach to benchmarking. I would like to contribute more, but I'm working hard as a small ISP and can't spare the time (my background is quite different).
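     For the record, a sketch of the arithmetic that most likely bit me (the sysctl names are the standard Linux ones; the numbers are from my board):

            # with vm.overcommit_memory=2 the kernel enforces a hard limit:
            #   CommitLimit = SwapTotal + MemTotal * vm.overcommit_ratio / 100
            # on a 1 GB board with 128 MB swap and the default ratio of 50:
            #   128 MB + 1024 MB * 50/100 = 640 MB
            # anything committed beyond that makes new allocations fail
            grep -E 'Commit' /proc/meminfo    # shows CommitLimit vs. Committed_AS

            # test a change at runtime before persisting it in /etc/sysctl.conf
            sudo sysctl vm.overcommit_ratio=80
            sudo sysctl vm.overcommit_memory=2
            # revert to the default heuristic if services start failing
            sudo sysctl vm.overcommit_memory=0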