Everything posted by bedna

  1. Boring? To learn and have discussions? Strange sentiment.
     About bcachefs: for your sake, let's hope Linus Torvalds goes back on his comment that he and Kent will part ways in kernel 6.17, so you won't have to rely on dkms for your unstable filesystem. bcachefs is in beta and should not be used as a primary filesystem. Linux kernel mailing list: https://lore.kernel.org/all/CAHk-=wi+k8E4kWR8c-nREP0+EA4D+=rz5j0Hdk3N6cWgfE03-Q@mail.gmail.com/
     No, it's impossible to have the same UUID on two btrfs filesystems mounted at the same time; you can't even format a btrfs filesystem and give it the UUID of a mounted one, the kernel won't let you. See the btrfs documentation: https://btrfs.readthedocs.io/en/latest/Send-receive.html So you are lying for some reason. Maybe the same reason you "don't read links". xD
     This has nothing to do with "trust"; it has to do with how COW and send|receive work, which you don't seem to understand (or choose to ignore). And of course an rsync (suddenly you talk about rsync instead of btrfs send -p) will work, it just checks whether the files have changed and updates them if necessary, which is exactly what I have been saying is the better way to do backups the whole time. IMHO it's even better to just create the subvolume(s) and rsync from the first run altogether if you want a backup; send|receive is not necessary and is just slow (a sketch of the rsync approach follows below). And in the case of a complete backup including boot, like shrink-backup does by dd:ing the "boot sector" (which includes a potential boot partition), all that has to be done is make sure configs point to the new UUID (on rpi-derivative systems nothing has to be done, because they use PARTUUID, which, unlike UUID, can be the same on two systems).
     If you "don't want to click links" to learn, I won't pressure you. You already seem triggered for some reason. But for others that might read this, I hope my knowledge about backup strategies came through. Again, snapshots are perfect for rollbacks, but NOT for backups. Have a good Saturday.
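     A minimal sketch of the rsync approach argued for above, assuming the backup target is mounted at /mnt/backup (a hypothetical path; the exclude list is illustrative):
     sudo rsync -aAXH --delete \
         --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
         / /mnt/backup/
     The -aAXH flags preserve permissions, ACLs, extended attributes and hard links; --delete keeps the target from accumulating files removed from the source.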
  2. That does not change anything about the FIRST send you do, it will take just as long as a dd does, while rsync does it in 10% of that time. -p does the same thing to files that have already been rsynced as rsync does; the difference is rsync only looks at the data in the files, while a send|receive also syncs the entire structure of the filesystem, that's the whole point of the COW snapshot method. So if the parent is corrupted without you knowing it (which is very often the case), so will the child be. And as a rule of thumb in backups, you should have multiple backups; the 3-2-1 model is good guidance. The downside with snapshots is they inherit the flaws of the parent, which is why it is common knowledge (or should be) that snapshots should NOT be treated as backups but ONLY as restore points.
     No, a scrub will not notice anything until it's too late. A scrub is just a check of the integrity of the filesystem, so if a scrub notices something is wrong, the filesystem is already corrupted. So unless you also do a scrub EVERY time on the filesystem you are about to send from before sending, the receiving system will also become corrupted. A complete scrub of 2TB takes a while, probably a few hours even on an nvme.
     We are all different; I have all my userspace data on home, it seems odd to me to rely on a network to be able to access my userspace. It would be really annoying if something were to happen to networking during, for example, an update, or if the network interface stopped working on my desktop for some reason, maybe broken hardware in between like a switch or something. I also do a bit of AI stuff, and some models can be pretty darn big (for example some flux models are around 12GiB), so I want to be able to access them at the speed of an nvme. Other files like movies and such I keep on a network drive, I just have an NFS share on a rpi with a 4TiB spinning drive connected to it, that works just fine for me. That drive is ofc also backed up to another device with 260 incremental versions using urbackup.
     I'm not trying to convince you to use anything specific, I am simply trying to explain why relying solely on snapshots is not a great idea. I also think that having a restore via an img file is way easier than having only file backups in case of disaster, therefore I maintain shrink-backup. If for example my pihole dies, let's say the sd-card becomes completely unusable, I want as little downtime as possible. Writing an img file to another sd-card only takes a few minutes and the system is restored. Or even worse, if I were to utilize your system relying on a nas: if that nas dies and all my backups are on that nas, what do I do then? So as some friendly advice, I think you should reconsider your backup strategy if you want to be safe in case of complete disaster; you should always plan for the worst case scenario. Here is a pretty good read on why snapshots are not considered a good strategy for backups: https://community.veeam.com/blogs-and-podcasts-57/data-backup-basics-i-why-snapshots-are-not-backups-understanding-the-differences-6074
     I also rely on snapshots as restore points, but I would never rely on them alone. As an example, by using shrink-backup on my SBCs I create an img file, I compress that img file, and that file then gets backed up with urbackup to another device (I actually exclude the img file itself from the backup), so when I update that img with shrink-backup, I rename the last compressed file with an "-old" suffix and create a new compressed file.
     That way I always have 2 backups accessible directly on my desktop computer. Those files are then again backed up by urbackup and are kept at least 3 months (260 backups is the actual threshold, so it depends on how many backups are done per day, but a minimum of 3 months if my computer were to run 24/7). I also store one of those urbackup backups per month for 3 years at a location outside my house, sent encrypted via the internet to another physical location at a friend's house (this could also be cloud storage). In case of, for example, a fire, no matter how many backups I have at the house, they will all burn and I lose everything. A disaster only has to occur once, and by then it's too late. 3-2-1 backup strategy.
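     For reference, a minimal sketch of the incremental send|receive being discussed; the paths are hypothetical, and both sides must already hold the read-only parent snapshot from an earlier full send:
     # create today's read-only snapshot next to yesterday's
     btrfs subvolume snapshot -r /data /data/.snapshots/day2
     # send only the difference against the parent (-p) to the backup filesystem
     btrfs send -p /data/.snapshots/day1 /data/.snapshots/day2 | btrfs receive /mnt/backup/.snapshots/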
  3. Thanks for the feedback, if that was the intention. If you try the version on the testing branch, that won't be an issue, because 'btrfs du' (it is just slow and unnecessary) is no longer used. xD There are actually more changes coming for btrfs in the next release, even looking into conversion from ext4 to btrfs. Oddly enough, I can backup my arch system with boot mounted at /boot/efi using the script on the main branch; wonder why yours doesn't, so clearly distro DOES matter. But it soon won't matter for the problem you had, because it will be changed anyway. Besides, shrink-backup is not really intended to be used on a desktop computer, it's for SBCs, which usually are way less convoluted than a desktop setup. For ext4 and f2fs, shrink-backup only covers root and boot partitions (if a boot partition exists); btrfs is an exception where the script looks up existing subvolumes and creates them. As of now, the script also assumes every subvolume starting with @ is a top-level volume, which does not actually have to be the case, but changes are coming.
     Interesting to see you use /boot/efi (discouraged by linux kernel standards) and not /boot as the boot partition, wonder why? I do the same, because I use timeshift, which works similarly to snapper. Timeshift also creates entries directly in my grub menu, letting me boot into the ro snapshots and restore from there if needed. For others reading this: this is because if you roll back a snapshot where you have changed things in the boot process and files inside /boot, you kinda want the files inside /boot to be rolled back too or your system won't boot, hence letting /boot be part of the root subvolume and mounting the boot partition on /boot/efi instead.
     My point was that the problem you mentioned, that the database might become corrupted, is not solved by using snapshots and btrfs; a database has to be closed and locked to be 100% sure it won't become corrupted in ANY copy process. (Besides the fact that the database in the case of pihole is only a log and has nothing to do with the integrity of the application itself.)
     I completely disagree with that. If I revert a snapshot because I messed something up on my system or something went wrong in an update for example, which happens once in a while, I do NOT want my home to be overwritten by an old snapshot. I also want to be able to read old log files to figure out WHAT went wrong, so /home & /var/log in separate snapshots are absolutely a good thing for me. On an SBC I might agree that separate snapshots are a bit overkill, but not in any way a bad idea. But if you actually use snapshots as restore points on an SBC, I argue /var/log is a good thing to have on a separate snapshot.
     A snapshot takes no time at all, 1s I would guess, but what about the send|receive? Because a snapshot is completely worthless if the filesystem you took the snapshot from becomes corrupt; COW (copy on write) means you use the exact same data. That is why it is (or should be) general knowledge that a snapshot is NOT A BACKUP, but it becomes a file backup if you send|receive it to a completely different filesystem (see the sketch below). So the interesting part here is the send|receive, because OP asked about backups, and that takes about as long as a dd would take, while rsync probably does the file transfer in about 10% of that time.
     Also, again, shrink-backup takes everything, including boot process parts, and makes an img file at small size (for two reasons: saving space, and the ability to use the img file on a smaller storage device, moving from a 64GiB to a 32GiB sd-card for example) that can be written directly, instead of also having to do a bunch of pretty complicated restore processes. And quite a few armbian images utilize u-boot without a boot partition, with header data written to the disk before the root partition; shrink-backup covers that as well. Generally, I treat snapshots as they are intended, as snapshots that can be used as RESTORE POINTS, and I use a backup application to make regular BACKUPS. On my desktop computer I use clonezilla, which I have added a menu for in grub to easily boot into (you can make custom menus and boot directly into img/iso files in grub), and on all my SBCs I obviously use shrink-backup. (For those who don't know, I am the maintainer of shrink-backup.) I also have 260 "individual" incremental file backups of my home (and some other locations like /etc and my "file disk"), one every 6h, using urbackup running on a rpi4 with a 6TB disk connected to it. It utilizes btrfs, so the "individual" means they are just snapshots and only changed files actually take up more space. Lastly, if you don't actually utilize the functionality of running a system with multiple subvolumes, you don't even need to have a subvolume for /, you can just write files directly to the btrfs filesystem.
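     To make "it becomes a file backup" concrete, a sketch of the initial full send to a different btrfs filesystem (hypothetical paths; later runs can then send only differences with -p):
     # bootstrap: a read-only snapshot sent in full to another btrfs filesystem
     btrfs subvolume snapshot -r /home /home/.snapshots/base
     btrfs send /home/.snapshots/base | btrfs receive /mnt/external/.snapshots/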
  4. I have a pihole on one of my devices that I backup using shrink-backup. When pihole released 6.0 there was a bug that rendered my system unstable, so I found it easier to revert back to 5.x until they fixed the bug. The restore worked without any issues whatsoever. The pihole database is only a db file and the records between the backup and the present day are obviously lost, but as for the integrity of the system, it's only a query log, so it has no impact on pihole itself. As a matter of fact, you can run pihole without any db history at all. And IIRC the database history is 90 days by default, not one year.
     A btrfs send|receive is only a file backup, it will never handle the boot process or anything like that. You will also need to do it on every subvolume individually, since send|receive only takes one subvol, not even nested volumes. You will also have to make the source subvolume read-only before issuing the send, but I guess that is what you mean by saying "make atomic snapshots". It's also very slow compared to rsync. Besides, using btrfs send|receive does not solve the issue that copying a database that is in use might corrupt it; shutting down the application using the database before making a backup does. As a matter of fact, it might just make things even more complicated, since you have to make the database (subvolume) read-only during the send, so the application might freak out from not being able to write to the database (unless you snapshot it first like you mention, but then you still have the same problem that an open database might break after being backed up). btrbk solves this (I think) by first creating a read-only snapshot of the subvolume and then sending that, because you can't btrfs send a read/write subvolume. But again, that does not solve the issue that the database is open. https://btrfs.readthedocs.io/en/latest/btrfs-send.html
     Fun fact: shrink-backup can backup btrfs devices. https://github.com/UnconnectedBedna/shrink-backup?tab=readme-ov-file#btrfs
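     A minimal sketch of that btrbk-style flow, with hypothetical paths and myapp.service as a stand-in; the -r flag matters because btrfs send refuses a writable subvolume:
     # stop the service first if it keeps a database open, or the copy may be inconsistent
     systemctl stop myapp.service
     btrfs subvolume snapshot -r /srv/app /srv/.snapshots/app-pre-backup
     systemctl start myapp.service
     # the read-only snapshot can now be sent to a different btrfs filesystem
     btrfs send /srv/.snapshots/app-pre-backup | btrfs receive /mnt/backup/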
  5. Well that depends, have you tried adding more space to exclude the possibility that the img actually can't fit all the data? If you are using snaps without excluding the containerized mounts on one of them and not the other, it seems like a very logical conclusion that the img actually runs out of space and the "not enough space" is accurate in this situation.
     Edit: I have never used snaps so I don't know, but I think snaps create containerized mounts every time you boot, and you probably don't want that data in your backup anyway since it will never be used. Others can surely shed more light on this than my wild guess.
     Edit 2: This made me a bit curious about how snaps actually work, whether they randomize the mountpoints and such, but after a tiny bit of searching I found this: https://www.howtogeek.com/660193/how-to-work-with-snap-packages-on-linux/#installing-snap-packages So just editing exclude.txt, adding /snap/*, and then adding the -t option when running shrink-backup should solve this issue (see the sketch below). As a matter of fact, I should probably add that as a default in the exclude.txt file on the repo, and in the script when not using the exclude file, because I don't think you ever want that data on the image... Feedback about this would be highly appreciated before I implement that change: does snap create the directory for each SquashFS within /snap if it does not exist, or will it break if it is excluded like I suggest above?
     Edit 3: According to that article, each SquashFS is mounted directly to /snap, so excluding /snap/* (keeping the actual /snap directory but excluding all content) should work, right? Feedback please, I know there are some of you who know how this works, @eselarm maybe?
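     A sketch of that workaround, assuming you run from the cloned repo directory:
     # add the snap mountpoints to exclude.txt (keeps /snap itself, excludes everything under it)
     echo "/snap/*" >> exclude.txt
     # run with -t so exclude.txt is applied, plus whatever options you normally use
     sudo ./shrink-backup -t <whatever options you use>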
  6. Please use github for issues: https://github.com/UnconnectedBedna/shrink-backup/issues Don't forget to run shrink-backup with the -l option and provide the log with the report. Also, please try the testing branch first, see below.
     I want to point out there are a ton of reasons for a broken pipe error with rsync that gets presented as "no space left": the machine running out of memory, for example, even though it sounds strange; short network disconnections are another. So most likely these errors are not related to shrink-backup. There have been some changes made to the rsync operation on the testing branch though, primarily an extended --timeout in case it is network related, so please try that out first before creating an issue on github.
     cd <directory where you git cloned shrink-backup>
     # switch to testing branch
     git checkout testing
     # run shrink-backup
     sudo ./shrink-backup <whatever options you use>
     # if you want to switch back to main
     git checkout main
     If you used a method other than git to acquire the application, you can find the testing branch here: https://github.com/UnconnectedBedna/shrink-backup/tree/testing
     Edit: A solution in a situation like that would be to edit exclude.txt, add the paths that should be excluded, and then run shrink-backup with the -t option. Please see: https://github.com/UnconnectedBedna/shrink-backup/tree/main?tab=readme-ov-file#-t-excludetxt
  7. It works.
     On armbian: in /boot/armbianEnv.txt:
     extraargs=ipv6.disable=1
     Worked on my orangePi-PC2 at least.
     On rpiOS: in /boot/firmware/cmdline.txt, add ipv6.disable=1 to your console line, for example:
     console=serial0,115200 console=tty1 root=PARTUUID=583925ff-02 rootfstype=ext4 fsck.repair=yes ipv6.disable=1 rootwait
     You can check if it works:
     $ ls /proc/sys/net/ipv6
     ls: cannot access '/proc/sys/net/ipv6': No such file or directory
  8. Wow, that was fast! But what about ipv6? It works on the community image for opi-PC2 as I mentioned but not here.. Edit. Nvm, I see there is a cmdline.txt file in there.. Thank you for the fast reply!
  9. rpi4b8gb device. I am trying to disable wifi, bluetooth and ipv6 like I do on raspberry pi OS, but it won't play nice with me.
     Bluetooth & wifi
     I read https://docs.armbian.com/User-Guide_Armbian_overlays/ (it says it's WIP so that might be why?) and looked for the dtoverlays I use on rpiOS:
     $ ls -1 /boot/dtb/overlays/disable* | cut -d / -f 5
     disable-bt.dtbo
     disable-bt-pi5.dtbo
     disable-emmc2.dtbo
     disable-wifi.dtbo
     disable-wifi-pi5.dtbo
     $ ls -1 /boot/dtb-6.12.31-current-bcm2711/overlays/disable* | cut -d / -f 5
     disable-bt.dtbo
     disable-bt-pi5.dtbo
     disable-emmc2.dtbo
     disable-wifi.dtbo
     disable-wifi-pi5.dtbo
     They are there, ie disable-bt and disable-wifi, so I edited /boot/armbianEnv.txt and added:
     overlays=disable-bt disable-wifi
     But after reboot they are still active (see the wifi interface below about ipv6):
     $ systemctl status bluetooth.service
     ● bluetooth.service - Bluetooth service
          Loaded: loaded (/lib/systemd/system/bluetooth.service; enabled; preset: enabled)
          Active: active (running) since Thu 2025-06-05 18:17:32 CEST; 4min 3s ago
            Docs: man:bluetoothd(8)
        Main PID: 610 (bluetoothd)
          Status: "Running"
           Tasks: 1 (limit: 8723)
          Memory: 3.1M
             CPU: 102ms
          CGroup: /system.slice/bluetooth.service
                  └─610 /usr/libexec/bluetooth/bluetoothd
     Yes, I know, I can just disable and mask the service, but this way should work too, right?
     IPV6
     And for ipv6 I edited armbianEnv.txt and added:
     extraargs=ipv6.disable=1
     But still available (you can also see wifi not disabled):
     $ ls -l /proc/sys/net/ipv6/conf/all/disable_ipv6 # this should not be available
     -rw-r--r-- 1 root root 0 Jun 5 18:19 /proc/sys/net/ipv6/conf/all/disable_ipv6
     $ ip addr
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
         inet 127.0.0.1/8 scope host lo
            valid_lft forever preferred_lft forever
         inet6 ::1/128 scope host noprefixroute
            valid_lft forever preferred_lft forever
     2: end0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
         link/ether <redacted> brd ff:ff:ff:ff:ff:ff
         inet 192.168.99.60/24 metric 100 brd 192.168.99.255 scope global dynamic end0
            valid_lft 42013sec preferred_lft 42013sec
         inet6 <redacted>/64 scope link
            valid_lft forever preferred_lft forever
     3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
         link/ether <redacted> brd ff:ff:ff:ff:ff:ff
     The strange thing is that on my OrangePi-PC2 disabling ipv6 this way works... Any advice?
  10. I'm no expert, but I think what you are talking about here is a function that keeps the log in memory and writes it to disk at shutdown or at timed intervals, to minimize writes to disk. I would assume it involves some kind of mount --bind (or tmpfs mount) at boot, but I honestly don't know; see the sketch below. Please create an issue on the github and continue the conversation there if you still think this is something the script is responsible for, and when doing so, please follow the instructions in my previous post here on the forum. As for the "keep the script quiet", sure, make a feature request on the github and I'll look into it, does not sound too complicated to fix for you tbh.
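     I am guessing at the mechanism here, but a minimal sketch of the idea (similar to what log2ram does; paths and sizes are illustrative):
     # fstab entry: mount a tmpfs over /var/log so log writes land in RAM
     tmpfs /var/log tmpfs defaults,noatime,size=64M 0 0
     # something (a systemd timer or shutdown hook) must then copy the RAM copy
     # back to a disk-backed path, or the logs are lost on power cut:
     rsync -a /var/log/ /var/log.hdd/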
  11. Please create an issue on the github, providing the logfile and describing your workflow in detail. Make sure the script is up to date, because I think in the past hdd.log WAS excluded by default. Also provide an example run and show the file is missing by creating a fresh backup, then use the `--loop` function of the script, mount the loop, and run ls to show the file is missing; see the sketch below. Even if the directory is in ram or similar, and not excluded, it should be in the backup img.
     Edit. Oh, I see the claim is the file is missing AFTER the autoexpansion, well.. Please still do the original request so there are logs to study (run the script and include the -l option). If the file actually gets removed in the expansion, we have to look deeper into it, because the autoexpansion is done using armbian's systemd autoexpansion.
     Edit 2. It could also be interesting to see if the file "survives" if you include the -e option, ie do not autoexpand on boot.
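     A sketch of the verification I am asking for; the img path is hypothetical, /var/log/hdd.log is an assumed location for the file, and -P tells losetup to map the partitions inside the image:
     sudo losetup -fP --show /path/to/backup.img   # prints the loop device, e.g. /dev/loop0
     sudo mount /dev/loop0p2 /mnt                  # p2 assumed to be the root partition here
     ls -l /mnt/var/log/hdd.log                    # is the file present in the backup?
     sudo umount /mnt && sudo losetup -d /dev/loop0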
  12. Found the login credentials, let's continue on the github where we started. Might as well make the disclaimer here too. The "original" Bedna has passed away. I, as an extremely close relative, have taken over. (Bedna actually came from "Big Edna", a character in the Weird Al movie UHF, and we kinda shared that username, twins you know.) All credentials for his accounts have been passed on to me, so his spirit will live on in shrink-backup. There is TONS of documentation left behind. Please do not make a thing about it. I just wanted to let people know that this is still in good hands and will continue to be supported if bugs are found and if changes need to be made. But as for getting it into armbian as armbian-backup, that will have to take a step back. He had started on it, but if I do that, it is better to start from scratch again, and my time is a bit limited...
  13. Yes, I am working on that. Whiptail is what I am educating myself on more closely right now, but it will most likely be its own script. Trying to implement it straight into the main script would, I think, become messy pretty fast, and I don't think anybody is interested in that. But yes, it is happening, sometime.. xD I was honestly hoping it would already be a reality, but my health keeps putting up roadblocks for me.
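     For the curious, a minimal whiptail sketch of the kind of menu I am playing with (purely illustrative, not actual shrink-backup code):
     #!/bin/bash
     # whiptail prints the chosen tag on stderr, hence the fd swap at the end
     choice=$(whiptail --title "shrink-backup" --menu "Choose an action" 15 60 3 \
         "backup"  "Create a new img backup" \
         "update"  "Update an existing img" \
         "restore" "Write an img to a device" \
         3>&1 1>&2 2>&3)
     echo "You picked: $choice"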
  14. Try again. Thank you! OP now updated to reflect the current version.
  15. I am not sure why I do not get email notifications when posts are made here, I thought I fixed that... (lmao, I just realized I had for some reason unticked "follow topic") Thank you so much, both of you, for your feedback!
     -------------------------------------------------------------------------------------------------------
     shrink-backup v1.2 released
     As per usual, I cannot change the OP, so for full and updated information please visit https://github.com/UnconnectedBedna/shrink-backup
     A smaller release this time:
     - Added a curl install method for users that prefer to have the application and config files in a static location, please see the wiki
     - Various QOL updates regarding the interface
     - Bug fixes
     Thank you for using shrink-backup!
  16. Late to the party... Yes. https://github.com/UnconnectedBedna/shrink-backup
     - Make a working install
     - Make an img with shrink-backup
     - "burn" that img and use it on another device
     Note that the UUIDs will ofc be the same on both systems (see the sketch below if that matters for your use case).
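     If both copies ever need to be attached to the same machine at once, a sketch of regenerating the UUID on the copy (assuming an ext4 root on partition 2 of the target device; adjust to your layout):
     sudo tune2fs -U random /dev/sdX2   # give the copied ext4 root a new random UUID
     sudo blkid /dev/sdX2               # shows the new UUID
     # then update /etc/fstab and any bootloader config on the copy to match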
  17. Please see this post (last post in the thread) and tell me if you need/want something from me... After running the update from 6.1 > 6.6 it freezes at boot. I have access to backup img files from before this update was run.
  18. @going Interesting....
     update-initramfs: Armbian: Symlinking /boot/uInitrd-6.6.31-current-sunxi64 to /boot/uInitrd
     '/boot/uInitrd' -> 'uInitrd-6.6.31-current-sunxi64'
     update-initramfs: Armbian: done.
     Remove unused generated file: /boot/initrd.img-6.1.63-current-sunxi64
     Remove unused generated file: /boot/uInitrd-6.1.63-current-sunxi64
     Armbian: update last-installed kernel symlink to 'Image'...
     '/boot/Image' -> 'vmlinuz-6.6.31-current-sunxi64'
     Armbian: Debian compat: linux-update-symlinks install 6.6.31-current-sunxi64 boot/vmlinuz-6.6.31-current-sunxi64
     I: /vmlinuz.old is now a symlink to boot/vmlinuz-6.6.31-current-sunxi64
     I: /initrd.img.old is now a symlink to boot/initrd.img-6.6.31-current-sunxi64
     I: /vmlinuz is now a symlink to boot/vmlinuz-6.6.31-current-sunxi64
     I: /initrd.img is now a symlink to boot/initrd.img-6.6.31-current-sunxi64
     Armbian 'linux-image-current-sunxi64' for '6.6.31-current-sunxi64': 'postinst' finishing.
     I wiped the sd-card, but I have access to a backup from before running this update.. Do you guys need/want something from me?
  19. Yeah, I didn't react to the comment about whether I was using bullseye; I was 100% sure I had a bookworm img installed, but that might not be the case. I must have dreamt I installed the only opiPC2 img I had downloaded on my computer. WDYM? I updated that system at least once a week up until it no longer worked. It worked flawlessly up until I made the post. Thank you for providing feedback, but I will reinstall with the latest community img and see what that leads to.
  20. Yeah, I didn't react to the comment about whether I was using bullseye; I was 100% sure I had a bookworm img installed, but that might not be the case. Thank you for providing feedback, but I will reinstall with the latest community img and see what that leads to.
  21. Oh crap! I missed these replies, I must make sure to activate email notifications, I'm sorry! I honestly just left it unupdated; it's not a huge issue for me, I don't really use it for anything other than testing a script. The img it was installed from was "Armbian_community_24.5.0-trunk.93_Orangepipc2_bookworm_current_6.6.18_minimal" and I am aware it IS a "community release", that's why I didn't put a massive amount of investigation into it. Close.. "bedna".. xD But yes, bedna is UID 1000, the ONLY sudo user. 1003 is a system user, no home, no login shell IIRC.
     Edit. I made a check, and it actually is supposed to have a home in /home/unifi (MY BAD.. xD):
     unifi:x:1003:1003:System user for Unifi,,,:/home/unifi:/usr/sbin/nologin
     So that is on me, I probably forgot to use the --system option or something; it is not a big deal and most likely not connected to the network failing at boot anyway. And the zsh thing is just super strange, I never use it even though I know it comes with the img. (I now realize the option in the installation just selects the shell; there absolutely is a zsh installed on the opi, I have just not noticed it.) I will try with a fresh image and come back with results, when I get time...
     Edit 2: https://github.com/armbian/build/blob/2a2e609e3c5e55404759ea9a2cf010b268c2f356/lib/functions/compilation/packages/armbian-zsh-deb.sh#L49C3-L49C129
     awk -F'[:]' '{if (\$3 >= 1000 && \$3 != 65534 || \$3 == 0) print ""\$1":"\$3" "\$6"/.zshrc"}' /etc/passwd | xargs -n2 chown -R
     is the "culprit". In this situation it IS my bad, as shown above; not sure what I did, because the directory was not created but the entry in /etc/passwd clearly exists. A solution would be to run [ -d <user-home path found in passwd> ], but tbh, babysitting at that level is imho a bit too far... This is on me.. But the system freezing at boot trying to connect to the network, I will look deeper into.
  22. ==> shrink-backup 1.1 release <==
     With this release, versioning changes from x.x.x to x.x.
     The most noticeable change is the UI, with coloring, but small efficiency increases to the code have also been made. Support for dietPi and webmin has been added. There is also a new way to convert your system's ext4 filesystem into f2fs on the img file. The downside is that f2fs, unlike ext4, can not be resized while mounted, so the user has to manually expand the img to cover the entire storage medium before booting (see the sketch below). Increasing the size while updating the img is also not yet covered, but should be doable, so this feature will be implemented in a future release.
     A loop function that retries 3 times after looping the img file has been implemented, because bug reports started coming in about the UUID on the loop not being found, therefore failing the backup. Giving the system some time seems to resolve the issue. This seems to be related to the img file being located on network storage. Usually, but not always, over wifi.
     Features in the release:
     - UI improvements in the form of coloring and other formatting
     - New functionality: --f2fs, convert ext4 on root into f2fs on the img file
     - Added support for f2fs
     - Added support for DietPi
     - Added support for webmin
     - --version option added
     - Added .gitignore to the github repo for users that change exclude.txt and want to use git pull without issues
     Thank you for reading.
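     One possible way to do that manual expansion before first boot, assuming the root filesystem is partition 2 inside the img and growpart (from cloud-utils) is available:
     truncate -s 32G backup.img             # grow the img file to the target size
     sudo losetup -fP --show backup.img     # attach it, prints e.g. /dev/loop0
     sudo growpart /dev/loop0 2             # grow partition 2 into the new space
     sudo fsck.f2fs -f /dev/loop0p2         # f2fs must be checked before resizing
     sudo resize.f2fs /dev/loop0p2          # expand the filesystem to fill the partition
     sudo losetup -d /dev/loop0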
  23. I ran an update on my orangepi PC2.
     Setting up armbian-zsh (24.5.1) ...
     cp: cannot create directory '/home/unifi/.oh-my-zsh': No such file or directory
     cp: cannot create regular file '/home/unifi/.zshrc': No such file or directory
     chown: cannot access '/home/unifi/.oh-my-zsh': No such file or directory
     chown: cannot access '/home/unifi/.zshrc': No such file or directory
     Which is to be expected to fail, because that is a system user I accidentally gave an above-1000 UID. Unifi is UID 1003; my regular user bedna (1000) is the owner of /home/bedna. I don't mind, I don't use zsh anyway, but figured you would want to know about this.
     Edit: I spoke too early, the update actually breaks the system. It seems to be network related, because it gets stuck at the network step if I connect a display to the device. I have restored a backup and retried 2 times, same thing, so there is presumably something wrong on your side here. This is what the terminal gave during the update, and as you can see, it looks like it all goes smoothly; something in the firmware or kernel?
  24. Except what? You open a floodgate like that, you have to explain to OP and hold his hand in how to find out which users/groups are supposed to own which files. And since I have no idea what OP has installed on his system, how he did it, and what user he was when he did it (sudo and root are NOT the same), I recommend OP reinstall and do everything the correct way.
     Example from a system with docker running, nothing out of the ordinary, 3 containers set up the correct way: user in the docker group and never using sudo with docker.
     $ sudo find . -group root -printf '.' | wc -c
     153
     153 files/directories just for the root group inside ~/. And this is only a quick search for the root group; there are obviously other users and groups owning a whole bunch of stuff too. Probably a GREAT idea to "sudo chown -R 1000:1000 ~/"...
     Remember, OP stated:
     And this is just an example of what can happen. It blows my mind that an armbian admin thinks blindly taking ownership is a good idea... 😮
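     Before even considering a chown, a sketch of how to see what inside home is NOT owned by the regular user (uid 1000 assumed, as in OP's case):
     # list owner, group and path for everything under ~ not owned by uid 1000
     sudo find ~ ! -uid 1000 -printf '%u:%g %p\n' | less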
  25. A friendly warning about doing the above: MAKE SURE YOU DO NOT CHANGE OWNERSHIP OF FILES IN YOUR HOME THAT SHOULD STILL BELONG TO ROOT OR OTHER USERS/GROUPS!!!! As I stated before, just blindly changing everything in your userspace to belong to you MIGHT work, but I recommend against it, since this seems to be something that happened immediately after you wrote the img. The safest way is to reinstall and follow the correct procedure.