legogris


  1. So, this is how it works (and advance apologies if I misread the conversation and you're talking about something else): First off, if you want to talk semantics, SHA-1 is technically a cryptographic hash function, not a checksum algorithm. One could even argue that it's overkill for this use case and that an algorithm made for checksums, such as CRC, is more appropriate (less secure, but by far enough to detect random read or write errors, and more performant).

The data on the card is read bit by bit (well, on a lower level the reads are buffered for performance reasons, but it makes no practical difference otherwise). The data is then put through the SHA-1 hash function, which produces a 160-bit hash. A single changed bit produces a completely different hash. The odds of the hashes matching when even a single bit differs are 1 in 2^160, or roughly 1 in 10^48. For practical purposes, low enough to be impossible. At those odds, you might as well start thinking about the probability that the read during your verification step gives you the expected output due to a read error exactly matching the inverse of the write error, RAM errors, or cosmic radiation.

If you're concerned about well-funded adversaries with a huge dedicated compute cluster deliberately giving you a fake image with a matching hash, then you might want to look at more cryptographically secure hash functions - in which case replace the `sha1sum` command above with the slower but more secure `sha512sum` (512 bits, or 1 in ~10^154). The chances of a collision are small enough that on average, with all the world's current computing power, we'd be talking millions of years. The number of atoms in the entire universe is believed to be around 10^80. Pick two random atoms in the known universe, and the odds of those being the same atom are still vastly higher than SHA-2 producing the same hash after a write error.
TLDR; For your purposes, if the checksum validates, so does the written data (given that the checksum verification here is made between the data read back from the written card and the source file, and not just between the source file and a website). Etcher did CRC32 until 2018 and is now on SHA512. Looking at the source, usbimager actually reads both the source and the target into memory and does a memcmp (a straight-up byte-by-byte comparison): https://gitlab.com/bztsrc/usbimager/-/blob/master/src/main_libui.c#L147 These are all valid approaches; the only caveat with the last one is that you need to be able to fit 2x the image size into RAM during the validation.
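Streaming the comparison avoids that 2x-RAM caveat entirely. A minimal sketch, assuming GNU coreutils/diffutils (`cmp -n` limits the comparison to the image size; `verify_copy` is a hypothetical helper name, and image.img / /dev/sdX are placeholders):

```shell
# Stream-compare the first N bytes of the target device against the source
# image, where N is the image size. cmp reads both inputs sequentially, so
# nothing close to the full image ever sits in RAM at once.
verify_copy() {
    img=$1
    dev=$2
    cmp -n "$(stat -c '%s' "$img")" "$img" "$dev"
}

# Usage (placeholders): verify_copy image.img /dev/sdX && echo identical
```

`cmp` exits 0 when the ranges match and nonzero on the first differing byte, so unlike a hash it also tells you the offset of the first mismatch.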
  2. Yes - if those two commands return the same output, it means the data in the img file and on the SD card have the same SHA-1 hash (and therefore contain the exact same bytes).
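For the skeptical, the single-bit sensitivity mentioned in post 1 is easy to see for yourself: 'h' is 0x68 and 'i' is 0x69, exactly one bit apart, yet the two digests share nothing recognizable:

```shell
# Hash two inputs that differ in exactly one bit ('h' = 0x68, 'i' = 0x69)
# and observe that the SHA-1 digests are completely different.
printf 'hello' | sha1sum
printf 'iello' | sha1sum
```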
  3. I just use vanilla Linux tools. To write the image:

$ dd if=image.img of=/dev/sda bs=4M status=progress && sync

To verify the sha1sum:

$ head -c $(stat -c '%s' image.img) /dev/sda | sha1sum
$ sha1sum image.img
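The two verification commands can be wrapped into a single check. A minimal sketch, assuming GNU coreutils (`verify_sha1` is a hypothetical helper name; image.img and /dev/sda are placeholders):

```shell
# Hash the source image and the first image-sized chunk of the device,
# then compare the two SHA-1 digests. Succeeds (exit 0) iff they match.
verify_sha1() {
    img=$1
    dev=$2
    size=$(stat -c '%s' "$img")
    sum_img=$(sha1sum "$img" | cut -d' ' -f1)
    sum_dev=$(head -c "$size" "$dev" | sha1sum | cut -d' ' -f1)
    [ "$sum_img" = "$sum_dev" ]
}

# Usage (placeholders): verify_sha1 image.img /dev/sda && echo OK
```

Limiting the device read to the image size with `head -c` matters: the card is almost always larger than the image, and hashing the whole device would mix in whatever bytes lie past the written data.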
  4. Awesome, thanks, I somehow missed that despite searching around! Maybe you want to put that in the top post :) (For anyone else: https://github.com/150balbes/Build-Armbian)
  5. @balbes150 Have you, or do you have any plans to release the source (/config/resources and docs necessary to rebuild) for your images? It would be a game-changer if we could do reproducible builds, or at least compile ourselves.
  6. Cool. Already placed an order last week, but FriendlyELEC seems hit by coronavirus or something. 7 days now and zero communication despite reachouts over e-mail, forum PMs and phone calls. If it weren't for their track record and the lockdown in Guangzhou, I'd just assume I'd been scammed. Wondering if I should do a chargeback or keep waiting :/ EDIT: What do you know, I just logged in on their webshop and the order was marked as shipped today, so I guess things are moving and they're just overwhelmed and low on English-speaking staff.
  7. Nice perk of seeding: Since there is such a large number of torrents targeting individual boards and chipsets, sorting torrents by ratio gives nice statistics on board popularity and trends. Can be helpful for seeing which boards to check out more closely.
  8. Ok, cool. Was just thinking we may be a bunch of people with Gbit/s downlinks who will occupy bandwidth that would better serve those actually downloading releases for themselves, and that it might be hard to get people to change the setting once they've deployed, so better to nip it in the bud, so to speak. But I guess that's just premature optimization, so nvm.
  9. I don't really see what the user's bandwidth has to do with this..? Initial seeders get hit equally hard by 100Mbit/s downloads regardless of whether that user is on a 100Mbit/s or 1Gbit/s connection?
  10. I was mostly thinking to ease the load on the initial seeders rather than for the people using the script. 10Mbit/s would be a sane default?
  11. First batch of torrents on my new seed box. Maybe it would be a good idea to modify the script to limit download speed a bit by default (so the seeders don't get overloaded by all of us downloading at top speed at release; there's a bit of a balance here, just a thought).
  12. Did anyone run Ethernet/Wi-Fi benchmarks/speed test on any of these boards?
  13. @gdm85 I'm considering this board as a wifi hotspot and I'm curious what kind of speeds you see over Ethernet / Wi-Fi?
  14. Thanks for the input, guys! I'll order another board and run some benchmarks (and sorry for getting x86 in here xD) @Daniel Denson What case do you use for this with disk(s)?
  15. I'm currently on the hunt for the cheapest decently reliable board to use for building a GlusterFS cluster. The requirements are simple: GbE, a SATA port, arm64, preferably low power. While the Helios64 is obviously the way to go for a NAS when it lands, it's total overkill for a setup like this, and I just realized that the ODROID HC2 won't fly due to being 32-bit only. Most other 64-bit boards require expansion boards for SATA, which drags up the total cost.

Finally I realized that the PC Engines APU boards fit all the criteria and come with what I understand to be performant GX-412TC CPUs. This one, for example, for 86EUR (maybe a bit on the pricey side, but I haven't actually found better): https://www.pcengines.ch/apu2d0.htm These boards are generally only used as router boards, and I'm actually already using one as a router (which unfortunately prevents me from doing tests myself right now :P). Does anyone have any idea whether this is a Bad Idea or worth a shot? Looking at the specs, I don't see an obvious reason why it wouldn't work.

EDIT: While I guess it's an interesting topic to use these boards for other purposes, I realize now that the Espressobin looks more suitable at a smaller size and lower cost at ~60EUR + shipping, so I think that's the best bet for now.

EDIT2: Seems like GlusterFS has higher requirements than I thought and performs poorly on boards like these; this user got a measly 35Mbps (albeit with 5 disks): https://www.reddit.com/r/DataHoarder/comments/92zwct/espressobin_5drive_glusterfs_build_follow_up/. Red Hat recommends 16GB minimum, but surely it could be reasonable for home use with less... https://access.redhat.com/articles/66206 The APU has 2 cores instead of 4 and comes in a 4GB version versus the Espressobin's 2GB, so maybe it's the better alternative after all.