legogris
-
Posts
42 -
Joined
-
Last visited
Reputation Activity
-
legogris reacted to Werner in Armbian (Debian Bulleye) - supported or not?
Yes, Bullseye will be supported while Buster phases out, but it may take a few additional months to get things prepared.
-
legogris got a reaction from Bagel in Odroid HC4 - No network connection after boot
This sounds similar to the issue where eth0 comes up during initramfs but fails to come up after startup.
See if this solves it for you:
ip link set eth0 down; sleep 1; rmmod realtek; sleep 1; modprobe realtek; sleep 1; ip link set eth0 up
(Then do DHCP / add IP and routes.)
Most of that delay is probably unnecessary; I noticed it would sometimes fail without the sleeps, but didn't bother to pinpoint exactly where they are needed or how long they have to be.
If it does work, until the issue is solved upstream, you can add and enable this systemd unit to have it done on startup:
[Unit]
Description=Load Realtek drivers
Before=network-online.target

[Service]
Type=simple
ExecStartPre=/usr/bin/sh -c '/sbin/rmmod realtek && sleep 1'
ExecStart=/usr/sbin/modprobe realtek

[Install]
WantedBy=multi-user.target
Separate but similar issue to https://superuser.com/questions/1520212/realtek-ethernet-not-working-after-kernel-update
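For convenience, the one-liner above can also be wrapped as a small script. This is just a sketch under the same assumptions (interface eth0, module realtek); the function name and the DRY_RUN flag are mine, not from the original post. With DRY_RUN=1 it only prints the commands it would run, which is handy for checking the sequence before touching a live interface.

```shell
# Reload the NIC driver and bounce the interface (run as root).
# DRY_RUN=1 prints the commands instead of executing them.
cycle_nic() {
    iface="$1"; module="$2"
    run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
    run ip link set "$iface" down
    run sleep 1
    run rmmod "$module"
    run sleep 1
    run modprobe "$module"
    run sleep 1
    run ip link set "$iface" up
    # afterwards, bring up addressing, e.g. run your DHCP client on "$iface"
}
# Example (real run): cycle_nic eth0 realtek
```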
-
legogris reacted to Igor in First Panfrost enabled desktop builds
After several months of development, we finally bring desktop development to the daily-driver state. Our desktop builds follow the KISS philosophy. They are built around proven technology; we are not touching things that are already done well, just fixing things here and there and removing most, hopefully all, of the bloatware that is added upstream. We are preparing a wide selection of desktop variants,
while we will officially only support a few of them. Anyone is welcome to join and tweak / support their desktop of choice, and we will support you in the best possible manner! This way a desktop starts the voyage to become officially supported one day. But only if you join - we can't take more load.
We have supported XFCE since the early days, and that will remain the primary desktop, since it represents the best combination of functionality, speed, and beauty. The second desktop we are adding now is Gnome, because it's clean and the most stable among the advanced / bulky desktops. It can run fairly smoothly even on an Allwinner H5 with 1GB of memory, while it runs very well on some RK3399 hardware. In both cases it uses open-source 3D acceleration - Lima and Panfrost. New desktop options will be added gradually, but for now:
Orange Pi 4: Budgie, Gnome, Cinnamon, Mate
https://www.armbian.com/orange-pi-4/
Pinebook PRO: Gnome
https://www.armbian.com/pinebook-pro/
NanoPC T4: Budgie, Gnome, Cinnamon, Mate
https://www.armbian.com/nanopc-t4/
NanoPi M4V2: Budgie, Gnome, Cinnamon, Mate
https://www.armbian.com/nanopi-m4-v2/
What about others?
- ASAP. Those are semi-manual builds, some are manually tested, and it takes a lot of time. On top of that we are having some infrastructure troubles at the moment ...
- We still need to fix a few minor bugs before we put the "supported" stamp on, even though those builds are IMO generally in better shape than other images on the market.
- You can help by testing and enabling specific builds by sending a PR against this file: https://github.com/armbian/build/blob/master/config/targets.conf It will help to get things up faster.
-
legogris got a reaction from denni_isl in Need your help - what else beside Etcher
So, this is how it works (and advance apologies if I misread the conversation and you're talking about something else):
First off, if you want to talk semantics, SHA-1 is technically a cryptographic hash function, not a checksum algorithm. One could even argue that it's overkill for this use case and that an algorithm made for checksums, such as CRC, is more appropriate (less secure, but more than enough to detect random write or read errors, and more performant).
The data on the card is read, bit by bit (well, at a lower level the reads are buffered for performance reasons, but that makes no practical difference otherwise). The data is then put through the SHA-1 hash function, which produces a 160-bit hash. A single bit being changed produces a completely different hash.
The odds of the hashes matching when even a single bit differs are about 1 in 2^160, i.e. ~10^48. For practical purposes, that is low enough to be impossible. At those odds, you might as well start thinking about the probability that the read during your verification step gives you the expected output because a read error exactly matches the inverse of the write error, or about RAM errors and cosmic radiation. If you're concerned about well-funded adversaries with a huge dedicated compute cluster deliberately giving you a fake image with a matching hash, then you might want to look at more cryptographically secure hash functions - in which case replace the `sha1sum` command above with the slower but more secure `sha512sum` (512 bits, or 1 in ~10^154). The chances of a collision are small enough that, with all the world's current computing power, we'd on average be talking millions of years. The number of atoms in the entire universe is believed to be around 10^80. Pick two random atoms in the known universe, and the odds of those being the same atom are still vastly higher than SHA-2 producing the same hash after a write error.
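The "completely different hash" behaviour (the avalanche effect) is easy to see for yourself. As a quick sketch: the bytes 'A' (0x41) and 'C' (0x43) differ in exactly one bit, yet their SHA-1 digests share nothing recognizable.

```shell
# 'A' = 0x41 and 'C' = 0x43 differ by a single bit,
# but the two SHA-1 digests are completely unrelated.
h1=$(printf 'A' | sha1sum | cut -d' ' -f1)
h2=$(printf 'C' | sha1sum | cut -d' ' -f1)
echo "$h1"
echo "$h2"
```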
TL;DR: For your purposes, if the checksum validates, so does the written data (given that the checksum verification is made between the data read back from the written card and the source file, not just between the source file and a website). Etcher did CRC32 until 2018 and is now on SHA512. Looking at the source, usbimager actually reads both the source and the target back into memory and does a memcmp (a straight-up byte-by-byte comparison): https://gitlab.com/bztsrc/usbimager/-/blob/master/src/main_libui.c#L147
These are all valid approaches; the only caveat with the last one is that you need to be able to fit 2x the image size in RAM during the validation.
-
legogris got a reaction from denni_isl in Need your help - what else beside Etcher
I just use vanilla linux tools.
To write image:
$ dd if=image.img of=/dev/sda bs=4M status=progress && sync
To verify sha1sum of the data just written to the card:
$ head -c $(stat -c '%s' image.img) /dev/sda | sha1sum
$ sha1sum image.img
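The same verify step can be rehearsed with ordinary files before pointing it at a real device. In this sketch, card.img stands in for /dev/sda, and the file names are just for illustration:

```shell
# Create a dummy image and "write" it to a stand-in for the card device.
dd if=/dev/urandom of=image.img bs=1024 count=64 2>/dev/null
dd if=image.img of=card.img bs=4096 2>/dev/null
# Read back exactly as many bytes as the image is long, then compare hashes.
written=$(head -c "$(stat -c '%s' image.img)" card.img | sha1sum | cut -d' ' -f1)
source_sum=$(sha1sum image.img | cut -d' ' -f1)
[ "$written" = "$source_sum" ] && echo "verified" || echo "MISMATCH"
```

The head -c trick matters because the device is usually larger than the image; hashing the whole device would include bytes beyond the written data and never match.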
-
legogris reacted to MMGen in Full root filesystem encryption on an Armbian system (NEW, replaces 2017 tutorial on this topic)
No, I wouldn't assume that. See the comments by @sunzone above regarding the Orange Pi Zero.
In their case, the problem may be connected with the fact that the OPi Zero requires 'flash-kernel' to set up the boot loader.
I think that boards/images that don't depend on flash-kernel should generally work with this tutorial, but I need more test data to confirm that hypothesis.
-
legogris got a reaction from TRS-80 in [WTB] [ASIA] FriendlyElec NanoPi NEO2
I'd like to buy one or more NanoPi NEO2 or NanoPi NEO2 BLACK. Preferably 1GB model.
The manufacturer stopped selling these boards and none of the current models fit in their CNC OLED case (which I would also be interested in, but will otherwise buy from them).
(Actually, anything with 64-bit CPU that fits in that case, or similarly nano-size with display and buttons, is of interest)
Will pay shipping to Asia from wherever you are.
-
legogris got a reaction from Werner in [WTB] [ASIA] FriendlyElec NanoPi NEO2
-
legogris reacted to jeanrhum in Review of Station M1: a perfect home server!
Station M1 is based on a Firefly board quite similar to the roc-rk3328-cc. However, the M1 has a different layout with all ports on the left or the right, and it is enclosed in a metal case that also serves as a heat sink. The device tree is also different (rk3328-roc-pc instead of rk3328-roc-cc), and by default a smooth Android 10 system is installed on eMMC. Additionally, the board is powered via a 5V USB-C port instead of the micro-USB on the roc-cc. You can find official pictures and specifications here.
My M1 has 2GB RAM and 16GB eMMC, but variants with 4GB RAM and more eMMC storage can be bought. Currently, an Armbian CSC image can be downloaded (bullseye or focal) or built using the Armbian build system.
I tried several images and all boot fine, with almost everything working out of the box for a server use case. Here is the armbianmonitor log with a buster build from the end of January 2021 with kernel 5.10.9: http://ix.io/2Nol
At idle the temperature stays under 40°C in a room at about 21°C, and the metal case is not warm. Running sbc-bench in this context produces these results: http://ix.io/2NoL
The temperature never goes over 61°C, without any throttling. Compared to some other RK3328 SBCs, the M1 runs cooler, mainly thanks to its metal case.
After general configuration with armbian-config, installing openmediavault and docker was as easy as possible, since they are also available through armbian-config in the softy menu.
After connecting a USB3 drive, I got an efficient NAS system for personal use. It can also serve as a home server running several additional services thanks to docker (home-assistant, zigbee2mqtt, pyload, etc.). For this use case, the 4 A53 cores and 2GB of RAM are more than enough and everything runs smoothly.
I also use other similar arm boards and here is my comparison:
- An Amlogic S912 TV box with 3GB RAM and Gbit Ethernet using ArmbianTV: 2GB is enough in my case, and the RK3328 is better supported than the S912. Moreover, the S912 is not officially supported by Armbian, and the latest ArmbianTV images are not compatible. The USB3 port on the M1 is also a great improvement over USB2 on the S912. TV boxes may also have unrecognized WiFi chips, whereas the M1 has working WiFi (and Bluetooth? not tested).
- Libre Computer La Frite 1GB: it is officially supported by Armbian and works well. Its main drawbacks compared to the M1 are only 1GB of RAM, USB2, and only 100Mbit Ethernet, but it is much cheaper.
- Pine64 PineH64 model B: IMO this is the real competitor to the M1. It has similar features with USB3 and GbE. It is officially supported by Armbian and has a more powerful SoC (Allwinner H6: 1.8GHz 4xA53 with a Mali-T720 GPU). However, this board runs much warmer than the M1: in the same room its temperature is above 60°C at idle, and throttling may occur on moderate to heavy loads. For moderate use cases the M1 is enough.
What to expect in the future:
- We can hope that this SBC quickly becomes officially supported, since the CSC release is already very mature (in my opinion).
- The main elements still missing with the mainline kernel for some use cases are the GPU (for desktop use) and the VPU, but they are available on the 4.4 kernel.
Another review of the M1 (using also armbian): https://www.thanassis.space/stationm1.html
Official forum: https://www.stationpc.com/forum-60-1.html
-
legogris reacted to NicoD in 32-core 3.3Ghz ARM Server Review
Today I had the pleasure of benchmarking an ARM64 server.
This server has been made available for Armbian to test native ARM64 image building.
I knew nothing about the server. Nobody told me any details.
So everything was an adventure for me to find out. I got SSH access, so my research began.
Running lscpu informed me it had 32 cores, all clocked at 3.3Ghz.
cat /proc/cpuinfo confirmed these 32 cores.
Checking what kernel we're on: Ubuntu Focal, 5.4.0-52-generic.
And how much memory: 128GB RAM.
So first thing I wanted to know, how does one core perform with 7-zip benchmark?
The record I had seen until now was from the A73 cores from the Odroid N2+ clocked at 2.4Ghz. 2504MIPS decompression.
So :
taskset -c 31 7z b
This beats the Odroid N2+ its A73 cores clocked at 2.4Ghz. 2763 vs 2504MIPS decompression.
This also tells me these cores do not perform as well per clock as a high-performance core.
While doing the single core benchmark I checked the sensors to know the wattage and temperature.
CPU power is about 20W for a single core tasks.
Without a load the CPU consumes between 10W and 15W, so in total the system consumes a bit more than 20W at idle.
The temperature never went below 49°C, even after 5+ minutes at idle.
Of course, the next thing to do is an all-core 7zip benchmark.
This gives an amazing result. Way higher than anything I had ever seen on ARM.
85975MIPS decompression. This is amazing.
Best I had seen was 11000MIPS of the Odroid N2+. So this server does 8 x better than the N2+.
Though, I must say, 7zip does badly with unequal clusters. The N2+ has a great difference in cluster frequencies, so it performs worse than expected here.
The wattage went a lot higher, up to 110W. And the temperature rose quickly up to 75C in seconds.
To test the internet connection I downloaded an Armbian image multiple times. Sometimes it was as low as 3MB/s.
The highest average speed I've seen was 12.5MB/s.
Next test. BMW Blender render benchmark.
Here the fastest I had ever seen was by the Khadas VIM3. That did it in 42m51s.
I haven't done this yet with the N2+ in Armbian. In Odroid's Ubuntu it was a little slower. I expect it to be a little faster than the VIM3 in Armbian Bionic.
This is a tile based test. So every core gets its own task, until all tiles are done.
Well, this ARM64 server did this in 8m27s.
5 x faster compared to the Khadas VIM3.
For this the wattage didn't go over 85W. But the temperature did rise to 83C. So it started to throttle.
@lanefu already had done SBC-Bench on it when it was free. So this I didn't have to do myself.
http://ix.io/2Dcc
Here we see a lot. For example the CPUMiner did : 81.0kH/s
The Odroid N2+ : 14 kH/s 5.7 x less
RK3399 does a maximum of : 10.23kH/s 8 x less
Odroid C2 clocked at 1.75Ghz : 4.65kH/s 17 x less
So this server clearly can move a lot of bits around.
Now, what is this server? Ask google if nobody else tells me. "32 core ARM server 3.3Ghz"
First answer : https://www.theregister.com/2018/09/18/ampere_shipping/
That looks like this CPU, but I still can't find the exact name.
2nd answer : https://www.servethehome.com/ampere-32-core-64-bit-arm-chip-x-gene-3-ip/
So this is the Ampere 32-core 64-bit from X-Gene 3 IP.
Here the wikichip : https://en.wikichip.org/wiki/apm/x-gene/apm883832-x3?fbclid=IwAR0ljCQ61DY8Zwh_VyZd0fQH43dmPUTJA-CGLiQKYqU2fWwszFm1CPjH6Zo
This supports up to 1TB RAM. 8 channels @ 2666Mhz. With a maximum memory bandwidth of 158.95 GiB/s.
42 lanes of PCIe Gen 3, with 8 controllers
– x16 or two x8/x4
– x16 or two x8/x4
– x8 or two x4
– Two x1
4 x SATA Gen 3 ports, 2 x USB2. And a TDP of 125W.
For me this is just an awesome thing to behold. I use ARM for almost everything.
The NanoPi M4V2 is my main desktop computer.
It isn't as powerful as my PC, but does the task for 10 x less power consumption, while being completely silent.
But when I need a big CPU, it isn't enough.
Even the more powerful Odroid N2+ isn't powerful enough to render long (20+ minute) 1440p videos, for example for my YouTube channel.
So then i need to use my x86/amd64 PC.
Today I have seen and tasted the future.
While this doesn't use the most modern Cortex cores/clusters, and it is only 16nm,
there is still a lot of room for improvement in performance and power consumption.
ARM for desktop is possible, and ARM servers for big datacenters is possible(AWS). I have seen the future, I loved every second of it.
Here benchmarks compared to my SBCs
Greetings, NicoD
-
legogris reacted to Werner in Crude NAS with local caching
You all know the problem: You have a spare SBC (with USB3) laying around catching dust, a spare SSD or other hard drive and for whatever reason some (terabyte?) space in whatever cloud on the planet and have no idea what to do with it.
Well here is what I came up with to solve this struggle
https://zuckerbude.org/cloud-nas-with-local-cache/
-
legogris reacted to raoulh in [EU] Selling Helios64
Hi,
If anyone wants to buy one, I want to sell mine. I do not use it, nor have any time to play with it. It would be better in other hands.
I live in France, so I would prefer to sell it in EU.
It's in very good condition, as I only played a few hours with it, and it has sat unused on my desk for the last 2 months...
Feel free to ask for pictures and anything if interested.
Raoul
-
legogris reacted to lanefu in Armbian v21.02
I'll continue to help with the PBP. As you all are familiar, my work usually comes in bursts. But yeah, I'd like to build a specific community up for it; I almost wonder if it's worth a subforum under RK3399 or something.
I'm getting a PBP bare motherboard that I can use for bare-metal testing of things like u-boot, and @Rich Neese has a PBP coming in the mail.
-
legogris reacted to Igor in ZFS dkms package upgraded to v2.0.1
Media info:
https://arstechnica.com/gadgets/2020/12/openzfs-2-0-release-unifies-linux-bsd-and-adds-tons-of-new-features/
Remarks: 32-bit support is broken in Debian Stretch, so no upgrade there. Also, this upgrade is needed to bump the kernel to 5.10.y.
-
legogris reacted to NicoD in Video : Review of an AWS Graviton2 ARM server
Hi all.
I've just finished a new video where I review an Amazon AWS Graviton2 arm64 server.
This is a monster with 64 high-performance N1 cores.
It isn't clocked that high at 2.5Ghz, but it performs amazingly for that clock speed.
Here is my video.
Here all the info I've gathered.
AWS Server 32-cores 128GB
-------------------------
NEOVERSE N1 64-core AWS Graviton2, ARMv8.2 aarch64
Arm's Neoverse N1 cores -> based on A76 -> almost identical to Arm's 64-core reference N1 platform -> CPU cores are clocked a bit lower (2.5GHz) and only 32MB instead of 64MB of L3 cache
Max speed 2500Mhz
8-channel DDR-3200, 128GB RAM
64 PCIe4 lanes
TSMC's 7nm process node
~1W per core at the 2.5GHz frequency; between 80W as a low estimate and around 110W estimated. This info is not disclosed by AWS.

7z single core:
Ampere 32-core : 2763
AWS 32-core : 3393
Threadripper 3990x : 4545

7z quad core:
Raspberry Pi 4 @ 1.5Ghz : 6307
Odroid N2+ 4xA73@2.4Ghz : 9900
Ampere 32-core : 11145
AWS 32-core : 13733
Threadripper 3990x : 18060

7z all cores:
Ampere 32-core : 85975
AWS 32-core : 110628
Threadripper 3990x : 391809 (433702 OC)

Blender BMW CPU:
Odroid N2+ : 30m
Ampere 32-core : 8m27s
AWS Server 32-core : 2m08s
ThreadRipper 3990x : 30s

Blender Barbershop CPU:
AWS Server 32-core : 8m28s
Threadripper 3990x : 2m18s79

CPU Miner:
Odroid N2+ : 14
Ampere 32-core : 87
AWS Server 32-core : 154.20
ThreadRipper 3990x : 1310

SBC bench: http://ix.io/2FrG
Internet speed test: between 1500 Mbit/s and 2000 Mbit/s both up- and download (up to 250MB/s).
I had 32 of the 64 cores. A 64-core instance is expected to perform a bit worse per core than a 32-core one, since less cache is available per core.
There's a newer Ampere 80-core N1 SoC at 3Ghz.
https://www.anandtech.com/show/15578/cloud-clash-amazon-graviton2-arm-against-intel-and-amd/6
Thank you to @lanefu for giving me access to this.
-
legogris got a reaction from TRS-80 in Need your help - what else beside Etcher
-
legogris reacted to mboehmer in Odroid C2 on seafloor (part II)
We are down... and modules seem to be in good shape. Let's hope the powerup will work as expected.
-
legogris reacted to Werner in How i can config DHCP Server to Client from armbian?
Any tutorial for Debian/Ubuntu on the web should suffice, since there is no difference between Armbian and those at the OS level.
-
legogris reacted to NicoD in Quick review of NanoPi Fire3
N2+ or Khadas VIM3. They're the only ones that have more performance than this SoC.
I've got the NanoPC T3+ with this SoC. Single core performance isn't great. But multi-core it is amazing. Faster than the RK3399.
Here some old benchmarks. For you the CPUMiner scores are important.
64-bit SBCs

Odroid N2+ | Clock S/C | B/C | Blender | 7z S/C | 7z B/C | CPUMiner
Ubuntu Bionic 4.9 | 2.02Ghz | 2.40Ghz | 11m20s | 1755 | 2504 | 14

Khadas VIM3 | SBC bench result | CPU Miner | 7-zip s/c A53 | 7-zip b/c A73 | 7-zip multi avg. of 3 | Blender
Ubuntu 18.04.02 | http://ix.io/1MFD | 13.10kH/s | 1577 | 2311 | 10578 | 42m51s
Armbian@1.9S.C./1.7B.C. | http://ix.io/1NRJ

Odroid N2 | SBC bench result | CPU Miner | 7-zip s/c A53 | 7-zip b/c A73 | 7-zip multi avg. of 3 | Blender
Ubuntu Bionic | http://ix.io/1Brv | 11.35kH/s | 1564 | 1879 | 9988 | 50m28s

NanoPC T3+ | SBC bench result | CPU Miner | 7-zip s/c | 7-zip b/c | 7-zip multi avg. of 3 | Blender
Armbian Bionic | http://ix.io/1iRJ | 10.99kH/s | 1290 | - | 10254 | 1h10m25s
Armbian Stretch | http://ix.io/1qiF | 8.55kH/s | 1275 | - | 10149 | 1h13m55s

Rock Pi 4B | SBC bench result | CPU Miner | 7-zip s/c | 7-zip b/c | 7-zip multi avg. of 3 | Blender
Ubuntu | http://ix.io/1uVr | 9.50kH/s | 1242 | 1818 | 7802 | 1h17m22s

NanoPi M4 | SBC bench result | CPU Miner | 7-zip s/c | 7-zip b/c | 7-zip multi avg. of 3 | Blender
Armbian bionic hz1000 | http://ix.io/1nLh | 10.23kH/s | 1335 | 2005 | 8352 | 1h13m50s
CONFIG_HZ=250 | http://ix.io/1BLW | 10.45kH/s | 1335 | 2007 | 8320 | 1h08m28s
Armbionic@1.4/1.8 hz250 | - | - | 1253 | 1828 | 7821 | 1h12m52s
Armbian bionic nightly | http://ix.io/1pDo | 10.24kH/s | 1329 | 1990 | 8292 | 1h13m28s
Armbian stretch desktop | http://ix.io/1odF | 8.66kH/s | 1350 | 1977 | 8400 | 1h14m12s
Armbian stretch dsk nightly | //ix.io/1pM0 | 8.80kH/s | 1359 | 1993 | 8500 | 1h15m04s
Armbian stretch core no fan | //ix.io/1pKU | 8.80-8.65kH/s | 1353 | 1989 | 8461 | -
Armbian stretch core | //ix.io/1pL9 | 8.76kH/s | 1354 | 1988 | 8456 | -
Armbian stretch core nightly | //ix.io/1pLf | 8.82kH/s | 1357 | 1994 | 8494 | -
Lubuntu Bionic arm64 | http://ix.io/1oGJ | 9.24kH/s | 1056 | 1551 | 6943 | 1h28m13s
Lubuntu Bionic armhf | http://ix.io/1pJ1 | - | 1111 | 1769 | 7705 | 2h02m54s
Lubuntu Xenial armhf | http://ix.io/1oCb | - | 989 | 1507 | 6339 | 2h20m51s

Khadas Vim2 Max | SBC bench result | CPU Miner | 7-zip s/c | 7-zip b/c | 7-zip multi avg. of 3 | Blender
Ubuntu Xenial | http://ix.io/1qkA | 6.86kH/s | 823 | 1134 | 6682 | 1h14m39s (7-zip only used 600% of 800%)

It would be hard to make enough money to repay the devices, and the power.
The Odroid N2+ is one of the most energy-efficient SBCs around. When maxed out on all CPU cores it only consumes 6W. Most others need double that for half the performance.
These days mining isn't done on computer hardware but on specialized hardware, so with a few SBCs you can't compete with these ASICs.
But if you use solar power, you have the big advantage of not paying for energy, which is the biggest cost for ASICs. If you really want to do this, do your research well; otherwise it's a probable money pit. But at least you can enjoy the SBCs.
Greetings.
-
legogris got a reaction from manuti in Focal vs Buster?
It's not a case of arm64 not being as well supported. In general, most modern software has far better support on arm64 than on armhf. If you're on a 64-bit processor, you can choose. The HC2 simply will not do arm64, since it has a 32-bit processor.
You can use either distro. I'd personally vouch for buster, or even bullseye. Debian's testing is generally relatively stable at this phase of the lifecycle, and buster by now ships old versions of some packages with known issues.
-
legogris reacted to NicoD in Odroid N2+ / N2 Plus
Armbianmonitor: http://ix.io/2sUH Hi all.
I've recently received the Odroid N2+. Since nobody else started a topic about it, I'll be the one.
All works fine with Armbian, since not many hardware changes have been made vs the N2, except for the CPU frequency.
With the Odroid Ubuntu you can set the big A73 cores to 2.4Ghz as an overclock (2208Mhz stock), and the A53 cores to 2016Mhz (1908Mhz stock).
With Armbian the max clock is 2Ghz for all cores, using Armbian Focal with the 4.9 legacy kernel.
I tried setting the higher values in config.ini. I also tried the "meson64_odroidn2_plus.dtb" file from the Odroid Ubuntu; it doesn't boot with that. (What do I know )
Other changes are the RTC battery that's now on the board.
The heatsink has changed a little. But it's still more than sufficient to keep it cool even when overclocked.
USB3 still rather s*cks on it: slow, and a lot of issues with 2.4Ghz dongles (wifi/keyboard...).
Too bad they didn't do anything about that, but fixing it probably would have complicated compatibility with N2 images.
The N2+ on its 128GB eMMC also feels a bit more sluggish than an RK3399 on NVMe. That's what fast I/O does. I'll try USB3-NVMe later.
Here some pictures. 1st pic the N2+ with its case open.
2nd picture the N2 left and the N2+ on the right.
Cheers all.
-
legogris reacted to ning in [HOWTO] build Debian-flavor kernel packages for Armbian with module signed and with debian-featured kernel config
From the 1st and 2nd posts, you already know how to use the Debian linux build framework to build a module-signed kernel for your Armbian system with your own kernel configuration. But you may ask: is my kernel missing some features required by Debian? How can I know, and how do I fix it?
The answers to these 3 questions depend on your current kernel configuration, but it is difficult to read and compare each kernel configuration against Debian's.
Let me tell you how Debian builds its own kernel configurations; then you can answer these questions yourself.
The formula: Debian kernel configuration = common configs + arch-specific configs + flavor configs + kernel auto-selected configs.
Here, common configs are core Debian features, found in the file debian/config/config
Arch configs are needed for Debian to run on an arch; they are in the files debian/<arch>/config
Flavor configs tune the Debian kernel into some flavor, e.g. the rt kernel: debian/config/config.rt
These 3 kinds of configs are written in config files and only define about 5% of the kernel configs; the rest are auto-selected by the kernel.
At this point, you only need to change arch-related configs to make the kernel run on your device.
Here are the steps to make the change:
1, use the Debian default config to rebuild your kernel; stop after .config is created.
2, use meld to compare your config file and .config. Only attend to the configs missing from .config; do not touch the configs added in .config.
3, add the missing configs to the arch config.
Now you have the missing configs needed to run on your device.
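Step 2 can also be done non-interactively instead of with meld. As a sketch (the file names here are examples, not from the original post), grep can list every option set in your config file that is absent from the generated .config:

```shell
# Two toy config files: myboard.config plays the role of "your config file",
# .config plays the role of what the Debian build generated.
cat > myboard.config <<'EOF'
CONFIG_FOO=y
CONFIG_BAR=y
EOF
cat > .config <<'EOF'
CONFIG_FOO=y
CONFIG_BAZ=y
EOF
# Lines present in myboard.config but missing from .config:
# -F fixed strings, -x whole-line match, -v invert, -f patterns from file.
missing=$(grep '^CONFIG_' myboard.config | grep -vxF -f .config)
echo "$missing"
```

Per step 2 above, only these missing lines matter; options that .config added on its own (CONFIG_BAZ here) are left alone.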
-
legogris reacted to ning in [HOWTO] build Debian-flavor kernel packages for Armbian with module signed and with debian-featured kernel config
Before you start:
Armbian already provides a kernel build script in the Armbian build framework, and you can download these packages via `apt`.
In common cases you shouldn't build the kernel yourself; even if you want to build a custom kernel package, you should use the Armbian build framework,
because that way you get the best support.
So the content below is for experts only.
prepare source code:
1, stable linux kernel source code:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git
2, armbian build framework, for patches and kernel config
https://github.com/armbian/build
3, debian linux build rules:
git clone https://salsa.debian.org/kernel-team/linux.git --depth=1
optional: checkout to a branch to build a different kernel version, current version is 5.5.8
4, add patches to debian linux build rules:
copy all family patches to <debian-rules>/debian/patches/
add all your patches to <debian-rules>/debian/patches/series
make sure all your patches apply cleanly to the mainline kernel.
optional: remove all of Debian's kernel patches, leaving only the Armbian kernel patches.
5, apply the patches.
apt install devscripts
cd <debian-rules>
debian/bin/genorig.py <path-to-stable-linux>
debian/rules orig
6, copy kernel config to <debian-rules>/config/[armhf, arm64]/config
prepare native armhf/arm64 build env
1, download Armbian's prebuilt rootfs, or use debootstrap.
2, mount dev, sys, proc, tmp to armhf/arm64 rootfs. reference: https://github.com/armbian/build/blob/master/lib/image-helpers.sh#L27
3, mount <debian-rules> to armhf/arm64 rootfs.
start build, in chroot rootfs
1, cd <debian-rule>
2, apt install devscripts fakeroot
3, debian/rules debian/control DEBIAN_KERNEL_DISABLE_INSTALLER=true
4, debuild -i -us -uc -b # do this only once, to let the build system prompt for missing build depends. You need to install all build depends at this step.
5, fakeroot debian/rules binary or fakeroot debian/rules binary-arch
6, long wait.
-
legogris reacted to NicoD in Daily (tech related) news diet
Bit off topic, that's what the topic is for
So I've been using the NanoPi M4/V2 for over half a year as my main desktop and left my PC off most of the time. Today that gave me a nice advantage on my electricity bill.
While the price of electricity goes up, I consumed 8.8% less, only by using SBCs instead of my PC. And that not even for a full year. I can still improve with LED lights. And keep using my SBCs.
This gives an example of why using ARM can be very beneficial. I even made a quick video about it. Not really Armbian related; I don't think I need to convince any of you of the advantages of using arm.
But it is a good thing to think about where we can save money/energy by replacing x86 machines with arm boards.
Arm also has disadvantages of course: less user friendly, do it yourself... I also talk about that. So you don't actually need to watch the video any more, but if you do, please enjoy.
Greetings, NicoD
-
legogris got a reaction from OrangePee in Need your help - what else beside Etcher