TRS-80 reacted to ShadowDance in Kernel panic in 5.8.17 20.08.21
@SymbiosisSystems I take it you have a set of 0.8.5 modules built? They work fine with the 0.8.4 zfsutils-linux package, but that package requires zfs-dkms, which will fail to build. We can work around this by installing a dummy package that provides zfs-dkms, so that we can then go ahead and install zfsutils-linux / zfs-zed / etc. from backports.
Here's how you can create a dummy package:
apt-get install --yes equivs
mkdir zfs-dkms-dummy; cd $_
cat <<EOF >zfs-dkms
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: zfs-dkms-dummy
Version: 0.8.4
Maintainer: Me <me@localhost>
Provides: zfs-dkms
Architecture: all
Description: Dummy zfs-dkms package for when using built kmod
EOF
equivs-build zfs-dkms
dpkg -i zfs-dkms-dummy_0.8.4_all.deb
After this, you can go ahead and install (if not already installed) the 0.8.5 modules and zfsutils-linux.
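With the dummy package in place, the userland tools can then be pulled from backports. A hedged sketch (the release name and exact package set are assumptions; adjust to your system):

```shell
# Install the ZFS userland from buster-backports; the dummy package above
# satisfies the zfs-dkms dependency so no DKMS build is attempted.
apt-get update
apt-get install --yes -t buster-backports zfsutils-linux zfs-zed
```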
TRS-80 reacted to 5kft in Switching SUNXI-DEV to 5.10.y (h3-h5-h6/megous)
With just a few local test hacks, I was able to bring up kernel 5.10-rc1 on sunxi this morning (based on the new megous branch):
root@air's password:

 _   _ ____  _      _    _
| \ | |  _ \(_)    / \  (_)_ __
|  \| | |_) | |   / _ \ | | '__|
| |\  |  __/| |  / ___ \| | |
|_| \_|_|   |_| /_/   \_\_|_|

Welcome to Armbian 20.08.14 Buster with Linux 5.10.0-rc1-sunxi

System load:   2%               Up time:       11 min
Memory usage:  14% of 491M      IP:            172.24.18.151
CPU temp:      34°C             Usage of /:    24% of 7.2G

Last login: Tue Oct 27 07:20:02 2020 from 172.24.18.20

root@air:~# cat /proc/version
Linux version 5.10.0-rc1-sunxi (root@355045fc2473) (arm-none-linux-gnueabihf-gcc (GNU Toolchain for the A-profile Architecture 9.2-2019.12 (arm-9.10)) 9.2.1 20191025, GNU ld (GNU Toolchain for the A-profile Architecture 9.2-2019.12 (arm-9.10)) 22.214.171.12491209) #trunk SMP Tue Oct 27 14:13:23 UTC 2020

root@air:~# cpufreq-info -c 0
cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to email@example.com, please.
analyzing CPU 0:
  driver: cpufreq-dt
  CPUs which run at the same hardware frequency: 0 1 2 3
  CPUs which need to have their frequency coordinated by software: 0 1 2 3
  maximum transition latency: 5.44 ms.
  hardware limits: 480 MHz - 1.20 GHz
  available frequency steps: 480 MHz, 648 MHz, 816 MHz, 960 MHz, 1.01 GHz, 1.10 GHz, 1.20 GHz
  available cpufreq governors: conservative, ondemand, userspace, powersave, performance, schedutil
  current policy: frequency should be within 480 MHz and 1.20 GHz.
                  The governor "ondemand" may decide which speed to use
                  within this range.
  current CPU frequency is 816 MHz (asserted by call to hardware).
  cpufreq stats: 480 MHz:97.72%, 648 MHz:0.22%, 816 MHz:0.20%, 960 MHz:0.23%, 1.01 GHz:0.15%, 1.10 GHz:0.19%, 1.20 GHz:1.29%  (679)
root@air:~#
I tested this on a spare NanoPi NEO Air board I had; I haven't tried an arm64 build yet. dmesg is clean; cpufreq works, overclocking works, wireless works, etc. Our 5.9 kernel patchset applied almost completely without error (there are a few other changes needed such as the builddeb patches and fbcon reversion patch we did).
In any case I just wanted to let people know. Given that 5.10 is confirmed to be the new LTS and 5.9 is the new primary -stable, I'm wondering if 5.8's days are numbered (like what happened to 5.7). I'm happy to put some work into bringing 5.10 into build if there is interest.
TRS-80 got a reaction from gounthar in Armbian Donations
I am also very pleasantly surprised at the support. That was a decent figure, and we hit it pretty quickly. There must be more people out there who appreciate Armbian than I realized. A silent majority, if you will.
The donations are greatly appreciated. Thanks to everyone who pitched in!
I also view this as an indicator, a validation of everyone's efforts who contribute and are involved in this project in one way or another. FeelsGoodMan.jpg
TRS-80 reacted to NicoD in Armbian Donations
Donations reached the goal.
To everybody who helped: a big thank you. This will be well used. I've seen the server and it's a monster.
A next goal could be different desktop implementations, with GPU and maybe VPU support if possible. Who could we hire? At what cost?
But before that, there are already enough new things coming soon. And the server will be put to good use for that.
From the whole team. Thank you.
TRS-80 reacted to NicoD in Box86 on the RK3399 with Armbian Reforged
It's mainline Focal 5.8 with Panfrost. @Salvador Liébana built the image; I'm just a messenger spreading the word.
His team is behind TwisterOS for the Raspberry Pi. They've got tons of portable apps that make life easy for everyone from noobs to experienced users. It will be an amazing addition to Armbian.
This is a preview of what's to come for more SBCs. Certainly when panfrost is ready for Odroid N2/N2+
They've got a whole club on Discord for every project of the group. A lot happens there.
@Salvador Liébana Is it possible to write a build script for your image that makes use of ours? (Write down all your steps.) That would take a big load off your back in the long term.
You would then always stay current with Armbian changes, and maybe later we can merge this into the desktop project. Then you could build it all without having to set anything up manually. I hate to see too many forks. It's better for people to work together to improve what we build. Cheers.
TRS-80 reacted to Gediz in Olimex LCD Panel Support for A64
You're welcome. If you don't mind, I'd recommend using a more recent U-Boot version unless you have a special reason to stay on v2015.04.
By the way, I had to spend a bit of time integrating a custom U-Boot and Linux into Armbian for my own use case, and back then I noted what I had to add or modify, and where. Maybe this will help you a bit. Don't mind the FEX file; it was for a really old kernel.
.
├── config
│   ├── boards
│   │   └── myboard-a13.csc
│   ├── fex
│   │   └── olinux-som-a13.fex
│   ├── kernel
│   │   └── linux-sun5i-default.config
│   └── sources
│       └── sun5i.conf
├── defconfig
│   └── u-boot
│       └── myboard-a13_defconfig
└── userpatches
    ├── config-myboard.conf
    └── u-boot
        └── u-boot-sunxi-legacy
            └── myboard-a13_defconfig.patch
After copying an overlay like this into your Armbian build directory, just run ./compile.sh myboard
This configuration may be outdated to an extent. It's been some months.
TRS-80 reacted to SIGSEGV in iSCSI on Helios64 [Working great on Armbian 20.11 test images]
For anyone looking to build an iSCSI target on their new Helios64, I can confirm that it's working correctly under the test builds for Armbian 20.11.
Thanks to @aprayoga and his comment on the pull request, the kernel modules related to LinuxIO have been added to all boards in the Armbian project.
TRS-80 reacted to Pavel Löbl in Banana Pi P2 Zero NAND Installation support
I already have the BPI-P2-Zero device tree and kernel config ready, at least for board revision 1.1. Version 1.0 has some hardware issues and differences. My current test image is based on Yocto/OpenEmbedded. I plan to send the DT upstream and then look into how to build Armbian for this board.
TRS-80 reacted to Heisath in ClearfogPro: Difference in switch behaviour between LK4.19 and LK5.8
So I figured it out (and wasted 5 hours or so)...
Support for bridge flags for the mv88e6xxx chips was introduced here: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/net/dsa/mv88e6xxx?h=v5.1&id=4f85901f0063e6f435125f8eb54d12e3108ab064
One of these flags is flooding, which seems to be enabled by default and causes incoming packets on port A to be replayed on all other ports. This naturally lowers the receive speed on port A to the minimum transmit speed of all the other ports.
An easy way to fix this is to issue 'bridge link set dev lan4 flood off' for each of the lan ports on the Clearfog.
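A sketch of how that could be scripted for all switch ports at once (the lan1..lan6 port names are my assumption for the Clearfog Pro; check `ip link` on your board):

```shell
# Disable unicast flooding on each DSA switch port that is in the bridge.
for port in lan1 lan2 lan3 lan4 lan5 lan6; do
    bridge link set dev "$port" flood off
done
```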
I confirmed this also works on LK5.8.
Does it make sense to add a family tweak which does this for all lan ports? Or is this possible via device tree?
EDIT: Never mind, this is not an issue with DSA but with the default settings of a Linux bridge. Let's not touch it, because not everyone will put the lan ports on the Clearfog in a bridge. If they do, they'll hopefully stumble upon this thread and check the man page: https://www.man7.org/linux/man-pages/man8/bridge.8.html
TRS-80 reacted to pako in OPi Lite 2 + RPi.GPIO
I return to this old topic.
Perhaps someone will be interested in my contribution.
I have successfully modified the library https://github.com/Jeremie-C/OrangePi.GPIO, so it also works for H6.
You can find the modified library here:
TRS-80 reacted to akschu in Stability issues on 20.08.21
Did more testing over the weekend on 5.9.9. I was able to benchmark with fio on top of a ZFS dataset for hours with the load average hitting 10+ while scrubbing the datastore. No issues. Right now the uptime is 3 days.
I'm actually a little surprised at the performance. It's very decent for what it is.
I wonder if the fact that I'm running ZFS and 5.9.9 while others are using mdadm and 5.8 is the difference. I'm not really planning on going backwards on either. If 5.9.9 works then there's no need to build another kernel, and you would have to pry ZFS out of my cold dead hands. I've spent enough of my life layering encryption/compression on top of partitions on top of volume management on top of partitions on top of disks. ZFS is just better, and having performance-penalty-free snapshots that I can replicate to other hosts over SSH is the icing on the cake.
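For anyone unfamiliar, the snapshot replication mentioned here boils down to zfs send/receive piped over SSH. A minimal sketch (the pool/dataset names and the "backup" host are placeholders, not from this setup):

```shell
# Take a snapshot and replicate it to another host over SSH.
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | ssh backup zfs receive -F tank/data

# Later runs can send only the delta between two snapshots (-i = incremental):
zfs snapshot tank/data@nightly2
zfs send -i tank/data@nightly tank/data@nightly2 | ssh backup zfs receive tank/data
```

Tools like syncoid automate exactly this pattern.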
TRS-80 reacted to NicoD in X86 Windows and Linux programs and games on RK3399 with Box86 Armbian Reforged
Here's my instruction video on how to install Armbian Reforged and set it up.
TRS-80 reacted to SteeMan in Can not find image to download for TX3 mini
Given that you do have general linux knowledge and rpi familiarity, here are my comments on your requests.
I have four TX3 minis, three of which run Armbian and one that I use with the original Android. I will mention that just because a box is labeled TX3 mini doesn't mean the internals are the same. The manufacturers put identical external branding on boards that may be significantly different. For example, all TX3 minis claim to have eMMC storage, but only two of mine actually do; the other two came with NAND storage (cheaper to manufacture that way). Since mainline Linux doesn't support NAND, I can only install Armbian on internal storage on two of my boxes.
From the above linked post you need to download an image file from any of the download locations. The file you are looking for is the arm-64 version from October 14th 2020. These are the last versions from balbes150 to support Amlogic cpus. So be warned that when and if you get this running on your TX3 mini box, there is currently no path to get anything newer than this Oct 14 build with 5.9.0 kernel. You will get updates from your chosen distribution (debian or ubuntu) just no kernel updates, unless someone else in the community picks up the ball and begins building/maintaining amlogic kernels.
In the downloads directory you will find builds for debian (buster and bullseye) and ubuntu (bionic and focal), along with both a desktop and non-desktop version of each.
Once you download your chosen build (for example https://users.armbian.com/balbes150/arm-64/Armbian_20.10_Arm-64_focal_current_5.9.0.img.xz - ubuntu focal non-desktop build)
You need to burn the image to an SD card. Generally balenaEtcher is recommended (however, I have only ever used dd on Linux to create my SD cards, so I have no familiarity with that tool).
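For the dd route, a hedged example (/dev/sdX is a placeholder; triple-check the device name, since dd will happily overwrite the wrong disk):

```shell
# Decompress the .xz image and write it straight to the SD card.
xz -dc Armbian_20.10_Arm-64_focal_current_5.9.0.img.xz \
  | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
```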
Once you have the SD card with your chosen build, then you need to edit the boot configuration file on the SD card. In the BOOT partition of the SD card there will be a file /boot/extlinux/extlinux.conf, that you need to edit. (In earlier builds this was done in the /boot/uEnv.txt file, so a lot of comments in these threads talk about that file, but in the latest builds it was changed to the extlinux.conf file)
Your extlinux.conf file should look like:
# aml s9xxx
APPEND root=LABEL=ROOTFS rootflags=data=writeback rw console=ttyAML0,115200n8 console=tty0 no_console_suspend consoleblank=0 fsck.fix=yes fsck.repair=yes net.ifnames=0
Basically you need the correct dtb for your box and the correct boot command, along with the top three environment variables set. *Everything* else needs to be either deleted or commented out. A common mistake is to uncomment what you need but leave other lines in the file active, and thus fail to boot. The extlinux.conf file above is directly from my TX3 mini box. Note that if you were using a different box than a TX3 mini, you would try different dtb files until you found the one that works best for your box's hardware (there are a bunch of dtb files in /boot/dtb/... to try, depending on your CPU architecture and hardware).
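For illustration only, a complete minimal extlinux.conf might look something like the following. The kernel/initrd paths and the dtb name are my assumptions based on typical builds of that era, not copied from an actual TX3 mini install; use whatever files your build actually ships in /boot:

```
# aml s9xxx
LABEL Armbian
LINUX /zImage
INITRD /uInitrd
FDT /dtb/amlogic/meson-gxl-s905w-tx3-mini.dtb
APPEND root=LABEL=ROOTFS rootflags=data=writeback rw console=ttyAML0,115200n8 console=tty0 no_console_suspend consoleblank=0 fsck.fix=yes fsck.repair=yes net.ifnames=0
```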
Next you need to copy the correct u-boot for your box. This is needed for Amlogic CPUs (other CPUs have different u-boot steps to do). For your TX3 mini you need to copy u-boot-s905x-s912 to u-boot.ext (note: copy, not move).
Once you have your SD card prepared, on an Amlogic box you need to enable multiboot. There are different ways documented to do this, but for your TX3 mini box you should use the toothpick method. At the back of the audio/video jack connector is a hidden reset button. By pressing that button with a toothpick or other pointed device you can enable multiboot. What you need to do is have the box unplugged and your prepared SD card inserted, then press and hold the button while inserting the power connector. After a bit of time you can release the button. (I don't know exactly how long you need to hold the button after power is applied, but if it doesn't work the first time, try again holding for a longer or shorter time.)
You should now be booting into armbian/linux.
If you want, at this point you can copy the installation to eMMC (assuming your box has eMMC). You do this by running the appropriate shell script in /root, which in your case is /root/install-aml.sh. Note that it is recommended to make a backup of the eMMC first (use the ddbr tool that should be installed on your SD card). Also be prepared, if anything goes horribly wrong with your eMMC install, to reflash the original firmware using the Amlogic USB Burning Tool to unbrick your device. It is pretty easy to find TX3 mini Android firmware on the internet, and you can generally recover a bricked box using the Amlogic tool and an original firmware file.
Finally, I have written this from memory and wasn't actually performing these steps as I wrote, so there might be something I forgot. I make no promises that this is completely accurate, but I think it is.
Also, don't expect all parts of your TX3 mini box to work. You should have a working boot, working wired ethernet, and working HDMI. Don't expect things like wifi, bluetooth, the infrared remote, or the box's display to work. The experimental Armbian for these boxes is enough to run a basic server and a light graphical display, but don't expect full functionality.
TRS-80 reacted to @lex in HTOP not showing CPU anymore
This is a classic memory corruption.
Htop has possibly crashed. During a crash htop emits a backtrace with some info. If you have the backtrace info, please post it here along with your htop version.
You can also try a few things:
* Remove every meter: press F2, delete all the meters, and exit. Start again and add one CPU bar. If that is OK, then proceed with the rest.
* If you have the skills, build htop with debug info; the backtrace will then show the function that preceded the free() call.
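Building htop with debug symbols could look something like this (a sketch; the repository URL and configure flags are my assumptions — check ./configure --help for your htop version):

```shell
# Build htop from source with debugging info so backtraces show symbols.
git clone https://github.com/htop-dev/htop
cd htop
./autogen.sh
./configure --enable-debug CFLAGS="-g -O0"
make
```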
TRS-80 reacted to Mangix in Random system reboots
Progress update: kernel 4.19.70 fails. .65 works. Testing .67 now.
edit: .66 has not crashed yet. Will wait to see if it can stay alive for 12 hours.
I'm trying to compile kernels at specific commits, but it doesn't seem to work. This is what I'm trying:
@@ -10,7 +10,7 @@ fi
case $BRANCH in
[ error ] ERROR in function compile_kernel [ compilation.sh:379 ]
[ error ] Error kernel menuconfig failed
I'm trying to see which commit is responsible for the failure based on https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/log/?h=v4.19.158&ofs=9800
Current theory is this commit: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v4.19.158&id=46b306f3cd7b47901382ca014eb1082b4b25db4a
It says it's for 32-bit.
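Since .66 works and .67 fails, the offending commit could also be narrowed down mechanically with git bisect on the stable tree. A sketch (the build-and-boot step is whatever your board needs):

```shell
# Bisect the stable tree between the last good and first bad tags.
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start v4.19.67 v4.19.66   # bad revision first, then good
# At each step: build the kernel, boot it on the board, then mark it:
#   git bisect good   # if the board stays up
#   git bisect bad    # if it randomly reboots
# git bisect then prints the first bad commit when it converges.
```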
TRS-80 reacted to atomic77 in Self-contained Tensorflow object detector on Orange pi lite + GC2035
I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing.
Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline. It runs them through a Tensorflow object detection model and spits out the same frame with extra metadata about the objects it found, rendering a box around each. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.
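The Prometheus side of that needs very little code. A minimal sketch (the metric and label names here are my own invention, not taken from the actual app):

```python
from collections import Counter

# Running tally of detected object classes, updated per processed frame.
seen = Counter()

def record_detections(labels):
    """Add one frame's worth of detected class labels to the tally."""
    seen.update(labels)

def prometheus_metrics() -> str:
    """Render the tally in Prometheus text exposition format."""
    lines = ["# TYPE objects_seen_total counter"]
    for cls, count in sorted(seen.items()):
        lines.append(f'objects_seen_total{{class="{cls}"}} {count}')
    return "\n".join(lines)

record_detections(["person", "dog", "person"])
print(prometheus_metrics())
```

A Flask route can simply return this string for Grafana/Prometheus to scrape.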
I'll explain some of the more interesting aspects of how I got this to work here in case anyone else wants to try to get some use out of this very inexpensive hardware, and I am grateful to the many posts on this forum that helped me along the way!
Use a 3.4 kernel with custom GC2035 driver
Don't bother with anything new - the GC2035 was hopeless on any newer builds of Armbian I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035, I run the following to get the camera up and running:
sudo sunxi-pio -m "PG11<1><0><1><1>"
sudo modprobe gc2035 hres=1
sudo modprobe vfe_v4l2
Install Tensorflow lite runtime
Google provides a tensorflow runtime as a binary wheel built for python 3.5 armv7. When pip installing, expect it to take 20 minutes or so as it will need to compile numpy (the apt repo version isn't recent enough)
wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
Build opencv for python 3.5 bindings
This was something I tried everything to avoid, but I just could not get the colour conversion from the YUV format of the GC2035 to an RGB image working with anything else I found online, so I was dependent on a single colour-conversion utility function.
To build the 3.4.12 version for use with python (grab lunch - takes about 1.5 hours :-O )
cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
  -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
  -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
  -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
make -j 4
make install
# Check that ~/local/lib/python3.5/dist-packages now has the cv2 shlib
export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages
Build gstreamer plugin for Cedar H264 encoder
This is required to get a working gstreamer pipeline for the video feed:
git clone https://github.com/gtalusan/gst-plugin-cedar
cd gst-plugin-cedar
./autogen.sh
sudo make install
# When trying against a pipc I had to copy into .local to get gstreamer to recognise it
cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
# Confirm that plugin is installed:
gst-inspect-1.0 cedar_h264enc
The full app source is on github, but the more interesting parts that took me some time to figure out were about getting python to cooperate with gstreamer:
Frames from the camera arrive to python at the end of the pipeline as an appsink. The Gstreamer pipeline I configured via python was:
src = Gst.ElementFactory.make("v4l2src")
src.set_property("device", "/dev/video0")
src.set_property("do-timestamp", 1)
filt = Gst.ElementFactory.make("capsfilter")
filt.set_property("caps", Gst.caps_from_string("video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
p1 = Gst.ElementFactory.make("cedar_h264enc")
p2 = Gst.ElementFactory.make("h264parse")
p3 = Gst.ElementFactory.make("rtph264pay")
p3.set_property("config-interval", 1)
p3.set_property("pt", 96)
p4 = Gst.ElementFactory.make("rtph264depay")
p5 = Gst.ElementFactory.make("avdec_h264")
sink = Gst.ElementFactory.make("appsink", "sink")
pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
sink.set_property("max-buffers", 10)
sink.set_property("emit-signals", True)
sink.set_property("sync", False)
sink.connect("new-sample", on_buffer, sink)
This pipeline definition causes a callback on_buffer to be called every time a frame is emitted from the camera:
def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
    # Sample will be a 800x900 byte array in a very frustrating YUV420 format
    sample = sink.emit("pull-sample")  # Gst.Sample

    # ... conversion to numpy array ...

    # rgb is now in a format that Pillow can easily work with
    # These two calls are what you compiled opencv for 1.5 hours for :-D
    rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
    rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
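To make that "frustrating" layout concrete: for 800x600 4:2:0 video the buffer is a full-resolution Y plane followed by two quarter-resolution chroma planes — 1.5 bytes per pixel, hence the 800x900 byte array. A small numpy sketch of the plane split, which cv2.cvtColor performs internally (the function name is mine, not from the app):

```python
import numpy as np

WIDTH, HEIGHT = 800, 600  # must match the GStreamer caps

def split_i420(buf):
    """Split a raw I420 (planar YUV 4:2:0) buffer into Y, U and V planes.

    Layout: a full-resolution Y plane, then a half-width/half-height U
    plane, then a matching V plane.
    """
    arr = np.frombuffer(buf, dtype=np.uint8)
    assert arr.size == WIDTH * HEIGHT * 3 // 2  # 800 * 900 bytes
    y_end = WIDTH * HEIGHT
    uv_size = (WIDTH // 2) * (HEIGHT // 2)
    y = arr[:y_end].reshape(HEIGHT, WIDTH)
    u = arr[y_end:y_end + uv_size].reshape(HEIGHT // 2, WIDTH // 2)
    v = arr[y_end + uv_size:].reshape(HEIGHT // 2, WIDTH // 2)
    return y, u, v

y, u, v = split_i420(bytes(800 * 900))  # dummy all-zero frame
```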
Once you have a nice pillow RGB image, it's easy to pass this into a Tensorflow model, and there is tons of material on the web for how you can do things like that. For fast but not so accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the H2 Allwinner CPU.
There are some problems I still have to work out. Occasionally the video stream stalls and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around tensorflow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.
For more details, including a detailed build script, the full source is here:
TRS-80 reacted to JMCC in AMD Threadripper 3990X Armbian Build Server Review
Okay, another use case. This one will bring some surprises.
Let us imagine we want to natively compile armhf/arm64 binaries — like, for example, making the new Armbian multimedia packages that we will announce very soon.
In this case, the Threadripper is at a clear disadvantage, since it needs to emulate the ARM CPU through QEMU. But will it be able to make up for that with core count and sheer processing power? Here are the numbers. We will compare the Threadripper with the Ampere ARM server, and with my highly optimized Odroid XU4 (good cooling and a slight overclock).
First, a single thread 7-zip bench (Decompressing MIPS, higher is better):
$ 7z b -mmt1

Threadripper (native amd64):      4793
Threadripper (emulating armhf):   1529
Ampere ARM server (native armhf): 2889
Odroid XU4 (native armhf):        2160

As you can see, the single-core performance of the Threadripper drops to about a third of its native performance when emulating through QEMU, leaving it well below the Odroid XU4 and the Ampere.
Now, a real-world use case: let us compile our customized version of Kodi for armhf (compilation time, lower is better):
$ time cmake --build . -- -j$(nproc --all)

Threadripper (emulating armhf): 18m9.696s
Ampere (native armhf):          5m50.033s
Odroid XU4 (native armhf):      45m50.711s

The 32-core ARM server beats the 64C/128T AMD server here with a compile time more than three times shorter, and the Odroid XU4 takes only slightly more than double the time of the AMD. If we factor in power consumption, it becomes very clear that compiling in an emulated environment is very suboptimal.
Now, remember that for building Armbian images we don't emulate but cross-compile. In that case the AMD is working natively, and that is another story: the AMD has absolutely no match in the ARM server, or in anything else I have ever tested. We will probably post numbers about this at some other opportunity.
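The distinction matters in practice: emulation runs every target instruction through QEMU's translator, while cross-compilation runs the compiler natively and only the output targets ARM. A sketch of each approach on a Debian-ish amd64 host (package and toolchain names are typical examples, not taken from Armbian's build scripts):

```shell
# Emulated "native" build: an armhf chroot executed via qemu-user-static.
sudo apt-get install qemu-user-static debootstrap
sudo debootstrap --arch=armhf buster rootfs-armhf http://deb.debian.org/debian
sudo chroot rootfs-armhf gcc -o hello hello.c   # every instruction translated

# Cross build: the compiler itself runs natively on amd64.
sudo apt-get install gcc-arm-linux-gnueabihf
arm-linux-gnueabihf-gcc -o hello hello.c        # only the output is armhf
```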
TRS-80 reacted to akschu in Stability issues on 20.08.21
I've been testing my Helios64 as well. I'm running Armbian 20.08.21 Focal, but I also downloaded the kernel builder script from GitHub and built linux-image-current-rockchip64-20.11.0-trunk, which is a 5.9.9 kernel. I installed that, then built OpenZFS 2.0.0-rc6. I then proceeded to syncoid 2.15TB of snapshots to it while also running a scrub, and was able to get the load average up to 10+. The machine ran through the night, so I think it might be stable. A few more days of testing will validate this.
TRS-80 reacted to NicoD in AMD Threadripper 3990X Armbian Build Server Review
I again had the pleasure of working with an amazing server. This time the AMD Threadripper 3990X, 64-cores and 128 threads.
After working with a 32-core ARM server last week, I thought I had seen performance.
This is again not comparable with anything before.
The specs of this monster :
ASRock Rack TRX40D8-2N2T
AMD Ryzen Threadripper 3990X
256GB memory (8 x 32GB) ECC
2 x 1TB PCIe 4.0 NVMe SSD
Water cooling
I again got private SSH access, so I opened three terminals: one with htop, another to check sensors, and the third to execute my benchmarks.
First thing I saw were the 128-cores. Being used to seeing 6, this was almost unbelievable.
With light loads it turbos up to 4.3GHz. All cores maxed out at 3GHz while consuming 400W.
And it scored 391,809 MIPS in 7-zip decompression.
Then, with all cores overclocked to 3.9GHz, it consumed over 600W, with a 7-zip decompression score of 433,702 MIPS.
This is again many levels better than the Ampere 32-core ARM server, which got 85,975 MIPS.
So this AMD server is up to 5x more powerful when overclocked than the Ampere 32-core server, while consuming 6x as much.
With the normal configuration they perform almost equally in performance/watt.
At idle it consumed 100W, which is a lot for doing nothing.
The 32-core ARM server only consumed a bit more than 100W maxed out.
The BMW Blender benchmark takes 29m23s on the fastest ARM SBC, the Odroid N2+. The Ampere ARM server did it in 8m27s.
For the Threadripper this was way too light a load: it did it in 30s.
Even doing this render 10 times in a row didn't raise the temperatures much. The maximum I saw was 50°C.
To try a heavier load I downloaded the Barber Shop Blender render: 6912 tiles. But again the Threadripper wasn't impressed, finishing in 2m18s79. I don't have anything to compare this to; nothing else I've got could do it within hours.
ARM may be great, but AMD is mighty. Intel doesn't have anything to compete with this, certainly not in performance/watt.
It was a pleasure benchmarking this server.
I learned a lot, like that I need to find better tools for these amazing machines.
You can see my full review video here, greetings.