TRS-80

Reputation Activity

  1. Like
    TRS-80 got a reaction from gounthar in Armbian Donations   
    I am also very pleasantly surprised at the support.  That was a decent figure, and we hit it pretty quickly.  There must be more people out there who appreciate Armbian than I realized.  A silent majority, if you will.
     
    The donations are greatly appreciated.  Thanks to everyone who pitched in!
     
    I also view this as an indicator, a validation of everyone's efforts who contribute and are involved in this project in one way or another.  FeelsGoodMan.jpg
     
    Cheers, mates!
     
       
  2. Like
    TRS-80 reacted to Werner in Armbian Donations   
    It is there for me?
     
    Anyway.
This community is crazy. Not only did we make it through the crowdfunding, we made it through in less than half the time it was public. This is insane. Not much to say but THANK YOU ALL WHO DONATED!
  3. Like
    TRS-80 reacted to NicoD in Armbian Donations   
    Donations reached the goal. 
    To everybody who helped. Big thank you. This will be well used. I've seen the server and it's a monster.

The next goal may be different desktop implementations, with GPU and maybe VPU acceleration if possible. Who could we hire? At what cost?
     
But before that, there are already enough new things coming soon. And the server will be put to good use for that.

    From the whole team. Thank you.
    NicoD
  4. Like
    TRS-80 reacted to NicoD in Box86 on the RK3399 with Armbian Reforged   
    It's mainline Focal 5.8 with panfrost. @Salvador Liébana did build the image. I'm just a messenger spreading the word.
His team is behind TwisterOS for the Raspberry Pi. They've got tons of portable apps to make life easy for everyone from noobs to experienced users. It will be an amazing addition to Armbian.
This is a preview of what's to come for more SBCs, certainly when panfrost is ready for the Odroid N2/N2+.

    They've got a whole club on Discord for every project of the group. A lot happens there. 
@Salvador Liébana Is it possible to write a build script for your image that makes use of ours? (Write down all your steps.) That would take a big load off your back in the long run.
You'd then always stay in step with Armbian changes. And maybe later we can merge this into the desktop project, so you can build it all without having to set anything up manually. I hate to see too many forks. People are better off all working together to improve what we build. Cheers.
  5. Like
    TRS-80 reacted to Gediz in Olimex LCD Panel Support for A64   
You're welcome. If you don't mind, I'd recommend using a more recent U-Boot version unless there is a specific reason to stay on v2015.04.
     
By the way, I had to spend a little bit of time integrating a custom U-Boot and Linux into Armbian for my own use case, and I noted down back then what and where I had to add or modify. Maybe this helps you a bit. Do not mind the FEX file; it was for a really old kernel.
     
```
.
├── config
│   ├── boards
│   │   └── myboard-a13.csc
│   ├── fex
│   │   └── olinux-som-a13.fex
│   ├── kernel
│   │   └── linux-sun5i-default.config
│   └── sources
│       └── sun5i.conf
├── defconfig
│   └── u-boot
│       └── myboard-a13_defconfig
└── userpatches
    ├── config-myboard.conf
    └── u-boot
        └── u-boot-sunxi-legacy
            └── myboard-a13_defconfig.patch
```
After copying an overlay like this into your Armbian build directory, just run ./compile.sh myboard, e.g.:
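(A minimal sketch; the paths are hypothetical, so adjust them to wherever your overlay and Armbian build tree actually live.)
```
# Hypothetical paths: the overlay from the tree above sits in ./myboard-overlay,
# and the Armbian build tree was cloned to ~/armbian-build
cp -r myboard-overlay/* ~/armbian-build/
cd ~/armbian-build
./compile.sh myboard
```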
     
    This configuration may be outdated to an extent. It's been some months.
  6. Like
    TRS-80 reacted to Gediz in Olimex LCD Panel Support for A64   
I do not know about the other issues, but if I recall correctly CONFIG_VIDEO_LCD_MODE is actually an option in U-Boot, not the kernel.
  7. Like
    TRS-80 reacted to SIGSEGV in iSCSI on Helios64 [Working great on Armbian 20.11 test images]   
    For anyone looking to build an iSCSI target on their new Helios64, I can confirm that it's working correctly under the test builds for Armbian 20.11.
Thanks to @aprayoga for his comment on the pull request, the kernel modules related to LinuxIO have been added to all boards in the Armbian project.
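(Not from the original post, but for anyone starting out: a minimal sketch of exposing a file-backed LUN with targetcli-fb, the usual userspace frontend for LinuxIO. The backing file path and both IQNs here are hypothetical.)
```
sudo apt install targetcli-fb
sudo targetcli
# Inside the targetcli shell:
/> backstores/fileio create disk0 /srv/iscsi/disk0.img 100G
/> iscsi/ create iqn.2020-11.local.helios64:disk0
/> iscsi/iqn.2020-11.local.helios64:disk0/tpg1/luns create /backstores/fileio/disk0
/> iscsi/iqn.2020-11.local.helios64:disk0/tpg1/acls create iqn.1993-08.org.debian:01:client
/> saveconfig
```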
     

  8. Like
    TRS-80 reacted to Pavel Löbl in Banana Pi P2 Zero NAND Installation support   
I already have the BPI-P2-Zero device tree and kernel config ready, at least for board revision 1.1. Version 1.0 has some hardware issues and differences. My current test image is based on Yocto/OpenEmbedded. I plan to send the DT upstream, and then I want to look at how to build Armbian for this board.
  9. Like
    TRS-80 reacted to Heisath in ClearfogPro: Difference in switch behaviour between LK4.19 and LK5.8   
    So I figured it out (and wasted 5 hours or so  ) ...
     
    Support for bridge flags for the mv88e6xxx chips was introduced here: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/drivers/net/dsa/mv88e6xxx?h=v5.1&id=4f85901f0063e6f435125f8eb54d12e3108ab064
One of these flags is flooding, which seems to be enabled by default and causes incoming packets on port A to be replayed on all other ports. This naturally lowers the receive speed on port A to the minimum transmit speed of all other ports.
     
An easy way to fix this is to issue 'bridge link set dev lan4 flood off' for all the lan ports on the Clearfog, as sketched below.
I confirmed this also works on LK5.8.
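(A minimal sketch, assuming the Clearfog Pro's usual lan1-lan6 port names; check `ip link` for the actual names on your board.)
```
# Disable flooding on every switch port that is part of the bridge
for port in lan1 lan2 lan3 lan4 lan5 lan6; do
    bridge link set dev "$port" flood off
done
```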
     
    Does it make sense to add a family tweak which does this for all lan ports? Or is this possible via device tree?

    EDIT: Nevermind, this is not an issue with the dsa but with the default settings of a linux bridge. Let's not touch it, because not everyone will put the lan ports on the clearfog in a bridge. If they do, they hopefully stumble upon this and check the man page: https://www.man7.org/linux/man-pages/man8/bridge.8.html
  10. Like
    TRS-80 reacted to pako in OPi Lite 2 + RPi.GPIO   
    I return to this old topic.
    Perhaps someone will be interested in my contribution.
    I have successfully modified the library https://github.com/Jeremie-C/OrangePi.GPIO, so it also works for H6.
    You can find the modified library here:
    https://github.com/Pako2/OrangePi.GPIO.
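(Not from the original post: installing from the repo presumably follows the usual RPi.GPIO-style setup.py flow, something like the sketch below; the install step is an assumption about the fork.)
```
git clone https://github.com/Pako2/OrangePi.GPIO
cd OrangePi.GPIO
# Assumes the fork keeps the standard setup.py install of RPi.GPIO-style libraries
sudo python3 setup.py install
```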
  11. Like
    TRS-80 reacted to akschu in Stability issues on 20.08.21   
Did more testing over the weekend on 5.9.9.  I was able to benchmark with FIO on top of a ZFS dataset for hours with the load average hitting 10+ while scrubbing the datastore.  No issues.  Right now the uptime is 3 days. 
     
    I'm actually a little surprised at the performance.  It's very decent for what it is. 
     
I wonder if the fact that I'm running ZFS and 5.9.9 while others are using mdadm and 5.8 is the difference.  I'm not really planning on going backwards on either.  If 5.9.9 works then there is no need to build another kernel, and you would have to pry ZFS out of my cold dead hands.  I've spent enough of my life layering encryption/compression on top of partitions on top of volume management on top of partitions on top of disks.  ZFS is just better, and having performance-penalty-free snapshots that I can replicate to other hosts over SSH is the icing on the cake. 
     
  12. Like
    TRS-80 reacted to NicoD in X86 Windows and Linux programs and games on RK3399 with Box86 Armbian Reforged   
Here is my instruction video on how to install Armbian Reforged and set it up. 
     
  13. Like
    TRS-80 reacted to SteeMan in Can not find image to download for TX3 mini   
    Given that you do have general linux knowledge and rpi familiarity, here are my comments on your requests.
I have 4 TX3 minis, three of which I run Armbian on and one that I use with the original Android.  I will mention that just because a box is labeled TX3 mini doesn't mean the internals are the same.  The manufacturers put identical external branding on boards that may be significantly different.  For example, all TX3 minis claim they have eMMC storage in them.  But only two of my TX3 minis have eMMC storage; the other two come with NAND storage (cheaper to manufacture that way).  Since mainline Linux doesn't support NAND, I can only install Armbian to internal storage on two of my boxes.
     
From the above linked post you need to download an image file from any of the download locations.  The file you are looking for is the arm-64 version from October 14th, 2020.  These are the last versions from balbes150 to support Amlogic CPUs.  So be warned that if and when you get this running on your TX3 mini box, there is currently no path to anything newer than this Oct 14 build with the 5.9.0 kernel.  You will get updates from your chosen distribution (Debian or Ubuntu), just no kernel updates, unless someone else in the community picks up the ball and begins building/maintaining Amlogic kernels.
     
    In the downloads directory you will find builds for debian (buster and bullseye) and ubuntu (bionic and focal), along with both a desktop and non-desktop version of each.
     
    Once you download your chosen build (for example  https://users.armbian.com/balbes150/arm-64/Armbian_20.10_Arm-64_focal_current_5.9.0.img.xz - ubuntu focal non-desktop build)
You need to burn the image to an SD card.  Generally balenaEtcher is recommended (however, I have only ever used dd on Linux to create my SD cards, so I have no familiarity with that tool).
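(A minimal dd sketch for those comfortable with it; /dev/sdX is a placeholder for your SD card device, so double-check with lsblk before writing.)
```
# Decompress and write in one go; /dev/sdX is your SD card, NOT a system disk
xz -dc Armbian_20.10_Arm-64_focal_current_5.9.0.img.xz \
  | sudo dd of=/dev/sdX bs=4M conv=fsync status=progress
```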
     
Once you have the SD card with your chosen build, you need to edit the boot configuration file on the SD card.  In the BOOT partition of the SD card there will be a file /boot/extlinux/extlinux.conf that you need to edit.  (In earlier builds this was done in the /boot/uEnv.txt file, so a lot of comments in these threads talk about that file, but in the latest builds it was changed to the extlinux.conf file.)
     
    Your extlinux.conf file should look like:
```
LABEL Armbian
LINUX /zImage
INITRD /uInitrd
# aml s9xxx
#FDT /dtb/amlogic/meson-gxbb-p200.dtb
FDT /dtb/amlogic/meson-gxl-s905w-tx3-mini.dtb
#FDT /dtb/amlogic/meson-gxm-q200.dtb
#FDT /dtb/amlogic/meson-g12a-x96-max.dtb
#FDT /dtb/amlogic/meson-g12b-odroid-n2.dtb
APPEND root=LABEL=ROOTFS rootflags=data=writeback rw console=ttyAML0,115200n8 console=tty0 no_console_suspend consoleblank=0 fsck.fix=yes fsck.repair=yes net.ifnames=0
```
     
Basically you need to have the correct dtb for your box and the correct boot command for your box, along with the top three entries set.  *Everything* else needs to be either deleted or commented out.  A common mistake is that people uncomment what they need but leave other lines in the file uncommented as well, and thus fail to boot.  The extlinux.conf file above is directly from my TX3 mini box.  Note that if you were using a different box than a TX3 mini, you would try different dtb files until you found the one that works best for your box's hardware (there are a bunch of dtb files in /boot/dtb/... to try, depending on your CPU architecture and hardware).
     
Next you need to copy the correct U-Boot for your box.  This is needed for Amlogic CPUs (other CPUs have different U-Boot steps).  For your TX3 mini you need to copy u-boot-s905x-s912 to u-boot.ext (note I say copy, not move), as below.
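It boils down to a one-liner, run from the BOOT partition of the SD card:
```
# Copy, don't move -- the original file must stay in place
cp u-boot-s905x-s912 u-boot.ext
```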
     
Once you have your SD card prepared, on an Amlogic box you need to enable multiboot.  There are different ways documented to do this, but for your TX3 mini box you should use the toothpick method.  At the back of the audio/video jack connector is a hidden reset button.  By pressing that button with a toothpick or other such pointed device you can enable multiboot.  What you need to do is have the box unplugged, have your prepared SD card inserted, then press and hold the button while inserting the power connector.  Then after a bit of time you can release the button.  (I don't know exactly how long you need to hold the button after power is applied, but if it doesn't work the first time, try again holding for longer or shorter times.)
     
    You should now be booting into armbian/linux.
     
If you want, at this point you can copy the installation to eMMC (assuming your box has eMMC).  You do this by running the appropriate shell script in /root, which in your case is /root/install-aml.sh.  Note that it is recommended to make a backup of eMMC first (use the ddbr tool that should be installed on your SD card).  Also be prepared, if anything goes horribly wrong with your eMMC install, to reinstall the Android firmware using the Amlogic USB Burning Tool to unbrick your device.  It is pretty easy to find TX3 mini Android firmwares on the internet, and you can generally recover a bricked box using the Amlogic tool and an original firmware file.
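(The backup-then-install sequence boils down to two commands; both tools are interactive, and both should already be on the SD card image.)
```
sudo ddbr                  # back up the current eMMC contents first
sudo /root/install-aml.sh  # then copy the running system to eMMC
```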
     
    Finally, I have written this from memory and haven't been actually doing these steps as I am writing, so there might be something I forgot to say, so I make no promises that this is completely accurate, but I think it is.
     
Also, don't expect all parts of your TX3 mini box to work.  You should have a working boot, working wired ethernet and working HDMI.  Don't expect things like wifi, bluetooth, the infrared remote or the box's display to work.  The experimental Armbian for these boxes is enough to get a basic server running and a light graphical display, but don't expect full functionality.
     
  14. Like
    TRS-80 reacted to SteeMan in Can not find image to download for TX3 mini   
    Follow the information in the first post in the following thread.  Note that the October 14, 2020 build of 5.9.0 is the last build that will have the necessary boot scripts to run on Amlogic boxes.
     
     
  15. Like
    TRS-80 reacted to SteeMan in Can not find image to download for TX3 mini   
    To answer this, you need to tell us what you are planning to do with your tx3 mini box.  Also if you could let us know your experience level with linux and different linux distributions that would be helpful as well.
  16. Like
    TRS-80 reacted to @lex in HTOP not showing CPU anymore   
This is classic memory corruption.
     
Htop has possibly crashed. During a crash, Htop emits a backtrace with some info.  If you have the backtrace info, please post it here along with your Htop version.
     
    You can also try a few things:
* Remove every meter: press F2, delete all the meters, and exit. Start again and add one CPU bar. If that is OK, then proceed with the rest.
* If you have some skills, build Htop with debug info; the backtrace will then show the function called just before the bad free(). A sketch of such a build follows.
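(Not from the original post: one way to get a debug build, assuming a source checkout; the repo URL reflects current upstream.)
```
git clone https://github.com/htop-dev/htop
cd htop
./autogen.sh
./configure CFLAGS="-g -O0"   # debug symbols, no optimization
make
gdb ./htop                    # reproduce the crash, then `bt` for the backtrace
```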
  17. Like
    TRS-80 reacted to Mangix in Random system reboots   
    Progress update: kernel 4.19.70 fails. .65 works. Testing .67 now.
     
    edit: .66 has not crashed yet. Will wait to see if it can stay alive for 12 hours.
     
I'm trying to compile kernels pinned to a specific commit. It doesn't seem to work, though. I'm trying:
     
    ```
    --- a/config/sources/families/mvebu.conf
    +++ b/config/sources/families/mvebu.conf
    @@ -10,7 +10,7 @@ fi
     case $BRANCH in
            legacy)
     
    -               KERNELBRANCH='tag:v4.19.66'
    +               KERNELBRANCH='commit:46b306f3cd7b47901382ca014eb1082b4b25db4a'
     
            ;;
    ```
     
    Which gives
     
    ```
    [ error ] ERROR in function compile_kernel [ compilation.sh:379 ]
    [ error ] Error kernel menuconfig failed 
    ```
     
    I'm trying to see which commit is responsible for the failure based on https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/log/?h=v4.19.158&ofs=9800
     
    Current theory is this commit: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?h=v4.19.158&id=46b306f3cd7b47901382ca014eb1082b4b25db4a
     
    It says it's for 32-bit.
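(Not something the post uses, but the standard tool for exactly this kind of search is git bisect on the stable tree; a sketch, using the good/bad versions reported so far:)
```
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start
git bisect bad v4.19.70    # known to crash
git bisect good v4.19.65   # known to stay up
# Build and boot the checked-out candidate, then report the result:
git bisect good            # or: git bisect bad
# Repeat until git prints the first bad commit
```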
  18. Like
    TRS-80 reacted to atomic77 in Self-contained Tensorflow object detector on Orange pi lite + GC2035   
    I got my hands on a "Set 9" Orange Pi Lite + GC2035 camera a while back and I've finally been able to put together a self-contained object detection device using Tensorflow, without sending any image data outside for processing.

Basically, it's a Python Flask application that captures frames from the camera using a GStreamer pipeline. It runs them through a Tensorflow object detection model and spits out the same frame with extra metadata about the objects it found, rendering a box around them. Using all four cores of the H2 it can do about 2-3 fps. The app keeps track of the count of all object types it has seen and exposes the metrics in Prometheus format, for easy creation of graphs of what it sees over time with Grafana.
     


    I'll explain some of the more interesting aspects of how I got this to work here in case anyone else wants to try to get some use out of this very inexpensive hardware, and I am grateful to the many posts on this forum that helped me along the way!

    Use a 3.4 kernel with custom GC2035 driver

    Don't bother with anything new - the GC2035 was hopeless on any newer builds of Armbian I tried. The driver available at https://github.com/avafinger/gc2035.git provided far better image quality. After installing the updated GC2035, I run the following to get the camera up and running:
     
```
sudo sunxi-pio -m "PG11<1><0><1><1>"
sudo modprobe gc2035 hres=1
sudo modprobe vfe_v4l2
```
    Install Tensorflow lite runtime
     
Google provides a Tensorflow Lite runtime as a binary wheel built for Python 3.5 armv7. When pip-installing, expect it to take 20 minutes or so, as it will need to compile numpy (the apt repo version isn't recent enough).
     
```
wget https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
sudo -H pip3 install tflite_runtime-2.5.0-cp35-cp35m-linux_armv7l.whl
```

    Build opencv for python 3.5 bindings

    This was something I tried everything I could to avoid, but I just could not get the colour conversion from the YUV format of the GC2035 to an RGB image using anything else I found online, so I was dependent on a single color-conversion utility function.
     
    To build the 3.4.12 version for use with python (grab lunch - takes about 1.5 hours :-O )

     
```
cmake -DCMAKE_INSTALL_PREFIX=/home/atomic/local -DSOFTFP=ON \
  -DBUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_python2=0 \
  -D BUILD_opencv_python3=1 -D WITH_GSTREAMER=ON \
  -D PYTHON3_INCLUDE_PATH=/usr/include/python3.5 ..
make -j 4
make install
# Check that ~/local/lib/python3.5/dist-packages now has the cv2 shlib
export PYTHONPATH=/home/atomic/local/lib/python3.5/dist-packages
```
     
    Build gstreamer plugin for Cedar H264 encoder
     
    This is required to get a working gstreamer pipeline for the video feed:
```
git clone https://github.com/gtalusan/gst-plugin-cedar
cd gst-plugin-cedar
./autogen.sh
sudo make install
# When trying against a pipc I had to copy into .local to get gstreamer to recognise it
cp /usr/local/lib/gstreamer-1.0/libgst* ~/.local/share/gstreamer-1.0/plugins/
# Confirm that the plugin is installed:
gst-inspect-1.0 cedar_h264enc
```
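(Not in the original post: a quick smoke test of the freshly built encoder from the shell, using a synthetic source, before involving Python. It assumes cedar_h264enc accepts the same NV12 caps used in the pipeline below.)
```
gst-launch-1.0 videotestsrc num-buffers=100 \
  ! video/x-raw,format=NV12,width=800,height=600 \
  ! cedar_h264enc ! h264parse ! fakesink
```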
    Processing images
     
    The full app source is on github, but the more interesting parts that took me some time to figure out were about getting python to cooperate with gstreamer:
     
Frames from the camera arrive in Python at the end of the pipeline via an appsink. The GStreamer pipeline I configured via Python was:
     
```
src = Gst.ElementFactory.make("v4l2src")
src.set_property("device", "/dev/video0")
src.set_property("do-timestamp", 1)
filt = Gst.ElementFactory.make("capsfilter")
filt.set_property("caps", Gst.caps_from_string(
    "video/x-raw,format=NV12,width=800,height=600,framerate=12/1"))
p1 = Gst.ElementFactory.make("cedar_h264enc")
p2 = Gst.ElementFactory.make("h264parse")
p3 = Gst.ElementFactory.make("rtph264pay")
p3.set_property("config-interval", 1)
p3.set_property("pt", 96)
p4 = Gst.ElementFactory.make("rtph264depay")
p5 = Gst.ElementFactory.make("avdec_h264")
sink = Gst.ElementFactory.make("appsink", "sink")
pipeline_elements = [src, filt, p1, p2, p3, p4, p5, sink]
sink.set_property("max-buffers", 10)
sink.set_property("emit-signals", True)
sink.set_property("sync", False)
sink.connect("new-sample", on_buffer, sink)
```
    This pipeline definition causes a callback on_buffer to be called every time a frame is emitted from the camera:

     
```
def on_buffer(sink: GstApp.AppSink, data: typing.Any) -> Gst.FlowReturn:
    # Sample will be a 800x900 byte array in a very frustrating YUV420 format
    sample = sink.emit("pull-sample")  # Gst.Sample
    # ... conversion to numpy array
    # rgb is now in a format that Pillow can easily work with
    # These two calls are what you compiled opencv for 1.5 hours for :-D
    rgb = cv2.cvtColor(img_arr, cv2.COLOR_YUV2BGR_I420)
    rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
```
Once you have a nice Pillow RGB image, it's easy to pass it into a Tensorflow model, and there is tons of material on the web about how to do things like that. For fast but not so accurate detection, I used the ssdlite_mobilenet_v2_coco pretrained model, which can handle about 0.5 frames per second per core of the H2 Allwinner CPU.
     
    There are some problems I still have to work out. Occasionally the video stream stalls and I haven't figured out how to recover from this without restarting the app completely. The way frame data is passed around tensorflow worker processes is probably not ideal and needs to be cleaned up, but it does allow me to get much better throughput using all four cores.
     
    For more details, including a detailed build script, the full source is here:
    https://github.com/atomic77/opilite-object-detect
  19. Like
    TRS-80 reacted to JMCC in AMD Threadripper 3990X Armbian Build Server Review   
    Okay, another use case. This one will bring some surprises.
     
    Let us imagine we want to compile natively armhf/arm64 binaries. Like, for example, making the new Armbian multimedia packages that we will announce very soon
     
In this case, the Threadripper will be at a clear disadvantage, since it needs to virtualize the ARM CPU through Qemu. But will it be able to make up for it with core count and sheer processing power? Here are the numbers. We will compare the Threadripper with the Ampere ARM server, and with my highly optimized Odroid XU4 (good cooling and a slight overclock).
     
    First, a single thread 7-zip bench (Decompressing MIPS, higher is better):
```
$ 7z b -mmt1
Threadripper (native amd64):      4793
Threadripper (emulating armhf):   1529
Ampere ARM server (native armhf): 2889
Odroid XU4 (native armhf):        2160
```
As you can see, the single-core performance of the Threadripper is reduced to 1/3 of its native performance when emulating through Qemu, leaving it well below the Odroid XU4 and the Ampere.
     
    Now, a real-world use case: let us compile our customized version of Kodi for armhf (compilation time, lower is better):
```
$ time cmake --build . -- -j$(nproc --all)
Threadripper (emulating armhf): 18m9.696s
Ampere (native armhf):          5m50.033s
Odroid XU4 (native armhf):      45m50.711s
```
The 32-core ARM server beats the 64C/128T AMD server here by more than a factor of three in compile time, and the Odroid XU4 comes in at just over double the compile time of the AMD. If we factor in power consumption, it becomes very clear that compiling in an emulated environment is very suboptimal.
     
Now, we must remember that for building Armbian images we don't emulate, but instead cross-compile. In that case the AMD is working natively, and that is another story: the AMD then has absolutely no match in the ARM server, or in anything else I have ever tested. We will probably post numbers about this at some other opportunity.
  20. Like
    TRS-80 reacted to akschu in Stability issues on 20.08.21   
I've been testing my Helios64 as well.  I'm running Armbian 20.08.21 Focal, but I also downloaded the kernel builder script thingy from GitHub and built linux-image-current-rockchip64-20.11.0-trunk, which is a 5.9.9 kernel.  I installed that, then built OpenZFS 2.0.0-rc6.   I then proceeded to syncoid 2.15TB of snapshots to it, also while doing a scrub, and was able to get the load average up to 10+.  The machine ran through the night, so I think it might be stable.  A few more days of testing will validate this.
     
    schu
  21. Like
    TRS-80 reacted to bozden in About eMMC's :)   
    Tesla MCU1 eMMC failure
    https://tesla-info.com/blog/tesla-mcu1-emmc-failure.php
     
     
  22. Like
    TRS-80 reacted to NicoD in AMD Threadripper 3990X Armbian Build Server Review   
    Hi all. 
I again had the pleasure of working with an amazing server, this time the AMD Threadripper 3990X, with 64 cores and 128 threads.
After working on a 32-core ARM server last week, I thought I had already seen performance.

    This is again not comparable with anything before.
     
I again got private SSH access. So I opened 3 terminals: one with HTop, another to check sensors, and the 3rd to execute my benchmarks.
The first thing I saw were the 128 threads. Being used to seeing 6, this was almost unbelievable.

With light loads it turbos up to 4.3GHz. All cores maxed out run at 3GHz while consuming 400W.
It reaches a single-core 7zip decompression score of 4545 MIPS @ 4.3GHz.
The Ampere 32-core ARM server at 3.3GHz reached 2763.
This again shows the Ampere server doesn't use high-performance cores; it doesn't perform great per clock.

Coming soon is a benchmark of an AWS server. This uses high-performance cores based on the ARM Neoverse N1, a derivative of the A76.
That reaches 3393, clocked at only 2.5GHz, so it does perform better per clock. Do know this is comparing pears with bananas (don't want to confuse things with apples).

The Threadripper scores 391809 MIPS in 7zip multi-core decompression with default settings.

Then, with an overclock to 3.9GHz on all cores, it consumed 600W+, with a 7zip decompression score of 433702 MIPS.
This is again many levels better than the Ampere 32-core ARM server, which got 85975 MIPS; 32 cores of the AWS Graviton2 do 110628.
So this AMD server is up to 5x more powerful when overclocked than the Ampere 32-core server, while consuming 6x as much. 
With normal configuration they both perform almost equally in performance/watt.

In idle the Threadripper consumed 100W, which is a lot for doing nothing.
The 32-core ARM server consumed only a bit more than 100W maxed out, and about 20W in idle.

The BMW Blender benchmark takes 29m23s on the fastest ARM SBC, the Odroid N2+. The Ampere ARM server did it in 8m27s.
For the Threadripper this was way too light a load; it did it in 30s. 

Even when doing this render 10 times back to back, it didn't raise the temperatures much. The maximum I've seen was 50C.

To try a heavier load I downloaded the Barber Shop Blender render. This was 6912 tiles to render, but again the Threadripper wasn't impressed by this load: 2m18s79. The AWS with 32 cores (of 64) did this in 8m28s. So the ARM server does compete well per clock with the TR for a floating-point task.

ARM may be great, but AMD is mighty. Intel does not have anything to compete with this, certainly not in performance/watt. 
    It was a pleasure benchmarking this server. 
    I learned a lot, like that I need to find better tools for these amazing machines.  
     
The specs of this monster:
     
ASRock Rack TRX40D8-2N2T
AMD Ryzen Threadripper 3990X
256GB ECC memory (8 x 32GB)
2 x 1TB PCIe 4.0 NVMe SSD
Water cooling
    The specs of the Threadripper 3990x
     
64 cores / 128 threads
AMD64 Zen2 Matisse
2.9GHz - 4.3GHz
4-channel DDR4-3200, 256GB RAM
88 PCIe 4.0 lanes
TSMC 7nm process node
280W - 400W+
32 KB L1 per core (64x)
64 x 512 KB L2
256 MB shared L3 cache
    You can see my full review video here, greetings.
    NicoD

     
  23. Like
    TRS-80 reacted to Werner in Slow write speeds. Helios64. (tested SATA and USB UAS DAS)   
I don't know the reason for having NTFS on these disks. Maybe laziness, maybe concerns about being able to simply take out a disk and put it back into your Windows PC for recovery, etc...
But I just want to let you know that drivers also exist for Windows that make real file systems like ext4 available. So there is no need to be afraid of traveling into unknown territory and formatting them as ext4, or any other awesome fs like ZFS, for example.
  24. Like
    TRS-80 reacted to gprovost in Slow write speeds. Helios64. (tested SATA and USB UAS DAS)   
I didn't read your first message properly and missed a very important piece of information: your disks are formatted as NTFS :-/  You know that there can be a big write performance penalty using an NTFS partition on Linux, since it is handled in userspace via libfuse.
     
Can you do a local write test with dd to the NTFS partition to see the performance?
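(Something along these lines; the mount point is a placeholder for wherever the NTFS partition is mounted.)
```
# Write 1 GiB and time it; adjust the path to your NTFS mount point
dd if=/dev/zero of=/mnt/ntfs/testfile bs=1M count=1024 conv=fsync status=progress
```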
     
    Honestly you should migrate to another file system than NTFS.
     
I did some tests again, and I max out gigabit when writing from PC --> Helios64 over SMB or FTP.
  25. Like
    TRS-80 reacted to Werner in AllWinner H616 boards   
    Hopefully it gets into better shape once Armbian starts tinkering with it