lanefu

Members
  • Posts

    1333
  • Joined

  • Last visited

Reputation Activity

  1. Like
    lanefu reacted to tparys in Help convert simple bash library to python class   
    Serious question.
     
    The sysfs gpio export interface has been deprecated since 4.8, and you guys run bleeding-edge kernels for your SBCs. Why do you want this?
     
    Also, the newer gpiod interface already has a native python interface. Is there need to reinvent the wheel here?
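    For context, the gpiod route tparys mentions looks roughly like this from the shell. This is only a sketch using the libgpiod v1 command-line tools (package gpiod on Debian-based systems); the chip and line numbers are placeholders for whatever your board exposes:

    ```shell
    # Discover GPIO chips and their lines (replaces poking around /sys/class/gpio)
    gpiodetect
    gpioinfo gpiochip0

    # Drive line 17 on gpiochip0 high, then read line 18 back
    gpioset gpiochip0 17=1
    gpioget gpiochip0 18
    ```

    The Python bindings (python3-libgpiod on Debian-based systems) expose the same chip/line model, so a bash library built around sysfs exports maps fairly directly onto a small class wrapping a gpiod chip and its line requests. These commands need real GPIO hardware, so treat them as illustrative.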
     
  2. Like
    lanefu reacted to MBC in Rockchip RK3566   
    Random trivia: apparently Pine64 owns 70% of all RK3566 SoCs on the market (March 2021)
  3. Like
    lanefu got a reaction from gounthar in Orangepi 3 h6 allwiner chip   
    Agree with @NicoD, the Opi3 is close but not quite. Anything RK3399 is going to provide the best desktop experience.
  4. Like
    lanefu got a reaction from Werner in Orangepi 3 h6 allwiner chip   
    Agree with @NicoD, the Opi3 is close but not quite. Anything RK3399 is going to provide the best desktop experience.
  5. Like
    lanefu got a reaction from gprovost in armbian-config RFC ideas   
    The Opi Zero is still insanely popular. IMHO that would be users wanting to "do something" in a hurry... which is where armbian-config shines.
     
    One could assume downloads with the legacy kernel are being used in some sort of media or desktop role.
     
     
     

  6. Like
    lanefu reacted to gprovost in armbian-config RFC ideas   
    @TRS-80 I think it's important once in a while to re-question the whole thing because it helps to clarify the real objective that we might sometimes forget... or justify that energy should be focused somewhere else.
     
    I think ultimately the life of a distro will always depend on the size of its user base, and let's distinguish here between community and user base. The user base includes all the passive users we never hear from; I have no clue how much that represents for Armbian. @Igor Maybe you have such numbers: number of downloads per image and number of active forum users? Even though this will not necessarily say whether armbian-config is useful, it will give a sense of the proportion of users we never hear from, who by my assumption are often users that don't hack/tinker too much with their boards.
     
    Then I think it could be useful to run a poll on the forum & Twitter and simply ask what people are using their SBC + Armbian for:
    1/ Hacking / Tinkering
    2/ Headless server
    3/ Set-top TV box
    4/ Work Desktop
     
    Such a poll would help us understand audience usage. If the great majority vote for option 1, then it would back up @TRS-80's assumption. But if the majority is 3 and 4, then clearly we are talking about a user base that is not really CLI oriented. As for option 2, it's a bit 50/50.
     
    We should also be aware of what's happening out there.
    1/ DietPi's user base is growing, and I guess it's because of their approach of making a big ecosystem of 3rd-party apps available to users via an interface a bit similar to armbian-config.
    2/ Debian / Ubuntu installs, even in headless mode, are actually not purely CLI, so we all get used to a bit of GUI even if we are advanced CLI users.
     
    I agree with Igor that the strength of armbian-config is also very much its configuration features (without forgetting nand-sata-install, which also deserves attention). So personally I see a big plus in having armbian-config around, because it can only help to widen the user base, which ultimately benefits Armbian's sustainability.
     
    Yes, that's a lot of assumptions I made; it's why I think the poll would really be a nice driver for this refactoring effort.
  7. Like
    lanefu reacted to beni0664 in cryptsetup - supporting no_read_workqueue/no_write_workqueue on SSDs   
    Dear Armbian community,
     
    although I'm using Armbian a lot, I never had to submit anything to this forum (fortunately, because it works so well :-)).
     
    On my PC I've experienced lags during heavy IO operations. After a short dig into the available information,
    I found a useful Cloudflare article on kernel queues together with dm-crypt.
     
    A good & short summary on possible actions for users can be found here:
     
    https://wiki.archlinux.org/index.php/Dm-crypt/Specialties#Disable_workqueue_for_increased_solid_state_drive_(SSD)_performance
     
    Enabling the no-read-workqueue & no-write-workqueue options helped a lot!
     
     
    As I'm using a RockPi4 with the NVMe SSD with encryption, I thought this should apply to my SBC as well.
     
    Unfortunately, Armbian/Debian Buster uses cryptsetup v2.1.0
     
    which does NOT support these options.
    According to the changelog, this option was introduced in v2.3.4:
    https://gitlab.com/cryptsetup/cryptsetup/-/blob/master/docs/v2.3.4-ReleaseNotes
     
    As Armbian uses a kernel > 5.9, the kernel infrastructure should be available.
     
    Fortunately, cryptsetup v2.3.4 exists in the buster-backports repo:
    https://packages.debian.org/buster-backports/cryptsetup
     
    Solution:
     
    # sudo apt install cryptsetup/buster-backports
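    One caveat worth adding (an assumption about a stock Buster install; your paths may differ): the buster-backports repo usually isn't enabled out of the box, so the install may need to be preceded by adding the source. Once the newer cryptsetup is in place, the options can be made persistent or applied on the fly. The mapping name nvmecrypt below is a hypothetical example:

    ```shell
    # Enable buster-backports if it is not already configured
    echo 'deb http://deb.debian.org/debian buster-backports main' | \
        sudo tee /etc/apt/sources.list.d/buster-backports.list
    sudo apt update
    sudo apt install -t buster-backports cryptsetup

    # Persistent: append the options to the device's options field in /etc/crypttab, e.g.
    #   nvmecrypt  /dev/nvme0n1p2  none  luks,discard,no-read-workqueue,no-write-workqueue

    # Or apply them to an already-open mapping without rebooting (cryptsetup >= 2.3.4)
    sudo cryptsetup refresh nvmecrypt --perf-no_read_workqueue --perf-no_write_workqueue
    ```

    These steps need root and an actual LUKS2 device, so treat them as a configuration sketch rather than something to paste blindly.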
     
  8. Like
    lanefu reacted to TRS-80 in Your account has been locked for security reasons - We have detected 3 failed log in attempts to your account from [...]   
    I received an email notification matching the title of this post two days ago. Just now I changed my password. But I thought I would make a post to see if anyone else might have been targeted (as I am a Moderator), especially any other Admins or Moderators.
     
    I am going to ping everyone I can think of (actually, I simply use the list here) below, but please add anyone else who I miss.
     
    @Igor @lanefu @Werner @TonyMac32 @SteeMan @soerenderfor @pfeerick @NicoD @JMCC @balbes150 @_r9
  9. Like
    lanefu got a reaction from hartraft in Regional Armbian Apt mirrors   
    Some more mirror updates:
    - By default, geoip support will redirect you to the best regional mirror pool (it's not perfect).
    - Armbian-config now supports mirror selection: Armbian-config -> personal -> mirrors. Depending on your version of armbian-config, you may first need to run apt update && apt install jq -y to install a missing dependency.
    - FOSSHOST mirror changes: the location-specific Fosshost mirror domains us.mirrors.fossho.st, us.mirrors.fosshost.org, uk.mirrors.fossho.st and uk.mirrors.fosshost.org are deprecated. FOSSHOST is now front-ending mirrors via the Fastly CDN (see the announcement from FOSSHOST.org); mirrors.fosshost.org and mirrors.fossho.st are all that is needed to use it.
    - Because Fastly is an on-demand caching CDN, performance will vary by your physical location: some may see an improvement, others a regression. The FOSSHOST apt mirrors are most effective in regions with poor internet connectivity outside of greater Europe. Currently our automatic redirect has FOSSHOST in the round-robin pool for North America and Asia. Those in China may want to use a China-specific mirror; those in Asia outside of China will likely benefit from choosing the Fastly mirror.
    - I recommend the existing Armbian EU mirrors for those within greater Europe with good internet connectivity.
    - Since these mirrors are caching, they work best under high utilization: great for apt indexes, common packages, and popular image downloads during a release; less ideal for unpopular images, which won't be cached.
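    Since mirror selection ultimately just points apt at a different host, a manual fallback is possible. This is a sketch: the file path assumes a stock Armbian Buster install, and MIRROR.EXAMPLE.ORG is a placeholder for whichever mirror you choose, not a real host:

    ```shell
    # Inspect the current apt source; by default it points at the geoip redirector
    cat /etc/apt/sources.list.d/armbian.list
    # deb http://apt.armbian.com buster main buster-utils buster-desktop

    # Swap in a specific mirror (placeholder host), then refresh the indexes
    sudo sed -i 's|http://apt.armbian.com|http://MIRROR.EXAMPLE.ORG/apt|' \
        /etc/apt/sources.list.d/armbian.list
    sudo apt update
    ```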
    We're very grateful to all our partners providing mirrors, and for their extreme generosity in providing a vast amount of space. We're always trying to optimize our package and image distribution. Armbian is a unique project: each release produces over 400 system images, so seeding and distribution are significant.
     
    If you have any questions or thoughts.. please share on this thread!
     
    lanefu
     
     
  10. Like
    lanefu reacted to jeanrhum in Need Quick python enhancement for our dl-redirect   
    I tried something: https://github.com/armbian/dl-router/pull/13
  11. Like
    lanefu got a reaction from Igor in Regional Armbian Apt mirrors   
  12. Like
    lanefu got a reaction from Werner in Regional Armbian Apt mirrors   
  13. Like
    lanefu got a reaction from Werner in Req: Build my Dream Compute SBC   
    The Firefly servers are quite ideal... although the upfront cost is a little rough. TV boxes are a little too inelegant. My proposal lets you have just one wire into the board and lets you expand slowly.
  14. Like
    lanefu got a reaction from gounthar in Req: Build my Dream Compute SBC   
    Suggestion for a (near future) product.
     
    We're still lacking a good SBC in the wild for small clusters. Recent SoC performance is good. Currently all the homelab and k8s nerds are just using RPi4s because they have a header for PoE support. We know there are better options.

    We pitched this to Orange Pi, but they weren't receptive. Just sharing my idea here.
     
    A Lean SBC exclusively for server tasks.
    "The Ultimate Homelab SBC"

    Real 802.3at PoE+ means only one cable for your compute node
    Use SPI flash for custom provisioning configuration
     
    Optimized for Compute
    "Ready for clustering and kubernetes"
    It has the performance and storage you need to easily deploy clustered computing.

    Suggested Reference Specs
    RK3399K or similar performant SoC
    Gigabit Ethernet
    4 GB dual-channel LPDDR4
    16-32 GB eMMC
    SPI flash (128 Mbit / 16 MByte)
    No WiFi, no Bluetooth, no audio, no CSI, no HDMI
    USB 3
    802.3af/at PoE support, or support for an RPi4-style 6-pin PoE HAT
    All ports in rear?
    Holes for additional heatsink mounting options
  15. Like
    lanefu got a reaction from TRS-80 in Req: Build my Dream Compute SBC   
    I had to try 🤓
  16. Like
    lanefu reacted to Andy Sun in Req: Build my Dream Compute SBC   
    We also virtualize 10 Android cellphones in 1 RK3399 CPU, which means the Cluster server R2 can virtualize 720 Android phones.
     
     

  17. Like
    lanefu got a reaction from balbes150 in Req: Build my Dream Compute SBC   
    I understand your point, but I just want this to have one wire to my PoE switch. My cluster is pretty reliable for my Nomad workloads at home.


  18. Like
    lanefu reacted to P.P.A. in Understanding Hardware-Accelerated Video Decoding   
    I've taken peeks at threads related to hardware decoding (of H.264 and HEVC, mainly) on Allwinner and Rockchip platforms on and off, sometimes dabbled in trying and failing to implement solutions recommended there. Being a complete amateur, I find the topic very opaque and confusing, with various different components that need to interface with each other, be patched in sync, and sometimes change drastically between kernel versions, etc. Today I sat down and read up on these subjects, scouring wikis, documentation, this forum, and assorted other sources to try and understand how this works. In this post I will attempt to compile what I've learned on the different software components involved, their relationships, their current status, and solutions to the problem. I hope people more knowledgeable will correct me when I get something wrong or cite outdated information. Stuff which I am highly uncertain of I will print in italics.
    (This post is going to focus on mainline implementations of Cedrus/Allwinner, I haven't looked into Hantro/Rkvdec/Rockchip specifics yet. I will speak only of H.264 and H.265/HEVC; I don't understand the high/low stuff and didn't pay attention to other codecs.)
     
    Components:
     
    Basics: Video codecs like H.264, H.265/HEVC, MPEG-2, etc. are standardised methods which serve to more efficiently encode and decode videos, reducing their filesize. Software en-/decoding is very CPU-intensive. Modern GPUs and ARM SoCs therefore contain specialised hardware (VPUs) to delegate these tasks to. Working hardware decoding is particularly important for underpowered ARM CPUs.
    Drivers: Topmost in the stack are the VPU drivers. These are Sunxi-Cedrus/Cedrus V4L2 M2M and Cedar [Is this the legacy one?] on Allwinner; Hantro and Rkvdec on Rockchip. These are all still in development, but Cedrus already fully supports H.264, and partially supports HEVC, and is already usable in the mainline kernel.
    APIs: In order for anything (userspace APIs, libraries) to make use of the VPU drivers, you need backends/APIs. For Cedrus, there is the unmaintained libva-v4l2-request backend which implements VA-API, the legacy VDPAU implementation libvdpau-sunxi, and as of kernel version 5.11, H.264 has been merged into the uAPI headers. Different applications may make recourse to one or another of these APIs.
    Libraries: FFmpeg and GStreamer provide libraries and APIs of their own to other applications but can (importantly!) also output directly to the framebuffer. FFmpeg must be patched to access either libva-v4l2-request or the Cedrus driver headers. GStreamer directly accesses kernel headers since 1.18 (works on 5.9, not on 5.10; 1.20 will support 5.11).
    Media players: mpv depends on FFmpeg for hardware acceleration (and must be patched together with it). VLC can be set to access libva-v4l2-request directly. Kodi 19.0+ supports hardware acceleration out of the box without any out-of-tree patches.
    Display server: An additional complication is drawing the output of any of the above on screen. Most successful implementations I've seen bypass X11 and either output directly to the framebuffer or force a plane/display layer on top of any X windows. Wayland apparently makes this easier by allowing applications to use their own DRM planes, but this hasn't been explored much yet. Kodi 19.0 works with all three windowing systems (X, Wayland, and gbm).
     
    Codec status:
     
    Taken from the LibreElec thread (which reflects LibreElec's status and is ahead of what works elsewhere, but outlines hardware limitations):
     
     
    Approaches:
     
    Many people have managed to make it work on their machines using different approaches. Note that some of these solutions are one or two years old, and kernel developments since may have changed the situation. Ordered from newer to older:
     
    LibreElec – kernel + ffmpeg + Kodi: LibreElec is a Just-Enough-OS with the sole purpose of running Kodi, a media player. It's at the bleeding edge and usually implements codecs and features well before mainline or other distros. It achieves this by heavily patching everything up and down the stack, from the Linux kernel over FFmpeg to Kodi itself. These patches could all be applied to an Armbian build, but there are a lot of them, they're poorly documented, and you'd need to dig into their github to understand what they all do. LibreElec runs Kodi directly without a desktop. kodi-gbm is a package that can be installed on Armbian and functions similarly.
    Key contributors to the project are @jernej and @Kwiboo, who sometimes post about their work here (and have been very helpful with questions, thank you). @balbes150 includes some of LibreElec's patches in his Armbian-TV builds, but I don't know which.
     
    Kodi 19.0: 
     
    LibreElec patches + mpv:
     
     
    @megous – Kernel 5.11 + GStreamer: This implementation, done here on a PinePhone (A64), patches the 5.9 kernel and uses a recent version of GStreamer (1.18 and up), whose output is rendered directly to a DRM plane via kmssink. (No X or Wayland.)
    GStreamer 1.18 works with the 5.9 kernel. It does not work with 5.10, because of numerous changes to the kernel headers in this version. In 5.11 the H.264 headers were moved into the uAPI; the master branch of GStreamer reflects this, but there haven't been any releases with these patches yet. It'll probably be in repositories with GStreamer 1.20; until then you can build it from source.
     
    @Sash0k – patched libva-v4l2-request + VLC: This updates bootlin's libva-v4l2-request and follows the Sunxi wiki's instructions for enabling VLC to make use of it. It works on the desktop. This only works for H.264 and breaks HEVC. When I tried to replicate this approach on a recent Armbian build, I discovered that the h264.c files in the kernel (that libva-v4l2-request draws on) have changed considerably between 5.8 and 5.10, and I lack the understanding to reconcile libva-v4l2-request with them.
     
    @ubobrov – old kernel + libcedrus + libvdpau-sunxi + ffmpeg + mpv: This approach, which supports encoding and decoding of H.264, uses the libvdpau-sunxi API and ports the legacy driver to mainline as a loadable kernel module (if I understand correctly, ubobrov ported a legacy feature to mainline). In the post quoted below the kernel is 4.20, but the same method has been successfully applied to 5.7.8 by another user. It requires that the board's device tree be patched, as documented in ubobrov's github repository.
     
     
    The summary seems to be that none of the current implementations on Allwinner boards really play nice with X or desktop sessions, and it's best to output directly to the framebuffer. Kwiboo has forked FFmpeg and mpv to make good use of new and unstable kernel features/hardware acceleration which will take a while to make their way upstream. The recent 5.11 move of stateless H.264 out of staging and into the uAPI should facilitate further developments.
    I intend to try some of these things in the nearer future. Thanks to everyone who works on mainlining all of this VPU stuff, and to users here who contribute solutions and readily & patiently answer questions (Jernej especially). I hope I didn't post any falsehoods out of ignorance, and welcome any corrections.
     
    Other related threads here:
    https://forum.armbian.com/topic/11551-4kp30-video-on-orange-pi-lite-and-mainline-hardware-acceleration/
    https://forum.armbian.com/topic/16804-orange-pi-pc-h3-armbian-focal-5104-sunxi-av-tv-out-cvbs-enable/page/2/
    https://forum.armbian.com/topic/11184-hardware-graphicvideo-acceleration-in-h3-mainline/
    https://forum.armbian.com/topic/13622-mainline-vpu/
  19. Like
    lanefu reacted to P.P.A. in Understanding Hardware-Accelerated Video Decoding   
    Thanks to the very patient support from jernej and ndufresne on the linux-sunxi IRC channel, I could confirm that GStreamer 1.19+ works out of the box on the 5.11 kernel (sunxi-dev), tested with Hirsute and Bullseye, at least on the H3/Orange Pi PC. (I haven't been able to build or run a 5.11 image on a H6 device, so I couldn't test it there yet.)
     
    Kernel:
    Build 5.11.y with this patch; it must be included as a userpatch. 5.11.6 includes it from the get-go, but below that you need to add it yourself:
    <jernej> PPA: kernel 5.11.6 is released with Cedrus patch  
    Requirements:
    sudo apt update
    sudo apt install meson ninja-build pkg-config libmount-dev libglib2.0-dev libgudev-1.0-dev libxml2-dev libasound2-dev
    Building & installing GStreamer:
    git clone https://gitlab.freedesktop.org/gstreamer/gst-build.git
    cd gst-build
    meson -Dgst-plugins-bad:v4l2codecs=enabled -Dgst-plugins-base:playback=enabled -Dgst-plugins-base:alsa=enabled build
    ninja -C build
    cd build
    sudo meson install
    sudo /sbin/ldconfig -v
    You can play videos from the command line to the framebuffer. At least on the OPiPC there is a problem where it doesn't automatically play on the correct DRM layer and the video is hidden. To fix this, it needs to be run once per boot with “plane-properties=s,zpos=3”:
     
    gst-play-1.0 --use-playbin3 --videosink="kmssink plane-properties=s,zpos=3" video.file  
    Afterwards it should be fine with just gst-play-1.0 --use-playbin3 input.video (until the next reboot).
    H.264 (except for 10-bit, which the hardware cannot handle) decodes smoothly with CPU load across all cores around 2%–5%.
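    A quick way to confirm that it is really the stateless V4L2 decoder doing the work, rather than software fallback (a sketch; the element name below comes from the v4l2codecs plugin enabled in the build above and needs a board with a working Cedrus driver):

    ```shell
    # List the decoders the v4l2codecs plugin registered against the kernel
    gst-inspect-1.0 v4l2codecs | grep -i dec
    # On a Cedrus board an H.264 element such as v4l2slh264dec should appear;
    # if the list is empty, the plugin did not find usable request-API /dev/video* nodes.
    ```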
     
     
    Thanks; the OP was edited accordingly.
  20. Like
    lanefu got a reaction from manuti in Req: Build my Dream Compute SBC   
    Those are totally awesome... The data sheets don't have enough information about the networking.

    Does the built-in switch support LAG/LACP and VLAN trunking?
  21. Like
    lanefu got a reaction from manuti in Req: Build my Dream Compute SBC   
  22. Like
    lanefu reacted to balbes150 in Req: Build my Dream Compute SBC   
    If the cluster is needed for real productive work, and not just to "try and practice", you need to look at the correct configuration of the entire complex. Required elements:
    1. High-quality metal housing for use in racks. The metal of the housing shields the interference, ensures proper cooling and stability of the parameters.
    2. Power supply unit. There is not even anything to discuss here: only a high-quality product, and hot-swap with redundancy is very desirable. If the cluster is running, it must not be interrupted by a power drop.
    3. Smart cooling system (passive radiators on the elements and a general active fan system with control sensors). It is a "system", and not a set of mismatched fans with difficult docking and control. Independent hardware management.
    4. Easy extensibility and scalability (different modules by performance and configuration from minimum to maximum). And the free build-up (addition) of modules without complex reconfiguration of the entire system.
    Firefly R1 R2 servers are well suited to these requirements. But this is for those who really understand that they need the maximum cost/result ratio from a cluster system. For those who only want to practice and learn the principles, you can use any model, down to the simplest TV boxes.
  23. Like
    lanefu got a reaction from Werner in Req: Build my Dream Compute SBC   
    I had to try 🤓
  24. Like
    lanefu reacted to gprovost in armbian-config RFC ideas   
    I really like the idea of putting in the constraint that all 3rd-party applications must only be supported through containers. That will clearly help to improve the user experience thanks to well-maintained containers, but mainly it avoids app installations that could potentially mess up the OS. We need to think of a way to list in the menu the already-installed containers (of course, only the containers installed by armbian-config). Also the question will be: do we want to make the docker and docker-compose packages a dependency of the new armbian-config? I think we should, it will make things easier... but of course it's also not a big deal to install them only when the first 3rd-party app is being installed.
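    As a sketch of how thin that container layer could be (everything here is hypothetical, not an existing armbian-config feature; the image is just an example of a popular self-hosted app): installing a 3rd-party app reduces to pulling an image tagged with a label, and "list installed apps" becomes a label filter.

    ```shell
    # Install docker only on first use (the dependency question discussed above)
    command -v docker >/dev/null || sudo apt install -y docker.io

    # Launch a 3rd-party app, labelled so the menu can find its own containers
    sudo docker run -d --name uptime-kuma --label managed-by=armbian-config \
        -p 3001:3001 louislam/uptime-kuma

    # Listing the installed apps is then a label filter
    sudo docker ps --filter label=managed-by=armbian-config
    ```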
     
    I also like the fragmented approach proposed by @tparys; clearly it will help a lot with maintenance and contribution. We will need to define a clear policy on how to handle local and global variables between the different chunks of scripts.
     
    On the CI topic, I unfortunately also don't see what could be really easily tested in an automated pipeline.
     
    Happy to participate in the refactoring effort.
     
     
  25. Like
    lanefu got a reaction from jeanrhum in Req: Build my Dream Compute SBC   