
Req: Build my Dream Compute SBC


lanefu


Suggestion for a (near future) product.

 

We're still lacking a good SBC in the wild for small clusters. Recent SoC performance is good. Currently all the homelab and k8s nerds are just using RPi 4s because they have a header for PoE support. We know there are better options.

We had pitched this to Orange Pi, but they weren't receptive. Just sharing my idea here.
 

A Lean SBC exclusively for server tasks.

"The Ultimate Homelab SBC"


Real 802.3at PoE+ means only one cable for your compute node

Use SPI flash for custom provisioning configuration (a rough first-boot sketch follows the spec list below)

 

Optimized for Compute

"Ready for clustering and kubernetes"

Has the Performance and Storage you need to easily deploy for clustered computing


Suggested Reference Specs

  • RK3399K or similar performant SoC
  • Gigabit Ethernet
  • 4 GB dual-channel LPDDR4
  • 16-32 GB eMMC
  • 128 Mbit / 16 MB SPI flash
  • No WiFi
  • No Bluetooth
  • No audio
  • No CSI
  • No HDMI
  • USB 3
  • 802.3af/at PoE support, or support for an RPi 4-style 6-pin PoE HAT
  • All ports in rear?
  • Holes for additional heatsink mounting options
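
To make the provisioning idea above a bit more concrete, here is a rough first-boot sketch. Everything in it is hypothetical: the /boot/node-config.json location and its field names are made up for illustration, and it assumes the node joins an existing k3s cluster using the standard installer from https://get.k3s.io.

#!/usr/bin/env python3
# Rough sketch only: read a (hypothetical) node-config.json written into the
# provisioning flash/eMMC image and join the node to an existing k3s cluster.
import json
import os
import subprocess

CONFIG_PATH = "/boot/node-config.json"   # hypothetical location

def main():
    with open(CONFIG_PATH) as f:
        cfg = json.load(f)

    # Give the node a predictable hostname for the cluster.
    subprocess.run(["hostnamectl", "set-hostname", cfg["hostname"]], check=True)

    # Join as a k3s agent: when K3S_URL is set, the official installer
    # configures the node as an agent pointing at that server.
    env = dict(os.environ, K3S_URL=cfg["server_url"], K3S_TOKEN=cfg["join_token"])
    installer = subprocess.run(["curl", "-sfL", "https://get.k3s.io"],
                               capture_output=True, check=True)
    subprocess.run(["sh", "-"], input=installer.stdout, env=env, check=True)

if __name__ == "__main__":
    main()

Something like that could be baked into the image and pointed at per-node data kept on the SPI flash, so each board provisions itself on first boot with nothing but the one PoE cable attached.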

I don't follow the cluster stuff as much as you do, lanefu (yet), but because of my recent interest in Pine64 I became aware they sell a CLUSTERBOARD with 7 slots for their SOPINE A64 COMPUTE MODULEs. Of course, being A64 based, these would be less powerful than what you are describing in the OP (as well as having a different form factor); OTOH, these are actual things you can buy in their store, right now.

 

Not sure how relevant PCIe lanes are to cluster computing, but I guess it depends on what the workload is. Especially if the workload is not CPU based, that could potentially be a killer feature.

 

I also cannot help but speculate it might be too niche a market to interest a manufacturer. But what do I know? As a counterpoint, Pine64 are already selling something in this space, so... who knows?

 

EDIT: I was just watching a NicoD video and was reminded that the ODROID N2+ might be even better for this. The I/O is hobbled (shared USB), but it has Gigabit Ethernet (not PoE though) and rates pretty high in his CPU/rendering type benchmarks.

 

Also, I know it's a "dream" thread, so I hope you don't mind too much my mini review of current solutions (as a comparison, or in case anyone is looking now). OK, back to "dreaming"...  :)


5 hours ago, TRS-80 said:

I don't follow the cluster stuff as much as you do, lanefu (yet), but because of my recent interest in Pine64 I became aware they sell a CLUSTERBOARD with 7 slots for their SOPINE A64 COMPUTE MODULEs. Of course, being A64 based, these would be less powerful than what you are describing in the OP (as well as having a different form factor); OTOH, these are actual things you can buy in their store, right now.


Yeah, just not enough gusto with the A64 module... kind of DOA if you ask me. If they had an updated module it would be worth taking a look at. It would also be constrained by the single gigabit interface between the cluster board itself and the rest of your compute.

When building out a big cluster of SBCs, having PoE is a HUGE win for cable management and sanity... also you can reboot a node just by killing PoE on the switch port.

https://dev.to/awwsmm/building-a-raspberry-pi-hadoop-spark-cluster-8b2#hardware
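
As an illustration of the "reboot by killing PoE on the switch" trick, here is a rough Python sketch using Net-SNMP's snmpset. The switch address, SNMP community, and port index are placeholders; the OID is pethPsePortAdminEnable from the standard POWER-ETHERNET-MIB (RFC 3621), but many switches only expose PoE control through vendor-private MIBs or their own CLI, so check your switch documentation.

#!/usr/bin/env python3
# Rough sketch: remotely power-cycle one PoE port on a managed switch via SNMP.
import subprocess
import time

SWITCH = "192.168.1.2"      # placeholder: switch management address
COMMUNITY = "private"       # placeholder: SNMP write community
# pethPsePortAdminEnable.<group>.<port> from POWER-ETHERNET-MIB (RFC 3621)
OID = "1.3.6.1.2.1.105.1.1.1.3.1.{port}"

def set_poe(port: int, enabled: bool) -> None:
    value = "1" if enabled else "2"    # true(1) / false(2)
    subprocess.run(["snmpset", "-v2c", "-c", COMMUNITY, SWITCH,
                    OID.format(port=port), "i", value], check=True)

def power_cycle(port: int, off_seconds: int = 5) -> None:
    set_poe(port, False)
    time.sleep(off_seconds)
    set_poe(port, True)

if __name__ == "__main__":
    power_cycle(7)   # reboot whatever node hangs off switch port 7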

The N2+ is definitely top dog for brute-force crunching, but probably not the ideal device for purchasing and installing in quantity.

The RockPi 4C is probably the closest thing to ideal, as it supports a PoE HAT and eMMC. It would be nice to get the price point down by eliminating extra ports and having PoE and eMMC onboard.


@lanefu Ahahah, yeah, it seems you are describing more of a cluster node SBC than a NAS, which is our main focus at Kobol ;-) Sorry... While for me PoE = remote power supply for the big family of everything-and-nothing that we call IoT, I can clearly see the benefit of it for easy clustering of a small SBC compute node.

 

Maybe you should move the topic to a more general Rockchip thread, because honestly I don't think such a design would be a priority for us right now... but who knows ;-)


5 hours ago, gprovost said:

Maybe you should move the topic to a more general Rockchip thread, because honestly I don't think such a design would be a priority for us right now... but who knows ;-)

I had to try 🤓


If the cluster is needed for real production work, and not just to try things out and practice, you need to look at the correct configuration of the entire system. Required elements:

1. A high-quality metal housing for rack use. The metal housing shields against interference and ensures proper cooling and stable operating parameters.

2. Power supply unit. There is not even anything to discuss here: only a high-quality product, ideally with hot-swap and redundancy. If the cluster is running, it must not be interrupted by a power drop.

3. A smart cooling system (passive heatsinks on the components plus an overall active fan system with monitoring sensors). It should be a "system", not a set of mismatched fans that are awkward to mount and control. Independent hardware management.

4. Easy extensibility and scalability (modules of different performance levels and configurations, from minimum to maximum), with free addition of modules without complex reconfiguration of the entire system.

The Firefly R1 and R2 servers fit these requirements well. But this is for those who really understand that they need the maximum cost/result ratio from the cluster system. For those who only want to practice and learn the principles, you can use any model, down to the simplest TV boxes.  :)


On 3/4/2021 at 11:11 PM, lanefu said:

Suggestion for a (near future) product. [...] A Lean SBC exclusively for server tasks. "The Ultimate Homelab SBC" [...]

We have the Cluster Server R1 with 11 CPU modules clustered (RK3399, also compatible with other CPUs), and the Cluster Server R2 with 73 CPU modules clustered (RK3399, also compatible with other CPUs).

Cluster Server R1: http://en.t-firefly.com/product/clusterserver
Cluster Server R2: http://en.t-firefly.com/product/cluster/r2.html?theme=pc


6 hours ago, balbes150 said:

The Firefly R1 and R2 servers fit these requirements well. But this is for those who really understand that they need the maximum cost/result ratio from the cluster system. For those who only want to practice and learn the principles, you can use any model, down to the simplest TV boxes.  :)


The Firefly servers are quite ideal, although the upfront cost is a little rough. TV boxes are a little too inelegant. My proposal lets you have just one wire into the board and expand slowly.


6 hours ago, Andy Sun said:

We have the Cluster Server R1 with 11 CPU modules clustered (RK3399, also compatible with other CPUs), and the Cluster Server R2 with 73 CPU modules clustered (RK3399, also compatible with other CPUs).

Cluster Server R1: http://en.t-firefly.com/product/clusterserver
Cluster Server R2: http://en.t-firefly.com/product/cluster/r2.html?theme=pc


Those are totally awesome... but the data sheets don't have enough information about the networking.

Does the built-in switch support LAG/LACP and VLAN trunking?


1 hour ago, lanefu said:

The Firefly servers are quite ideal, although the upfront cost is a little rough. TV boxes are a little too inelegant. My proposal lets you have just one wire into the board and expand slowly.

To practice creating and working with a cluster, you can use any "junk". But for real work (when the result is more important than the cost), all the "one-wire constructors" assembled on the knee from mismatched components always end up costing more than a proven solution. It is important to set your priorities correctly. To develop the software (which will run in the cluster cells), a minimal developer board with one of the modules planned for the cluster is used. The price of such a devkit (board + module) is not high, and you can have several of these kits for all project participants (developer, tester, debugger, etc.). And the full server (a set that may consist of more than one physical server) is used at the final stage, when the entire cluster system is put into operation.   ;)


18 minutes ago, balbes150 said:

To practice creating and working with a cluster, you can use any "junk". But for real work (when the result is more important than the cost), all the "one-wire constructors" assembled on the knee from mismatched components always end up costing more than a proven solution. It is important to set your priorities correctly. To develop the software (which will run in the cluster cells), a minimal developer board with one of the modules planned for the cluster is used. The price of such a devkit (board + module) is not high, and you can have several of these kits for all project participants (developer, tester, debugger, etc.). And the full server (a set that may consist of more than one physical server) is used at the final stage, when the entire cluster system is put into operation.


I understand your point, but I just want this to have one wire to my PoE switch. My cluster is pretty reliable for my Nomad workloads at home.


