
Posted

Hey guys.

 

I'm totally new to this; I used to fiddle with the Pi when it first came out, but never did any real serious projects. Just recently I've gotten annoyed with how I play back media and where I keep my storage, so I figured it was time for an upgrade.

 

I've been reading this thread a lot, and much respect goes to tkaiser for making such an amount of info available to us newbies. I had actually ordered the Banana Pi M3 until I read the wikis and this post, which led me to believe that the BPi M3 would make for a fantastically shitty NAS, so I canceled the order before it was shipped.

 

From those numbers I gather that the Clearfog with SATA would be one of the best options for a somewhat cheap and good NAS solution, though it might be better to have a bit more horsepower if I want to use it as more than a NAS. This led me to check out the ROCK64, which I ultimately bought. I figured I would use it as a NAS backed by a USB3 RAID5 enclosure and for misc other stuff (FTP, backup, etc.), but mainly as a NAS and Plex server.

 

So my question is: can I expect to saturate the gigabit link for 1-2 users with that setup, and will it work well for my use case? From the numbers that I saw (the more detailed ones here) for the Clearfog I'm expecting around 120 MB/s write and ~250 MB/s read on my setup; would that be in the ballpark of what I can expect?
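As a sanity check on those numbers, some generic Gigabit Ethernet arithmetic (not measurements from this particular setup):

1 Gbit/s = 1000 Mbit/s ÷ 8 = 125 MB/s raw
minus Ethernet/IP/TCP framing overhead ≈ 117 MB/s usable on the wire
minus SMB/NFS protocol overhead ≈ 100-115 MB/s as seen in a file manager

So ~120 MB/s write is already at the ceiling of a single gigabit link, while the ~250 MB/s read figure is a local SATA benchmark number that no single gigabit connection can deliver to a client.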

 

Anything else I need to keep in mind? The ROCK64 will not be running a DE.

 

Sorry if my questions are ignorant somehow.

Best regards

Stefan

Posted
1 hour ago, stefan.z.j said:

I had actually ordered the Banana Pi M3 until I read the wikis and this post, which led me to believe that the BPi M3 would make for a fantastically shitty NAS

 

The problem with the M3 is that the situation there is even worse due to the USB-to-SATA bridge used: the GL830 is slow as hell and also buggy: https://www.cnx-software.com/2017/03/16/suptronics-x800-2-5-sata-drive-expansion-board-and-cases-for-raspberry-pi-23-and-odroid-c2-boards/#comments

 

In my opinion boards with this GL830 should neither be bought nor supported by open source projects (like Armbian), but for whatever (historical) reasons Armbian supports Xunlong's Orange Pi Plus and 'Plus 2', which use the same crappy USB-to-SATA bridge (though there the USB situation is not as bad as on the BPi M3, where the engineers forgot that they could have used the SoC's other USB port to connect the SATA bridge --> on the BPi M3 all USB peripherals and a connected 'SATA' disk fight for bandwidth since they all sit behind the same internal USB2 hub while the SoC's second USB port is unused).

 

1 hour ago, stefan.z.j said:

NAS backed by a USB3 RAID5 enclosure

 

IMO a really bad idea. Those proprietary RAID boxes are just a huge single point of failure and I would never rely on such things (but I do storage for a living and these days only deal with RAID once it has failed, so I'm a bit biased here).

 

Wrt performance: ROCK64 is limited by Gigabit Ethernet; real-world throughput with Windows Explorer or macOS Finder will be around or even above 100 MB/s (megabytes) given the USB3-attached storage is fast enough.
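If you want to verify where the network ceiling is before blaming the storage, a quick network-only test with iperf3 works on any Armbian image (a generic sketch; the address is a placeholder):

# on the ROCK64
iperf3 -s
# on the client PC/Mac
iperf3 -c 192.168.1.10 -t 30

Anything around 930-940 Mbit/s means the network side is fine, and the remaining gap down to ~100 MB/s in Explorer/Finder is protocol and storage overhead.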

Posted
2 minutes ago, tkaiser said:

The problem with the M3 is that the situation there is even worse due to the USB-to-SATA bridge used: the GL830 is slow as hell and also buggy: https://www.cnx-software.com/2017/03/16/suptronics-x800-2-5-sata-drive-expansion-board-and-cases-for-raspberry-pi-23-and-odroid-c2-boards/#comments

 

In my opinion boards with this GL830 should neither be bought nor supported by open source projects (like Armbian), but for whatever (historical) reasons Armbian supports Xunlong's Orange Pi Plus and 'Plus 2', which use the same crappy USB-to-SATA bridge (though there the USB situation is not as bad as on the BPi M3, where the engineers forgot that they could have used the SoC's other USB port to connect the SATA bridge --> on the BPi M3 all USB peripherals and a connected 'SATA' disk fight for bandwidth since they all sit behind the same internal USB2 hub while the SoC's second USB port is unused).

This is exactly why I canceled the order 4 hours after I placed it and went with a ROCK64 instead :) I have to say thank you, because you are the main reason why I canceled the order; your posts here and on the Banana Pi forums kept me from wasting money.

 

5 minutes ago, tkaiser said:
1 hour ago, stefan.z.j said:

NAS backed by a USB3 RAID5 enclosure

 

IMO a really bad idea. Those proprietary RAID boxes are just a huge single point of failure and I would never rely on such things (but I do storage for a living and these days only deal with RAID once it has failed, so I'm a bit biased here).

May I ask what you would suggest? I haven't ordered the RAID box just yet due to money issues; for now I'm just going with a single USB disk enclosure and accepting the loss of speed and redundancy.

 

And yet again man, thank you very very much for your posts. I've only been here for a few hours and I've already gotten so much information from your work.

 

Best regards

Stefan

Posted

As far as RAID5 on USB3 goes, I have to agree with @tkaiser, even though I run that exact setup. An example: my RAID box now has a chassis power supply in it and some wire hackery after the garbage factory supply failed and would have left me data-less. USB3 connectors are crap too; I've had to reseat that connection several times over the years because it just magically stopped working (it's a full-size A to B, and those extra conductors in the top half are the culprit). I bought a second drive to mirror the array as a doomsday backup, and for the only utterly irreplaceable data (photos) I have an incremental backup running to a cloud drive. I have an XU4 serving that purpose for now.

Posted
50 minutes ago, stefan.z.j said:

May I ask what you would suggest? I haven't ordered the RAID box just yet due to money issues; for now I'm just going with a single USB disk enclosure and accepting the loss of speed and redundancy.

 

I would suggest not using RAID at home since it sucks (it works as long as you don't need it, but once a disk fails funny things happen, like the whole array disappearing). Also the redundant RAID modes are not about data protection/safety but only about availability (who needs that at home?). The controllers in those external USB RAID boxes are almost always some crappy USB-to-SATA bridge combined with a port multiplier, and on top of that USB3 receptacle/cable crappiness is always an issue.

 

Just some further readings:

BTW: If you already lack the money to buy such a USB RAID box, what about backup? You need another box with 120% of the capacity to do backup correctly (not cloning stuff but keeping versions).
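A versioned backup does not need anything fancy; here is a minimal sketch using rsync hard-link snapshots (paths are placeholders, assuming the backup disk is mounted at /mnt/backup; on the very first run the 'latest' link does not exist yet, so rsync simply copies everything):

#!/bin/sh
# keep dated snapshots; files unchanged since the last run are hard-linked, so they cost no extra space
SRC=/srv/nas/
DEST=/mnt/backup
TODAY=$(date +%Y-%m-%d)
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
ln -sfn "$DEST/$TODAY" "$DEST/latest"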

 

This is all rather off-topic here (maybe the OMV forum is a better place). To remain on-topic: ROCK64 is currently the most inexpensive performant single-disk NAS thingie, at least that I know of. Performance is on par with any x64 NAS box (since Gigabit Ethernet is the bottleneck). I'm still hoping for another variant that avoids USB cables/connectors just like ODROID-HC1, but TL Lim's response left some room for interpretation (Pine Inc is just now starting to work on an Allwinner H6 board).

 

Posted

I keep pushing some questions about the Rock64, e.g. "what about port trunking with an extra USB Gigabit Ethernet adapter?"

I am asking those questions purely out of interest, as I think that plugged into the right network backbone the Rock64 makes for quite a good workgroup server.
A $44.95 SoC could actually make a pretty capable server!!! :) That is just wild and I love the idea.
As someone who has played around with container/virt systems and all-in-one servers, mainly to pool the cost of hardware investment, it's really interesting just to see what it could do.

At home, though, many single disks can get near the true saturation point of Gigabit Ethernet, and for the majority of homes that is the best you will get if you're plugging directly into the router.
With the newer AC1900 routers Wi-Fi could theoretically go faster in a perfect environment; the fastest I have seen tested is the Linksys Smart Wi-Fi Router AC1900 (WRT1900AC), which manages 80 MB/s file transfers.
It's also £190 on Amazon, which rather knackers how impressed I am given the $44.95 price point of the Rock64.

So I really do get the argument for a single-disk SoC arrangement in the home: the bottleneck is the Ethernet, and it's highly likely the clients will connect at much less than a gigabit.
In home usage patterns writing is much rarer than reading, and with diversification (not everyone hitting it at the same time) that single-disk arrangement is more than enough for most of us.

I would like the Armbian guys to explore pushing the boundaries a bit more, as the info they publish is absolute gold and top quality.
For example, you can get a 12V-to-5V converter with a much higher quality 12V PSU, and you can get industrial-grade screw-lock cables and industrial hubs if you were daft enough to pay that price.
Type C is great, and the next generation is employing it; the RK3399 already does.

Hugely complex mining rigs and clusters are already running off what some would say are extremely crappy USB & PSU combinations, but it's a matter of choice.
USB-C has already surfaced, and if you had to have a punt, SATA could be a dead duck, especially in the home.
NAS in its current centralized form could be a dead duck too; NAS might just become NAB (Backup).

You guys should be testing and stressing Armbian & these SoCs to extract maximum performance, and continuing to push the boundaries and future of Armbian.
Crappy cheap hardware is a matter of choice, and if people employ it, it is their problem and not yours.
Even for testing you might use a cheap bit of kit with a disclaimer: "Hey, I used this just to test, but for production or prolonged use I strongly suggest..."

 

5 hours ago, tkaiser said:

To remain on-topic: ROCK64 is currently the most inexpensive performant single-disk NAS thingie


I think that is the most important thing you have said, as a 'thingie' is very likely what it will be. Decentralized spanning at this cost level opens up such a huge array of possibilities.
Each device could well have a single disk and together they could create a mesh storage & network area thingie; that is my best punt too, as there are so many possibilities.

Armbian & the Rock64 are capable of pushing a maximum of about 1.7 Gbit/s in total, so surely connectors & PSUs aren't the main focus of interest?
Some SoHo and most small businesses do have wired networks that could really take advantage of this price point if they wanted to.




 

  

Posted
10 hours ago, tkaiser said:

BTW: If you already lack the money to buy such a USB RAID box, what about backup? You need another box with 120% of the capacity to do backup correctly (not cloning stuff but keeping versions).

That's exactly what I did... I maintained some servers for fraternities when I was in college and learned a few relevant tidbits. The RAID box I have has eSATA as well as USB3 and presents itself as a single drive rather than as a port multiplier. But yes, I'm sure that's not the best USB3-to-SATA bridge in there; it's in its fifth year, and at the first sign of trouble I may simply go to a pair of mirrored redundant disks without all the overhead.

 

5 hours ago, Stuart Naylor said:

NAS in its current centralized form could be a dead duck too; NAS might just become NAB (Backup)

 

My current setup is designed around the idea of highly disposable clients with permissions, so centralized storage was key. However, for the fully functional clients it works as you suppose: they have full copies to reduce traffic on the network itself, synchronized for changes.

 

For the Rock64 I've seen their little hat with the Fast Ethernet and sound; I'd be more interested in a Fast Ethernet port to serve an HDHomeRun in a dedicated fashion.

 

For Plex, and I can say this as I run it, I hate their transcoding scheme. I think they should have that portion of their work wholly documented so interested parties can improve hardware support. As it stands, the Plex transcoder is 100% software on all platforms except select PCs (which need the advantage the least). This is important for the next point: the XU4 is capable of transcoding in software, at least with my really stupid cooling system (what throttling?), but I am not sure the Rock64 will have that ability. If the transcoder were truly open for development, it would be possible to have transcoders for these lightweight ARM devices. This is, of course, unimportant if all of your devices support direct streaming of everything you want to watch, or you're not going to stream outside the home. I've been watching Emby develop; I may need to try it out if I can get ffmpeg working properly on the Rock64 and Tinker Board, since they have a more or less shared vcodec system (Tinker is more primitive). When I say "I", it's actually more likely @Myy, who is working to get it working on mainline while I cheer enthusiastically.
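If anyone wants a rough idea whether a given board can keep up with software transcoding, a simple check (a generic ffmpeg invocation, not Plex's actual pipeline; the sample file name is a placeholder) is to see whether ffmpeg sustains better than 1.0x speed on a typical source:

# encode a 1080p sample with the CPU encoder and throw the output away
ffmpeg -i sample-1080p.mkv -c:v libx264 -preset veryfast -crf 23 -an -f null -
# watch the "speed=" value ffmpeg prints; >= 1.0x means one real-time transcode is feasible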

 

This board has a barrel jack, so that's a positive. It is tiny, though, so I can't be certain it has a very high current rating; the miniature ones often top out at 1 to 1.5 amps. I have not done my normal power testing on the Rock64, but I doubt it will disappoint.

 

Type C will not be a solution for a few years yet, because of the saturation of micro-USB and the availability of three-for-$1 adapters to USB Type C. Sadly the reality will be properly designed boards being powered at 5.0 V through a metre-long AWG 24 cable and a micro-USB to Type C adapter.

 

I have a wired home network, not for speed but for reliability; I have many neighbors, all with wireless, and several of them think that channels 2, 7, etc. are valid for use. :angry:

Posted

When it comes to backup I am not sure we should be looking at any type of on-site service.
http://sia.tech/ & https://storj.io/ are completely private P2P blockchain storage methods where we just keep private keys.
Both offer interesting solutions that are disruptive to currently assumed storage methods.
If you download the 3GB blockchain you can become a supplier as well as a consumer, but they are merely examples of vastly different ideas in resource sharing.
Sia & Storj are maybe just the start of P2P storage where, depending on timeslot, bandwidth and store size, you can vastly reduce costs by offering up sections of your free space.

Also I think transcoding centrally is another dead duck, as the RK3328 has an extremely capable VPU.
Why need the processing power to transcode multiple streams? 4K is 66-85 Mbps, 1080p is 15 Mbps; the source is chosen for the devices employed, but even 4K can be transcoded locally by extremely humble devices now.
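Taking those bitrates at face value, the gigabit link itself is rarely the limit for direct streaming (rough arithmetic, not a measurement):

1000 Mbit/s ÷ 15 Mbit/s ≈ 66 simultaneous 1080p streams
1000 Mbit/s ÷ 85 Mbit/s ≈ 11 simultaneous 4K streams

So the scarce resource is CPU time for transcoding, not network bandwidth, which is the argument for letting each client's own VPU do the decoding.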

Cables & barrel jacks are not part of an OS; it seriously doesn't matter which physical connectors are used, only the bandwidth and the manner in which it is employed.

Decentralised storage has physical storage in decentralized locations but is presented, and acts, exactly the same as a centralised store.
Because it's decentralized, bandwidth is partitioned over the number of devices it comprises and the connections that gives you.

There are various FUSE / union file systems that can take decentralized storage units and pool them as a centralised store, which with technology such as the RK3328 is more than capable of meeting home requirements given the right balancing mechanisms.
Storage could be home P2P network swarms (http://www.xtreemfs.org/), or it could be AuFS with individual files stored on network shares under a round-robin policy.
Redundancy could be blockchain storage of SnapRAID parity files; who knows what crazy ideas might take hold.
Those technologies are purely a matter of choice, and all I am saying is that if a device has the possibility of bandwidth then it should be explored to its maximum.

Armbian should always be tweaked so it can take full advantage of the maximum performance and bandwidth achievable with the best power supplies & cabling available.
You shouldn't be limiting your scope of involvement based on an assumption of the cheapest imports available.

I will leave you with this to think about: an RK3328 device could be a Falcongate router/firewall appliance, a Volumio appliance, Kodi, a webserver and many more.
Each appliance could have local storage that is shared across all devices and pooled with other devices' storage, backed up to a blockchain cloud that is subsidised by your own free space.
Some devices might have higher bandwidth and connections that can be utilized, hence why I mentioned a second USB Gigabit Ethernet adapter and port trunking (see the bonding sketch below).
There are switches for £50 and less that can do gigabit LACP, and there are 30GB 200 MB/s SSDs for £15 and 60GB 300 MB/s ones for £25, as examples of cheap high-bandwidth drives.
There are hybrid drives and RAID units whose cost and bandwidth fit the maximums of the RK3328, and they should be explored irrespective of possible defects and the quality of the source.
Armbian just needs to be a tried, tested and optimized OS that can meet these demands and more.
A cheap USB SSD or a RAID0 stripe would be needed to match the network throughput, and maybe should be tested and optimised irrespective of external concerns.
https://tahoe-lafs.org/trac/tahoe-lafs - https://www.resilio.com - https://syncthing.net/ - https://librevault.com/ ...
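For reference, port trunking on the Linux side is just standard bonding; here is a minimal sketch for a Debian-based Armbian image with the ifenslave package installed, assuming the on-board NIC shows up as eth0, the USB adapter as eth1, and the switch ports are configured for LACP (the interface names are assumptions):

# /etc/network/interfaces
auto bond0
iface bond0 inet dhcp
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate fast

Note that 802.3ad only balances across multiple client connections; a single SMB session still tops out at one link's worth of bandwidth.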

Posted

Client-side, of course, transcoding is irrelevant to the Rock64, but server-side it is not. I was able to call up videos while in central Mexico from my server in Michigan thanks to transcoding; firstly I have a 5 Mbps uplink, and secondly the distance and network quality were less than ideal. I certainly wouldn't call it a dead duck.

Posted

@TonyMac32 you just used transcoding as a delivery method; it could have been IPFS where you have access to your file system.
You could have just downloaded the file with asynchronous playback and enjoyed it at full quality.
We might all end up part of a P2P device network where location makes little difference :)

We may get devices that can cluster and transcode for many clients, but really what we are seeing with devices all the way down to IoT are distributed autonomous devices.
You were able to call up a video from your server because it's based on a model of centralized high value, where the central hardware limits the quantity of possible parallel transcodes.
Transcoding centrally for remote streams could be a dead duck; yes, it is still floating, but the cheap silicon we are seeing might be the duck shoot.

Maybe transcoding was pushing the boundaries a little too far, but what I was trying to highlight with the whole storage argument is that we could be looking at a very different model.
Processing, be it transcoding or not, is adopting a decentralized model, and these devices, aggregated at much lower cost, blur the lines of the limits defined by centralized hardware.
Your home setup is based on a client/server infrastructure for many reasons; with the Rock64 it is a node, and that is why network I/O is of prime importance.

If you have a P2P network that maximizes infrastructure bandwidth then maybe you wouldn't need to transcode; the Rock64 should not carry the assumption that it is a single-Gigabit-Ethernet, single-disk client device.
It can offer a lot of bandwidth at low cost, and for swarms that is gold; not using or testing that bandwidth is wasting it.

I shouldn't have mentioned any specific technologies and should have stayed with the 'thingy': cheap connectors are not really what matters about what 'thingies' can do now, how they are employed, or what they will do in the future.





 

Posted

With regard to the Rock64 as a NAS... As I've had my share of struggles with RAID, I'm toying with the idea of getting started on the following project:

 

Instead of having one server with multiple disks, why not have multiple servers in a fail-over configuration? With a cheap-ass Rock64, this will not cost much more...

Hardware, 2 identical sets consisting of the following: Rock64 with its 5V/3A PSU, USB3.0 storage (SSD or HDD), microSD card for the root file system

Software, on each: Armbian, Samba, CTDB, DNS, DHCP, all in fail-over configuration with common GlusterFS shared storage.
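For the shared-storage part, a two-node replicated GlusterFS volume is only a handful of commands; a rough sketch assuming the boards are called rock64-a and rock64-b, glusterfs-server is installed on both, and each USB disk is mounted under /srv/brick (all names and paths are placeholders):

# on rock64-a
gluster peer probe rock64-b
gluster volume create gv0 replica 2 rock64-a:/srv/brick/gv0 rock64-b:/srv/brick/gv0
gluster volume start gv0
# mount the volume on each node and point Samba at it
mount -t glusterfs localhost:/gv0 /srv/share

With only two replicas, split-brain after a network hiccup is a real risk, which is why an odd replica count or an arbiter brick is usually recommended.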

Additionally, OpenVPN on each (no fail-over, just on 2 different WAN ports) to have remote access to my home network.

IMO, this should have the potential to work and achieve reasonable performance, and with rclone to AWS S3 you would be disaster-proof as well...
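The off-site part is a one-liner once a remote is configured; a minimal sketch (the remote name "s3" and the bucket name are placeholders):

rclone config                          # create an S3 remote interactively, e.g. named "s3"
rclone sync /srv/share s3:my-nas-backup --transfers 4

Note that rclone sync mirrors deletions too, so for real disaster-proofing you would combine it with bucket versioning or use rclone copy plus periodic pruning.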

 

What do you think: can it fly, should I reconsider, or should I bail out...?

Posted

I would take a look at LizardFS, as it is already in most mainline distros and has a lot of support.
I did a tour of many distributed filesystems weighing up pros and cons, from XtreemFS and Ceph to GlusterFS, and GlusterFS came in low on the ratings chart.

You can separate the chunk & metadata servers into containers and start with a single device; it holds metadata in RAM and is much faster than many of the others.
It's also the easiest to set up and admin, and it does have mainstream distro support.

The automation of failover and admin is provided by the central commercial services of LizardFS, where they provide enterprise datacenter-grade services.
I think it's a prime candidate for a home / SoHo automated admin system that will not compete with LizardFS's core commercial services.
It needs something like Wi-Fi Protected Setup™ (WPS), where a device can be introduced to the cluster & storage pool by pairing with a NAC (Network Area Controller), and is just as simple to use.

The performance/cost ratio of SoC devices is unprecedented; singular monolithic server structures are dead in the water and the future is a distributed model.
A whole retake on storage and usage is needed: small fast local storage will work in parallel with relatively high-latency bulk storage, and it's highly likely that files like database files will not sit on the same service layer as, say, media files or office documents; if you think about it, it's sort of ridiculous that they currently do.

Whether it's Kubernetes or Docker swarms matched with LizardFS or Ceph will depend on the role and sector of use, and each sector is likely to place a different emphasis on how devices & storage are used.
The idea that SoC NAS & media centers are going to be mini singular versions of their historical higher-cost x86 brothers is fairly batshit crazy mad.

We have hit a level of performance/cost in the sub-$50 SoC which is only going to increase, where extensible expansion is vertical in stacks and also horizontal as additional devices.
This goes from home to SoHo, and there will be distros and systems that encompass this, where extensible expansion becomes a case of JAA (just add another).

From Kodi boxes to Alexa microphones to home automation, the modern home is full of devices, and on a private LAN of distributed nodes, node count equates to performance.

The 1-gigabit Ethernet backbone and high-speed storage of the current new crop of SoCs is fit for purpose for individual ownership; it is a paradigm shift in infrastructure compared to what has gone before, we are going to start seeing such systems now, and they are very likely to be open source.
    
  

Posted

Thanks Stuart for your elaborate response. I have no experience with containerised/virtualized servers, but if I were to go for an N+1 clustered NAS, what form would be preferable for Samba? Are there easy-to-deploy/maintain solutions (free, open source) already out there?

Posted

I would have a look at the Docker swarm modes and Kubernetes, as the Pi has had a lot of work done with them.
Not sure, to be honest; I used to be pretty on the ball with Samba4 but lost track of whether they have failover clustering sorted, though I presume they will have.
Samba isn't the problem, as that is just the share; the actual storage is, maybe DRBD: https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device

Have a browse through this blog, https://fghaas.wordpress.com/, but I am starting to think an FS such as LizardFS or Ceph is a better option with the newer distributed SoCs.

https://medium.com/@bossjones/how-i-setup-a-raspberry-pi-3-cluster-using-the-new-docker-swarm-mode-in-29-minutes-aa0e4f3b1768
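Getting a two-board swarm up is only a couple of commands if you want to experiment; a rough sketch (addresses are placeholders and the service is just a demo container, not the actual NAS stack):

# on the first board (manager)
docker swarm init --advertise-addr 192.168.1.10
# it prints a join command with a token; run that on the second board, e.g.:
docker swarm join --token <token> 192.168.1.10:2377
# back on the manager: verify the nodes and start a test service
docker node ls
docker service create --replicas 2 --name demo nginx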

Posted

Looking at the various pointers and links provided, I'm now convinced that GlusterFS is not the way to go, and Docker Swarm mode looks really appealing. Thus for the distributed storage, Ceph RBD would probably be a good starting point, as Docker volume drivers for it are available.

 

As all of this is new ground for me, I have no clear picture yet of how to configure networking, DNS and DHCP such that everything works reliably regardless of which node is operational. I'll first give Docker and Ceph a try on my two BPis and see what I bump into...

Posted

I have been playing with Docker and Samba for a while now, and I'm not confident that this is the way to go. Samba is quite picky with regard to network configuration as it relies on broadcast messages. Since Docker containers, and especially swarm services, live on a dedicated internal network, the setup becomes complicated and rigid. This loses many of the Docker benefits.

Thus, I'm back to my original idea of the Samba+CTDB combo. Mixed with Ceph, this may be an interesting little project, but first I'll need to get myself some hardware...
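For anyone curious, the clustering side of Samba+CTDB is just a few config files; a minimal two-node sketch (all addresses are placeholders, and CTDB additionally needs a recovery lock file on storage both nodes can see, set in CTDB's main configuration):

# /etc/ctdb/nodes - private addresses of the cluster members (same file on both nodes)
192.168.10.1
192.168.10.2

# /etc/ctdb/public_addresses - floating IP(s) clients connect to, taken over on failover
192.168.1.50/24 eth0

# /etc/samba/smb.conf - tell Samba to use the clustered TDB databases
[global]
   clustering = yes

With that in place, CTDB moves the public address to the surviving node when the other one drops out, so clients keep talking to the same IP.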

I also found an article on this subject here.
