
Posted
18 hours ago, count-doku said:

One other way to check whether the disks are faulty or just doing the clicking -> connect them to some other computer and listen there.

 

 

Indeed,

One other thing to check is whether you are running NFS..

I had a problem with NFS: when I mounted it on the client machine and then just shut the client down and started it again, the disks would always spin down and back up..

The disks' power was cut abruptly every time the system went down.., and even with the system up they had that behaviour.

 

My solution until now... not mounting NFS, until I understand what is causing that problem (that was NOT happening on Helios4, but on another ARM based system..)

 

To me, his problem is related to one of:

  • heads parking/unparking aggressively (which he needs to fix via the APM modes and timers on each disk..)
  • NFS or anything else that is mounted and creating kernel panics or the like..

So to check, I would deactivate NFS, Samba, and so on.

Only the OS running, without application layers..

 

Then check if the problem persists. If yes:

Measure power cycles and head load/unload counts, and see if it is happening frequently..

 

for blockid in {a..c}; do smartctl --attributes /dev/sd${blockid} | grep -E "^ 12|^193"; done

I also have Seagate disks, in APM mode 127, so they enter mode 1 (standby, after the defined timer period), but they don't park and unpark like crazy; they work very nicely..

In the past I had WD Red disks, and those disks were parking and unparking like crazy. I could only alleviate the problem by increasing the timers to enter APM mode 1, but I couldn't do much more..

 

Did he configure timers for the APM modes? If you do that with small timers, the heads will start parking/unparking at the intervals defined, and they will need to be adjusted again..
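For example, something like this (just a sketch, assuming hdparm is available; the right values differ per disk):

sudo hdparm -B 127 /dev/sda    # APM level 127: spin-down still allowed, moderate head parking
sudo hdparm -S 242 /dev/sda    # standby (spin-down) timer; 242 = 1 hour in hdparm's -S encoding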

Use the OpenSeaChest Utilities to get more information not present in SMART, since these tools are made for Seagate disks and implement Seagate technology..
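For instance (assuming the openSeaChest tools are installed; I believe the power / EPC settings can be listed like this):

sudo openSeaChest_PowerControl -d /dev/sda --showEPCSettings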

Posted

Just so everyone following this thread is up to date: I have returned both disks and it seems the disks went bad.

Posted

I am getting this error almost daily:

 

Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; df -PT '/srv/dev-disk-by-label-easystore' 2>&1' with exit code '1': df: /srv/dev-disk-by-label-easystore: Transport endpoint is not connected

 

It will work again if I reboot Helios4, but then the next day it comes back again. 

 

This occurs to one or several file systems randomly. 

 

Any ideas how to fix this? Is this a hardware or software issue?

Posted

Hello @gprovost It happened again just now and here is the result of armbianmonitor -u:

 

System diagnosis information will now be uploaded to gzip: /var/log/armbian-hardware-monitor.log.1.gz: No such file or directory
http://ix.io/1CBs

 

 

Posted

@movanet Based on your 2 log files, your system is completely overloaded by way too many rclone and smbd instances, which triggers the Out-of-Memory killer (oom-killer), which in turn starts to kill processes to try to free memory... this makes the system unstable and unresponsive. It seems it's even generating some mmc (sdcard) errors, maybe because of the continuous swapping.

 

So you need to do some proper setting / tuning for rclone to ensure it doesn't kill your system.
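For example, capping concurrency and memory with something like this (a sketch; remote:backup is a placeholder for your rclone remote):

rclone sync /srv/data remote:backup --transfers 2 --checkers 4 --buffer-size 8M --bwlimit 4M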

Posted

@gprovost that is very helpful, thanks a lot! 

 

I have tuned my rclone usage in crontab so that it doesn't run so often. But how can I tune my SMB instances? As a NAS, the Helios4 is connected to up to 7-9 clients on my LAN. Should this be reduced?

Posted

@movanet You can first define deadtime = 10 in /etc/samba/smb.conf, which is recommended to close inactive connections and avoid exhausting the server's resources.
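That is, in the [global] section of /etc/samba/smb.conf:

[global]
    deadtime = 10

(deadtime is in minutes: connections with no open files that stay idle longer than that are dropped, and clients auto-reconnect when needed.)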

 

Posted

Running benchmarks on my drive: HGST 2TB 7200rpm, OMV with LUKS set up - default encryption (XTS) - with BTRFS, EXT4, XFS.

 

Copying a 30GB file over SMB, I average around 50-55MB/sec to and from the Helios4 from my Windows 10 desktop's NVMe system drive. Is everyone else getting similar speeds? I expected transfer speed to be a bit higher, but I understand XTS may be slowing this down...

Posted
18 hours ago, gprovost said:

@jimandroidpc Please refer to this thread where quite a bunch of benchmarking numbers and important remarks were shared : https://forum.armbian.com/topic/8486-helios4-cryptographic-engines-and-security-accelerator-cesa-benchmarking/

 

 

Did you use the right cipher: aes-cbc-essiv:sha256? Other ciphers won't be accelerated by the CESA engine.

 

Anyway I encourage you to follow up on the thread I shared.

No, I didn't use the CBC cipher - I was using the OMV image with XTS. Is there a way to make OMV use the other cipher?

Posted
1 hour ago, jimandroidpc said:

No, I didn't use the CBC cipher - I was using the OMV image with XTS. Is there a way to make OMV use the other cipher?

 

I don't think the OMV interface allows you to specify which cipher. You need to create the encrypted device via the command line.
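A minimal sketch of that (assuming /dev/sdX is the target device; note that luksFormat destroys its contents):

sudo cryptsetup luksFormat -c aes-cbc-essiv:sha256 -s 256 /dev/sdX
sudo cryptsetup luksOpen /dev/sdX cryptdisk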

We will need to check if we can tweak OMV in some way to make it use by default the cipher that can be accelerated by the CESA engines.

Posted
2 minutes ago, gprovost said:

@lanefu Humm yeah, a bit tangential :P and honestly I'm not really sure of the message you're trying to share.

Oh... I just kinda think the Helios4 is the board that achieved what I thought the espressobin was going to do.

Posted
Just now, lanefu said:

Oh... I just kinda think the Helios4 is the board that achieved what I thought the espressobin was going to do.

Hahah cool to hear that. The idea of Helios4 was to fulfill one specific scope: NAS. There are a lot of awesome boards out there, but often they are trying to do too many things.

For instance, if we do a refresh of Helios4 with a new SoC that has some display output, I'm pretty sure we would decide not to expose it... in order to stick to a pure headless NAS server concept.

Well, we hope we will carry on this path of "disruptive" DIY NAS solutions with future projects, but it's not tomorrow that we will steal the market share of entry-level proprietary NAS :P

Posted

@jimandroidpc I attached a dirty patch to hard-code the right cipher in the OMV LUKS plug-in. So when you create an encrypted device under OMV, it will use the right cipher: aes-cbc-essiv:sha256

It's far from the ideal way; we should actually ask the developer of the OMV LUKS plug-in to add the possibility for users to choose the cipher.

 

To apply the patch:

sudo patch -d/ -p0 < helios4-omv-luks.patch

 

helios4-omv-luks.patch

Posted
7 hours ago, gprovost said:

Hahah cool to hear that. The idea of Helios4 was to fulfill one specific scope: NAS. There are a lot of awesome boards out there, but often they are trying to do too many things.

For instance, if we do a refresh of Helios4 with a new SoC that has some display output, I'm pretty sure we would decide not to expose it... in order to stick to a pure headless NAS server concept.

Well, we hope we will carry on this path of "disruptive" DIY NAS solutions with future projects, but it's not tomorrow that we will steal the market share of entry-level proprietary NAS :P

Hello

 

If you do a refresh of Helios4, it would be nice to use a 64-bit SoC, as filesystems are limited to 16 TB on 32-bit Linux.

 

However, as it is, sure: Helios4 is the most suitable ARM board for building a NAS, and it is also nice to have an "all-in-one" package to build it (especially the case)

 

Frédéric

Posted
21 hours ago, gprovost said:

@jimandroidpc I attached a dirty patch to hard-code the right cipher in the OMV LUKS plug-in. So when you create an encrypted device under OMV, it will use the right cipher: aes-cbc-essiv:sha256

It's far from the ideal way; we should actually ask the developer of the OMV LUKS plug-in to add the possibility for users to choose the cipher.

 

To apply the patch:


sudo patch -d/ -p0 < helios4-omv-luks.patch

 

helios4-omv-luks.patch

Not working for me.. it still used the XTS default in OMV - did I need to modify the file or command at all? I had it copied on a USB drive I mounted.

Posted

@jimandroidpc Well, did the patch get applied? The patch just modifies 2 lines in /usr/share/php/openmediavault/system/storage/luks/container.inc to add -c aes-cbc-essiv:sha256 to the cryptsetup luksFormat command.

Anyhow, I just raised a change request on the OMV LUKS plugin, and it seems the maintainer is going to make the improvement, since this change could benefit many ARM SoC based boards: https://forum.openmediavault.org/index.php/Thread/11592-LUKS-disk-encryption-plugin/?postID=198390#post198390

 

@fpabernard Well, hopefully one day there will be a refreshed revision of Helios4.

Posted

 

3 minutes ago, gprovost said:

@jimandroidpc Well, did the patch get applied? The patch just modifies 2 lines in /usr/share/php/openmediavault/system/storage/luks/container.inc to add -c aes-cbc-essiv:sha256 to the cryptsetup luksFormat command.

Anyhow, I just raised a change request on the OMV LUKS plugin, and it seems the maintainer is going to make the improvement, since this change could benefit many ARM SoC based boards: https://forum.openmediavault.org/index.php/Thread/11592-LUKS-disk-encryption-plugin/?postID=198390#post198390

 

@fpabernard Well, hopefully one day there will be a refreshed revision of Helios4.

I can't tell if it's applied or not.

Posted
1 minute ago, jimandroidpc said:

I can't tell if it's applied or not.

I just told you what the patch does, so you should be able to figure out whether the target file was modified or not.

If you are unsure, then better to wait for the next OMV LUKS plugin upgrade; hopefully this new feature will be added by then.
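For example, a simple check against the file path mentioned above (if it prints nothing, the patch was not applied):

grep -n "aes-cbc-essiv" /usr/share/php/openmediavault/system/storage/luks/container.inc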

Posted
Just now, gprovost said:

I just told you what the patch does, so you should be able to figure out whether the target file was modified or not.

If you are unsure, then better to wait for the next OMV LUKS plugin upgrade; hopefully this new feature will be added by then.

I checked a new encryption and it still says XTS - so from what I can tell it's not applied. I may just wait. Thanks for looking into it - if I can help I'd be happy to, but I'm learning as I go.

Posted

Hi, as a user of Helios4 I wanted to give my feedback, and mention something I'd like to see in the next iteration of this board, if it ever happens.

 

I have been using my Helios4 for a while now as a backup NAS (via rsync) for my main Synology NAS. I have 2x 8TB drives + a 3TB drive + a 2TB drive, configured with LVM to make one large JBOD volume. This is a backup NAS, so any failure can be tolerated (I'll simply make a new volume and back up again).

 

I absolutely love it, and love how cost effective it is. This is the perfect solution for my backup needs: I can use my left-over consumer grade hard drives in JBOD, and they are spun down 99% of the time.

 

There's just one thing that really annoys me about it, and that's that the CPU is 32-bit. When I tried to configure my LVM, I hit a limit of 16TB per volume, which is due to the 32-bit CPU if I understand correctly. If I tried to go beyond it, the entire system would simply hang during filesystem formatting. This was something I wasn't aware of when I purchased the thing, and now I'm stuck trying to figure out how to sort files across volumes. I would really like to see a future NAS board that uses a 64-bit CPU, for example ARM64.

 

I would also appreciate it if someone has a solution for getting volumes larger than 16TB.

 

 

Posted

@9a3eedi Thanks for your feedback. Sooner or later we will do a refresh of Helios4 with a 64-bit SoC... but it's not going to happen that soon.

 

Have you tried MergerFS? It could be used to union your different volumes into a single one. I'm not sure, though, whether there might be some limitation because of the 32-bit arch.
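A sketch of what that could look like in /etc/fstab (mount points are placeholders; assumes the mergerfs package is installed):

/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other  0  0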

Posted
17 hours ago, 9a3eedi said:

Hi, as a user of Helios4 I wanted to give my feedback, and mention something I'd like to see in the next iteration of this board, if it ever happens.

 

I have been using my Helios4 for a while now as a backup NAS (via rsync) for my main Synology NAS. I have 2x 8TB drives + a 3TB drive + a 2TB drive, configured with LVM to make one large JBOD volume. This is a backup NAS, so any failure can be tolerated (I'll simply make a new volume and back up again).

 

I absolutely love it, and love how cost effective it is. This is the perfect solution for my backup needs: I can use my left-over consumer grade hard drives in JBOD, and they are spun down 99% of the time.

 

There's just one thing that really annoys me about it, and that's that the CPU is 32-bit. When I tried to configure my LVM, I hit a limit of 16TB per volume, which is due to the 32-bit CPU if I understand correctly. If I tried to go beyond it, the entire system would simply hang during filesystem formatting. This was something I wasn't aware of when I purchased the thing, and now I'm stuck trying to figure out how to sort files across volumes. I would really like to see a future NAS board that uses a 64-bit CPU, for example ARM64.

 

I would also appreciate it if someone has a solution for getting volumes larger than 16TB.

 

 

 

Hello,

 

I am in exactly the same scenario.

 

However, if you carefully read the man pages of rsync, you'll see they recommend using bind mounts. Refer also to the man pages of mount.

 

The caveats:

- you should choose a set of directories to be "bind mounted" according to the size of the target filesystems underneath

- you cannot move a file across the real volumes underneath (you must copy and delete), so some move commands can fail

- you must set up /etc/fstab manually to create the bind mounts

 

But with that feature, you can rsync a source filesystem bigger than 16TB.
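For example, the /etc/fstab entries could look like this (paths are placeholders for your own layout):

/srv/disk1/media    /srv/pool/media    none  bind  0  0
/srv/disk2/backups  /srv/pool/backups  none  bind  0  0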

Posted
On 3/21/2019 at 5:21 AM, 9a3eedi said:

I would also appreciate it if someone has a solution for getting volumes larger than 16TB.

I haven't checked it,

but the limitation has to do with the filesystem tools, since they use default C types..

I think at some point a flag was released to address this problem: -O 64bit

Check this:

 

See the content of:

 cat /etc/mke2fs.conf

for ext4 there is a 64bit option,

but I think it only works if the filesystem was created as 64-bit from the beginning..

 

So a 

resize2fs -b /dev/sdx

       -b     Turns  on  the  64bit feature, resizes the group descriptors as necessary, and moves other metadata out of the way.

will only work on a filesystem where 64-bit support was already available..

But I have never tested it
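For reference, enabling the feature at creation time would look like this (hypothetical device; but note the warning below about 32-bit systems):

mkfs.ext4 -O 64bit /dev/sdX1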

 

Posted
30 minutes ago, gprovost said:

@tuxd3v Don't turn ON the 64bit feature on partitions mounted on a 32-bit system; this is completely wrong.

Probably not a good suggestion..

It needs to be created as a 64-bit filesystem from the beginning, I think..

If it's turned ON on a 32-bit one,

I think it will damage the filesystem..
