
Helios4 Support


gprovost

Recommended Posts

Hello,

I am new to Helios4. I just assembled my NAS and bought 4 IronWolf 4 TB drives for a RAID 6..

 

I noticed, after reading the Wiki, that writing U-Boot to flash only allows me to put the OS on a USB dongle. Is this correct?

I was expecting to also use my RAID setup for the OS.

 

Another thing I noticed is that the CPU heats up a lot, even without work; sitting idle it goes above 70°C..

The good thing: 10 seconds with fan 0 on and it decreases to ~66°C.

In this situation the disk closest to the board suffers a lot from the heat.. since it is so close to the board, the disks should be mounted further away from the CPU heat source..

 

Also, the fan only stays on for a few seconds, then shuts off.. I don't know what process is doing that..

I am using the latest Debian Stretch image, from 20/01/2019.

 

I am also developing a fan tool ( ATS ), but it doesn't support the Helios4's dual fans yet..

 

Does anyone have some hints for setting up the fans correctly, and for booting from RAID 6 SATA disks?

 

Thanks in Advance

Regards


Hi tuxd3v,

my experience is quite different.

When setting up my new box, the CPU temp was always below 58 °C, so the fans never started. (According to https://wiki.kobol.io/pwm/#configuration-file, the fans only start at 70 °C CPU temp.)

The inactive fans quickly led to quite high disk temps (two WD Red 8TB) of 52 °C (the upper one) and 53 °C (the lower one, directly above the board). This seems too high given that WD specifies operating temps of only 0-60 °C, so one of the first things I did was to modify /etc/fancontrol to

# Helios4 PWM Fan Control Configuration
# Temp source : /dev/thermal-cpu
INTERVAL=10
FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
MINTEMP=/dev/fan-j10/pwm1=70 /dev/fan-j17/pwm1=70
MAXTEMP=/dev/fan-j10/pwm1=90 /dev/fan-j17/pwm1=90
MINSTART=/dev/fan-j10/pwm1=75 /dev/fan-j17/pwm1=75
MINSTOP=/dev/fan-j10/pwm1=75 /dev/fan-j17/pwm1=75
MINPWM=/dev/fan-j10/pwm1=75 /dev/fan-j17/pwm1=75
#MINPWM=0

With these settings, my fans are always running slowly and silently, but efficiently cool my disks to not more than approx. 33 °C.
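One more note: fancontrol only reads /etc/fancontrol when it starts, so the service has to be restarted after editing (the service name below assumes a systemd-based Armbian image):

```
sudo systemctl restart fancontrol
systemctl status fancontrol --no-pager   # sanity check: should be "active (running)"
```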


@tuxd3v and @Harvey You're right guys, we overlooked HDD temps with the new fans. We addressed the complaint from batch 1, where the type of fan we used (Type A) was always spinning even at the lowest PWM. We decided to use fan Type C for batch 2. A comparison between the 2 fan types can be found here.

 

We changed the fancontrol configuration to cover both fan types, but we didn't properly take HDD temps into consideration in the case of the new fan :-/ We will fix that in the next Armbian release and post / communicate to people how to fix the configuration on already-running systems.


16 hours ago, Harvey said:

Hi tuxd3v,

my experience is quite different.

When setting up my new box, the CPU temp was always below 58 °C, so the fans never started. (According to https://wiki.kobol.io/pwm/#configuration-file, the fans only start at 70 °C CPU temp.)

With these settings, my fans are always running slowly and silently, but efficiently cool my disks to not more than approx. 33 °C.

Hello Harvey,

Thanks for your reply.

My fans also never started at the beginning, but now I have fan 0 active at low speed and fan 1 stopped..

 

I found one of the excessive heat problems..

It was the CPU frequency governor:

the CPU freq was set to 1.6 GHz; without cooling it reached 73°C, and my lower disk spent 24 hours in this condition.. :blink:

I don't know what to think of its state now ( but at least I made the right purchase, as IronWolf disks are rated at 70°C max.. )

cpufreq-set -g ondemand

Thanks for the explanation of fancontrol,

I configured it like this:

INTERVAL=10
FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
MINTEMP=/dev/fan-j10/pwm1=55 /dev/fan-j17/pwm1=65
MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
#MINPWM=0

Temps right now below 57°C:

root@helios4:~# cat /dev/thermal-cpu/temp1_input
56550

It's a better start :thumbup:
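By the way, temp1_input is in millidegrees Celsius, so 56550 is 56.5°C. A quick sketch of the conversion ( the sample value is the reading above ):

```shell
# temp1_input reports millidegrees Celsius; divide by 1000 for °C.
raw=56550   # sample reading from /dev/thermal-cpu/temp1_input
awk -v r="$raw" 'BEGIN { printf "%.1f\n", r / 1000 }'   # prints 56.5
```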


7 hours ago, gprovost said:

@tuxd3v and @Harvey You're right guys, we overlooked HDD temps with the new fans. We addressed the complaint from batch 1, where the type of fan we used (Type A) was always spinning even at the lowest PWM. We decided to use fan Type C for batch 2. A comparison between the 2 fan types can be found here.

 

We changed the fancontrol configuration to cover both fan types, but we didn't properly take HDD temps into consideration in the case of the new fan :-/ We will fix that in the next Armbian release and post / communicate to people how to fix the configuration on already-running systems.

Hello gprovost,

Thanks for that.

 

I found that one of the heat sources was the CPU fixed at 1.6 GHz, with the fans stopped.

I switched it to 'ondemand' and activated fancontrol for fan 0; temps are starting to drop..

These fans seem to be of very good quality; I haven't checked in detail yet, but they seem very good.

 

I was also naive,

in setting up the NAS with the discs inside from the beginning..

I should have first connected everything without the SATA disks, and only once temps were nice put the disks in..

 

Anyway,

The holes in the NAS case acrylic should be higher than they are now.

I understand that in that position the SATA cables would be too short, but with longer ones the disks would be better protected from heat, since they would sit higher above the heat source..

 

But this opinion is for the Helios4 team; nothing to do with Armbian, I think.

 

Thanks for the clarification :)


@tuxd3v Let me clarify something. The reported CPU temperature is the temperature of the CPU die (as for any CPU); it doesn't represent the ambient temperature of the case where your HDDs are, nor the temperature of the heatsink on top of the CPU. Therefore your HDDs weren't exposed to 70°C at any time, maybe at worst 50°C ambient. So let's not be alarmist ;-)

 

The scaling governor is already set to 'ondemand' by default. It means the CPU freq will vary from 800 MHz to 1.6 GHz, but in the end it doesn't impact the power consumption / CPU temperature much. The Marvell Armada SoC (CPU) is designed to run at 100°C, and it's very common for such SoCs to quickly reach very high temperatures. The current board is designed to be 100% passively cooled, but a bit of active cooling is always best, mainly to not impact other components like the HDDs in the case of Helios4.

 

The case is designed to have a well-contained horizontal airflow that cools the SoC heatsink and the HDD surfaces. It's a pretty simple but very efficient design.

 

What we overlooked is that basing the fan speed only on the CPU temp is not ideal, mainly with the new fan type we are using, which stops spinning when PWM is around 20. The HDDs themselves generate and build up heat in the casing, which seems to be the key issue here. So there are 2 approaches to mitigate this:

1) Monitor HDD temperature and control the fan speed based not only on CPU temp but on HDD temp as well. Elegant, but requires some work and fine tuning.

2) Make the fans always run (at very low speed) even when the SoC temp is low. It's like simulating the behavior of the old fan, which always spins even at PWM 0.

 

So we will go with option 2, since it's pretty simple and guarantees that a constant airflow, even a minimal one, will help the HDDs dissipate heat and keeps them within a correct operating temperature.
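In fancontrol terms, option 2 boils down to raising MINPWM above the point where the Type C fan stalls, so the fans idle slowly instead of stopping. A sketch only; the value below is an assumption, not our final config:

```
# Keep the fans spinning slowly even below MINTEMP ( assumed value; it must
# stay above the PWM ~20 stall point of the Type C fan )
MINPWM=/dev/fan-j10/pwm1=25 /dev/fan-j17/pwm1=25
```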

 

We will post our updated fancontrol config file once it's ready.


I found a nice script that shuts the system down on high HDD temperature at https://www.cyberciti.biz/tips/howto-monitor-hard-drive-temperature.html.

 

I use a slightly enhanced version which is able to deal with sleeping drives:

#!/bin/bash
# Program: Monitor hard disk and shutdown system if temp >= 50
# http://www.cyberciti.biz/tips/howto-monitor-hard-drive-temperature.html
# Author: nixCraft < vivek @ nixCraft DOT com >
# Modified: <kramski @ web DOT de>
################################################
HDDS="/dev/sda /dev/sdb /dev/sdc" # my hdd
HDT=/usr/sbin/hddtemp
LOG=/usr/bin/logger
DOWN=/sbin/shutdown
ALERT_LEVEL=50
for disk in $HDDS
do
    if [ -b "$disk" ]; then
        HDTEMP=$($HDT "$disk" | awk '{ print $4}' | awk -F '°' '{ print $1}')
        if [[ "$HDTEMP" =~ ^[0-9]+$ ]]; then
            # "$HDTEMP" is numeric, disk not sleeping
            if [ "$HDTEMP" -ge "$ALERT_LEVEL" ]; then
                $LOG "System going down as hard disk : $disk temperature $HDTEMP°C crossed its limit"
                sync;sync
                # add email code, make sure you add 2m delay
                # mail -s "HDD temperature >= 50 @ $(hostname)" you@domain.com </dev/null
                # sleep 2m
                # going down
                $DOWN -h 0
            fi
        fi
    fi
done
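To run it periodically, a cron entry along these lines works ( path and interval are my own choices; adjust to wherever you save the script ):

```
# /etc/cron.d/hddtemp-check ( hypothetical path ): run the check every 10 minutes
*/10 * * * * root /usr/local/sbin/hddtemp-check.sh
```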

 


On 2/1/2019 at 4:26 PM, gprovost said:

@tuxd3v Let me clarify something. The reported CPU temperature is the temperature of the CPU die (as for any CPU); it doesn't represent the ambient temperature of the case where your HDDs are, nor the temperature of the heatsink on top of the CPU. Therefore your HDDs weren't exposed to 70°C at any time, maybe at worst 50°C ambient. So let's not be alarmist ;-)

 

The scaling governor is already set to 'ondemand' by default. It means the CPU freq will vary from 800 MHz to 1.6 GHz, but in the end it doesn't impact the power consumption / CPU temperature much. The Marvell Armada SoC (CPU) is designed to run at 100°C, and it's very common for such SoCs to quickly reach very high temperatures. The current board is designed to be 100% passively cooled, but a bit of active cooling is always best, mainly to not impact other components like the HDDs in the case of Helios4.

 

The case is designed to have a well-contained horizontal airflow that cools the SoC heatsink and the HDD surfaces. It's a pretty simple but very efficient design.

Hello gprovost,

You're right; sorry if I made myself unclear ( English is not my mother tongue ) :)

I did some tests without the disks, and the temps are a lot lower..

Even at the heatsink level it doesn't get very warm..

The fans are now running nicely with the config help from Harvey ( thanks for that! ), and they change pace ( as expected, with CPU load.. )

Rebuilding the Raid:

root@helios4:~# hddtemp /dev/sd{a,b,c}
/dev/sda: ST4000VN008: 25°C
/dev/sdb: ST4000VN008: 29°C
/dev/sdc: ST4000VN008: 29°C

I didn't know the CPU supported 100°C, that's nice!

 

I noticed that the governor is configured as ondemand,

but if I restart, the CPU is then fixed at 1.6 GHz ( though the governor still says ondemand );

if I set it to ondemand explicitly, it goes back to 800 MHz..

It could be me.. so far ( forgetting my initial false findings.. ), I am very pleased with Helios4.
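A way to watch what the governor is really doing is through the standard cpufreq sysfs interface ( paths are the stock kernel ones; scaling_cur_freq reports kHz, so the 800 MHz idle clock shows up as 800000 ):

```shell
# On the box:
#   cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # e.g. "ondemand"
#   cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq   # e.g. 800000 (kHz)
# Converting the kHz reading to MHz:
khz=800000   # sample value: the 800 MHz idle clock
awk -v k="$khz" 'BEGIN { printf "%.0f MHz\n", k / 1000 }'   # prints "800 MHz"
```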

 

During the RAID 5 reconstruction it is doing around 170 MB/s.

It's the only metric I have so far with the disks..

 

 

Does anyone have some metrics on RAID read/write speeds?

You know, new toy in the house... "even my pets get Jealous" :D

 

Thanks in Advance

 


Hello,

Here is the link to the Seagate SeaChest Utilities documentation:

Seagate SeaChest Utilities

 

In case anyone has Seagate IronWolfs or other Seagate disks.

 

I have been playing around with these tools..

With these base settings:

# Enable Power-Up In Standby (PUIS)
# It's a persistent feature
# No power cycle required
${SEA_PATH}/openSeaChest_Configure --device /dev/sda --puisFeature enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --puisFeature enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --puisFeature enable

# Enable Low Current Spin-Up
# It's a persistent feature
# Requires a power cycle
${SEA_PATH}/openSeaChest_Configure --device /dev/sda --lowCurrentSpinup enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --lowCurrentSpinup enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --lowCurrentSpinup enable
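If you want to double-check that PUIS really got enabled, the ATA identify dump should list it ( an assumption on my side: hdparm prints it as a named feature set; low current spin-up is not shown by hdparm ):

```
# PUIS shows up in the identify data as "Power-Up In Standby feature set"
hdparm -I /dev/sda | grep -i "power-up in standby"
```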

Unfortunately the Free Fall feature is NOT supported, at least on my IronWolf 4TBs,

which would be a must, to control acceleration levels ...( I already killed a WD Red 3TB.. with a free fall.. of 1 meter or so.. )

The feature could be present but not permit changing levels.. at least it seems so..

root@helios4:~# openSeaChest_Configure --device /dev/sda --freeFall info|grep -E "dev|Free"
/dev/sg0 - ST4000VN008-XXXXXX - XXXXXXXX - ATA
Free Fall control feature is not supported on this device.

Could it be related to the latest release of the SeaChest tools? I don't know..

If someone knows how to tune the acceleration levels, I'd appreciate it..

 

In the meantime,

I did some basic tests with 3 Seagate IronWolf 4TB ST4000VN008 discs:

APM mode vs power consumption ( power measured at the wall... it includes the Helios4's own power consumption )

 

1) With the CPU idle at 800 MHz and the disks at the minimum APM power mode, 1:

    Power consumption is between 4.5-7 W; the average is maybe around 6-6.5 W.

    1.1) When issuing an 'hdparm -Tt' on all disks,

           power peaks at 30 W max, but stays on average at around 15-25 W.

    1.2) Then it goes down to around 6-6.5 W.. mode 1.

 

2) With the CPU idle at 800 MHz and the disks at APM power mode 127:

    Power consumption average is around 6-6.5 W ( changing level from mode 1 to 127 ).

    2.1) When issuing an 'hdparm -Tt' on all disks,

           power peaks at 34 W, but it is around 13-25 W, with the majority of time between 17-23 W..

    2.2) Then it enters mode 1.

 

3) With the CPU idle at 800 MHz and the disks at APM power mode 128:

    Power consumption average is around 6-6.5 W ( changing level from mode 1 to 128 ).

    3.1) When issuing an 'hdparm -Tt' on all disks,

          power peaks at 34 W, but it is around 13-25 W, with the majority of time between 15-23 W..

    3.2) Then it goes down to around 15-17 W for some time.. the majority of time ~16.5 W..

          I haven't checked its final state, as I only monitored for some 10-15 minutes.

 

4) With the CPU idle at 800 MHz and the disks at APM power mode 254 ( max performance ):

    Power consumption average is around 6-6.5 W ( changing level from mode 1 to 254 ).

    4.1) When issuing an 'hdparm -Tt' on all disks,

           power peaks at ~34 W, but it is around 13-28 W, with the majority of time between 17-23 W..

    4.2) Then it bounces between 13-20 W for some time, maybe with an average around 19 W..

           I haven't checked its final state, as I only monitored for some 10-15 minutes.

           I assume they enter mode 128..

 

In my opinion, power mode 127 is the best fit for power consumption vs performance..

It penalizes performance a bit, but all in all it seems the best fit, with the bonus of ~6.5 W at idle.

 

So I issue this on reboot or start:

# Enable APM level 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127
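The level can be read back with hdparm as a sanity check ( hdparm -B with no value just reports the drive's current APM setting ):

```
# Should report something like "APM_level = 127" after the commands above
hdparm -B /dev/sda
```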

 


On 1/13/2019 at 4:13 PM, Harvey said:

Hi, (happy) new Helios4 owner here. I have a question regarding power consumption:

 

I'm running a new, clean setup based on Armbian_5.68_Helios4_Debian_stretch_next_4.14.88.img.xz.

 

With no additional drives powered up, a cheap power meter shows the following results:

  • System running idle: 4-5 W
  • System halted after "poweroff": 6-7 W.

 

With two WD Red 8TB EFAX (still unconfigured) and one SSD 128GB (root fs) I get

  • System running idle, disks spinning: 23 W
  • System halted after "poweroff": 9 W.

 

Why does the halted system consume that much energy?

 

TIA

   Heinz

 

 

Hello Harvey,

Thanks for your stats :thumbup:

 

I have 3 IronWolf 4TB disks,

 

Measurements at the wall:

1) System running idle, disks in standby ( configured in power mode 127 ): ~6.5 W

2) System halted ( after running the 'halt' command, power LED off.. ): ~10-11 W ( fans at full throttle, with my config above.. ).

    Maybe the disks here exit power mode 1 and go to a middle state, I don't know; maybe if I measure over more time I will get a better picture..

 

With discs spinning it depends on what I am doing; I measured peaks of ~34 W, but very briefly.

I haven't done any stress tests yet,

but I configured the discs to start in standby and to limit the current used during spin-up..

 


Doubts about the OS on the disk array,

 

Could we put the OS on the hard drives?

Or should we put it on a spare drive / USB pendrive, or leave it on the SD card?

 

Which method is best..

I am afraid that putting it on the disk array will increase the Load_Cycle_Count from parking/unparking the heads...?!

root@helios4:~# smartctl -a /dev/sda|grep -E ^193
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       61
root@helios4:~# smartctl -a /dev/sdb|grep -E ^193
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       61
root@helios4:~# smartctl -a /dev/sdc|grep -E ^193
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       55

 

I'd appreciate it if anyone can help me choose the best option.

Thanks in Advance,

Best Regards

tux

 


On 2/5/2019 at 8:23 PM, tuxd3v said:

Doubts about the OS on the disk array,

 

Could we put the OS on the hard drives?

Or should we put it on a spare drive / USB pendrive, or leave it on the SD card?

 

Regarding the OS on a hard drive: it's possible as long as it does not reside on top of RAID. U-Boot does not understand MD, so it could not load the boot files from there.

You need to sacrifice 1 drive to be used as the OS drive, or maybe (since I haven't tried it) allocate a partition for the OS and let MD use the rest of the disk capacity.

That being said, we currently don't support the OS on a hard drive.

 

Personally I opt for the OS on an SD card, because it does not poke out, it is easier to update the bootloader, and I have plenty of them. I admit those are not strong reasons.

 

The small-size argument can easily be dismissed by some USB pendrive models that are quite small, such as the SanDisk Ultra Fit.

( photo: IMG_20190208_231833.jpg, two SanDisk Ultra Fit models )

Also, there might be a performance advantage to using a pendrive: SanDisk states the Ultra Fit does up to 130 MB/s, while the SanDisk Extreme Pro is *only* 100 MB/s.

 

I'm not sure how reliable a USB pendrive is running 24/7. The early model of the SanDisk Ultra Fit (on the right-hand side in the photo above) got quite hot after a while.

The current model, which is fully plastic, does not have that issue.

 

 


1 hour ago, aprayoga said:

Regarding the OS on a hard drive: it's possible as long as it does not reside on top of RAID. U-Boot does not understand MD, so it could not load the boot files from there.

You need to sacrifice 1 drive to be used as the OS drive, or maybe (since I haven't tried it) allocate a partition for the OS and let MD use the rest of the disk capacity.

That being said, we currently don't support the OS on a hard drive.

Hello aprayoga,

Thanks for your reply.

At the beginning I was thinking of putting the OS on a separate partition, with LVM managing it and mdraid separate from that; a /boot partition would also need to be there..

But later I understood that it would wake up my drives a lot, since the OS is there; the head park/unpark load cycles would increase dramatically, or I would have to disable the idle APM power feature and keep the disks spinning all the time, consuming lots of energy even when not using the NAS...

 

I realized that, for having the OS on a hard drive, it had better be a 2.5" one, for lower power consumption..

But that also means sacrificing one disk from the mdraid..

 

It's not an easy answer, in fact.

A solution could be a small 2.5" hard drive attached via USB, with the OS and the logs..

Or keep it, as it already is..., on an SD card, with the logs in zram..

 

Right now, after your response and after thinking about it more, the SD card seems the easier way to go.

But it would be nice to have a USB boot option, for a pendrive or a USB-attached disk.

 

With both of these options we could keep the 4 disks for what matters most... the mdraid :)

 

Thanks for your reply,

Regards


8 hours ago, tuxd3v said:

 

I realized that, for having the OS on a hard drive, it had better be a 2.5" one, for lower power consumption..

But that also means sacrificing one disk from the mdraid..

 

It's not an easy answer, in fact.

A solution could be a small 2.5" hard drive attached via USB, with the OS and the logs..

Or keep it, as it already is..., on an SD card, with the logs in zram..

 

Right now, after your response and after thinking about it more, the SD card seems the easier way to go.

But it would be nice to have a USB boot option, for a pendrive or a USB-attached disk.

 

Well, you could boot from USB, but first you need to set the Helios4 to boot from SPI NOR flash. You can check how to do it on our wiki.

At the end of the page there is a 'Moving RootFS to Other Device' section to transfer your existing OS from the SD card to USB storage.

Or, if you want to start fresh, you could also write/burn the Armbian image directly to a USB pendrive instead of an SD card.


20 hours ago, aprayoga said:

Well, you could boot from USB, but first you need to set the Helios4 to boot from SPI NOR flash. You can check how to do it on our wiki.

At the end of the page there is a 'Moving RootFS to Other Device' section to transfer your existing OS from the SD card to USB storage.

Or, if you want to start fresh, you could also write/burn the Armbian image directly to a USB pendrive instead of an SD card.

Hello,

Thanks for your reply.

Can we write to the flash several times? ( I mean, if you need to update U-Boot later.. )

 

One thing I was also thinking about would be to expand the ring buffers of the network card, if it's possible..

Right now they are:

Pre-set maximums:
RX:		128
RX Mini:	0
RX Jumbo:	0
TX:		532
Current hardware settings:
RX:		128
RX Mini:	0
RX Jumbo:	0
TX:		532

I don't know how to expand them, or whether it's possible.
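The output above is in the format ethtool -g prints; for reference, querying and requesting a change would look like this ( whether the mvneta driver accepts the request is exactly what I don't know ):

```
ethtool -g eth0          # show pre-set maximums and current ring sizes
ethtool -G eth0 rx 256   # request a bigger RX ring ( the driver may refuse )
```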

 

Thanks in Advance, for your time.

Regards


Finally found the time to set up my Helios4 (3 HDDs & 1 SSD) and I'm still a little bit underwhelmed, mainly because I can't find reliable information.

 

Is a constant temperature of 65°C (all drives in standby and nothing besides an ssh session running; the CPU runs at 1.6 GHz but the governor is "ondemand") acceptable, or should I adjust fancontrol to start at 50°C (or even lower)? I don't care if the fans run (as long as the power consumption doesn't increase significantly), because the Helios4 is in our basement.

 

How can I move the system to the SSD connected to the first SATA port?

 

Right now I'm running the latest Debian image from here (4.14.98-mvebu).


On 2/15/2019 at 4:21 PM, integros said:

Is a constant temperature of 65°C (all drives in standby and nothing besides an ssh session running; the CPU runs at 1.6 GHz but the governor is "ondemand") acceptable, or should I adjust fancontrol to start at 50°C (or even lower)? I don't care if the fans run (as long as the power consumption doesn't increase significantly), because the Helios4 is in our basement.

 

How can I move the system to the SSD connected to the first SATA port?

 

Right now I'm running the latest Debian image from here (4.14.98-mvebu).

Hello,

Adjust your fancontrol settings;

I use these:

root@helios4:~# cat /etc/fancontrol 
# Helios4 PWM Fan Control Configuration
# Temp source : /dev/thermal-cpu
INTERVAL=10
FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
MINTEMP=/dev/fan-j10/pwm1=60 /dev/fan-j17/pwm1=65
MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
#MINPWM=0

Yes, the CPU governor..

The option I found is putting this into /etc/rc.local:

# Force the CPU governor to ondemand
cpufreq-set -g ondemand

It is indeed ondemand, but the freq seems stuck at 1.6 GHz.. so this sets it to ondemand again, and it seems to work :)

With both settings above, my disks are at:

root@helios4:~# hddtemp /dev/sd{a,b,c}
/dev/sda: ST4000VN008: 21 C
/dev/sdb: ST4000VN008: 22 C
/dev/sdc: ST4000VN008: 22 C

And the CPU ( which seems to support 100°C ) is at:

root@helios4:~# cat /sys/devices/virtual/hwmon/hwmon1/temp1_input
60359

Reboot your Helios4 and check if everything is OK now.


We have updated the /etc/fancontrol configuration file in Armbian to address the above discussion. You can see the commit here, or check the file directly here.

 

We will need to broadcast the info to all users of batch 2 on how to update their configuration manually, since for now the fancontrol config file doesn't get updated automatically with the Debian package upgrade.

 

@tuxd3v Thanks for helping and sharing your tweaks with other users.

1. One more time: there is no need to change the governor. It is already ondemand by default and working as expected ;-)

2. The SPI NOR flash can be reflashed many times; you don't need to worry about the number of write cycles.

3. We will need to investigate ring buffer tuning, because it effectively seems fixed: doing for example ethtool -G eth0 rx 4096 tx 4096, the change won't get applied and I can see the following kernel message

[ 2859.307488] mvneta f1070000.ethernet eth0: TX queue size set to 532 (requested 4096)

 

@Harvey Yes, use the armbian-config tool (or directly the nand-sata-install tool) to move your rootfs from the SD card to the SSD.

Take note that you will still need to keep the SD card, since U-Boot (the bootloader) will still reside on it.

 

 

Besides that, good news: the Helios4 3rd campaign is ON! You can check here.


I'm not sure if this is related but I noticed it after updating to kernel 4.14.98-mvebu.

 

My Helios4 (2nd batch) is emitting a high-pitched "buzzing" sound. I've heard the same sound from a defective power supply before. I cannot locate where the sound is coming from.

I also hear "clicking" sounds. I think these are coming from the HDDs, because they sometimes get more frequent, probably when reading/writing something.

I also hear something turning on and off at about a one-second interval (not the fans).

 

I'm not sure if it started exactly after the kernel update, but I know that it is new, because it didn't make as much sound before.

 


1 hour ago, bramirez said:

I'm not sure if this is related but I noticed it after updating to kernel 4.14.98-mvebu.

 

My Helios4 (2nd batch) is emitting a high-pitched "buzzing" sound. I've heard the same sound from a defective power supply before. I cannot locate where the sound is coming from.

I also hear "clicking" sounds. I think these are coming from the HDDs, because they sometimes get more frequent, probably when reading/writing something.

I also hear something turning on and off at about a one-second interval (not the fans).

 

I'm not sure if it started exactly after the kernel update, but I know that it is new, because it didn't make as much sound before.

 

 

Hmm,

that sounds to me like your HDDs are constantly parking/unparking their heads... ( which could shorten their life cycle.. )

It could be a problem related to the APM features..

 

Do you have the OS on the hard drives?

Try measuring the output of this command over time ( say, at 15-minute intervals ), and see if the load/unload counts increase a lot:

for blockid in {a..c};do smartctl --attributes /dev/sd${blockid}|grep -E "^ 12|^193"; done

Regards


@bramirez I would first try to narrow down for sure where the sound is coming from.

 

 

1. Shut down the system gracefully (sudo halt from the console) and then switch off the power.

 

2. Open the front case panel and disconnect all the SATA power cables from the disks.

 

3. Power up your system and check if you are still hearing this “buzzing” sound you describe.

 

If you no longer hear the sound, we can conclude the problem comes from one of the HDDs. Otherwise the issue comes from something else, maybe faulty power. So please check the above first.


54 minutes ago, gprovost said:

@bramirez I would first try to narrow down for sure where the sound is coming from.

 

 

1. Shut down the system gracefully (sudo halt from the console) and then switch off the power.

 

2. Open the front case panel and disconnect all the SATA power cables from the disks.

 

3. Power up your system and check if you are still hearing this “buzzing” sound you describe.

 

If you no longer hear the sound, we can conclude the problem comes from one of the HDDs. Otherwise the issue comes from something else, maybe faulty power. So please check the above first.

 

The clicking and buzzing are gone when I unplug both disks. I have two disks and tried each separately, and the buzzing and clicking (much less when not writing) persist, even when they are not mounted. But since it happens with both disks, they probably aren't faulty. Downgrading the kernel to 4.14.96-mvebu didn't help either.


8 hours ago, gprovost said:

We have updated the /etc/fancontrol configuration file in Armbian to address the above discussion. You can see the commit here, or check the file directly here.

 

We will need to broadcast the info to all users of batch 2 on how to update their configuration manually, since for now the fancontrol config file doesn't get updated automatically with the Debian package upgrade.

 

@tuxd3v Thanks for helping and sharing your tweaks with other users.

2. The SPI NOR flash can be reflashed many times; you don't need to worry about the number of write cycles.

3. We will need to investigate ring buffer tuning, because it effectively seems fixed: doing for example ethtool -G eth0 rx 4096 tx 4096, the change won't get applied and I can see the following kernel message


[ 2859.307488] mvneta f1070000.ethernet eth0: TX queue size set to 532 (requested 4096)

Beside that, good news : Helios4 3rd Campaign is ON !   You can check here.

You're welcome :)

OK, I was of the opinion that maybe we could, but your point 2. is the answer I needed ;)

 

thanks


10 hours ago, bramirez said:

 

The clicking and buzzing are gone when I unplug both disks. I have two disks and tried each separately, and the buzzing and clicking (much less when not writing) persist, even when they are not mounted. But since it happens with both disks, they probably aren't faulty. Downgrading the kernel to 4.14.96-mvebu didn't help either.

 

Hmm, sounds like a power issue; maybe something wrong with the 5V HDD rail on the Helios4 board. It would be the first time, but it could be a faulty buck converter.

Before jumping to conclusions, did you try each SATA power cable, just to be sure it's not a cable issue?


1 hour ago, gprovost said:

 

Hmm, sounds like a power issue; maybe something wrong with the 5V HDD rail on the Helios4 board. It would be the first time, but it could be a faulty buck converter.

Before jumping to conclusions, did you try each SATA power cable, just to be sure it's not a cable issue?

 

No, I tried with the same power cable. I will try another cable when I get home from work. If I can do anything to help debug the problem, please say so. I don't have another HDD, and both are from Seagate and were ordered at the same time...


@bramirez Well, the next step would be to use a voltmeter (if you have one?) to check that the voltages of the 5V and 12V rails are correct. Even though it's pretty simple, you need to be sure of what you're doing in order not to short-circuit the board.

 

Let us know first if you still have the issue with the other cable.


10 hours ago, gprovost said:

@bramirez Well, the next step would be to use a voltmeter (if you have one?) to check that the voltages of the 5V and 12V rails are correct. Even though it's pretty simple, you need to be sure of what you're doing in order not to short-circuit the board.

 

Let us know first if you still have the issue with the other cable.

So I tried the second cable, and it still buzzes and clicks. I think I have a cheap voltmeter somewhere, but I couldn't find it just now. I have never used one, though, and I wouldn't know what to do. Just to clarify, there are three main symptoms:

* Buzzing: not very loud, but constant and loud enough to hear in the room, and even in the next room at night.

* The sound of something spinning down and up again: very quiet (ear about 10 cm away from the HDD). One cycle is about 1-2 seconds.

* Clicking: about one click every 2 seconds. I couldn't make out a pattern relative to the second point above.

 

If a disk is mounted it also makes loud HDD "chattering" noises when writing. This didn't happen before the buzzing etc. started. I haven't used the NAS very long, but I remember wondering whether it would be too loud, and at the beginning it was very quiet.

 

Would I have to replace the whole board?


@bramirez Ok, well, unless you are very unlucky and got 2 drives that ended up being faulty at the same time, I would say it's an issue with the 5V HDD power rail on the carrier board. There is also the possibility that the issue is coming from the AC adapter.

 

Anyway, if you don't want to waste more time troubleshooting while we try to guess the issue, please PM me and we will arrange to replace/fix the faulty part ;-)


This topic is now closed to further replies.