tuxd3v

Members
  • Posts

    26
  • Joined

  • Last visited

Everything posted by tuxd3v

  1. Hello, I know that Kobol is out of business.. but I am thinking... what if my Helios4 motherboard dies? To mitigate that, I want to acquire a new motherboard. Does anyone know how I could get one? Thanks in advance
  2. Hello Heisath, Thanks a lot! I will be a beta tester. Regards,
  3. Will there be a 5.4 LTS kernel for the Helios4? Thanks in advance, Best regards, tux
  4. Hello, I ran 'armbianmonitor -u': http://ix.io/28AC I rebooted the OS and apparently everything is fine now, I think. It just hung some days ago around 5 AM, maybe a spike on the power grid or something.. Fancontrol has MINTEMP set like this:
root@helios4:~# grep MINTEMP /etc/fancontrol
MINTEMP=/dev/fan-j10/pwm1=60 /dev/fan-j17/pwm=70
  5. Hello, Something nasty happened last night on my Helios4 that left the OS crippled.. I have been analysing the logs and I find no reason for it; what I know is that the ssh server received a SIGHUP (maybe a spike on the network grid..). While analysing the logs and checking the system, I found lots of entries in '/var/log/syslog' related to fancontrol:
root@helios4:~# tail -n 2 /var/log/syslog
Jan 24 21:48:25 localhost fancontrol[906]: /usr/sbin/fancontrol: line 477: let: mint=*1000: syntax error: operand expected (error token is "*1000")
Jan 24 21:48:36 localhost fancontrol[906]: /usr/sbin/fancontrol: line 477: let: mint=*1000: syntax error: operand expected (error token is "*1000")
Which means that, for some reason and in some situations, '${AFCMINTEMP[$fcvcount]}' is null:
root@helios4:~# grep -ni "let mint" /usr/sbin/fancontrol
477: let mint="${AFCMINTEMP[$fcvcount]}*1000"
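That lookup can come up empty when one of the pwm outputs listed in FCTEMPS has no matching MINTEMP entry in /etc/fancontrol (for instance a 'pwm' vs 'pwm1' typo). A quick cross-check sketch, assuming the stock single-line key=value format of /etc/fancontrol:
  # Sketch: every pwm output named in FCTEMPS should also appear in
  # MINTEMP, MAXTEMP, MINSTART and MINSTOP.
  CFG=/etc/fancontrol
  get() { grep "^$1=" "$CFG" | cut -d= -f2-; }
  for pair in $(get FCTEMPS); do
      pwm=${pair%%=*}                    # e.g. /dev/fan-j10/pwm1
      for key in MINTEMP MAXTEMP MINSTART MINSTOP; do
          get "$key" | tr ' ' '\n' | grep -q "^${pwm}=" \
              || echo "WARNING: $key has no entry for $pwm"
      done
  done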
  6. Hello, At least for the tests done with 2 other colleagues, it seems that the Armbian kernel is missing a .config option: 'CONFIG_SENSORS_PWM_FAN=y', which is present in the 'ayufan' kernels, where it works. ATS needs sysfs to communicate with the kernel driver, and the pwm control is not there: '/sys/class/hwmon/hwmon0/pwm1'
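For anyone who wants to confirm this on their own kernel, a quick sketch (the config may be exposed via /proc/config.gz or /boot/config-$(uname -r), depending on the build):
  # Check whether the running kernel was built with the pwm-fan hwmon driver
  if [ -r /proc/config.gz ]; then
      zgrep CONFIG_SENSORS_PWM_FAN /proc/config.gz
  else
      grep CONFIG_SENSORS_PWM_FAN "/boot/config-$(uname -r)"
  fi
  # If it is built as a module (=m) rather than built-in (=y), it may just need loading:
  # modprobe pwm-fan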
  7. I am starting to think about adding one more :)
  8. Hello guys, Happy Helios4 user here, for some time now.. Mine is running 3 disks: idle ~7 watts, working ~20 watts (with very brief initial peaks of 50 watts, when the disks are in APM 0 and jump to APM 127..). Is anyone running a Helios4 with 4 disks in a raid setup? Thanks in advance, regards
  9. Probably not a good suggestion.. It needs to be created as a 64-bit filesystem from the beginning, I think.. If turned on in a 32-bit one, I think it will damage the filesystem..
  10. I haven't checked it, but the limitation has to do with the partitioning tools, since they use default C types.. I think that at some point a flag was released to solve this problem: -O 64bit. Check the content of 'cat /etc/mke2fs.conf' — for ext4 there is a 64bit option, but I think it only works if the filesystem was created as 64-bit from the beginning.. So 'resize2fs -b /dev/sdx' ("Turns on the 64bit feature, resizes the group descriptors as necessary, and moves other metadata out of the way.") will, I think, only work on a filesystem where 64-bit was already available.. but I have never tested it
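For reference, a couple of commands to check and enable the feature — a sketch only, assuming the target device is /dev/sdX1 (adjust to your disk) and a reasonably recent e2fsprogs:
  # See whether an existing ext4 filesystem already has the 64bit feature
  tune2fs -l /dev/sdX1 | grep -i features
  # Create a new ext4 filesystem with the 64bit feature enabled from the start
  mkfs.ext4 -O 64bit /dev/sdX1
  # e2fsprogs 1.43+ can also try to convert an existing, unmounted filesystem:
  # resize2fs -b /dev/sdX1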
  11. Indeed. One other thing to look at is whether you are running NFS.. I had a problem with NFS: once I mounted it on the client machine, just shutting down and starting meant the disks were always going down and coming back up.. Their power was cut abruptly every time the system went down, and even with the system up they kept that behaviour. My solution until now... not mounting NFS, until I understand what is causing the problem (that was NOT happening on the Helios4, but on another ARM based system..) To me, his problem is related to one of: heads parking/unparking aggressively (which he needs to fix with APM modes and timers on each disk..), or NFS or anything else mounted that is creating kernel panics or similar.. So to check, I would deactivate nfs, samba, and so on — only the OS running, without application layers.. Then check if the problem persists; if yes, measure power cycles and load/unload head counts and see if they increase frequently:
for blockid in {a..c};do smartctl --attributes /dev/sd${blockid}|grep -E "^ 12|^193"; done
I also have Seagate disks in APM mode 127, so they enter mode 1 (standby, after the defined timer period), but they don't park and unpark like crazy — they work very nicely.. In the past I had WD Reds, and those disks were parking and unparking like crazy; I could only alleviate the problem by increasing the timers to enter APM mode 1, but I couldn't do much more.. Did he configure timers for the APM modes? If you do that with small timers, the heads will start parking/unparking at those defined intervals and the timers need to be adjusted again.. Use the openSeaChest utilities to check more information not present in SMART, since those tools are made for Seagate disks and implement Seagate technology..
  12. You're welcome. OK, I was of the opinion that maybe we could, but your point 2 is the answer I needed ;) Thanks
  13. Hmm, that sounds to me like your hdds are constantly parking/unparking heads... (which could limit their life cycle..) It could be a problem related to the APM features.. Do you have the OS on the hard drives? Try measuring the output of this command over time (let's say at 15-minute intervals) and see if the load/unload counts increase a lot:
for blockid in {a..c};do smartctl --attributes /dev/sd${blockid}|grep -E "^ 12|^193"; done
Regards
  14. Hello, Adjust your fancontrol settings; I use these ones:
root@helios4:~# cat /etc/fancontrol
# Helios4 PWM Fan Control Configuration
# Temp source : /dev/thermal-cpu
INTERVAL=10
FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
MINTEMP=/dev/fan-j10/pwm1=60 /dev/fan-j17/pwm1=65
MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
#MINPWM=0
Yes, the CPU scheduler.. The option I found is putting this into /etc/rc.local:
# Force CPU governor to ondemand
cpufreq-set -g ondemand
It is indeed ondemand, but the frequency seems stuck at 1.6 GHz, so this sets ondemand again, and it seems to work. With both settings above, my disks are at:
root@helios4:~# hddtemp /dev/sd{a,b,c}
/dev/sda: ST4000VN008: 21 C
/dev/sdb: ST4000VN008: 22 C
/dev/sdc: ST4000VN008: 22 C
And the CPU (which seems to support 100 C) is at:
root@helios4:~# cat /sys/devices/virtual/hwmon/hwmon1/temp1_input
60359
Reboot your Helios4 and check if everything is ok now
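If the governor keeps reverting after a reboot, setting it in cpufrequtils' defaults file may be cleaner than rc.local — a sketch, assuming your image ships the cpufrequtils service that reads /etc/default/cpufrequtils (Armbian images usually do); the speed bounds below are just the Helios4's 800 MHz / 1.6 GHz range expressed in kHz:
  # /etc/default/cpufrequtils - picked up at boot by the cpufrequtils service
  ENABLE=true
  GOVERNOR=ondemand
  MIN_SPEED=800000
  MAX_SPEED=1600000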
  15. Hello, Thanks for your reply. Can we write to the flash several times (I mean, if you need to update uboot later..)? One thing I was also thinking about would be to expand the ring buffers of the network card, if it's possible.. Right now they are:
Pre-set maximums:
RX: 128
RX Mini: 0
RX Jumbo: 0
TX: 532
Current hardware settings:
RX: 128
RX Mini: 0
RX Jumbo: 0
TX: 532
I don't know how to expand them, or if it's possible. Thanks in advance for your time. Regards
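For reference, ring buffer sizes are read and, where the driver allows it, changed with ethtool — a sketch assuming the interface is eth0; whether the mvneta driver accepts anything above its pre-set maximum is another question:
  # Show current and pre-set maximum ring sizes
  ethtool -g eth0
  # Request different ring sizes (values above the pre-set maximum are rejected,
  # so with RX capped at 128 there is nothing to gain on the RX side here)
  ethtool -G eth0 rx 128 tx 532
  # To persist a change, re-apply it from /etc/rc.local or a unit that runs
  # after the interface comes up.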
  16. Hello aprayoga, Thanks for your reply. At the beginning I was thinking of putting the OS in a separate partition, with lvm managing it and mdraid separate from that; a /boot partition would also be needed.. But later I understood that it would wake up my drives a lot, since the OS would live there, so the load_cycle head park/unpark count would increase dramatically — or I would have to disable the idle APM power feature and keep the disks spinning all the time, consuming lots of energy even when the NAS is not in use... I realized that if the OS goes on a hard drive, it had better be a 2.5" one, for lower power consumption.. But that also means sacrificing one disk from the mdraid.. It's not an easy answer, in fact. A solution could be a small 2.5" hard drive attached to USB, holding the OS and logs.. Or keep it as it is already... on an SD card, with logs in zram.. Right now, after your response and after thinking it over, the SD card seems the easier way to go. But it would be nice to have a USB boot option, for a pendrive or a USB-attached disk. In both of those options we could keep the 4 disks for what matters most... the mdraid. Thanks for your reply, Regards
  17. Doubts about the OS in the disk array: could we put the OS on the hard drives? Or should we put it on a spare one / a USB pendrive, or leave it on the SD card? Which method is best? I am afraid that putting it in the disk array will increase the Load_Cycle_Count from parking/unparking the heads...?!
root@helios4:~# smartctl -a /dev/sda|grep -E ^193
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 61
root@helios4:~# smartctl -a /dev/sdb|grep -E ^193
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 61
root@helios4:~# smartctl -a /dev/sdc|grep -E ^193
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 55
I would appreciate it if anyone can help me choose the best option. Thanks in advance, Best regards tux
  18. Hello Harvey, Thanks for your stats. I have 3 IronWolfs (4 TB each). Measurements at the wall: 1) System running idle, disks in standby (configured in power mode 127): ~6.5 W. 2) System halted (after running the 'halt' command, power led off): ~10-11 W (fans at full throttle, with my config above..). Maybe the disks here exit power mode 1 and go to a middle state, I don't know; maybe if I measure over a longer period I will get a better picture.. With the disks spinning it depends on what I am doing; I measured peaks of ~34 watts, but very briefly. I haven't done any stress tests yet, but I configured the disks to start in standby and to limit the current used during spin-up..
  19. Hello, Here is the link for the Seagate SeaChest Utilities documentation: Seagate SeaChest Utilities — in case anyone has Seagate IronWolfs or other Seagate disks. I have been playing around with these tools.. With these base settings:
# Enable Power Up In Standby
# It is a persistent feature
# No power cycle required
${SEA_PATH}/openSeaChest_Configure --device /dev/sda --puisFeature enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --puisFeature enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --puisFeature enable
# Enable Low Current SpinUp
# It is a persistent feature
# Requires a power cycle
${SEA_PATH}/openSeaChest_Configure --device /dev/sda --lowCurrentSpinup enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --lowCurrentSpinup enable
${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --lowCurrentSpinup enable
Unfortunately the FreeFall feature is NOT supported, at least on my IronWolf 4TBs, which would be a must to control acceleration levels... (I already killed a WD Red 3TB with a free fall of 1 meter or so..) The feature could be present but not allow changing levels.. at least it seems so..
root@helios4:~# openSeaChest_Configure --device /dev/sda --freeFall info|grep -E "dev|Free"
/dev/sg0 - ST4000VN008-XXXXXX - XXXXXXXX - ATA
Free Fall control feature is not supported on this device.
Could it be related to the latest release of the SeaChest tools? I don't know.. If someone knows how to tune the acceleration levels, I would appreciate it.. In the meantime, I made some basic tests with 3 Seagate IronWolf 4TB ST4000VN008 discs:
APM mode vs power consumption (power measured at the wall... it includes the Helios4's own consumption)
1) CPU idle at 800 MHz, disks at the minimum APM power mode 1: power consumption is between 4.5-7 watts, maybe averaging around 6-6.5 watts.
1.1) When issuing 'hdparm -Tt' on all disks, power peaks at 30 watts max, but stays on average around 15-25 watts.
1.2) Then it goes back down to around 6-6.5 watts.. mode 1.
2) CPU idle at 800 MHz, disks at APM power mode 127: power consumption averages around 6-6.5 watts (changing level from mode 1 to 127).
2.1) When issuing 'hdparm -Tt' on all disks, power peaks at 34 watts, but is around 13-25 watts, most of the time between 17-23 watts.
2.2) Then it enters mode 1.
3) CPU idle at 800 MHz, disks at APM power mode 128: power consumption averages around 6-6.5 watts (changing level from mode 1 to 128).
3.1) When issuing 'hdparm -Tt' on all disks, power peaks at 34 watts, but is around 13-25 watts, most of the time between 15-23 watts.
3.2) Then it goes down to around 15-17 watts for some time, most of the time ~16.5 watts.. I haven't checked its final state as I only monitored for some 10-15 minutes.
4) CPU idle at 800 MHz, disks at APM power mode 254 (max performance): power consumption averages around 6-6.5 watts (changing level from mode 1 to 254).
4.1) When issuing 'hdparm -Tt' on all disks, power peaks at ~34 watts, but is around 13-28 watts, most of the time between 17-23 watts.
4.2) Then it bounces between 13-20 watts for some time, maybe averaging around 19 watts.. I haven't checked its final state as I only monitored for some 10-15 minutes; I assume they enter mode 128.
In my opinion, power mode 127 is the best fit for power consumption vs performance.. It penalizes performance a bit, but all in all it seems the best fit, with the bonus of ~6.5 watts at idle.
So I issue this on reboot or start:
# Set APM level to 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127
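To make that survive a reboot, the same commands can simply go into /etc/rc.local before the final 'exit 0' — a sketch, assuming SEA_PATH points at wherever openSeaChest is installed (the path below is only an example) and the disks always enumerate as sda..sdc:
  # /etc/rc.local (before the final 'exit 0')
  SEA_PATH=/usr/local/sbin          # assumed openSeaChest install path
  for d in sda sdb sdc; do
      "${SEA_PATH}/openSeaChest_PowerControl" --device "/dev/${d}" --setAPMLevel 127
  done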
  20. Hello gprovost, You're right, sorry if I was unclear (English is not my mother tongue). I made some tests without the disks, and the temps are a lot lower.. Even at heatsink level it doesn't get very warm.. The fans are now running nicely with the config help from Harvey (thanks for that!), and they change pace as expected with CPU load.. Rebuilding the raid:
root@helios4:~# hddtemp /dev/sd{a,b,c}
/dev/sda: ST4000VN008: 25°C
/dev/sdb: ST4000VN008: 29°C
/dev/sdc: ST4000VN008: 29°C
I didn't know the CPU supported 100 C — that's nice! I noticed that the governor is configured as ondemand, but after a restart the CPU is fixed at 1.6 GHz (although the governor still says ondemand); if I set ondemand explicitly, then it drops to 800 MHz.. It could be me.. Up to now (forgetting my initial false findings..), I am very pleased with the Helios4. During the raid5 reconstruction it is doing around 170 MB/s — it's the only metric I have so far with these disks.. Does anyone have some metrics about raid read/write speed? You know, new toy in the house... "even my pets get jealous". Thanks in advance
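For comparable numbers, a couple of quick sequential tests could look like this — a sketch, assuming the array is /dev/md0 and mounted at /mnt/raid:
  # Sequential read from the raw md device (cached and uncached)
  hdparm -tT /dev/md0
  # Sequential write of a 1 GiB test file, bypassing the page cache
  dd if=/dev/zero of=/mnt/raid/ddtest.bin bs=1M count=1024 oflag=direct status=progress
  rm /mnt/raid/ddtest.bin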
  21. Hello gprovost, Thanks for that. I found that one of the heat sources was the CPU fixed at 1.6 GHz with the fans stopped. I switched it to 'ondemand' and activated fancontrol for fan0; temps are starting to drop.. These fans seem to be of very good quality; I haven't checked in detail yet, but they look very good. I was also naive in setting up the NAS with the discs inside from the beginning.. I should have connected everything without the sata disks first, and only fitted the disks once the temps were fine.. Anyway, the holes in the NAS case acrylic should be higher than they are now; I understand that then the sata cables would be too short, but with longer ones the disks would be better protected from heat, since they would sit higher above the heat source.. But that opinion is for the Helios4 team, nothing to do with Armbian I think. Thanks for the clarification
  22. Hello Harvey, Thanks for your reply. My fans also never started from the beginning, but now I have fan0 active at low speed and fan1 stopped.. I found one of the excessive heat problems: it was the governor — the CPU freq was set to 1.6 GHz, without cooling it reached 73 C, and my lower disk spent 24 hours in that condition.. I don't know what to think of its state now (but at least I made the right purchase, as the IronWolf disks are rated at 70 C max..).
cpufreq-set -g ondemand
Thanks for the explanation of fancontrol; I configured it like this:
INTERVAL=10
FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
MINTEMP=/dev/fan-j10/pwm1=55 /dev/fan-j17/pwm1=65
MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
#MINPWM=0
Temps right now are below 57 C:
root@helios4:~# cat /dev/thermal-cpu/temp1_input
56550
It's a better start
  23. Hello, I am new to the Helios4; I just assembled my NAS and bought 4 IronWolf 4 TB disks for a raid6.. After reading the wiki, I noticed that writing uboot to flash only allows me to put the OS on the... USB dongle — is this correct? I was expecting to also use my raid setup for the OS. Another thing I noticed is that the CPU heats up a lot; even without work, sitting idle, it goes above 70 C.. The good thing: 10 seconds with fan 0 on and it drops to ~66 C. In this situation the disk closest to the board suffers a lot from the heat; being so close to the board, the disks should be placed higher, away from the CPU heat source.. Also, the fan only stays on for some seconds and then shuts off.. I don't know what process is doing that.. I am using the latest Debian Stretch from 20/01/2019. I am also developing a fan tool (ATS), but it doesn't support the Helios4's dual fans yet.. Does anyone have some hints to set up the fans correctly, and also for booting from the raid6 sata disks? Thanks in advance Regards
  24. Hi guys, I wanted a Debian Stretch based OS, and I followed Olimex's recommendation of Armbian. Then I visited the page for the Lime2 (not the nand or emmc variant): https://www.armbian.com/olimex-lime-2/ The Stretch image that is there... is configured to boot from emmc :( I want a Debian Stretch based version. I checked the sha256 checksums — OK. Serial debug output:
U-Boot SPL 2017.11-armbian (Jan 25 2018 - 01:01:01)
DRAM: 1024 MiB
CPU: 912000000Hz, AXI/AHB/APB: 3/2/2
Trying to boot from MMC1
mmc_init: -110, time 37
spl: mmc init failed with error: -110
SPL: failed to boot from all boot devices
### ERROR ### Please RESET the board ###
How can I solve this situation? Can I write a new uboot version? How do I do that on the Armbian_5.38_Lime2_Debian_stretch_next_4.14.14.img image? Thanks in advance. Best regards tux
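For reference, on sunxi boards like the Lime2 the SPL + u-boot image sits at an 8 KiB offset on the SD card and can be rewritten with dd — a sketch, assuming the card appears as /dev/sdX and you have a u-boot-sunxi-with-spl.bin built or extracted for the Lime2:
  # Write the SPL + u-boot image to the 8 KiB offset of the SD card
  # (double-check /dev/sdX first - this overwrites the card's boot area)
  dd if=u-boot-sunxi-with-spl.bin of=/dev/sdX bs=1024 seek=8 conv=fsync
  sync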