

Posts posted by tuxd3v

  1. On 4/13/2020 at 7:48 AM, Heisath said:

    @tuxd3v yes, 5.4 LTS kernel for all mvebu boards (including helios4) will be released with Armbian 20.05, coming in May.

     

    Once there are Release Candidates you can help us with testing them.

    Hello Heisath,

    Thanks a lot!

     

    I will be a beta tester :)

     

    Regards,

  2. On 1/27/2020 at 6:43 AM, gprovost said:

     

    What do you mean exactly by crippled? Can you share some logs? (armbianmonitor -u)

     

    I need to look into why you ended up with the fancontrol issue, but at first glance it looks like something is wrong with the config file. Can you double check that the /etc/fancontrol file is there and contains the line MINTEMP?

    Hello, I ran 'armbianmonitor -u':

    http://ix.io/28AC

     

    I rebooted the OS and now apparently everything is fine.. I think.

    It just hung some days ago around 5 AM, maybe a spike on the power grid or something..

     

    Fancontrol has MINTEMP like this:

    root@helios4:~# grep MINTEMP /etc/fancontrol
    MINTEMP=/dev/fan-j10/pwm1=60 /dev/fan-j17/pwm=70

     

  3. Hello,

    Something nasty happened last night on my Helios4 that left the OS crippled..

    I have been analysing the logs and I find no reason for it; what I know is that the ssh server received a SIGHUP (maybe a glitch on the network..).

     

    While at it, I am analysing logs and checking the system..

    I have lots of entries in '/var/log/syslog' related to fancontrol:

    root@helios4:~# tail -n 2 /var/log/syslog 
    Jan 24 21:48:25 localhost fancontrol[906]: /usr/sbin/fancontrol: line 477: let: mint=*1000: syntax error: operand expected (error token is "*1000")
    Jan 24 21:48:36 localhost fancontrol[906]: /usr/sbin/fancontrol: line 477: let: mint=*1000: syntax error: operand expected (error token is "*1000")

    Which means that, for some reason, in some situations '${AFCMINTEMP[$fcvcount]}' is null:

    root@helios4:~# grep -ni "let mint" /usr/sbin/fancontrol
    477:		let mint="${AFCMINTEMP[$fcvcount]}*1000"
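
    The MINTEMP grep shown in the previous post has '/dev/fan-j17/pwm=70' (missing the trailing '1'). A minimal sketch of the failure, assuming fancontrol keys its MINTEMP array on the exact pwm path, so the mistyped path leaves one entry unset:

    # sketch, not the real fancontrol source; reproduces the logged error
    declare -a AFCMINTEMP
    AFCMINTEMP[0]=60    # /dev/fan-j10/pwm1=60 parsed fine
    fcvcount=1          # /dev/fan-j17/pwm1 has no MINTEMP entry -> stays unset
    let mint="${AFCMINTEMP[$fcvcount]}*1000"
    # -> let: mint=*1000: syntax error: operand expected (error token is "*1000")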

     

  4. Hello,

    At least in the tests done with 2 other colleagues,

     

    it seems the Armbian kernel is missing a .config option:
    'CONFIG_SENSORS_PWM_FAN=y'

     

    which is present in the 'ayufan' kernels, where it works..

     

    ATS needs sysfs to communicate with the kernel driver, and the pwm control '/sys/class/hwmon/hwmon0/pwm1' is not there.
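
    A quick way to check this (a sketch; /proc/config.gz exists only if the kernel was built with IKCONFIG, and the hwmon number can vary):

    zgrep CONFIG_SENSORS_PWM_FAN /proc/config.gz 2>/dev/null \
        || grep CONFIG_SENSORS_PWM_FAN /boot/config-$(uname -r)
    ls /sys/class/hwmon/hwmon0/pwm1   # only present when the pwm-fan driver is active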

  5. Hello Guys,
    Happy Helios4 user here, for some time now..

    Mine is running 3 disks:

    idle ~7 watts,

    working ~20 watts (with very brief initial peaks of 50 watts, when the disks are in APM 0 and jump to APM 127..).

     

    Is anyone running a Helios4 with 4 disks in a RAID setup?


    thanks in advance,

    regards

  6. 30 minutes ago, gprovost said:

    @tuxd3v Don't turn ON 64bit feature on partitions mounted on a 32bit system, this is completely wrong.

    Probably not a good suggestion..

    It needs to be created as a 64-bit filesystem from the beginning, I think..

    If turned ON on a 32-bit one,

    I think it will damage the filesystem..

  7. On 3/21/2019 at 5:21 AM, 9a3eedi said:

    I would also appreciate it if someone has a solution for getting volumes larger than 16TB.

    I haven't checked it,

    but the limitation has to do with the partitioning tools, since they use default C types..

    I think that somewhere in time a flag was released to solve this problem: -O 64bit

    Check this

     

    See the content of:

     cat /etc/mke2fs.conf

    for ext4 there is a 64bit feature option,

    but I think it only works if the filesystem was created 64-bit from the beginning..

     

    So a 

    resize2fs -b /dev/sdx

           -b     Turns  on  the  64bit feature, resizes the group descriptors as necessary, and moves other metadata out of the way.

    will only work on a filesystem that already had 64-bit support from the beginning..

    But I have never tested it
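
    A hedged sketch of both paths (untested by me here, and mkfs destroys all data on the target device):

    # create the filesystem 64-bit from the start, lifting the 16 TiB cap:
    mkfs.ext4 -O 64bit /dev/sdX1
    # check whether an existing filesystem already has the feature:
    tune2fs -l /dev/sdX1 | grep -i features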

     

  8. 18 hours ago, count-doku said:

    One other thing to check whether the disks are faulty / just doing the clicking: connect them to some computer and listen there.

     

     

    Indeed,

    One other thing to check is whether you are running NFS..

    I had a problem with NFS: when I mounted it on the client machine, after just a shutdown and start, the disks were always spinning down and up..

    The disks' power was cut abruptly every time the system went down.., and even with the system up, they had that behaviour.

     

    Solution until now... not mounting NFS until I understand what is causing that problem (it was NOT happening on the Helios4, but on another ARM-based system..).

     

    To me, his problem is related to one of:

    •  Heads parking/unparking aggressively (he needs to fix the APM modes and timers on each disk..)
    • NFS or anything else mounted that is creating kernel panics or the like..

    So to check, I would deactivate NFS, Samba, and so on.

    Only the OS running, without application layers..

     

    Then check if the problem persists; if yes:

    Measure power cycles and head load/unload counts, and see if they increase frequently..

     

    for blockid in {a..c};do smartctl --attributes /dev/sd${blockid}|grep -E "^ 12|^193"; done

    I also have Seagate disks, in APM mode 127, so they enter mode 1 (standby, after the defined timer period), but they don't park and unpark like crazy; they work very nicely..

    In the past I had WD Reds, and those disks were parking and unparking like crazy; I could only alleviate the problem by increasing the timers to enter APM mode 1, but I couldn't do much more..

     

    Has he configured timers for the APM modes? If you do that with small timers, the heads will start parking/unparking at the intervals defined, and they will need to be adjusted again..

    Use the openSeaChest utilities to check more information not present in SMART, since these tools are made for Seagate disks and implement Seagate technology..
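
    For setting the APM level and standby timer mentioned above, plain hdparm works too (a sketch with illustrative values: -B sets the APM level, -S the standby timer, where 120 means 120 x 5 s = 10 minutes):

    for blockid in {a..c}; do hdparm -B 127 -S 120 /dev/sd${blockid}; done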

  9. 8 hours ago, gprovost said:

    We have updated the /etc/fancontrol configuration file in Armbian to address the above discussion. You can see the commit here or check the file directly here.

     

    We will need to broadcast the info to all users of batch 2 on how to update their configuration manually, since the fancontrol config file doesn't get updated automatically with the Debian package upgrade for now.

     

    @tuxd3v Thanks for helping and sharing your tweaks with other users.

    2. SPI NOR flash can be reflashed many times; you don't need to worry about the number of write cycles.

    3. We will need to investigate Ring Buffer tuning because it effectively seems fixed; doing for example ethtool -G eth0 rx 4096 tx 4096, the change won't get applied and I can see the following kernel message

    
    [ 2859.307488] mvneta f1070000.ethernet eth0: TX queue size set to 532 (requested 4096)

    Besides that, good news: the Helios4 3rd Campaign is ON! You can check here.

    You're welcome :)

    OK, I was of the opinion that maybe we could, but your point 2. is the answer I needed ;)

     

    thanks

  10. 1 hour ago, bramirez said:

    I'm not sure if this is related but I noticed it after updating to kernel 4.14.98-mvebu.

     

    My Helios4 2nd batch is emitting a high-pitched "buzzing" sound. I've heard the same sound from a defective power source. I cannot locate where the sound is coming from.

    I also hear "clicking" sounds. I think these are coming from the hdd, because they sometimes get more frequent. Probably when reading/writing something.

    I also hear something turning on and off at about a one second interval (not the fans).

     

    I'm not sure if it started exactly after the kernel update, but I know that it is new, because it didn't make as many sounds before.

     

     

    Humm,

    That sounds to me like your HDDs are parking/unparking their heads like crazy... (which could shorten their life cycle..)

    It could be a problem related to the APM features..

     

    Do you have the OS on the hard drives?

    Try to measure the output of this command over time (let's say at 15-minute intervals), and see if the load/unload counts increase a lot:

    for blockid in {a..c};do smartctl --attributes /dev/sd${blockid}|grep -E "^ 12|^193"; done

    Regards

  11. On 2/15/2019 at 4:21 PM, integros said:

    Is a constant temperature of 65°C (all drives in standby and nothing besides a ssh-session running - cpu runs at 1.6GHz but the governor is "ondemand") acceptable or should I adjust fancontrol to start at 50°C (or even lower)? I don't care if the fans run (as long as the power consumption doesn't increase significantly) because the Helios4 is in our basement.

     

    How can I move the system to the SSD connected to the first SATA-port?

     

    Right now I'm running the latest Debian-Image from here (4.14.98-mvebu).

    Hello,

    Adjust your fancontrol settings;

    I use these:

    root@helios4:~# cat /etc/fancontrol 
    # Helios4 PWM Fan Control Configuration
    # Temp source : /dev/thermal-cpu
    INTERVAL=10
    FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
    MINTEMP=/dev/fan-j10/pwm1=60 /dev/fan-j17/pwm1=65
    MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
    MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
    MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
    #MINPWM=0

    Yes, the CPU frequency governor..

    The option I found is putting this into /etc/rc.local:

    # Force the CPU governor to ondemand
    cpufreq-set -g ondemand

    It is indeed reported as ondemand, but the frequency seems stuck at 1.6 GHz.. so this sets it to ondemand again, and it seems to work :)
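
    A quick way to verify it took effect (standard cpufreq sysfs paths):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq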

    With both settings above, my disks are at:

    root@helios4:~# hddtemp /dev/sd{a,b,c}
    /dev/sda: ST4000VN008: 21 C
    /dev/sdb: ST4000VN008: 22 C
    /dev/sdc: ST4000VN008: 22 C

    And the CPU (which seems to support 100°C) is at (the sysfs value is in millidegrees, so ~60.4°C):

    root@helios4:~# cat /sys/devices/virtual/hwmon/hwmon1/temp1_input
    60359

    Reboot your Helios4 and check if everything is OK now.

  12. 20 hours ago, aprayoga said:

    Well, you could boot from USB but first you need to set Helios4 to boot from SPI NOR flash. You can check how to do it on our wiki.

    At the end of the page there is a Moving RootFS to Other Device section to transfer your existing OS on the SD card to USB storage.

    Or if you want to start fresh, you could also write/burn the Armbian image directly to USB pendrive instead of SDCard.

    Hello,

    Thanks for your reply.

    Can we write to the flash several times? (I mean, if you later need to update U-Boot..)

     

    One thing I was also thinking about would be expanding the ring buffers of the network card, if it's possible..

    Right now they are:

    Pre-set maximums:
    RX:		128
    RX Mini:	0
    RX Jumbo:	0
    TX:		532
    Current hardware settings:
    RX:		128
    RX Mini:	0
    RX Jumbo:	0
    TX:		532

    I don't know how to expand them, or if it's possible.
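
    The relevant commands would be these (a sketch, assuming eth0; whether the driver accepts larger rings is another matter):

    ethtool -g eth0                  # show pre-set maximums and current settings
    ethtool -G eth0 rx 256 tx 1024   # request new sizes (illustrative values)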

     

    Thanks in Advance, for your time.

    Regards

  13. 1 hour ago, aprayoga said:

    Regarding OS on hard drive, it's possible as long as it does not reside on top of RAID. U-Boot does not understand MD so it could not load boot files from there.

    You need to sacrifice 1 drive to be used as the OS drive, or maybe, since I haven't tried it, allocate a partition for the OS and let MD use the rest of the disk capacity.

    That being said, currently we don't support OS on hard drive.

    Hello aprayoga,

    Thanks for your reply.

    At the beginning I was thinking of putting the OS on a separate partition, with LVM managing it and the mdraid separate from that; a /boot partition would also need to be there..

    But later I understood that it would wake up my drives a lot, since the OS would live there, and so the head park/unpark load cycles would increase dramatically; the alternative is disabling the idle APM power feature and having the disks spinning all the time, consuming lots of energy even when you are not using the NAS...

     

    I realized that, for having the OS on a hard drive... it had better be a 2.5" one, for less power consumption..

    But that also means sacrificing one disk from the mdraid..

     

    It's not an easy answer, in fact.

    A solution could be having a small 2.5" hard drive attached to USB, with the OS and logs..

    Or leaving it, as it already is..., on an SD card, with logs in zram..

     

    Right now, after your response and after thinking it over, the SD card seems the easier way to go.

    But it could be nice to have a USB boot option there, for a pendrive or a USB-attached disk.

     

    With both of these options we could keep the 4 disks for what matters most... the mdraid :)

     

    Thanks for your reply,

    Regards

    Doubts about the OS in the disk array:

     

    Could we put the OS on the hard drives?

    Or should we put it on a spare one / a USB pendrive, or leave it on the SD card?

     

    Which method is best?

    I am afraid that putting it in the disk array will increase the Load_Cycle_Count as the heads park/unpark...?!

    root@helios4:~# smartctl -a /dev/sda|grep -E ^193
    193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       61
    root@helios4:~# smartctl -a /dev/sdb|grep -E ^193
    193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       61
    root@helios4:~# smartctl -a /dev/sdc|grep -E ^193
    193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       55

     

    I would appreciate it if anyone can help me choose the best option.

    Thanks in Advance,

    Best Regards

    tux

     

  15. On 1/13/2019 at 4:13 PM, Harvey said:

    Hi, (happy) new Helios4 owner here. I have a question regarding power consumption:

     

    I'm running a new, clean setup based on Armbian_5.68_Helios4_Debian_stretch_next_4.14.88.img.xz.

     

    With no additional drives powered up, a cheap power meter shows the following results:

    • System running idle: 4-5 W
    • System halted after "poweroff": 6-7 W.

     

    With two WD Red 8TB EFAX (still unconfigured) and one SSD 128GB (root fs) I get

    • System running idle, disks spinning: 23 W
    • System halted after "poweroff": 9 W.

     

    Why does the halted system consume that much energy?

     

    TIA

       Heinz

     

     

    Hello Harvey,

    Thanks for your stats :thumbup:

     

    I have 3 IronWolf 4TB disks,

     

    Measurements at the wall:

    1) System running idle, disks in standby (configured in power mode 127): ~6.5W

    2) System halted (after running the 'halt' command, power LED off..): ~10-11W (fans at full throttle, with my config above..).

    Maybe the disks here exit power mode 1 and go to a middle state, I don't know; maybe if I measure for a longer time I will get a better picture..

     

    With the disks spinning it depends on what I am doing; I measured peaks of ~34 watts, but very briefly.

    I haven't done any stress tests yet,

    but I configured the disks to start in standby and to limit the current used during spin-up..

     

  16. Hello,

    Here is the link for the Seagate SeaChest Utilities documentation:

    Seagate SeaChest Utilities

     

    In case anyone has Seagate IronWolfs or other Seagate disks.

     

    I have been playing around with these tools..

    With this base settings:

    # Enable Power-Up In Standby (PUIS)
    # It's a persistent feature
    # No power cycle required
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --puisFeature enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --puisFeature enable
    
    # Enable Low Current Spin-Up
    # It's a persistent feature
    # Requires a power cycle
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sda --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdb --lowCurrentSpinup enable
    ${SEA_PATH}/openSeaChest_Configure --device /dev/sdc --lowCurrentSpinup enable

    Unfortunately the Free Fall feature is NOT supported, at least on my IronWolf 4TBs,

    which would be a must to control acceleration levels... (I already killed a WD Red 3TB.. with a free fall.. of 1 meter or so..)

    The feature could be present but not permit changing levels.. at least it seems so..

    root@helios4:~# openSeaChest_Configure --device /dev/sda --freeFall info|grep -E "dev|Free"
    /dev/sg0 - ST4000VN008-XXXXXX - XXXXXXXX - ATA
    Free Fall control feature is not supported on this device.

    Could it be related to the latest release of the SeaChest tools? I don't know..

    If someone knows how to tune the acceleration levels, I'd appreciate it..

     

    In the meantime,

    I made some basic tests with the Seagate IronWolf 4TB ST4000VN008 (3 disks):

    APM mode vs power consumption (power measured at the wall... it includes the Helios4's own consumption)

     

    1) With the CPU idle at 800 MHz, and the disks at minimum APM power mode 1:

        Power consumption is between 4.5-7 watts, maybe averaging around 6-6.5 watts.

        1.1) When issuing 'hdparm -Tt' on all disks,

               power peaks at 30 watts max, but stays on average at around 15-25 watts.

        1.2) Then it goes back down to around 6-6.5 watts.. mode 1.

     

    2) With the CPU idle at 800 MHz, and the disks at APM power mode 127:

        Power consumption averages around 6-6.5 watts (changing the level from mode 1 to 127).

        2.1) When issuing 'hdparm -Tt' on all disks,

               power peaks at 34 watts, but it is around 13-25 watts, the majority of the time between 17-23 watts..

        2.2) Then it enters mode 1.

     

    3) With the CPU idle at 800 MHz, and the disks at APM power mode 128:

        Power consumption averages around 6-6.5 watts (changing the level from mode 1 to 128).

        3.1) When issuing 'hdparm -Tt' on all disks,

              power peaks at 34 watts, but it is around 13-25 watts, the majority of the time between 15-23 watts..

        3.2) Then it goes down to around 15-17 watts for some time, ..the majority of the time ~16.5 watts..

              I haven't checked its final state, as I only monitored for some 10-15 minutes.

     

    4) With the CPU idle at 800 MHz, and the disks at APM power mode 254 (max performance):

        Power consumption averages around 6-6.5 watts (changing the level from mode 1 to 254).

        4.1) When issuing 'hdparm -Tt' on all disks,

               power peaks at ~34 watts, but it is around 13-28 watts, the majority of the time between 17-23 watts..

        4.2) Then it bounces between 13-20 watts for some time, maybe averaging around 19 watts..

               I haven't checked its final state, as I only monitored for some 10-15 minutes.

               I assume they enter mode 128..

     

    In my opinion power mode 127 is the best fit for power consumption vs performance..

    It penalizes performance a bit, but all in all it seems the best fit, with the bonus of ~6.5 watts at idle.

     

    So on reboot or start I issue:

    # Set the APM level to 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sda --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdb --setAPMLevel 127
    ${SEA_PATH}/openSeaChest_PowerControl --device /dev/sdc --setAPMLevel 127
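
    To verify which power state the disks actually reach (hdparm -C queries the state without spinning the drive up):

    for blockid in {a..c}; do hdparm -C /dev/sd${blockid}; done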

     

  17. On 2/1/2019 at 4:26 PM, gprovost said:

    @tuxd3v Let me clarify something. The reported temperature of CPU is the temperature of CPU die (as for any CPU), it doesn't represent the ambient temperature of the case where your HDDs are, neither does it represent the temp of the heatsink on the top CPU. Therefore your HDD weren't exposed to 70C temperature at anytime, maybe at worst 50C ambient temp. So let's not be alarmist ;-)

     

    The scaling governor is already set as 'on-demand' by default. It means the CPU freq will vary from 800MHz to 1.6GHz, but at the end it doesn't impact much the power consumption / CPU temperature. The Marvell Armada SoC (CPU) is designed to run at 100C and it's very common for such SoC to quickly reach very high temperature. The current board is designed to actually be 100% passive cooling, but a bit of active cooling is always best, mainly to not impact other component like the HDD in the case of Helios4. 

     

    The case is designed to have a well contained horizontal air flow that cools the SoC heatsink and the HDD surfaces. It's a pretty simple but very efficient design.

    Hello gprovost,

    You're right, sorry if I was unclear (English is not my mother tongue) :)

    I made some tests without the disks, and the temps are a lot lower..

    Even at the heatsink level, it doesn't get very warm..

    The fans are now running nicely with the config help from Harvey (thanks for that!), and they change pace (as expected, with CPU load..).

    Rebuilding the RAID:

    root@helios4:~# hddtemp /dev/sd{a,b,c}
    /dev/sda: ST4000VN008: 25°C
    /dev/sdb: ST4000VN008: 29°C
    /dev/sdc: ST4000VN008: 29°C

    I didn't know the CPU supported 100°C, that's nice!

     

    I noticed that the governor is configured as ondemand,

    but if I restart, the CPU is afterwards fixed at 1.6 GHz (though the governor says ondemand);

    if I set it to ondemand explicitly, then it goes to 800 MHz..

    It could be me.. Until now (forgetting my initial false findings..), I am very pleased with the Helios4.

     

    During the RAID5 reconstruction it is doing around 170 MB/s.

    It's the only metric I have so far with the disks..

     

     

    Does anyone have some metrics about RAID read/write speeds?
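
    For anyone who wants to compare numbers, a simple sketch (assuming the array is /dev/md0):

    cat /proc/mdstat        # shows rebuild progress and the speed md reports
    hdparm -tT /dev/md0     # quick sequential read benchmark once the array is idle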

    You know, new toy in the house... "even my pets get Jealous" :D

     

    Thanks in Advance

     

  18. 7 hours ago, gprovost said:

    @tuxd3v and @Harvey You're right guys, we overlooked HDD temp with the new fans. We addressed a complaint from batch 1, where the type of fan we used (Type A) was always spinning even at the lowest PWM. We decided to use fan Type C for batch 2. A comparison between the 2 fans can be found here.

     

    We changed the fancontrol configuration to cover both fan types, but we didn't properly take into consideration HDD temp in the case of the new fans :-/ We will fix that in the next Armbian release and post / communicate to people how to fix the configuration on already-running systems.

    Hello gprovost,

    Thanks for that.

     

    I found one of the heat sources was the CPU fixed at 1.6 GHz, with the fans stopped.

    I switched it to 'ondemand' and activated fancontrol for fan0; temps are starting to drop..

    These fans seem to be of very good quality; I haven't checked in detail yet, but they seem very good.

     

    I was also naive

    in setting up the NAS with the disks inside from the beginning..

    I should first have connected everything without the SATA disks, and only once temperatures were nice put the disks in..

     

    Anyway,

    the holes in the NAS case acrylic should be higher than they are now.

    I understand that then the SATA cables would be too short, but with longer ones the disks would be better protected from heat, since they would sit further from the heat source..

     

    But this opinion is for the Helios4 team; nothing to do with Armbian, I think.

     

    Thanks for the clarification :)

  19. 16 hours ago, Harvey said:

    Hi tuxd3v,

    my experience is quite different.

    When setting up my new box, CPU temp always was below 58 °C, so the fans never got started. (According to https://wiki.kobol.io/pwm/#configuration-file, fans only start at 70 °C CPU temp.)

    With these settings, my fans are always running slowly and silently, but efficiently cooling my disks to not more than approx. 33 °C.

    Hello Harvey,

    Thanks for your reply.

    My fans also never started in the beginning, but now I have fan0 active at low speed and fan1 stopped..

     

    I found one of the excessive heat problems..

    It was the governor:

    the CPU freq was set to 1.6 GHz; without cooling it reached 73°C, and my lower disk spent 24 hours in this condition.. :blink:

    I don't know what to think of its state now (but at least I made the right purchase, as IronWolf disks are rated at 70°C max..). The fix was:

    cpufreq-set -g ondemand

    Thanks for the explanation of fancontrol,

    I configured it like this:

    INTERVAL=10
    FCTEMPS=/dev/fan-j10/pwm1=/dev/thermal-cpu/temp1_input /dev/fan-j17/pwm1=/dev/thermal-cpu/temp1_input
    MINTEMP=/dev/fan-j10/pwm1=55 /dev/fan-j17/pwm1=65
    MAXTEMP=/dev/fan-j10/pwm1=65 /dev/fan-j17/pwm1=70
    MINSTART=/dev/fan-j10/pwm1=20 /dev/fan-j17/pwm1=20
    MINSTOP=/dev/fan-j10/pwm1=29 /dev/fan-j17/pwm1=29
    #MINPWM=0

    Temps right now are below 57°C:

    root@helios4:~# cat /dev/thermal-cpu/temp1_input
    56550

    It's a better start :thumbup:

  20. Hello,

    I am new to the Helios4, just assembled my NAS, and bought 4 IronWolf 4 TB disks for a RAID6..

     

    After reading the wiki, I noticed that writing U-Boot to flash only allows me to put the OS on a ...USB dongle, is this correct?

    I was expecting to also use my RAID setup for the OS.

     

    Another thing I noticed is that the CPU heats up a lot; even without work, sitting idle, it goes through more than 70°C..

    The good thing: 10 seconds with fan 0 on and it decreases to ~66°C.

    In this situation the disk close to the board suffers a lot from the heat, as it is so close to the board; the disks should be placed higher, away from the CPU heat source..

     

    Also, the fan only stays on for a few seconds, then shuts off.. I don't know what process is doing that..

    I am using the latest Debian Stretch from 20/01/2019.

     

    I am also developing a fan tool (ATS), but it doesn't yet support the Helios4's dual fans..

     

    Does anyone have some hints for setting up the fans correctly, and for booting from RAID6 SATA disks?

     

    Thanks in Advance

    Regards

  21. Hi Guys,

    I wanted a Debian Stretch based OS, and I followed Olimex's recommendation of Armbian.

    Then I visited the page of the Lime2 (not NAND or eMMC):

    https://www.armbian.com/olimex-lime-2/

     

    The Stretch image that is there... is configured to boot from eMMC :(

    I want a Debian Stretch based version.

    I checked the sha256 checksums: OK.

     

    Serial Debug Output:

    U-Boot SPL 2017.11-armbian (Jan 25 2018 - 01:01:01)
    DRAM: 1024 MiB
    CPU: 912000000Hz, AXI/AHB/APB: 3/2/2
    Trying to boot from MMC1
    mmc_init: -110, time 37
    spl: mmc init failed with error: -110
    SPL: failed to boot from all boot devices
    ### ERROR ### Please RESET the board ###

     

    How can I solve this situation?

    Can I write a new U-Boot version? How do I do that on the Armbian_5.38_Lime2_Debian_stretch_next_4.14.14.img image?
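
    If it helps, the usual sunxi way to rewrite U-Boot on the SD card is the following (a sketch; double-check the image name and the target device, as this overwrites the card's bootloader area):

    dd if=u-boot-sunxi-with-spl.bin of=/dev/sdX bs=1024 seek=8
    sync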

     

    thanks in advance.

     

    Best Regards

    tux
