Pi Fan issues on Orange Pi PC



Greetings,
Firstly, I would like to say that I really appreciate the work that has been done by this wonderful community of developers and users of Armbian.
Secondly, I am sorry if the problems I describe here have already been discussed; I have not found anything similar.
 
So I am a proud owner of the OPi PC board. Everything seems to be OK, except some fan issues.
 
What have I done until now:
 
1. Installed Armbian Jessie Server with RPi-Monitor
2. Added:
   a. heatsinks on the SoC and RAM chips
   b. a Pi fan with the case
 
The problems:
 
1. The fan does not work when connected to pins 2 and 4, BUT it works using pins 4 and 6.
 
   a. It is extremely loud (see the video), very, very loud :( Is it possible to reduce the RPM?

 

 

At idle the SoC temperature is between 24°C and 26°C while the ambient temperature is 20-23°C (see the screenshot below), which is very good, so I would not mind decreasing the fan's RPM even if that resulted in 36-40°C at idle.
 
[screenshot: orangepipc_terminal.png]
 
   b. After the board is powered off (sudo poweroff), the fan keeps spinning. How can I get it to power off as well?

 

If there is some more information that I can provide please let me know.

 

Thank you!


1. The fan does not work when connected to pins 2 and 4, BUT it works using pins 4 and 6.

 

   a. It is extremely loud (see the video), very, very loud :( Is it possible to reduce the RPM?

 

Pins 2 and 4 are 5V and pin 6 is GND, so the fan can only work when you use pin 2 or 4 together with pin 6 (or any of the other GND pins). As for temperatures, heatsink and fan, it's easy:

  1. H3 is a SoC made for OTT boxes, designed to be crammed into tiny enclosures with crappy thermal design. Its maximum working temperature by specs is 125°C -- Armbian starts to throttle way earlier, so you're always safe. This chip is designed to operate in a thermal range where touching it with a finger will hurt for sure (but it's a SoC and not a part of your body, so relax and don't care :) )
  2. Thermal readouts when using Armbian are off by approx. 10-15°C. This is simply the result of comparing Armbian booted with Allwinner's u-boot 2011.09 vs. mainline u-boot, which is our default. So expect the real temperatures to be 10-15°C above the readouts
  3. Our throttling settings take care of this and provide enough safety headroom. Since throttling works you need neither a heatsink nor a fan (but then you have to expect your board to run slower under constant full load)
  4. Adding a heatsink is useful if you plan to constantly operate your device under full load, since it avoids throttling or at least delays it
  5. Adding just a fan (without a heatsink) is never useful, since even cheap heatsinks perform better than a fan alone
  6. Adding an annoying fan to a heatsink is neither necessary nor useful as long as you use Armbian default settings (if you use OS images from Xunlong or loboris or others based on those, it's a different story). Still: since throttling prevents overheating, the SoC is safe. A fan is only necessary if you want to do number crunching (then you chose the wrong device anyway) or do consumption/performance measurements like me
  7. Different heatsinks perform differently. The one you chose is unfortunately low price, low profile and low performance (I tested the same one on a Pine64, just to remove it after an hour and throw it away)
  8. Heatsinks always help a lot with heat dissipation, but they work better the more airflow is possible around them. Use heatsinks with large gaps between the fins if you want to make use of convection; if the distance between fins is too small you need additional airflow
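To check the readouts discussed above yourself, the driver's thermal value can be read straight from sysfs. A minimal sketch, assuming the usual thermal_zone0 path (pass another file as an argument if your kernel exposes the sensor elsewhere):

```shell
#!/bin/sh
# Print the SoC temperature in whole degrees C.
# The default path is an assumption (thermal_zone0 on typical H3 kernels);
# pass a different file as $1 if your board exposes it elsewhere.
soc_temp() {
    zone="${1:-/sys/class/thermal/thermal_zone0/temp}"
    raw=$(cat "$zone") || return 1
    # Mainline kernels report millidegrees, old legacy kernels plain degrees;
    # treat anything above 200 as millidegrees.
    if [ "$raw" -gt 200 ]; then
        echo $((raw / 1000))
    else
        echo "$raw"
    fi
}
```

Whether your kernel reports degrees or millidegrees depends on the thermal driver, hence the heuristic threshold in the sketch.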

That being said: your temperature readouts are perfect, bearing in mind that they are 15°C higher in reality (H3 can cope with that; it's not a human being, it's just a chip designed to operate at high temperature in hot environments -- rated for up to 70°C ambient temperature!). You don't need the fan; try removing it and see whether the additional airflow alone already helps. If that does not help, remove the whole enclosure and choose one with more space above the SoC.

 

You can use 'sudo armbianmonitor' with the -m switch to get instant CLI-based monitoring, while -r will install RPi-Monitor. And by browsing the H3 forums you will find many RPi-Monitor graphs showing that there's nothing to worry about as far as SoC temperature is concerned. Our settings (voltage settings and thermal definitions) always protect you.


If you want the fan to stop, the simplest way seems to be to supply it from a USB port.

But I think passive cooling is much more suited to that kind of device.

I agree with tkaiser on the fact that H3 is not well suited to number crunching.

Being interested in SDR, I had a chance to compare it with a mini PC based on an Intel Z8300.

On the other hand, it is better at UHD video decoding, but my best results were with Android :)

For cooling those chips, using thermal pads between the board and a metal plate is a good idea too.

It seems they were not designed to facilitate the use of any kind of radiator.


...

 

It seems they were not designed to facilitate the use of any kind of radiator.

That is also my feeling. So I would like to know whether the cooling of the ODROID boards is good, because it seems the radiator is part of the design (and factory-assembled?). And I wonder whether it is possible to cool a NanoPi NEO (the model without Ethernet and USB connectors) with thermal pads stuck to an aluminium enclosure.


I agree with tkaiser on the fact that H3 is not well suited to number crunching.

 

Sorry, I meant: using ANY Cortex-A7 CPU cores for number crunching is simply stupid. That's not specific to H3.

 

Small ARM devices get interesting for HPC when the GPU engines can be used (mostly a matter of software support), but the CPU cores are too weak and the performance/watt ratio is too bad. It's a different story if these devices provide other services not related to number crunching, e.g. serving static web pages. Then, with a good setup and good settings, the performance/watt ratio might be awesome.

 

In other words: you need a heatsink + fan only if you plan to constantly run your H3 device at the upper limit. And then you simply did something wrong, because you chose the wrong device. So save the fan and enjoy the silence.

 

BTW: I now have 13 H3 devices lying around, one of them in semi-productive use. I already bought some acrylic shields to build a cluster out of the other 12, more or less just for fun, or to demonstrate with aggregated cpuminer results how the efficiency of such a cluster can be increased by good settings and good cooling: two large and silent fans on top and bottom of the 'enclosure', with the boards mounted vertically to ensure enough airflow across the heatsinks' surfaces.

 

When I did initial calculations regarding consumption and looked at the performance numbers of individual 'cluster nodes', I came to the conclusion to simply stop, because such a cluster is BS anyway. I already knew the results (on average I get 2 khash/s per H3 device, which results in 24 khash/s for my 48-core cluster), I already knew from experiments with a Pine64 a few months ago that this cooling approach would work, and I already knew there would be no new insights, since the H3 boards I use are too different. Such a test would be interesting with 12 OPi PCs and 12 OPi Ones, to get an idea of how the different voltage regulators influence performance -- but not with a mixture of both device types.

 

Anyway: compare such an H3 CPU cluster getting 24 khash/s with what you get when doing it right (general-purpose GPU processing) and you know what SBC clusters are good for with regard to number crunching: street credibility and nothing else. If the calculations can't run on GPU cores, simply drop the idea.

 

That is also my feeling. So I would like to know whether the cooling of the ODROID boards is good, because it seems the radiator is part of the design (and factory-assembled?). And I wonder whether it is possible to cool a NanoPi NEO (the model without Ethernet and USB connectors) with thermal pads stuck to an aluminium enclosure.

 

Everything is answered in post #123 here. Adhesive heatsinks work (some better, some not so much), FA's heatsink using a thermal pad works better, and so will the combination of a NEO + thermal pad + aluminium enclosure, if the enclosure's surface is large enough and ideally uses fins to make use of convection.

 

Regarding the ODROIDs: sure, their heat dissipation is better, since they are designed for heavy use, and there a large radiator with a perfect mounting solution helps avoid throttling. But these devices serve different use cases. Their customers buy ODROIDs to run desktop environments (GPU acceleration, which also adds a lot of heat), and such things aren't possible with the NEO anyway (based on my tests it pretty often simply deadlocks when running at full load with 1200 MHz enabled and no active cooling -- so maybe this is due to other components on the PCB? Don't know, don't care; this is an IoT node for lightweight stuff).


...

But these devices serve different use cases.

Sure. And that is what I want to investigate: the devices are designed mainly for some use case, with a matching hardware allocation. You have to choose the device that best corresponds to your hardware needs, but then you also have to take the thermal design into account if you need power.

 

This thermal design includes PCB layers (Orange Pi's solution), PCB size (Beelink), chip placement (NanoPi NEO), processor choice (64-bit), a good heatsink (ODROID), throttling (all of them?), active cooling, and a large enclosure. You can try to improve that basic design and "misuse" a device by adding a heatsink, fans, hardware and software drivers for the fans, or a special enclosure. But it will cost a lot of money and work to get a good result -- and then possibly invalidate the initial choice.

 

On the other hand, the thermal solution chosen by the manufacturer may be more or less efficient: Beelink seems a bargain with its large PCB, large case and heatsink included in the price, but, as they say, not that efficient. There are some x86 devices designed to be cooled by the enclosure, like an audio amplifier (thick aluminium case with fins). But I don't know of any ARM device like that, and I think it would add too much to the price of boards built around a $5 SoC -- more than customers are ready to pay. But that is a short-sighted view: a good heatsink is expensive and difficult to mount, a noiseless system needs slow huge fans AND airflow guidance, and a large case is expensive and will enclose the I/O ports.

 

I like transparent acrylic enclosures. I have one for a BPi M1 and am happy with it because the A20 doesn't need special cooling precautions. I enclosed another one with a HDD in a plastic case (a pity the BPi R1/Lamobo is too expensive, as it is designed to enclose a 2.5-inch drive). In the present case -- improving and controlling the ventilation of an H3 in a small and pretty enclosure -- it does not make sense to add a 12 cm fan with a funnel on top, with wires everywhere, and it is simply impossible to mount a big heatsink.

 

I have assembled a lot of x86 PCs over 15 years with minimal noise as the goal: on systems with a 60 W TDP, mounting the heatsink is always the trickiest part, and it needs screws or stiff springs to provide significant pressure (and of course a good heatsink and preferably silver paste). Of course we surely don't need such extreme solutions for a 5 W TDP SoC (BTW I don't know the real size of the die -- and I think it is an important factor). But it is good to remember the meticulous assembly necessary for installing a heatsink, because a heatsink is useless or even worse than nothing if badly mounted. And if you remove a heatsink, you need to replace the paste or the pad...


But it is good to remember the meticulous assembly necessary for installing a heatsink, because a heatsink is useless or even worse than nothing if badly mounted. And if you remove a heatsink, you need to replace the paste or the pad...

 

It gets even worse. On the ODROIDs, removing the heatsink might destroy the SoC (insider talk: inside the Spreedbox are ODROIDs with the heatsink removed, since the milled aluminium enclosure works as a huge and far more effective heatsink than Hardkernel's own -- the only problem: removing the vendor's heatsink without destroying the board, or ordering large enough batches to get them w/o heatsink).

 

With H3 devices it's pretty easy: either use a small adhesive heatsink like I always do, or on the NEO use the board's design with the SoC on the other PCB side and a thermal pad that connects the SoC (and maybe the freaking hot U7 also) to a large metal plate, be it an enclosure or something else. And then simply use Armbian (we default to 912 MHz max cpufreq on the NEO for a reason, we implement throttling the right way; all you have to do is either trust in or verify the efficiency of our settings -- armbianmonitor might help with the latter).

 

It's really that easy. These el cheapo heatsinks reduce SoC temperature by ~10-15°C under full load (I got packs of 10 for $2.79 a while ago, so $0.28 per heatsink), and combining a thermal pad with a large metal surface might work even better, as can be seen from the aforementioned heatsink test. When using the legacy kernel, throttling will do the job; when using the vanilla kernel you're currently at risk with the smaller H3 devices and should think about limiting max cpufreq to lower values in /etc/default/cpufrequtils.
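A cap like the one suggested above lives in /etc/default/cpufrequtils on Armbian. A sketch of what such a file might look like (the values are illustrative, not Armbian's shipped defaults; governor availability depends on the kernel):

```shell
# /etc/default/cpufrequtils -- illustrative values, adjust per board.
# Frequencies are in kHz.
ENABLE=true
GOVERNOR=interactive   # assumption: available on the legacy kernel; mainline may offer ondemand/schedutil instead
MIN_SPEED=480000       # 480 MHz floor
MAX_SPEED=912000       # e.g. cap a NEO at 912 MHz, as suggested above
```

After editing, restart the cpufrequtils service (or reboot) for the limits to take effect.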

 

Edit: There are tons of 14x14mm or 15x15mm heatsinks available that all fit perfectly on any H3 board: http://www.aliexpress.com/store/group/Heatsinks-Compound-Thermal-Paste/515717_258239137.html -- and combining a NEO, a thermal pad and something like this with your own 3D-printed enclosure, you will get perfect thermal behaviour for sure. But IMO it's not necessary -- limiting max cpufreq to 912 MHz on the NEO might impact performance by 25% but helps a lot in reducing heat.


BTW, how do you measure chip temperature? As you said somewhere, the temperature reported by the sensor on the CPU is underestimated, and there are no sensors on the RAM, network chips or DC converters. I tried to use an IR thermometer, but the temperature is highly underestimated compared with the sensor. And such a thermometer cannot measure reliably on reflective surfaces (in particular aluminium).


BTW how do you measure chip temperature ?

 

I don't. We rely on readouts produced by a driver: the result of a sensor somewhere and an ADC converting values into bits.

 

When we started with Armbian on H3 boards, it was me who realized that these readouts are lower than before (I added a warning to the OPi One page back then, and you may find discussions in the linux-sunxi IRC log). First I used an Armbian build with mainline u-boot, our thermal settings and our kernel, and used a script to monitor the temperatures reported at idle and under load while iterating through cpufreq steps from 480 MHz to 1296 MHz. Then I took an image from loboris, exchanged the rootfs with ours, replaced script.bin with our settings, converted our kernel (zImage vs. uImage) and did the same again. Temperatures are off when using mainline u-boot (~15°C at the lower end, ~10°C at the upper).
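The monitoring described above (logging reported temperature alongside the current cpufreq over time) can be sketched as a tiny shell loop. The sysfs paths are assumptions matching a typical H3 setup and can be overridden:

```shell
#!/bin/sh
# log_thermal FREQ_FILE TEMP_FILE [SAMPLES] [INTERVAL]
# Logs timestamp, current cpufreq and raw thermal readout once per interval.
# Default paths are assumptions (typical H3 sysfs layout); override as needed.
log_thermal() {
    freq="${1:-/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq}"
    temp="${2:-/sys/class/thermal/thermal_zone0/temp}"
    samples="${3:-5}"
    interval="${4:-5}"
    i=0
    while [ "$i" -lt "$samples" ]; do
        printf '%s %s %s\n' "$(date +%H:%M:%S)" "$(cat "$freq")" "$(cat "$temp")"
        i=$((i + 1))
        if [ "$i" -lt "$samples" ]; then
            sleep "$interval"
        fi
    done
}
```

Run it once per u-boot/kernel combination while a load generator iterates through cpufreq steps, then compare the two logs.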

 

That's it. H3 has a thermal sensor for a reason; the vendor has published specs and its budget_cooling code to make use of this sensor. We realized that something had changed (readouts too low) and we adjusted our settings accordingly to leave enough safety headroom. Whether these readouts are correct or not, I simply do not care at all. Their original purpose is overheating protection through throttling and we use them that way. And whether/how surface temperatures on a chip are related to the internal readouts (engineers placed the thermal sensors somewhere inside for a reason and developers wrote drivers to make use of them) is of zero interest to me. :)

 

BTW: This is different compared to e.g. the A20 or any older sunxi SoCs. There a thermal sensor also sits inside the SoC, but more by accident than by design (it is part of the touch panel controller). Relying on that sensor is somewhat weird (but it will be done), whereas on the H3 we can trust the values we get after we apply some corrections (in the form of safety headroom and starting throttling earlier than necessary).

 

BTW 2: I tortured a BPi M2+ for over 5 days running at 90°C (so most likely 100°C+ when booting with Allwinner's u-boot 2011.09): no problems, the board still works.


...

I tortured a BPi M2+ for over 5 days running at 90°C (so it's most likely +100°C when booting with Allwinner's u-boot 2011.09), no problems, board serves still.

 

I am happy to read this, as I own one of those! I am not afraid of CPU overheating: for my current usage, throttling is the best solution. When I want a SoC able to run at full speed, I will buy one designed for that even if it costs more, or use a huge (if unaesthetic) radiator. One has to know what one wants.

 

In fact, my current worry is more about enclosures. I would hate to spend time and money installing a SoC in a nice box only to discover afterwards that I should not have! I have seen complaints about chips measured as very hot with a wet finger! I had a look at the PMU datasheets: they are designed to support very high temperatures. DRAM is another problem: I don't know the specs, don't know how to measure its temperature, I am not sure we can get a good assembly with a radiator that covers multiple chips, and I don't know whether it is a problem at all, apart perhaps from graphical usage that may stress it.

 

We should do tests like jet engine manufacturers do: destructive tests. Any volunteers here? (Just kidding.)

 

(BTW, those who are worried about possibly damaging their board should record the brand-new SoC's temperature under well-known software/hardware activity, as a reference in case of doubt later.)


Thank you tkaiser and FordPerfect for such detailed answers.

 

I bought the heatsink with the fan because I am planning to use the OPi PC as a central home server which will run gogs.io, LAMP, a torrent daemon with a web UI, VPN and LAN file sharing between Linux/Android machines, so I thought it would be good to have the processor cooled well enough. I would also like to make several of these services available on the internet.

To be honest, I did not consult this forum on this matter beforehand. Anyway, my initial thought was to cool the processor so that it would not literally burn while I am away, to avoid any kind of fire in the house. Maybe it is silly.


BTW how do you measure chip temperature ?

 

Put your finger on the chip. If you can still touch it, you are just fine even without a heatsink. Ever since the settings were adjusted to sane values in Armbian, OPi One/Lite boards run reliably without a heatsink. As already mentioned, a cheap heatsink with some thermal paste adds a safe margin, unless you plan to steam up your board in a closed plastic box.

This topic is now closed to further replies.