wtarreau
Reputation Activity

  1. Like
    wtarreau got a reaction from TRS-80 in Armbian on "M8S Plus W" (905W without microSD)   
    It would have been nice to at least keep the mention that the device Balbes was talking about was the X96 Max; it's not specific to any store, and it's just a board with a supported SoC (S905X3). Otherwise his post becomes confusing or misleading, since it's different hardware from the one posted above.
  2. Like
    wtarreau got a reaction from NicoD in Daily (tech related) news diet   
    Please note, my build farm is used for cross-compiling only. I used to do native builds 20 years ago, but after having been bitten several times by accidental dependencies on the build host, I stopped, and I always cross-compile nowadays, even when doing x86 on x86. That's why I can use whatever host is available for the build farm. My build farm at home is heterogeneous: it's made of the armv8 boards above, one armv7 board (Odroid XU4), and sometimes some x86 hosts when those devices are up. So yes, I'm a huge proponent of cross-compiling.
  3. Like
    wtarreau got a reaction from datsuns in Quick review of NanoPi Fire3   
    Use sales@, I've used it several times and it works. Oh, and put a link to the discussion in this forum so that they know you're not just doing random stuff but are actually using their boards properly. I wouldn't be surprised if they're used to receiving the occasional complaint from people who accidentally erase their micro-SD card and then cry for help.
  4. Like
    wtarreau got a reaction from datsuns in Quick review of NanoPi Fire3   
    You should try to connect a serial adapter to its console port to see if it emits anything at boot. If it doesn't, it's probably dead. If it emits random errors which differ on every boot, it could be the RAM that became defective. This happened to me on a few boards in the past, and on one of the MIQIs in my build farm. It is also possible that a solder joint has gone bad under a BGA chip; that's the most common failure cause in modern hardware (especially smartphones). It's even worse with RoHS and lead-free solder, because lead-free tin is less elastic and breaks more easily. I've repaired a few such dead boards using a hot air gun and an infrared thermometer, but each time you know you're probably killing the board even more, so you have to be prepared for that.
    Note that it can also work in an oven if you want to try. Most of the time it doesn't work and can even make things worse, but if the board is dead, there's nothing to lose by trying. Just pre-heat your oven to 180°C without the board and make sure the temperature is stable (otherwise pre-heat it to 200 and switch it off). Then place the board inside for 3-5 minutes. Don't touch it to extract it; open the oven and let it cool down, and pick up the board once the temperature has dropped enough for the solder joints to be solid again. Some plastic parts may get slightly damaged if the oven is too hot (e.g. the reset button).
     
     
    The datasheet specifies an operating temperature of -25 to 85°C (both ambient and die), and -40 to 150°C for storage. In short, it means the chip must work flawlessly at 85, that beyond this it only will if you're lucky, and that at 150 you're definitely certain to fry it. Also, running a chip past its maximum operating temperature risks hanging it, and a hung chip may quickly get damaged. But below 85 there is zero risk for the device itself. If you're overclocking, the margin may be lower.
     
     
    My personal definition of "running pretty hard" is when the chip's temperature never gets the opportunity to fall back into the recommended range ;-)
     
     
    Yep, I agree. I wanted to do it as well, but my copper plates are too thick. However, since my board is inside a very tight cardboard "enclosure", I've added an aluminum plate on the other side, above two efficient thermal pads. It collects some of the heat from the other side and spreads it through the cardboard. That was enough to make the temperature drop by 5-10°C here. But in general, since aluminum is only a moderate thermal conductor, you indeed want to improve its contact surface with the heat source. A copper plate (a much better thermal conductor) does this by spreading the heat all over the base of the heat sink and avoiding hot spots below it. Ideally you'd have a soft thermal pad covering the whole surface of the copper plate, except the silicon die itself, so that heat is extracted from everything below. I've long wondered whether it would be possible to achieve this with a good chunk of silicone glue sticking the copper plate on top of everything. But that would definitely destroy the board.
  5. Like
    wtarreau got a reaction from chwe and cmoski in Quick review of NanoPi Fire3   
    Confirmed, from memory I just added a line with "1600000 1250000". Mine is ultra-stable even with all 8 cores at full speed. It happens to throttle a little from time to time because the heat cannot escape easily from the cardboard enclosure, but that's all. From what I remember of the datasheet, the chip is designed to run at 1.6 GHz, which is why I picked this frequency.
    I also modified the thermal trip points because I don't want the machine to throttle too quickly; I sought the limits on mine and set the points slightly below them. I can upload the file later if anyone wants it. It's just that my temperature limits might cause issues for others, so we need to be cautious: I don't want people to randomly download the file without reading the context, then complain about this board's stability when using Armbian...
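    For illustration, the frequency change looks roughly like this (just a sketch, assuming the dvfs-tables layout of the s5p6818 DT shown in 5kft's post below, where each pair is <frequency-in-kHz voltage-in-µV>; my 1.25 V value is only known to work on my own board):

        dvfs: dynamic-freq@bb000 {
            /* ...other properties unchanged... */
            dvfs-tables = <
                1600000 1250000    /* added: 1.6 GHz at 1.25 V, at your own risk */
                1400000 1200000    /* stock maximum: 1.4 GHz at 1.2 V */
                /* ...remaining pairs unchanged, see the full table below... */ >;
        };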
     
  6. Like
    wtarreau reacted to 5kft in Quick review of NanoPi Fire3   
    Hi @cmoski, in case it is helpful to you: if you are using the "nanopifire3" board configuration ("next" target), you can build the kernel specifying the "CREATE_PATCHES=yes" option, and then modify the generated DT in "cache/sources/linux-4.14.y/arch/arm64/boot/dts/nexell/s5p6818-nanopi-fire3.dts". Look for "dvfs-tables" in this DT file and edit/add the higher frequency:
        dvfs: dynamic-freq@bb000 {
            supply_name = "vdd_arm_spu";
            supply_optional = "vdd_arm_axp";
            vdd_arm_spu-supply = <&VCC1P1_ARM_SPU>;
            vdd_arm_axp-supply = <&VCC1P1_ARM_PMIC>;
            status = "okay";
            dvfs-tables = <
                1400000 1200000
                1300000 1140000
                1200000 1100000
                1100000 1040000
                1000000 1040000
                 900000 1000000
                 800000 1000000
                 700000  980000
                 600000  980000
                 500000  940000
                 400000  940000>;
        };
    In this table the first column is the frequency (in kHz) and the second is the voltage (in µV) provided by the SPU1705 regulator for that frequency. Note that the maximum voltage this regulator can output is 1.265v (please see ".../drivers/regulator/spu1705.c" for details).
     
    My Fire3 could run fine at 1.5GHz using 1.265v, but 1.6GHz was too unstable (random crashes).  The original FriendlyELEC kernel tree sets the maximum to 1.4GHz at 1.2v, so this is what I brought over to Armbian.
     
    I hope this helps...!
     
  7. Like
    wtarreau reacted to mindee in NanoPi NEO4   
    Just a little list for now; more details will appear on the wiki soon.
    1. NEO4 board size is 45 x 56mm, while M4 is 85 x 56mm.
    2. NEO4 has 1GB single-channel DDR3 RAM, while M4 comes in two versions, 2GB DDR3 RAM or 4GB LPDDR3 RAM, with dual-channel memory.
    3. NEO4 will use the AP6212 wireless module with a single antenna, while M4 uses the dual-band AP6356S module with 2x2 MIMO and 2 real antennas.
    4. NEO4 has one MIPI-CSI, M4 has two.
    5. NEO4 has USB 3.0 x1 & USB 2.0 x1, while M4 has USB 3.0 x4 behind an internal VL817 hub.
    6. NEO4 uses a 1.27mm-pitch SMD connector for its 40-pin GPIO header; M4's is the same as the RPi3's 40-pin GPIO.
     
    Both have:
    1. PCIe x2 pin-out
    2. eMMC module connector
    3. GigE port.
    4. Type-C for power supply and OTG.
    5. HDMI-A & microSD slot.
    6. Big CNC heat sink, with 1/4" screw holes on two sides.
  8. Like
    wtarreau reacted to hjc in NanoPi NEO4   
    @wtarreau There's currently a wiki page for the M4 only, not for the NEO4.
  9. Like
    wtarreau got a reaction from tkaiser in NanoPi NEO4   
    I thought they were two different names for the same upcoming board, with various intermediary designs. But you're right, the NEO4 is even smaller than the M4! So yes, the single-channel RAM makes sense. In that case I think I'm more interested in the M4. However, if they made a complete aluminum enclosure like they recently did for the NEO/NEO2, with the buttons and the OLED, it could be very tempting to get one as well, for about everything you can do with a machine lying in your computer bag!
  10. Like
    wtarreau got a reaction from tkaiser in Benchmarking CPUs   
    What you can do is increase the 2nd argument; it's the number of loops you want to run. At 1000 you can lose some precision. I tend to use 100000 on medium-power boards like NanoPis. On the Clearfog at 2 GHz, "mhz 3 100000" takes 150ms, which may be too much for your use case; it reports 1999 MHz. With 1000 it shows a slightly larger variation (1996 to 2000). It's probably OK at 10000. I picked up bad habits on x86, where intel_pstate takes a while to ramp up.
     
    Maybe you should always take both a small and a large count in your tests. That would more easily show whether there's some automatic frequency adjustment: in that case the larger count would report a significantly higher frequency, because part of the loop would run at the higher frequency. Just an idea.
     
    Or perhaps you should have two distinct tools: "sbc-bench" and "sbc-diag". The former would report measured values over short periods, and the latter would run deeper tests to try to figure out what's wrong when the first values look suspicious.
     
  11. Like
    wtarreau got a reaction from tkaiser in Benchmarking CPUs (not yet a research article)   
    Please note that the operating points are usually fed via the DT, while the base operating frequency is defined by jumpers on the board. It's very possible that the DT doesn't reference the correct frequencies here. From what I've seen so far, the Armada 38x has a limited ability to do frequency scaling, possibly something like full speed or half speed only. When I was running mine at 1.6 GHz, I remember seeing only 1600 or 800 being effectively used. I haven't checked since I upgraded to 2 GHz (well, 1.992 to be precise), but I suspect I'm now doing either 2000 or 1000 and nothing else. Thus, if you see a smaller number of operating points, it's possible that they are incorrectly mapped. Just my two cents :-)
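    For reference, here is what DT operating points typically look like with the generic mainline opp-v2 binding (just a sketch; the actual Armada 38x DT may declare its frequencies differently, this only illustrates the kind of table the kernel reads):

        cpu0_opp_table: opp-table {
            compatible = "operating-points-v2";
            opp-shared;                            /* all cores switch together */
            opp-2000000000 {
                opp-hz = /bits/ 64 <2000000000>;   /* full speed: 2 GHz */
            };
            opp-1000000000 {
                opp-hz = /bits/ 64 <1000000000>;   /* half speed: 1 GHz */
            };
        };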
  12. Like
    wtarreau got a reaction from Tido and tkaiser in Amlogic still cheating with clockspeeds   
    This is exactly why there are people like us who dissect products and push them to their limits, so that end users don't have to rely on marketing or salesmen but on real numbers, reliably measured by third parties who have no interest in cheating.
     
    Regarding your point about Tj and lifetime, you're totally right, and in general it's not a problem for people who overclock, because those who want a bit more speed won't keep the device for long once it's obsolete.
    Look at my build farm made out of MiQi boards (RK3288). The Rockchip kernel by default limits the frequency to 1.6 GHz (thus the Tinker Board is likely limited to this as well), but the CPU is designed for 1.8. In the end, with a large enough heat sink and with the throttling limits pushed further, it runs perfectly well at 2.0 GHz. For a build farm, this 25% frequency increase directly translates into 25% more throughput (i.e. 20% shorter build times). Do I care about the shortened lifetime? Not at all. Even if a board dies, the remaining ones together are faster than all of them at stock frequency. And these boards will be rendered obsolete long before they die.
     
    I remember a friend, when I was a kid, telling me about an Intel article explaining that their CPUs' lifetime is halved for every 10 degrees Celsius above 80 or so, and that based on this I shouldn't overclock my Cyrix 133 MHz CPU to 150. I replied: "Do you imagine I expect to keep using that power-hungry Cyrix for 50 years? For me it's more important that my audio processing experiments can run in real time than protecting my CPU and having to perform them offline."
     
    However, it's important that we are very clear about the fact that this has to be a choice. You definitely don't want companies to build products using these unsafe limits when they will end up in sensitive domains and be expected to run for over a decade. On the other hand, the experiments a few of us run help figure out the available margins and optimal heat dissipation, both of which play an important role in designs for long-term stability.
  13. Like
    wtarreau got a reaction from gounthar in Quick review of NanoPi Fire3   
    By the way, if many of us start buying the board, it may finally become an incentive for someone to design a 3D-printed enclosure. I'd prefer a metal one with a thermal pad serving as a heat sink at the same time, but I'd be happy with anything better than cardboard + duct tape...
  14. Like
    wtarreau got a reaction from gounthar in Quick review of NanoPi Fire3   
    Any "correct" USB power supply delivering more than 1.5A under 5V will work, though you'll have to make you own cable or to solder the wires. But with good quality USB cables, it will also work via the micro-USB port, because the current drawn by this board is not *that* high. I even power mine from a USB3 connector of my laptop which delivers about 1.6A (it's over spec and that's great for this use case). You really need to test. Some reported 1.2A under 5V. It's only 33% higher than the regular USB3 limit (900mA) and may actually work fine with most PCs or chargers due to large enough margins in the design.
  15. Like
    wtarreau got a reaction from datsuns in Quick review of NanoPi Fire3   
    I'm pretty sure it depends on a number of parameters. Mine starts to throttle at 113 degrees C because I found that it works fine up to 120 and I don't want it to throttle for no reason. In your case, for a cluster, it will be difficult to test all the boards and check that they keep running fine over time. But it can also be valuable. I seem to remember reading 90 degrees max in the datasheet, so that could be a good start, but it's very close to the existing limits. I don't know if the stability of your workloads is critical, or if you can take the risk of seeing one board hang once in a while in order to find the limits. One other important factor to keep in mind is whether you're using the GPU or not. I am not, which is why I can trust throttling to cool the chip down. If you are not using it either, you could decide to start with a limit at 105.
     
    I'm only concerned by temperatures getting close to the ones causing instability. For most of my hardware, when I focus on performance I don't care if it shortens its life, since it will be obsolete before it dies. That's why I searched for the limits of my board. You need to keep a bit of margin because it takes some time for the temperature to be reported, and when the board starts to throttle it continues to heat up a bit. However, at very high temperatures it cools down very quickly. Mine throttles at 113 and it rarely reaches 115.
     
    I'm using the stock heat sink, and worse, the whole thing is packed into a cardboard "enclosure" so that it can safely lie in my computer bag. Basically there is no air flow around it; the cardboard only adds latency to the temperature rise and spreads the heat all around. It's totally horrible, and when I leave it for too long on my desk, the desk gets hot under it :-)
     
    For my use cases (mostly a network endpoint for development) it doesn't throttle at all. I've run some build tests: I have enough time to compile for a few minutes before it starts to throttle, and even when it does, it doesn't last long (it oscillates between 113 and 115 degrees).
     
    Oh, I know what you're talking about; I also happen to hate fans for the same reason. I've installed a 12cm fan behind my MiQi build farm at work, which is powered from the central board's GPIO when the temperature gets too high. It's a 12V fan running on 5V, so it probably rotates at less than 1000 RPM and I can barely hear it. The farm at home has much larger heat sinks and no fan. Small fans are noisy and inefficient; you should really pick a large and slow one for your whole cluster. That's what I'd do if I built one (I'd love to, just for fun; it's just that I have no use case for a NanoPi cluster at the moment!).
  16. Like
    wtarreau got a reaction from gounthar in Quick review of NanoPi Fire3   
    Well, all these multi-port chargers never deliver up to the amount they claim. You can safely expect 50 to 66% though, which is not bad overall. I removed the current-limit detection in mine to stabilize the output for the MiQi farm. That said, I never managed to pull more than 1.6A peak from my Fire3 at 1.6 GHz under 1.25V, so you have some headroom I guess. You need to consider that when the board is hot, its DC-DC regulators' efficiency starts to drop, turning more of the current into heat. Thus it's more important to measure the current when the board is already hot if you want to be pessimistic (or realistic). That was the case for me when I measured 1.6A. Quite frankly, you're worrying too much: if it still works when all the boards are loaded, that's fine. If you want to buy more boards, then buy them and plug them into your charger until you find the limit. The charger will either cut one port or completely shut down. Then you'll know how many more chargers you need to buy depending on the number of boards :-)
  17. Like
    wtarreau got a reaction from datsuns in Quick review of NanoPi Fire3   
    I tested a miner on it ("cpuminer" I think) to give numbers to a friend interested in the subject (he was impressed, by the way). I didn't let it run for hours, but after several minutes it started to throttle down to 1 GHz and then stabilized, but it didn't stop (and keep in mind mine is tightly enclosed in cardboard). The modified DTB I'm using certainly helps here with its higher temperature thresholds, but if yours stopped, I suspect you might have too weak a power supply or micro-USB cable. That's always the risk with DVFS: the board consumes very little when idle but a lot under load. I discovered one bad cable in my stock with which the board would reboot in loops. @tkaiser could tell you hundreds of horror stories about micro-USB based power inputs :-)
     
    I find it really awesome; I'd been asking for something like this since I got my NanoPi Fire2 about two years ago! I'm mostly interested in CPU and network, and this is the only board which comes with a CPU, some RAM, a gigabit connector and nothing else! I'm sure there's plenty of unexploited power in it and I'm willing to try to push it further!
     
    I'm attaching my modified DTB; it adds the 1.6 GHz frequency point and the 113/115/120 degree critical points, which work fine for me and considerably limit the throttling (a sketch of the trip-point change follows below the attachment). Save yours before replacing it (variant "rev05"). I have no idea whether my values will work on your board or will even kill it, so use them at your own risk! And please double-check the thermal contact between your heatsink and your CPU.
    s5p6818-nanopi3-rev05.1g6-1v25-113deg.dtb
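    For the curious, the trip-point part of the change looks roughly like this (a sketch following the generic Linux thermal-zone binding, with temperatures in millidegrees; the actual s5p6818 DT may name these nodes differently, and once again these values are only known to work on my board):

        thermal-zones {
            cpu-thermal {
                /* polling delays, thermal-sensors reference, etc. omitted */
                trips {
                    cpu_alert0: trip0 {
                        temperature = <113000>;   /* throttling starts at 113°C */
                        hysteresis = <2000>;
                        type = "passive";
                    };
                    cpu_alert1: trip1 {
                        temperature = <115000>;   /* second passive point */
                        hysteresis = <2000>;
                        type = "passive";
                    };
                    cpu_crit: trip2 {
                        temperature = <120000>;   /* forced shutdown at 120°C */
                        hysteresis = <2000>;
                        type = "critical";
                    };
                };
            };
        };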
  18. Like
    wtarreau got a reaction from datsuns in Quick review of NanoPi Fire3   
    The 3 critical points (in degrees Celsius) for thermal throttling and shutdown. I didn't understand the difference between the first two, as the CPU starts to throttle when the first value is reached. The second *seems* to do nothing; the third one triggers the forced shutdown.
     
     
    I seem to remember that it starts throttling at 80. IIRC the original values were something like 80, 85 and 105.
     
     
    It definitely is very informative to do so. Don't forget that such boards will heat up much more in summer than in winter (you can more or less shift the high temperature by the difference in ambient temperature). The most important thing is that your board remains 100% reliable even when it starts to throttle (the temperature can continue to rise a little at that point). A CPU's sensitivity to temperature may evolve over time, so keep a bit of margin. Also, if you intend to use the GPU, note that it's not throttled and will definitely add to the thermal dissipation; this requires an extra margin.
     
     
    No, the form factor is much smaller, and really well thought out, but absolutely not compatible with the RPi. Unfortunately there is no enclosure for these boards; it's really the only thing missing. You can stack many of them side by side vertically, with just a rear cable for the power supply and a front cable for the network. The overall design is really nice for those who want high power density.
     
  19. Like
    wtarreau got a reaction from datsuns and chwe in Quick review of NanoPi Fire3   
    I'd say around 2 weeks.
     
    The default heatsink is enough if you're not running at 100% CPU full-time. For my use cases it's mostly a network endpoint, and I can run it at 1 Gbps without problems even with the board confined in a cardboard enclosure. But if you run with all CPUs saturated, you'll reach around 5W that need to be dissipated one way or another, and the default heatsink and the PCB are not large enough to dissipate 5W at a low temperature. I significantly raised the temperature thresholds (113, 115, 120) to prevent it from throttling too early. Note that these thresholds are higher than the datasheet's (85°C for the commercial range). But the thermal sensor supports up to 125°C, so there are probably industrial/military grade variants with higher ranges. For a personal project I'd say you have some headroom. For a commercial product you probably don't want to play with this, and you may have to use a small fan or place a thermal pad behind the board against a metal enclosure.
  20. Like
    wtarreau got a reaction from JMCC and chwe in Amlogic still cheating with clockspeeds   
    @tkaiser, in fact there is a small category of users like me who do care about frequency, because they know their workload depends on it. My libslz compressor does (it scales almost linearly). Crypto almost does when using the native ARMv8 crypto extensions. My progressive locks code does as well. Compilation partially does.
     
    The problem I'm facing these days is that when I hit a performance wall with a board, I start to get a clear idea of what I'd like to try, and the minimum number I want for each metric to expect any improvement. For example, I'm not interested in 32-bit memory for my build farms, nor am I interested in A53s below 1.8 GHz. Thus when I want to experiment with something new, I have to watch sites such as cnx-soft and hope for something new. Then when it appears, I have to wonder whether the advertised numbers are reasonably trustworthy, and whether I'm willing to spend a bit of money on something which may not work at all. Recently I've seen this H6 at 1.8 GHz. I'd be interested in testing it, but I'm not expecting a true 1.8 yet, so I'm waiting for someone else to get trapped before I try.
     
    My experience so far is that I'm never going to buy anything Amlogic-based again unless I see proof of the numbers in my use case. It's pointless: they lie all the time and deploy every possible effort to cheat in benchmarks. I wouldn't be surprised to find heavy metal plates in some TV STBs made with these SoCs, just to delay thermal throttling so that benchmark tools have time to complete at full speed before the CPU gets too hot...
     
    However, I do trust certain companies like Hardkernel or FriendlyELEC, who are extremely open and transparent about their limitations and who provide everything I need to pursue my experiments, so I randomly buy something there "just to see" when it's not too expensive. I'm not expecting them to be failsafe, but at least to have run some tests before me! I don't trust Allwinner devices at all, but in their case it seems to be mostly the board vendors who don't play well. BananaPi / OrangePi's numbers really cannot be trusted at all, but they are very cheap, so sometimes I buy a board and try it. I remember my OpiPC2 not even having a bootable image available for a few weeks. I didn't care much; it was around $15 or $25, I don't remember. I used not to be much interested in Rockchip, who used to limit their RK3288 to 1.6 GHz, but overall they've made progress, some of their CPUs are very good, and I think some of them are expensive enough for the board vendors to be careful.
     
    But all this is to say that sometimes a specific set of characteristics is very hard to find in a single device, and you start to build hope around one. When you pay a bit more than usual for such a device and wait a long time for it, you really want it to match your expectations, even though you know pretty well you shouldn't. And when the device arrives, it's a cold shower.
     
    Overall I agree with your point that real-world performance matters more than numbers. The dual-A9 based Armada 38x in my Clearfog outperforms any of the much higher-spec'd devices I own when it comes to I/O. The only problem is that in order to know what performance to expect, you need some metrics. And since people mostly post useless crap like Antutu, CPU-Z or Geekbench, the only way to know is either to wait for someone else to report real numbers (as a few of us do), or to buy one and run your own tests.
     
    That's why we can't completely blame the users here. And that's sad, because it keeps this dishonest system in place.
     
    I initially wanted to develop a suite of anti-cheat tools like the mhz.c and ramspeed.c programs I wrote. But I already did that 25 years ago and I don't see myself doing it again. Plus, these days tools need to be eye-candy to gain adoption, which I'm really unable to achieve. Last point: there will always be users who prefer to publish results from the tool showing the highest numbers because it makes them feel good, so the anti-cheat ones won't necessarily be published that often :-/
     
     
  21. Like
    wtarreau got a reaction from tkaiser in Quick review of NanoPi Fire3   
    Thomas, I think it's problematic that you only have a watt-meter that includes the PSU, because the PSU's efficiency depends on the load (it's usually optimal around 50% load). A USB power meter would tell you the volts, amps and watts, and would even allow you to detect under-powering when it happens.
     
    I managed to get my board to shut down only once; it was powered by my laptop's USB3 port (that's what I do all the time, but I'm probably close to the limit). It never happened on a 5V/2A PSU, however. Since the board was not yet very hot, I suspect it was the power controller that shut it down rather than the temperature.
     
    I also had to play with the critical points to avoid needless throttling. I seem to remember having set them to 105, 110 and 112 degrees, though I may be wrong since I ran many tests. Now I've packed the board inside a cardboard "enclosure" from which the heat hardly dissipates, and it can still throttle when reaching the first critical point, but that doesn't last long.
    When it happens, it's usually at 832/1024 of 1600 MHz = 1300 MHz, and more rarely at 640/1024 × 1600 = 1000 MHz. I haven't run cpuburn yet though; I can if you're interested.
     
    Regarding the use cases, I think they are limited, but the board is awesome when they can be met. For me, as a developer, it's fantastic for testing threaded code scalability on up to 8 cores, given that the L2 cache is shared between all cores. Usually you need a huge power-hungry CPU to get the same; here I have it in my laptop's bag. I also want to see what performance level I can reach on HTTP compression using libslz. I'm pretty sure that building content-recompression farms out of such boards could be very affordable. Also, the CPU supports native CRC32 instructions, which are missing on x86 and hurt gzip's performance, so I'll have to improve my lib to benefit from this. Miners may like to exploit algorithms which perform well on ARMv8 using the native AES and SHA2 implementations (I'm a bit clueless in this area). Last, for computer-assisted vision, you have 8 cores with NEON ;-)
     