Amlogic still cheating with clockspeeds


tkaiser


It is inherently just a Cortex binary; it could be "disassembled" by someone with enough ARM assembly knowledge. The question becomes how the Cortex core is actually tied into the SoC, and how you handle the whole operation legally.


On 5/3/2018 at 9:48 AM, tkaiser said:

Can you please answer my questions? A hash to identify which bl30.bin blob is used where, and FULL sysbench output to get an idea why your numbers are better.

 

@Da Xue, may I remind you of this?


@tkaiser, in fact there is a small category of users like me who do care about frequency, because they know their workload depends on it. My libslz compressor does (it scales almost linearly). Crypto almost does when using the native ARMv8 crypto extensions. My progressive locks code does as well. Compilation partially does.

 

The problem I'm facing these days is that when I hit a performance wall with a board, I start to get a clear idea of what I'd like to try, and the minimum number I want for each metric to expect any improvement. For example, I'm not interested in 32-bit memory for my build farms, nor am I interested in A53 cores below 1.8 GHz. So when I want to experiment with something new, I have to watch sites such as cnx-soft and hope something suitable shows up. Then when it does, I have to wonder whether the advertised numbers are reasonably trustworthy or not, and whether I'm willing to spend a bit of money on something which will possibly not work at all. Recently I've seen this H6 at 1.8 GHz. I'd be interested in testing it but I'm not expecting a true 1.8 GHz yet, so I'm waiting for someone else to get trapped before I try.

 

My experience so far is that I'm never going to buy anything Amlogic-based again unless I see proof of the numbers in my use case. It's pointless: they lie all the time and go to any length to cheat in benchmarks. I wouldn't be surprised to find heavy metal plates in some TV STBs made with these SoCs to delay thermal throttling, so that benchmark tools have time to complete at full speed before the CPU gets too hot...

 

However, I do trust certain companies like Hardkernel or FriendlyElec, who are extremely open and transparent about their limitations and who provide everything I need to pursue my experiments, so I randomly buy something there "just to see" if it's not too expensive. I'm not expecting them to be flawless, but at least to have run some tests before me! I don't trust Allwinner devices at all, but in this case it seems to be mostly the board vendors who don't play well. BananaPi / OrangePi's numbers really cannot be trusted at all, but they are very cheap, so sometimes I buy a board and try it. I remember my OpiPC2 not even having a bootable image available at all for a few weeks. I didn't care much; it was around $15 or $25, I don't remember. I used not to be very interested in Rockchip, who used to limit their RK3288 to 1.6 GHz, but overall they've made progress, some of their CPUs are very good, and I think some of them are expensive enough for the board vendors to be careful.

 

But all this to say that sometimes a specific set of characteristics is very hard to find together, and you start to build hope in a specific device. When you pay a bit more than usual for such a device and wait for a long time, you really want it to match your expectations, even though you know pretty well you shouldn't. And when the device arrives, it's a cold shower.

 

Overall I agree with your point that real-world performance matters more than numbers. The dual-A9 based Armada 38x in my ClearFog outperforms any of the much higher-specced devices I own when it comes to I/O. The only problem is that in order to know what performance to expect, you need to have some metrics. And since people mostly post useless crap like Antutu, CPU-Z or Geekbench, the only way to know is either to wait for someone else to report real numbers (as a few of us do) or to buy one and run your own tests.

 

That's why we can't completely blame the users here. And that's sad, because it keeps this dishonest system in place.

 

I initially wanted to develop a suite of anti-cheat tools like the mhz.c and ramspeed.c programs I wrote. But I already did that 25 years ago and I don't see myself doing it again. Plus, these days such tools need to be eye candy to gain adoption, which I'm really unable to achieve. Last point: there will always be users who prefer to publish the results from the tool showing the highest numbers because it makes them feel good, so the anti-cheat ones will not necessarily be published that often :-/
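For readers wondering what such an anti-cheat check boils down to, here is a minimal sketch in the spirit of mhz.c (not the actual tool; the one-cycle-per-dependent-add assumption and the loop count are mine): time a long chain of dependent additions, which ARMv8 cores retire at roughly one per clock cycle, and derive an effective frequency that no firmware can fake.

/* Minimal mhz.c-style sketch (aarch64 only, illustration, not the real tool):
 * time a chain of dependent adds. Dependent integer adds retire at ~1 per
 * clock cycle on ARMv8 cores, so iterations per second ~ real core clock,
 * no matter what the kernel or firmware claims. Build: gcc -O2 mhz-sketch.c */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define ADDS4 "add %0, %0, #1\n\tadd %0, %0, #1\n\tadd %0, %0, #1\n\tadd %0, %0, #1\n\t"

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    uint64_t x = 0;
    const uint64_t loops = 50000000ULL;       /* 50M iterations x 16 adds each */
    double t0 = now();
    for (uint64_t i = 0; i < loops; i++)
        __asm__ __volatile__(ADDS4 ADDS4 ADDS4 ADDS4 : "+r"(x));  /* 16 dependent adds */
    double t1 = now();
    printf("~%.0f MHz effective (x=%llu)\n",
           loops * 16 / (t1 - t0) / 1e6, (unsigned long long)x);
    return 0;
}

Pin it to a single core with taskset -c 0 and compare the result with what the kernel reports in /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq.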

 

 


5 hours ago, wtarreau said:

My experience so far is that I'm never going to buy anything Amlogic-based again unless I see proof of the numbers in my use case. It's pointless: they lie all the time and go to any length to cheat in benchmarks. I wouldn't be surprised to find heavy metal plates in some TV STBs made with these SoCs to delay thermal throttling, so that benchmark tools have time to complete at full speed before the CPU gets too hot...

I wasn't really surprised when I found a big chunk of metal in the S912-based Beelink GT1. Though these devices were designed for specific multimedia use cases, and the metal plates are added by the device vendors (who may not have sources for the firmware that controls DVFS), so it's the STB vendors who decided to cheat on benchmark numbers (in addition to Amlogic, who decided to cheat on reported clockspeeds).

 

5 hours ago, wtarreau said:

I don't trust Allwinner devices at all, but in this case it seems to be mostly the board vendors who don't play well. BananaPi / OrangePi's numbers really cannot be trusted at all, but they are very cheap, so sometimes I buy a board and try it. I remember my OpiPC2 not even having a bootable image available at all for a few weeks. I didn't care much; it was around $15 or $25, I don't remember.

At least Allwinner DVFS (on mainline) is fully open source, though the reverse-engineered DRAM init code may create an artificial memory throughput bottleneck.
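Speaking of memory throughput: the kind of check a ramspeed.c-style tool performs can be approximated with a large memcpy loop. This is only a rough sketch under my own assumptions (buffer size, pass count), not the actual tool, but a badly initialized DRAM controller tends to show up immediately in such numbers even when pure CPU benchmarks look fine.

/* Crude ramspeed-style probe (illustration only): copy a buffer far larger
 * than the caches and report MB/s. Counts bytes read + written. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t sz = 64UL * 1024 * 1024;   /* 64 MiB, well beyond L2 */
    const int passes = 20;
    char *src = malloc(sz), *dst = malloc(sz);
    if (!src || !dst)
        return 1;
    memset(src, 0x55, sz);                  /* fault pages in before timing */
    memset(dst, 0xaa, sz);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < passes; i++)
        memcpy(dst, src, sz);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("memcpy throughput: ~%.0f MB/s\n", 2.0 * passes * sz / secs / 1e6);
    free(src);
    free(dst);
    return 0;
}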


On the original Hardkernel boot.ini Ubuntu Linux you can adjust Amlogic S905 speed... up to 2.02 GHz. But it is not recommended. Default speed is 1540 MHz; on the way you have 1620, 1732, in steps of I think 28 or 56 MHz...

Allwinner sometimes does not inform customers at all about speed... You will learn about it from CPU-Z on Android or the cpu-speed xfce-goodies addon on the panel. The Amlogic S905 is a quite good chip. It's a pity it does not work with 4 GB RAM - only 2...

In my experience the S905 is better than the RK3328 at the same clock level.


18 hours ago, constantius said:

On the original Hardkernel boot.ini Ubuntu Linux you can adjust Amlogic S905 speed... up to 2.02 GHz

 

Totally wrong. The original bl30.bin blob provided by Amlogic/Hardkernel for the C2 was just the usual cheating mess Amlogic is known for. Just read the thread you are posting in to get the details. Hardkernel is obviously the only vendor around able to provide a bl30.bin blob that does not cheat on users; all other Amlogic board vendors are not in that position.

 

On 5/19/2018 at 2:38 PM, wtarreau said:

Recently I've seen this H6 at 1.8 GHz

 

Confirmed by your mhz tool. It's 1.8 GHz even with full load on all CPU cores. With Allwinner's legacy kernel all that's needed is to understand their 'budget cooling' stuff to get an idea how to influence the cpufreq limits imposed by thermal constraints. With the mainline kernel there are no limitations whatsoever. Unlike with Amlogic, cpufreq/DVFS is fully under our control (not talking about the H6 here since the code is not ready yet; the above only applies to all other Allwinner SoCs so far).


Why wrong? I bought the ODROID-C2 at the preorder. The first Ubuntu image developed by Hardkernel had a boot.ini with a section for adjusting CPU speed from 1.536 to 2.02 GHz. You were able to select many other speeds between 1.54 and 2.02. I remember this...


53 minutes ago, constantius said:

Why wrong? I bought the ODROID-C2 at the preorder. The first Ubuntu image developed by Hardkernel had a boot.ini with a section for adjusting CPU speed from 1.536 to 2.02 GHz. You were able to select many other speeds between 1.54 and 2.02. I remember this...

I think you simply didn't read anything in this thread. The whole thread is precisely about Amlogic distributing to SBC vendors firmware that *pretends* to run at these frequencies but does not. When it reports any frequency between 1.5 and 2 GHz, it is in fact still at 1.536 GHz. And the funny thing is that @tkaiser was precisely saying that it works because a lot of people don't verify and are absolutely convinced that the board runs at 2.02 GHz when it says so. So now, thanks to you, we have a perfect illustration of what he was saying ;-)


1 hour ago, constantius said:

You were able to select many other speeds between 1.54 and 2.02. I remember this...

 

You could change the reported speed, yes, but the CPU never ran at that speed. It said it was going that fast, it responded to commands telling it to go that fast, but it never, ever, ever went that fast. Hence the use of the term "cheating", which means dishonestly misleading people in order to gain business through deception about the quality or capability of a product. So, advertise or insinuate 2.0 GHz operation, make the hardware say "Why yes, I am doing that", but then have the hardware not actually do what it says it does. Like a car with a top speed of 30 kph claiming it's doing 100: the speedometer says "I am going 100 kph", but you're being passed by a moped...
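For completeness, the kernel's claim is easy to read back. A minimal sketch assuming the standard cpufreq sysfs layout looks like this; the whole point of the thread is that this number has to be held against a measured one, e.g. the mhz-style loop sketched earlier.

/* Print what the kernel claims cpu0 is running at (standard cpufreq sysfs).
 * On a cheating firmware this can happily read 2016000 kHz while a timed
 * loop shows ~1536 MHz, which is exactly the mismatch discussed here. */
#include <stdio.h>

static long read_khz(const char *path)
{
    long khz = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &khz) != 1)
            khz = -1;
        fclose(f);
    }
    return khz;
}

int main(void)
{
    long cur = read_khz("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    long max = read_khz("/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq");
    if (cur < 0 || max < 0) {
        fprintf(stderr, "cpufreq sysfs entries not available\n");
        return 1;
    }
    printf("kernel-reported: cur %ld MHz, max %ld MHz\n", cur / 1000, max / 1000);
    return 0;
}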


I would imagine (random shot in the dark) that there are current limits on the wire bonding, and that fully loading the SoC would exceed certain reliability parameters. @tkaiser I haven't played around with the 16.04 images in forever, and sysbench on 18.04 is completely different. It was using the bl30.bin from the openlinux 2016-06 release if I recall correctly. @TonyMac32 sysbench numbers seem to fluctuate a lot more than what I observed before. When I build the next 16.04 image, I will revisit this.


On 5/5/2018 at 11:40 AM, TonyMac32 said:

It is inherently just a Cortex binary; it could be "disassembled" by someone with enough ARM assembly knowledge. The question becomes how the Cortex core is actually tied into the SoC, and how you handle the whole operation legally.

 

The Cortex-M3 is part of the ARM SCPI (System Control and Power Interface) design. You will see this in the majority of Cortex-A(5x/7x) based SoCs. The M3 is known as the SCP (System Control Processor); it handles DVFS, clocks/voltages, power states, etc. My cursory overview of the S905 appears to show that the TEE (OP-TEE, since it uses ARM Trusted Firmware?) takes the SCPI commands via SMC and then passes them on to the SCP.


@tkaiser I ran this on the NanoPi K2 since I got it running again today, because it is A) running hotter than Le Potato and B) has a different set of firmware despite being from the same Amlogic buildroot (GXB as opposed to GXL).

 

It advertises 1536 instead of 1512 as its top CPU speed (S905, like the C2).

 

Run 1:

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          6.0123s
    total number of events:              10000
    total time taken by event execution: 24.0414
    per-request statistics:
         min:                                  2.38ms
         avg:                                  2.40ms
         max:                                 14.40ms
         approx.  95 percentile:               2.47ms

Threads fairness:
    events (avg/stddev):           2500.0000/28.91
    execution time (avg/stddev):   6.0103/0.00

Run 2:

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          6.0135s
    total number of events:              10000
    total time taken by event execution: 24.0320
    per-request statistics:
         min:                                  2.38ms
         avg:                                  2.40ms
         max:                                 14.47ms
         approx.  95 percentile:               2.47ms

Threads fairness:
    events (avg/stddev):           2500.0000/31.99
    execution time (avg/stddev):   6.0080/0.01

 

Run 3:

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          6.0238s
    total number of events:              10000
    total time taken by event execution: 24.0759
    per-request statistics:
         min:                                  2.38ms
         avg:                                  2.41ms
         max:                                 14.43ms
         approx.  95 percentile:               2.47ms

Threads fairness:
    events (avg/stddev):           2500.0000/30.04
    execution time (avg/stddev):   6.0190/0.01

As far as running hotter: it is sitting at 50 °C at idle, while Le Potato sits around 38-40 °C. Notice the execution time is significantly faster than Le Potato with the S905X.

 

Something I noticed is that "ondemand" simply keeps the CPU at maximum frequency at all times. This is true of both boards I have; I'm going to change it to "conservative" since that's probably what's actually happening inside the S905X anyway.
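Switching the governor is just a write to sysfs (echo conservative > .../scaling_governor as root). A small sketch, assuming the standard cpufreq layout and written in C only to stay consistent with the other sketches in this thread:

/* Set cpu0's cpufreq governor (default here: "conservative") and read it
 * back. Needs root; uses only the standard cpufreq sysfs interface. */
#include <stdio.h>

#define GOV_PATH "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"

int main(int argc, char **argv)
{
    const char *gov = (argc > 1) ? argv[1] : "conservative";
    FILE *f = fopen(GOV_PATH, "w");
    if (!f) {
        perror(GOV_PATH);
        return 1;
    }
    fprintf(f, "%s\n", gov);
    fclose(f);

    char buf[64] = "";
    f = fopen(GOV_PATH, "r");             /* read back to confirm it stuck */
    if (f) {
        if (fgets(buf, sizeof(buf), f))
            printf("governor now: %s", buf);
        fclose(f);
    }
    return 0;
}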

 

According to @willmore's data:

That is 1536 MHz, as reported. Now mind you, I am using the blobs from the same buildroot in both of these images, no special vendor stuff anywhere. So why did Amlogic decide to screw up GXL so badly? Or did they "fix" the entire GXB platform because of the uproar over the C2?

 

[edit] The system reported the 1.7 and 2.0 GHz operating points, so I tried 1.7 just for kicks; no change in performance. ;-)


@tkaiser Of course marketing focuses on raw speed and not actual performance numbers. This isn't unique to the ARM or SoC world; we've seen it many times in the x86 world as well. Joe Public will buy the computer with the higher clock speed but lower performance 95% of the time. (These same people will also pay $50 more for a computer with a $20 optical drive they'll never use, but that's a different rant.)

 

@chwe Heat management isn't only about maximum performance.  It also has a marked impact on silicon longevity.

http://www.ti.com/lit/an/sprabx4a/sprabx4a.pdf

Most importantly, Fig. 2 in section 4. And this quote from section 4.1:

"It shows that if the processor runs at 90°C effective temperature instead of the 105°C, x2 increase in useful lifetime can be projected."

And bear in mind this study is on silicon junction temperature, NOT the temperature of the chip package. The junction temperature will be considerably higher than the temperature measured on the package. It is quite possible to have a junction temp of 105 °C or more with the package temperature at 50-70 °C and not be throttling yet.
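To put a rough number on that x2 claim: the TI report models thermal aging with an Arrhenius acceleration factor, and a back-of-envelope check (my own arithmetic, using the 0.7 eV activation energy the note quotes, which is questioned further down this thread) does land in the ~2x range for 105 °C vs 90 °C junction temperature.

/* Back-of-envelope Arrhenius check of the "x2 useful lifetime" figure:
 * AF = exp( (Ea/k) * (1/T_low - 1/T_high) ), temperatures in kelvin.
 * Build with: gcc arrhenius.c -lm */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double Ea = 0.7;                 /* eV, activation energy from the TI note */
    const double k  = 8.617e-5;            /* Boltzmann constant, eV/K */
    const double T_low  = 90.0 + 273.15;   /* cooler junction temperature */
    const double T_high = 105.0 + 273.15;  /* hotter junction temperature */

    double af = exp((Ea / k) * (1.0 / T_low - 1.0 / T_high));
    printf("projected lifetime ratio, 90C vs 105C junction: ~%.1fx\n", af);
    /* prints roughly 2.4x for constant temperatures; TI's ~2x figure uses an
     * "effective temperature" over a usage profile, hence the difference.
     * The old "lifetime halves every ~10 degC" rule of thumb is the same
     * relationship in disguise. */
    return 0;
}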

Most of these devices are too new for us to be seeing the results yet, but I predict the majority of them, left with "stock" (read: virtually nonexistent) cooling, will fail YEARS sooner than the same devices where people added fans/heatsinks.


2 minutes ago, subhuman said:

marketing focuses on raw speed and not actual performance numbers. 

Well, the trouble here is that it appears these processors never actually attain the clock speed they are marketed as having. So let's say I bring a 1.5 GHz SoC to market and say it can run at 2 GHz. Oh wait, that's not a random example, that's actually what happened with the S905. Were the S905 just a poor performer at 2.0 GHz, it would be a very different discussion; it would sound more like the talk around the Broadcom whatever-RPi-stuffed-into-the-Pi3.

 

Now, for whatever reason, the S905 images I have built seem to operate (at least mostly) at the frequency they are commanded to run at. The S905X images aren't even trying; it's completely transparent that the SoC is being throttled or running at rates with no knowable rhyme or reason.

 

Thank you for the input about junction temperatures and longevity; it will be good info for the general public. I believe most of the team is well familiar with T_J vs. the package temp, and a good datasheet will also give you the information needed to figure out (within reason) what the disparity is. I have the RK3288 throttling earlier than the vendor requires because of that, especially considering the small heatsink on the Tinker and the lack of a heatsink on the MiQi (at least on mine as received).


7 minutes ago, TonyMac32 said:

Well, the trouble here is that it appears these processors never actually attain the clock speed they are marketed as having.

I got that.  It's crappy, but these are far from unique.

Remember dial-up modems? V.90 modems were marketed as 56k here in the US, even though the FCC limited them to a max of 53.3k (and many of them reported a connect rate of 56k regardless of the lower actual rate, much like we see here).

A multitude of 802.11n single-antenna devices that claim speeds over 150 Mbps.

Various HDD manufacturers being sued (unsuccessfully) ~20 years ago for advertising capacities in powers of ten instead of powers of two (i.e. megabyte = 1,000,000 bytes vs. megabyte = 1,048,576 bytes).

 

 

How do you tell when a salesman is lying?

His lips are moving.

 

 


3 hours ago, subhuman said:

How do you tell when a salesman is lying?

This is exactly why there are people like us who dissect products and push them to their limits, so that end users don't have to rely on marketing or salesmen but on real numbers reliably measured by third parties who don't have any interest in cheating.

 

Regarding your point about Tj and lifetime, you're totally right, and in general it's not a problem for people who overclock, because if they want a bit more speed they won't keep the device for long once it's obsolete anyway.

Look at my build farm made out of MiQi boards (RK3288). The Rockchip kernel by default limits the frequency to 1.6 GHz (thus the Tinker Board is likely limited to this as well), but the CPU is designed for 1.8. In the end, with a large enough heatsink and with the throttling limits pushed further, it runs perfectly well at 2.0 GHz. For a build farm, this 25% frequency increase directly translates into proportionally shorter build times (about 20% less). Do I care about the shortened lifetime? Not at all. Even if a board dies, the remaining ones together are still faster than all of them at stock frequency. And these boards will be rendered obsolete long before they die.

 

I remember a friend, when I was a kid, telling me about an Intel article explaining that their CPUs' lifetime is halved for every 10 degrees Celsius above 80 or so, and that based on this I shouldn't overclock my Cyrix 133 MHz CPU to 150. I replied: "Do you imagine I expect to keep using that power-hungry Cyrix for 50 years? For me it's more important that my audio processing experiments can run in real time than protecting my CPU and having to run them offline."

 

However, it's important that we are very clear about the fact that this has to be a choice. You definitely don't want companies to build products that will end up in sensitive domains and be expected to run for over a decade using these unsafe limits. On the other hand, the experiments run by a few of us help figure out the available margins and optimal heat dissipation, both of which play an important role in long-term stability designs.


On 6/24/2018 at 12:40 AM, wtarreau said:

Do I care about the shortened lifetime? Not at all. Even if a board dies, the remaining ones together are still faster than all of them at stock frequency. And these boards will be rendered obsolete long before they die.

Longevity is not the only concern. At higher frequencies, signal integrity drops: the higher the clock rate, the more likely errors become due to propagation delays, leaky transistors and threshold voltages. The difference between a 0% overclock and a 25% overclock may mean the difference between 30 nines and 20 nines of reliability, which is statistically significant for error propagation.


14 minutes ago, Da Xue said:

Longevity is not the only concern. At higher frequencies, signal integrity drops: the higher the clock rate, the more likely errors become due to propagation delays, leaky transistors and threshold voltages. The difference between a 0% overclock and a 25% overclock may mean the difference between 30 nines and 20 nines of reliability, which is statistically significant for error propagation.

Absolutely, and this factor is affected by current (derived from voltage) and temperature. That's why what matters for stability is finding the proper operating conditions. Overclocking in a place where the ambient temperature can vary by 10 degrees can cause big problems. The same goes for those who undervolt to reduce heat (because signals rise more slowly and degrade as well) or who use too small a heatsink. That said, with today's software quality you'll hit a software bug far more often than a hardware bit flip :-)


On 6/24/2018 at 2:21 AM, subhuman said:

And bear in mind this study is on silicon junction temperature, NOT the temperature of the chip package. The junction temperature will be considerably higher than the temperature measured on the package. It is quite possible to have a junction temp of 105 °C or more with the package temperature at 50-70 °C and not be throttling yet.

My background is more in science (chemistry, mostly pharma-related stuff). There's a nice German one-liner: "Wer misst, misst Mist" or "Wer Mist misst, misst Mist" (roughly: whoever measures, measures crap / whoever measures crap, measures crap; I don't know if there's something similar in English). I guess, and that's only a guess, that measuring junction temperature is only possible indirectly. I don't know how standardized those tests are (which could be interesting as soon as we're not talking about 'consumer grade' SoCs); due to pharma's famous history on this topic, its tests get regulated more and more with similar standards worldwide (e.g. FDA, EMA and PMDA approvals don't 'differ much'). But as soon as we talk about longevity, it's not only the SoC which has to be audited, it's the entire system, and on consumer-grade electronics a 'slightly' overclocked CPU wouldn't be my main concern (could be that I'm wrong here; electronics and the longevity of components isn't my field of expertise).

 

On 6/24/2018 at 2:21 AM, subhuman said:

Of course marketing focuses on raw speed and not actual performance numbers.

It's obvious that marketing has a different focus than the real use case... :lol: That's not an 'electronics only' issue. It becomes problematic as soon as the provided data is bogus. To me it looks like the current behavior of the binaries doesn't allow proper 'benchmarking'. This ends in a crisis of trust. People will remember those issues and that will harm you in the future. E.g. Allwinner had massive GPL issues in the past; to me, as someone not involved, it looks like they do better now, yet the first thing many people say when it comes to AW devices is: "Ah, those guys who give a f*ck about GPL?". I think Amlogic will be remembered as the 'bogus clockspeed guys'... So why 'cheat' on something which is 'completely irrelevant' to the main purpose of this SoC? Remember this is a multimedia SoC: it has to be efficient at decoding (maybe encoding) stuff, be cheap and hopefully power-efficient. The average Joe doesn't care about CPU frequency (as long as your marketing department does a good job explaining to him why CPU frequency doesn't matter for him: our SoC has such a powerful VPU that the CPU can run at 600 MHz and you still get 4K @ 60fps; this saves energy, so help our planet by using our SoC over the ones from our competitors, who need 1.5 GHz and double the energy to feed 4K @ 60fps :ph34r::lol:).

So why report bogus numbers here? If the SoC (thanks to its VPU) performs well for its main purpose, you don't have to lie about something which isn't 'important'. People who use it for other purposes (IMO, no SoC you'll find on an SBC was made for this use case) will sooner or later figure out that something is smelly here, and some of them have the power to report it loudly enough that it gets noticed by the average Joe. It doesn't harm Amlogic much if they lose the SBC market, but it may bite them in the back when the small fraction of 'secondary usage' people are loud enough to remind the average Joe that he didn't get what he paid for (an octa-core X.Y GHz capable SoC; replace X and Y with the currently 'reported' values).

 

On 6/24/2018 at 3:02 AM, subhuman said:

Remember dial-up modems? V.90 modems were marketed as 56k here in the US, even though the FCC limited them to a max of 53.3k (and many of them reported a connect rate of 56k regardless of the lower actual rate, much like we see here).

A multitude of 802.11n single-antenna devices that claim speeds over 150 Mbps.

Various HDD manufacturers being sued (unsuccessfully) ~20 years ago for advertising capacities in powers of ten instead of powers of two (i.e. megabyte = 1,000,000 bytes vs. megabyte = 1,048,576 bytes).

Buzzword: that's pure whataboutism. :lol: Others did it wrong in the past (and probably still do), so there's no hurry to change our behavior? At least where I come from, you 'should' learn in kindergarten that this is wrong. It might be harder to achieve for adults, but it's still valid. Things can go wrong; I'm more interested in how a company reacts to an issue than in how dumb the issue was in the beginning. And to me it looks like most PR departments fail horribly as soon as it comes to 'bad press'... Maybe you should let the engineers talk as soon as it comes to issues - it can't go much worse.. :lol: (I'm sure there are legal reasons to avoid that; you might get sued for being honest in a non-PR-speech manner)..

 

On 6/24/2018 at 6:40 AM, wtarreau said:

Do I care about the shortened lifetime? Not at all. Even if a board dies, the remaining ones together are still faster than all of them at stock frequency. And these boards will be rendered obsolete long before they die.

I still use a slightly overclocked i5-2500K which has been under my desk for I guess ~7 years (it gets way more load now that I use it to build Armbian images... :D ). I still hope it won't die that fast; I'm not interested in buying a new desktop soon. But it's clear that as soon as you run your device outside defaults/specs it can have an impact on the guaranteed lifetime (in the case of the AW SoCs, I think Armbian's defaults will be beneficial for their lifetime... :lol:). I tried to 'dive into' lifetime and durability tests for electronics; getting valid test procedures seems to be way harder than in pharma (might be related to my google-fu in both topics...). As soon as it comes to consumer electronics, such tests seem to be either not available or non-existent (e.g. SD cards: I only get marketing numbers on how many write/read cycles they should survive and which temperatures they're rated for, but no description of these tests). Am I right here or just too dumb to find this stuff?

 

On 6/24/2018 at 2:21 AM, subhuman said:

Heat management isn't only about maximum performance.  It also has a marked impact on silicon longevity.

http://www.ti.com/lit/an/sprabx4a/sprabx4a.pdf

Thanks for the link; it's fun to see Arrhenius in something other than acid-base theory or reaction kinetics (in fact, aging could be described as an unwanted side reaction in solid-state chemistry :lol:)... It's a nice write-up of the topic; the problem I have with such 'Application Reports':

It's an 'unhealthy alliance' between a company's marketing and engineering departments (if you want a rabbit-hole example, read any leaflet for contraceptive pills, where marketing, the EMA, science and the legal department are all involved in the write-up - every teenager using them now has to read a f*cking booklet instead of a short leaflet :lol:). Proper sources for their claims are missing. Where does the EA of 0.7 eV come from? There's no 'methods & materials' discussion, e.g. why they use Arrhenius: is it an industry standard? Is it valid? Which sources show that Arrhenius is valid for describing aging? A proper peer-reviewed paper must describe that; an application note doesn't. Someone who's familiar with the topic knows how to validate such claims. Being familiar with scientific publications but not with this topic, I will avoid using application notes to dive into the subject due to the lack of such sources; they only give me some 'buzzwords' to enhance my google-fu. I see application notes more as 'marketing papers' addressed to experienced people. Those people mostly don't need them, and the clueless are put on the 'party line'..

 


This topic is now closed to further replies.