allen--smithee

Members
  • Posts: 33
Reputation Activity

  1. Like
    allen--smithee reacted to Nemavio in Helios64 - No boot, No serial line, No fan, some LEDs   
    Hello,
    Thank you for your answer.
    For a reason I don't understand, I had to strap jumpers P10 and P11, and then I got something on the serial line.
     
    https://wiki.kobol.io/helios64/jumper/

     
    The bootloader appears and Linux starts after 2-3 minutes.
     
    We can say that it is solved, but I don't understand why.
  2. Like
    allen--smithee reacted to Nemavio in Helios64 - No boot, No serial line, No fan, some LEDs   
    Hello,
    I recently received a Helios64 from a new friend.
    However, when I got home, I plugged it in and turned it on (already I have to press the power button for 4 seconds to get any sign of life), and the following LEDs lit up:
    - LED1
    - LED2
    - LED3
    - LED5
    - LED6
    I got nothing on the serial line (tested on macOS and Windows), and no reaction to either quick or long presses on Recovery or Reset.
     
    Is there an electronic schematic that could help me debug the hardware, or a procedure to get JTAG access to the RK3399?
     
    Regards,
  3. Like
    allen--smithee reacted to bunducafe in Does anyone actually have a stable system?   
    For 4 weeks now I have been running the Helios64 with OpenMediaVault 6, at full CPU capacity with the ondemand governor.
    Before, I always had problems when running a scheduled task, but with OMV6 the problem seems to be solved. I am quite pleased now with the performance and a rock-solid NAS.
     
  4. Like
    allen--smithee got a reaction from bunducafe in Kobol Helios64 on Sale   
    @bunducafe
    I think people either want to recover the market value to get another product on the market (Black Friday / Christmas / new generation), or else they are completely overwhelmed by the device-tree (dt) notion.
     
    https://forum.armbian.com/subscriptions/
     
    It's a good start: 5€ per month = 60€ per year,
    over the lifetime of the equipment, estimated at several years.
    300€ = 5 years.
     
    The financing of the support therefore depends on the quality of the product.
    If my Kobol product dies and no other board supported by the Armbian community interests me, my Armbian participation will end.
    For me, the two are linked.
     
    @Amix73
    I find your contribution very chivalrous, and I thank you for supporting the Armbian project, which supports this great board (with its known flaws) produced by Kobol.
  5. Like
    allen--smithee reacted to ebin-dev in Kobol Team is pulling the plug ;(   
    I can understand that you are disappointed.
     
    But I think that Kobol had to pull the plug in the context of the current chip shortage, with limited (sometimes even no) availability of components and SoCs, and rising prices. That is not the right environment for a small start-up to grow; it is a rather toxic environment that will lead to insolvency, as profitable growth is impossible to achieve. Without growth (no new or further products to offer) you are just faced with fixed costs and no income. Not sustainable at all.
     
    All three founding members of Kobol deserve our full respect. They only drew the logical conclusion. Hopefully it can be reversed some time.
  6. Like
    allen--smithee reacted to registr123 in Does anyone actually have a stable system?   
    Solid as a rock. Running omv and a bunch of containers.
  7. Like
    allen--smithee reacted to Heisath in Kobol Helios64 on Sale   
    Added to watchlist. If it stays cheap enough, I'll buy it. 
     
    I can't guarantee the "Supported" flag, but I will have it running edge and will notice problems.
     
     
  8. Like
    allen--smithee got a reaction from NZJon in 5v 2.5amp available inside the Helios64...?   
    5V - 12V: at your peril! https://wiki.kobol.io/helios64/sata/#j8-pinout
    12V: maybe you can bridge the input power to J16: https://wiki.kobol.io/helios64/hardware/#connector-interface-list
    3V - 5V, but not 2.5 A: https://wiki.kobol.io/helios64/gpio/#pinout
     
    The Helios64 is not intended for this. The best would be to make a cable-sized opening in the back cover of the case and plug into a power source provided for this purpose.
  9. Like
    allen--smithee reacted to aprayoga in Kobol Team is pulling the plug ;(   
    Hi everyone, thank you for all of your support.
    I'll still be around, but not full time as before.
     
    I saw that the Helios64 has several issues in this new release. I will start looking into them.
  10. Like
    allen--smithee got a reaction from iav in Very noisy fans   
    @gprovost
    Is it possible to know the characteristics of the thermal bridge?
    (W/m·K and thickness)
     
    @bunducafe
    You can use your Helios64 without a fan, with one of the following CPU limitations:
    # 816 MHz to 1 GHz (eco)
    # 1.2 GHz, CPU 100%, temp < 82 °C (safe)
    # 1.41 GHz, CPU 100%, temp < 90 °C (limit)
     
    Not viable:
    # 1.6 GHz, CPU 100%, temp > 95 °C: reboot and reset of the fancontrol config in /etc, and there I regret that it is not a K.
    Nevertheless, it is reassuring to see that the protection method works.
     
    From my own personal experience, it is not necessary to change the fans immediately,
    and you will quickly agree that the noise of the fans is drowned out by the noise of a single spinning hard disk.
    But if, like me, you also use it as a desktop with SSDs, you will definitely want to use it without limitation.
    In that event you must familiarize yourself with fancontrol, otherwise you will also be disappointed with your Noctua or any other fan.
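    The frequency caps above can also be applied directly, without touching fancontrol, through the standard Linux cpufreq sysfs interface. A minimal sketch (the 1.2 GHz "safe" value comes from the post; the policy paths assume a stock Armbian kernel with cpufreq enabled, where on RK3399 policy0 is the A53 little cluster and policy4 the A72 big cluster):

```shell
#!/bin/sh
# Cap every CPU cluster's maximum frequency at 1.2 GHz (value in kHz),
# the "safe" fanless ceiling quoted above. Requires root.
for f in /sys/devices/system/cpu/cpufreq/policy*/scaling_max_freq; do
  if [ -w "$f" ]; then
    echo 1200000 > "$f"
    echo "capped: $f"
  fi
done
```

    Reverting is a matter of writing the cluster's cpuinfo_max_freq value back, or simply rebooting; Armbian also exposes the same limits persistently via MAX_SPEED in /etc/default/cpufrequtils.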
  11. Like
    allen--smithee reacted to gprovost in Kobol Team is taking a short Break !   
    It’s been 3 months since we posted on our blog. While we have been pretty active on Armbian/Kobol forum for support and still working at improving the software support and stability, we have been developing in parallel the new iteration of Helios64 around the latest Rockchip SoC RK3568.
     
    However, things haven't been progressing as fast as we would have wished. Looking back, 2020 was a very challenging year to deliver a new product, and it took quite a toll on the small team we are. Our energy level is a bit low and we still haven't really recovered. Now, with the surge in electronic part prices and crazy lead times, it's even harder to have business visibility in an already challenging market.
     
    In light of the above, we decided to go on a full break for the next 2 months, to recharge our battery away from Kobol and come back with a refocused strategy and pumped up energy.
     
    Until we are back, we hope you will understand that communication on the different channels (blog, wiki, forum, support email) will be kept to a minimum for the next 2 months.
     
    Thanks again all for your support.
  12. Like
    allen--smithee reacted to SIGSEGV in Feature / Changes requests for future Helios64 board or enclosure revisions   
    My comment might be late to the party - if there was a possibility to add an optional display and a few user-configurable buttons to the Front Panel, that would be great.
    I know it would mess a bit with the airflow, but it could be used for system monitoring and few other specific use cases.
  13. Like
    allen--smithee reacted to Polarisgeek in Does anyone actually have a stable system?   
    Just wanted to add my experience as maybe someone can get something semi-useful out of it. 
     
    I started out with 3 x 4TB drives, 2 of them in RAID 1 and the 3rd on its own.  I installed OMV and CUPS on a micro SD card.  Even from the very beginning, I always had trouble restarting the system.  Sometimes it would boot after 3 tries, sometimes it would take 20 tries.  I watched, and it was never the same issue twice that caused the boot failure.  I removed drives, plugged in different combos, tried several fixes from the forums, anything I could think of, and nothing really seemed to make a difference.  The good news was that once it was up and running, it ran great.  I had 2 months of uptime with no issues.  Then I'd run an update and have a heck of a time getting it restarted.  I just figured that's the way it was and hoped things would get better as the software matured.
     
    I'm now on my second iteration with 5 x 14TB drives (thanks, Uncle Joe) in RAID 6.  After 2 months, I've had 2 random crashes for unknown reasons, one at 3 days and the other after 7 days.  Aside from running updates and rebooting, it's been running solid for me.  I'm only running OMV and a CUPS server.  Getting to this point was a huge PITA.  I spent nearly 8 hours trying to get it up and running over the course of a Sunday.  It seemed like no matter what I did, I'd get a kernel panic during the setup process, and then the system wouldn't boot at all, giving different errors every time.  So I'd re-download the image and attempt to install from scratch again.  I played with installing different options, in different sequences (as much as possible), without much luck.  I verified the download every time.  I finally tried USBImager as an alternative to BalenaEtcher, and found that writing to the SD card seemed better and the installation seemed better, although it did panic a couple of times.  Once I finally got it running, I gave it some time, installed all the updates, rebooted several times in between setting up OMV for testing, and have been happy with it since.  The biggest difference now is that I don't worry about rebooting, as it comes right up the first time instead of taking several tries like the first iteration.
     
    Overall, I've been happy with mine.  I'm very new to the Linux environment, so it's been fun learning something new.  Keep up the great work!
  14. Like
    allen--smithee got a reaction from hartraft in Helios64 - freeze whatever the kernel is.   
    @tionebrr
    Source : https://www.rockchip.fr/RK3399 datasheet V1.8.pdf
     
    1.2.1 Microprocessor
     Dual-core ARM Cortex-A72 MPCore processor and quad-core ARM Cortex-A53 MPCore processor; both are high-performance, low-power, cached application processors.
     Two CPU clusters: the big cluster with dual-core Cortex-A72 is optimized for high performance, and the little cluster with quad-core Cortex-A53 is optimized for low power.
    <...>
     PD_A72_B0: 1st Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
     PD_A72_B1: 2nd Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
    <...>
     PD_A53_L0: 1st Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L1: 2nd Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L2: 3rd Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L3: 4th Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
    <...>
     
    3.2 Recommended Operating Conditions
    The table below describes the recommended operating conditions for every clock domain.
    Table 3-2 Recommended operating conditions
    Parameter                          Symbol      Min   Typ   Max   Units
    Supply voltage for Cortex-A72 CPU  BIGCPU_VDD  0.80  0.90  1.25  V
    Supply voltage for Cortex-A53 CPU  LITCPU_VDD  0.80  0.90  1.20  V
    Max frequency of Cortex-A72 CPU    -           -     -     1.8   GHz
    Max frequency of Cortex-A53 CPU    -           -     -     1.4   GHz
     
  15. Like
    allen--smithee got a reaction from tionebrr in Helios64 - freeze whatever the kernel is.   
    @tionebrr
    Source : https://www.rockchip.fr/RK3399 datasheet V1.8.pdf
     
    1.2.1 Microprocessor
     Dual-core ARM Cortex-A72 MPCore processor and quad-core ARM Cortex-A53 MPCore processor; both are high-performance, low-power, cached application processors.
     Two CPU clusters: the big cluster with dual-core Cortex-A72 is optimized for high performance, and the little cluster with quad-core Cortex-A53 is optimized for low power.
    <...>
     PD_A72_B0: 1st Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
     PD_A72_B1: 2nd Cortex-A72 + Neon + FPU + L1 I/D cache of big cluster
    <...>
     PD_A53_L0: 1st Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L1: 2nd Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L2: 3rd Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
     PD_A53_L3: 4th Cortex-A53 + Neon + FPU + L1 I/D cache of little cluster
    <...>
     
    3.2 Recommended Operating Conditions
    The table below describes the recommended operating conditions for every clock domain.
    Table 3-2 Recommended operating conditions
    Parameter                          Symbol      Min   Typ   Max   Units
    Supply voltage for Cortex-A72 CPU  BIGCPU_VDD  0.80  0.90  1.25  V
    Supply voltage for Cortex-A53 CPU  LITCPU_VDD  0.80  0.90  1.20  V
    Max frequency of Cortex-A72 CPU    -           -     -     1.8   GHz
    Max frequency of Cortex-A53 CPU    -           -     -     1.4   GHz
     
  16. Like
    allen--smithee reacted to gprovost in SATA issue, drive resets: ataX.00: failed command: READ FPDMA QUEUED   
    @Fred Fettinger Your errors are most likely due to an unstable HDD harness. You can contact us at support@kobol.io to see if a replacement of the HDD wire harness is needed.
     
    @ShadowDance Thanks for the feedback; at least we can remove the grounding issue from the list of possible root causes.
    It looks like, in the use case of very heavy HDD I/O load (arising mainly during scrubbing operations), too much noise is generated, resulting in those HSM violations.
    Thanks to all your tests, the issue unfortunately seems to point to a noise-filtering problem. As previously mentioned, we will fix that in the next revision.
    We will see if the new SATA controller firmware has any impact, but we doubt it. I think the only solution for the current revision is to limit the ATA speed to 3 Gbps when using btrfs or ZFS.
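    On Armbian, that 3 Gbps cap can be applied without a firmware change via the kernel's libata.force parameter. A sketch, not an official Kobol recommendation (the parameter syntax is from the kernel's admin-guide/kernel-parameters documentation):

```shell
# /boot/armbianEnv.txt -- appended to the kernel command line by u-boot.
# libata.force=3.0G limits every SATA link to 3.0 Gbps; a single port
# can be targeted with an ID prefix instead, e.g. libata.force=1:3.0G.
extraargs=libata.force=3.0G
```

    The limit takes effect on the next reboot; it can be confirmed with `dmesg | grep -i 'SATA link'`, which should report links up at 3.0 Gbps.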
  17. Like
    allen--smithee reacted to deanl3 in nand-sata-install to eMMC, won't start kernel   
    @gprovost Fantastic, that did the trick, thank you!
    Not sure how that got in there either. Strangely, those lines are present in the armbianEnv.txt on my SD card and were not causing an issue.
     
    Thanks!
  18. Like
    allen--smithee reacted to NickS in Does anyone actually have a stable system?   
    Stable System: Absolutely
    I'm running with all latest maintenance & OMV5.
    Linux 5.10.21-rockchip64, OMV 5.6.2-1(Usul)
     
    ---- but I'm keeping things very simple...
    NO RAID
    4 x HDD ALL THE SAME
    NO ZFS
    BOOT FROM SD 
    AUTO SHUTDOWN & RESTART EVERY 24 HOURS 
     
    -- until I get to the apps
    21 Docker containers running various apps & services.
     
    I've had one docker image fail to come up on a couple of occasions.
     
    So ... I'm a very happy customer.
    My Data resilience doesn't come from RAID (which I really don't like 'cos 8TB disks take forever to rebuild).
    I just backup critical data to my old trusty Helios4! 
     
  19. Like
    allen--smithee reacted to scottf007 in ZFS or normal Raid   
    Hi All, 

    I am on my Helios64 (already have a Helios4). 

    I am running 2 x 8TB, 1x 256 M2, OMV5 debian etc.
     
    I have ZFS running (from the wiki instructions) in a basic mirror format, and if I add a few drives I think it would still be a mirror. However, I have the following question. It seems far more complicated than mdadm, and potentially more unstable (not in terms of the underlying technology, but in terms of these kernel changes every week). 

    Do you think, for an average home user, that this is really worth it, or will it constantly need work? To change it would mean starting again, but I do not want to change the setup every month and would intend to keep it stable for media, and for backing up some stuff, for the next few years. Or, if it is working, should it be fine for the long term?
     
    Cheers
    Scott
  20. Like
    allen--smithee reacted to Werner in Unable to boot   
    If you refuse to provide further information for debugging, as mentioned above, it is quite hard to help you. No one here is clairvoyant.
     
  21. Like
    allen--smithee got a reaction from gprovost in Very noisy fans   
    The problem in this post is solved!
    With an efficient thermal pad and SSD drives, the unit peaks at 18 W and 10 dB.
    I very rarely exceed 12 W in a day, and the unit is completely inaudible,
    except when running these tests, which do not reflect normal use at all.
  22. Like
    allen--smithee reacted to Gareth Halfacree in Encrypted OpenZFS performance   
    Your MicroServer has either an Opteron or Ryzen processor in it, either one of which is considerably more powerful than the Arm-based RK3399.
     
    As a quick test, I ran OpenSSL benchmarks for AES-256-CBC on my Ryzen 2700X desktop, an older N54L MicroServer, and the Helios64, block size 8129 bytes.
     
    Helios64: 68411.39kB/s.
    N54L: 127620.44kB/s.
    Desktop: 211711.31kB/s.
     
    From that, you can see the Helios64 CPU is your bottleneck: 68,411.39kB/s is about 67MB/s, or within shouting distance of your 62MB/s real-world throughput - and that's just encryption, without the LZ4 compression overhead.
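    The figures above come from OpenSSL's built-in benchmark; the run can be reproduced with the standard `openssl speed` command (a sketch; `-bytes 8192` assumes the post's "8129" is a typo for the usual 8192-byte block size):

```shell
# Benchmark AES-256-CBC through the EVP interface at a single 8192-byte
# block size. -evp picks up hardware AES where the CPU offers it, which
# is why an x86 desktop dwarfs the RK3399 (no ARMv8 crypto extensions).
openssl speed -evp aes-256-cbc -bytes 8192
```

    Throughput is reported in bytes processed per second for the chosen block size, matching the kB/s figures quoted above.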
     
  23. Like
    allen--smithee reacted to SIGSEGV in Very noisy fans   
    @allen--smithee Thanks for the feedback.
  24. Like
    allen--smithee got a reaction from SIGSEGV in Very noisy fans   
    The problem in this post is solved!
    With an efficient thermal pad and SSD drives, the unit peaks at 18 W and 10 dB.
    I very rarely exceed 12 W in a day, and the unit is completely inaudible,
    except when running these tests, which do not reflect normal use at all.
  25. Like
    allen--smithee got a reaction from SIGSEGV in Very noisy fans   
    @gprovost
    As we might expect, there is a little spread of the pad at the outer edges of the processor, but also at the edge of the heatsink,
    and so you were right: there is no need to leave the inductors bareheaded.
    Now I think 19x19 mm is enough for the dimensions of the CPU thermal pad, with a 1.5 mm thickness. 
    I redid the separation between the CPU pad and the rest of the board and put pieces of pad in the slots of the inductors. 
    I don't know if that will be possible for you; nevertheless, now we know what we need for our next blends at home ;).
     
    P6 and P7 > PWM = 35
    P6 and P7 > PWM = 50 is my new MAXPWM in fancontrol
    P6 and P7 > PWM = 255
     
     
    @SIGSEGV
    If you are interested in this solution,
    I recommend the TP-VP04-C 1.5 mm thermal pad for your Helios64 (2 plates of 90x50x1.5 mm).
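    For reference, PWM floors and ceilings like the ones above live in /etc/fancontrol. A hypothetical fragment only: the hwmon numbers and sensor paths vary per board and kernel, so the real values must come from running `pwmconfig` on your own system before copying anything.

```shell
# /etc/fancontrol -- illustrative fragment (hwmon paths are placeholders).
# DEVPATH/DEVNAME lines, as generated by pwmconfig, are omitted here.
INTERVAL=10
# Map each fan PWM output (P6/P7 headers) to the CPU temperature sensor.
FCTEMPS=hwmon3/pwm1=hwmon0/temp1_input hwmon4/pwm1=hwmon0/temp1_input
# Start ramping at 40 degC, reach MAXPWM at 80 degC.
MINTEMP=hwmon3/pwm1=40 hwmon4/pwm1=40
MAXTEMP=hwmon3/pwm1=80 hwmon4/pwm1=80
# Quiet floor and ceiling from the post (255 would be full speed).
MINPWM=hwmon3/pwm1=35 hwmon4/pwm1=35
MAXPWM=hwmon3/pwm1=50 hwmon4/pwm1=50
```

    Capping MAXPWM at 50 trades peak cooling for silence, which is why it only makes sense together with an efficient thermal pad or a CPU frequency limit.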